diff --git a/docs-cn/02-intro.md b/docs-cn/02-intro.md index 8daea48e3e..949c21472d 100644 --- a/docs-cn/02-intro.md +++ b/docs-cn/02-intro.md @@ -62,7 +62,7 @@ TDengine的主要功能如下:
-![TDengine技术生态图](eco_system.png) +![TDengine技术生态图](eco_system.webp)
图 1. TDengine技术生态图
diff --git a/docs-cn/12-taos-sql/08-interval.md b/docs-cn/12-taos-sql/08-interval.md index d62e11b0db..7c796e0046 100644 --- a/docs-cn/12-taos-sql/08-interval.md +++ b/docs-cn/12-taos-sql/08-interval.md @@ -11,7 +11,7 @@ TDengine 支持按时间段窗口切分方式进行聚合结果查询,比如 INTERVAL 子句用于产生相等时间周期的窗口,SLIDING 用以指定窗口向前滑动的时间。每次执行的查询是一个时间窗口,时间窗口随着时间流动向前滑动。在定义连续查询的时候需要指定时间窗口(time window )大小和每次前向增量时间(forward sliding times)。如图,[t0s, t0e] ,[t1s , t1e], [t2s, t2e] 是分别是执行三次连续查询的时间窗口范围,窗口的前向滑动的时间范围 sliding time 标识 。查询过滤、聚合等操作按照每个时间窗口为独立的单位执行。当 SLIDING 与 INTERVAL 相等的时候,滑动窗口即为翻转窗口。 -![时间窗口示意图](/img/sql/timewindow-1.png) +![时间窗口示意图](./timewindow-1.webp) INTERVAL 和 SLIDING 子句需要配合聚合和选择函数来使用。以下 SQL 语句非法: @@ -33,7 +33,7 @@ _ 从 2.1.5.0 版本开始,INTERVAL 语句允许的最短时间间隔调整为 使用整数(布尔值)或字符串来标识产生记录时候设备的状态量。产生的记录如果具有相同的状态量数值则归属于同一个状态窗口,数值改变后该窗口关闭。如下图所示,根据状态量确定的状态窗口分别是[2019-04-28 14:22:07,2019-04-28 14:22:10]和[2019-04-28 14:22:11,2019-04-28 14:22:12]两个。(状态窗口暂不支持对超级表使用) -![时间窗口示意图](/img/sql/timewindow-3.png) +![时间窗口示意图](./timewindow-3.webp) 使用 STATE_WINDOW 来确定状态窗口划分的列。例如: @@ -45,7 +45,7 @@ SELECT COUNT(*), FIRST(ts), status FROM temp_tb_1 STATE_WINDOW(status); 会话窗口根据记录的时间戳主键的值来确定是否属于同一个会话。如下图所示,如果设置时间戳的连续的间隔小于等于 12 秒,则以下 6 条记录构成 2 个会话窗口,分别是:[2019-04-28 14:22:10,2019-04-28 14:22:30]和[2019-04-28 14:23:10,2019-04-28 14:23:30]。因为 2019-04-28 14:22:30 与 2019-04-28 14:23:10 之间的时间间隔是 40 秒,超过了连续时间间隔(12 秒)。 -![时间窗口示意图](/img/sql/timewindow-2.png) +![时间窗口示意图](./timewindow-2.webp) 在 tol_value 时间间隔范围内的结果都认为归属于同一个窗口,如果连续的两条记录的时间超过 tol_val,则自动开启下一个窗口。(会话窗口暂不支持对超级表使用) diff --git a/docs-cn/12-taos-sql/timewindow-1.webp b/docs-cn/12-taos-sql/timewindow-1.webp new file mode 100644 index 0000000000..82747558e9 Binary files /dev/null and b/docs-cn/12-taos-sql/timewindow-1.webp differ diff --git a/docs-cn/12-taos-sql/timewindow-2.webp b/docs-cn/12-taos-sql/timewindow-2.webp new file mode 100644 index 0000000000..8f1314ae34 Binary files /dev/null and b/docs-cn/12-taos-sql/timewindow-2.webp differ diff --git a/docs-cn/12-taos-sql/timewindow-3.webp b/docs-cn/12-taos-sql/timewindow-3.webp new file mode 100644 index 0000000000..5bd16e68e7 Binary files /dev/null and b/docs-cn/12-taos-sql/timewindow-3.webp differ diff --git a/docs-cn/14-reference/03-connector/03-connector.mdx b/docs-cn/14-reference/03-connector/03-connector.mdx index c0e714f148..aac358bea0 100644 --- a/docs-cn/14-reference/03-connector/03-connector.mdx +++ b/docs-cn/14-reference/03-connector/03-connector.mdx @@ -4,7 +4,7 @@ title: 连接器 TDengine 提供了丰富的应用程序开发接口,为了便于用户快速开发自己的应用,TDengine 支持了多种编程语言的连接器,其中官方连接器包括支持 C/C++、Java、Python、Go、Node.js、C# 和 Rust 的连接器。这些连接器支持使用原生接口(taosc)和 REST 接口(部分语言暂不支持)连接 TDengine 集群。社区开发者也贡献了多个非官方连接器,例如 ADO.NET 连接器、Lua 连接器和 PHP 连接器。 -![image-connector](/img/connector.png) +![image-connector](./connector.webp) ## 支持的平台 diff --git a/docs-cn/14-reference/03-connector/connector.webp b/docs-cn/14-reference/03-connector/connector.webp new file mode 100644 index 0000000000..040cf5c26c Binary files /dev/null and b/docs-cn/14-reference/03-connector/connector.webp differ diff --git a/docs-cn/14-reference/03-connector/java.mdx b/docs-cn/14-reference/03-connector/java.mdx index 55abf84fd5..813e82e82c 100644 --- a/docs-cn/14-reference/03-connector/java.mdx +++ b/docs-cn/14-reference/03-connector/java.mdx @@ -11,7 +11,7 @@ import TabItem from '@theme/TabItem'; `taos-jdbcdriver` 是 TDengine 的官方 Java 语言连接器,Java 开发人员可以通过它开发存取 TDengine 数据库的应用软件。`taos-jdbcdriver` 实现了 JDBC driver 标准的接口,并提供两种形式的连接器。一种是通过 TDengine 客户端驱动程序(taosc)原生连接 TDengine 实例,支持数据写入、查询、订阅、schemaless 接口和参数绑定接口等功能,一种是通过 taosAdapter 提供的 REST 
接口连接 TDengine 实例(2.4.0.0 及更高版本)。REST 连接实现的功能集合和原生连接有少量不同。 -![tdengine-connector](tdengine-jdbc-connector.png) +![tdengine-connector](tdengine-jdbc-connector.webp) 上图显示了两种 Java 应用使用连接器访问 TDengine 的两种方式: diff --git a/docs-cn/14-reference/03-connector/tdengine-jdbc-connector.png b/docs-cn/14-reference/03-connector/tdengine-jdbc-connector.png deleted file mode 100644 index 1cb8401ea3..0000000000 Binary files a/docs-cn/14-reference/03-connector/tdengine-jdbc-connector.png and /dev/null differ diff --git a/docs-cn/14-reference/03-connector/tdengine-jdbc-connector.webp b/docs-cn/14-reference/03-connector/tdengine-jdbc-connector.webp new file mode 100644 index 0000000000..0956d6005f Binary files /dev/null and b/docs-cn/14-reference/03-connector/tdengine-jdbc-connector.webp differ diff --git a/docs-cn/14-reference/04-taosadapter.md b/docs-cn/14-reference/04-taosadapter.md index 90a31ec94c..5fc9a28281 100644 --- a/docs-cn/14-reference/04-taosadapter.md +++ b/docs-cn/14-reference/04-taosadapter.md @@ -24,7 +24,7 @@ taosAdapter 提供以下功能: ## taosAdapter 架构图 -![taosAdapter Architecture](taosAdapter-architecture.png) +![taosAdapter Architecture](taosAdapter-architecture.webp) ## taosAdapter 部署方法 diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.png b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.png deleted file mode 100644 index 4708f836fe..0000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.webp b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.webp new file mode 100644 index 0000000000..a78e18028a Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.png b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.png deleted file mode 100644 index f2684e6eed..0000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.webp b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.webp new file mode 100644 index 0000000000..b152418d09 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.png b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.png deleted file mode 100644 index 74686691e4..0000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.webp b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.webp new file mode 100644 index 0000000000..f58f48b7f1 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-4-requests.png b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-4-requests.png deleted file mode 100644 index 2796421556..0000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-4-requests.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-4-requests.webp b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-4-requests.webp new file mode 100644 index 0000000000..00afcce013 Binary files /dev/null and 
b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-4-requests.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-5-database.png b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-5-database.png deleted file mode 100644 index b0d3abbf21..0000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-5-database.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-5-database.webp b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-5-database.webp new file mode 100644 index 0000000000..567e5694f9 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-5-database.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.png b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.png deleted file mode 100644 index 2b54cbeb83..0000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.webp b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.webp new file mode 100644 index 0000000000..cc8a912810 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-7-login-history.png b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-7-login-history.png deleted file mode 100644 index eb3848657f..0000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-7-login-history.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-7-login-history.webp b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-7-login-history.webp new file mode 100644 index 0000000000..651b716bc5 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-7-login-history.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.png b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.png deleted file mode 100644 index d94b2e02ac..0000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.webp b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.webp new file mode 100644 index 0000000000..8666193f59 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-full.png b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-full.png deleted file mode 100644 index 654df29345..0000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-full.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/TDinsight-full.webp b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-full.webp new file mode 100644 index 0000000000..7f38a76a2b Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/TDinsight-full.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/alert-manager-status.png b/docs-cn/14-reference/07-tdinsight/assets/alert-manager-status.png deleted file mode 100644 index e3afa22c03..0000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/alert-manager-status.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/alert-manager-status.webp 
b/docs-cn/14-reference/07-tdinsight/assets/alert-manager-status.webp new file mode 100644 index 0000000000..3d7fe932a2 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/alert-manager-status.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/alert-notification-channel.png b/docs-cn/14-reference/07-tdinsight/assets/alert-notification-channel.png deleted file mode 100644 index 198bf37141..0000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/alert-notification-channel.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/alert-notification-channel.webp b/docs-cn/14-reference/07-tdinsight/assets/alert-notification-channel.webp new file mode 100644 index 0000000000..517123954e Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/alert-notification-channel.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/alert-query-demo.png b/docs-cn/14-reference/07-tdinsight/assets/alert-query-demo.png deleted file mode 100644 index ace3aa3c2f..0000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/alert-query-demo.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/alert-query-demo.webp b/docs-cn/14-reference/07-tdinsight/assets/alert-query-demo.webp new file mode 100644 index 0000000000..6666296ac1 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/alert-query-demo.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.png b/docs-cn/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.png deleted file mode 100644 index 7082e49f6b..0000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.webp b/docs-cn/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.webp new file mode 100644 index 0000000000..6f74bc3a47 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/alert-rule-test.png b/docs-cn/14-reference/07-tdinsight/assets/alert-rule-test.png deleted file mode 100644 index ffd4911b53..0000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/alert-rule-test.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/alert-rule-test.webp b/docs-cn/14-reference/07-tdinsight/assets/alert-rule-test.webp new file mode 100644 index 0000000000..acda3b24a6 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/alert-rule-test.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-button.png b/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-button.png deleted file mode 100644 index 802c7366f9..0000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-button.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-button.webp b/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-button.webp new file mode 100644 index 0000000000..903e236e2a Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-button.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.png b/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.png deleted file mode 100644 index 
019ec921b6..0000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.webp b/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.webp new file mode 100644 index 0000000000..14fcfe9d18 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-test.png b/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-test.png deleted file mode 100644 index 3963abb4ea..0000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-test.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-test.webp b/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-test.webp new file mode 100644 index 0000000000..00b50cc619 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource-test.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource.png b/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource.png deleted file mode 100644 index 837100464b..0000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource.webp b/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource.webp new file mode 100644 index 0000000000..06d0ff6ed5 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/howto-add-datasource.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/howto-dashboard-display.png b/docs-cn/14-reference/07-tdinsight/assets/howto-dashboard-display.png deleted file mode 100644 index 98223df254..0000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/howto-dashboard-display.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/howto-dashboard-display.webp b/docs-cn/14-reference/07-tdinsight/assets/howto-dashboard-display.webp new file mode 100644 index 0000000000..e2ec052b91 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/howto-dashboard-display.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/howto-dashboard-import-options.png b/docs-cn/14-reference/07-tdinsight/assets/howto-dashboard-import-options.png deleted file mode 100644 index 07aba348f0..0000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/howto-dashboard-import-options.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/howto-dashboard-import-options.webp b/docs-cn/14-reference/07-tdinsight/assets/howto-dashboard-import-options.webp new file mode 100644 index 0000000000..665c035f97 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/howto-dashboard-import-options.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/howto-import-dashboard.png b/docs-cn/14-reference/07-tdinsight/assets/howto-import-dashboard.png deleted file mode 100644 index 7e28939ead..0000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/howto-import-dashboard.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/howto-import-dashboard.webp b/docs-cn/14-reference/07-tdinsight/assets/howto-import-dashboard.webp new file mode 100644 index 0000000000..7dc42eeba9 Binary files /dev/null and 
b/docs-cn/14-reference/07-tdinsight/assets/howto-import-dashboard.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/import-dashboard-15167.png b/docs-cn/14-reference/07-tdinsight/assets/import-dashboard-15167.png deleted file mode 100644 index 981f640b14..0000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/import-dashboard-15167.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/import-dashboard-15167.webp b/docs-cn/14-reference/07-tdinsight/assets/import-dashboard-15167.webp new file mode 100644 index 0000000000..7ef081900f Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/import-dashboard-15167.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.png b/docs-cn/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.png deleted file mode 100644 index 94ef4fa5fe..0000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.webp b/docs-cn/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.webp new file mode 100644 index 0000000000..602452fc4c Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.png b/docs-cn/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.png deleted file mode 100644 index 670cacc377..0000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.webp b/docs-cn/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.webp new file mode 100644 index 0000000000..35a3ebba78 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/import_dashboard.png b/docs-cn/14-reference/07-tdinsight/assets/import_dashboard.png deleted file mode 100644 index d74cd36c96..0000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/import_dashboard.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/import_dashboard.webp b/docs-cn/14-reference/07-tdinsight/assets/import_dashboard.webp new file mode 100644 index 0000000000..fb7958f1b9 Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/import_dashboard.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/tdengine_dashboard.png b/docs-cn/14-reference/07-tdinsight/assets/tdengine_dashboard.png deleted file mode 100644 index 0101e7430c..0000000000 Binary files a/docs-cn/14-reference/07-tdinsight/assets/tdengine_dashboard.png and /dev/null differ diff --git a/docs-cn/14-reference/07-tdinsight/assets/tdengine_dashboard.webp b/docs-cn/14-reference/07-tdinsight/assets/tdengine_dashboard.webp new file mode 100644 index 0000000000..49f1d88f4a Binary files /dev/null and b/docs-cn/14-reference/07-tdinsight/assets/tdengine_dashboard.webp differ diff --git a/docs-cn/14-reference/07-tdinsight/index.md b/docs-cn/14-reference/07-tdinsight/index.md index a554d7ee6b..d7511fde3b 100644 --- a/docs-cn/14-reference/07-tdinsight/index.md +++ b/docs-cn/14-reference/07-tdinsight/index.md @@ -233,33 +233,33 @@ sudo systemctl enable grafana-server 指向 **Configurations** -> **Data Sources** 菜单,然后点击 **Add data source** 按钮。 
-![添加数据源按钮](./assets/howto-add-datasource-button.png) +![添加数据源按钮](./assets/howto-add-datasource-button.webp) 搜索并选择**TDengine**。 -![添加数据源](./assets/howto-add-datasource-tdengine.png) +![添加数据源](./assets/howto-add-datasource-tdengine.webp) 配置 TDengine 数据源。 -![数据源配置](./assets/howto-add-datasource.png) +![数据源配置](./assets/howto-add-datasource.webp) 保存并测试,正常情况下会报告 'TDengine Data source is working'。 -![数据源测试](./assets/howto-add-datasource-test.png) +![数据源测试](./assets/howto-add-datasource-test.webp) ### 导入仪表盘 指向 **+** / **Create** - **import**(或 `/dashboard/import` url)。 -![导入仪表盘和配置](./assets/import_dashboard.png) +![导入仪表盘和配置](./assets/import_dashboard.webp) 在 **Import via grafana.com** 位置键入仪表盘 ID `15167` 并 **Load**。 -![通过 grafana.com 导入](./assets/import-dashboard-15167.png) +![通过 grafana.com 导入](./assets/import-dashboard-15167.webp) 导入完成后,TDinsight 的完整页面视图如下所示。 -![显示](./assets/TDinsight-full.png) +![显示](./assets/TDinsight-full.webp) ## TDinsight 仪表盘详细信息 @@ -269,7 +269,7 @@ TDinsight 仪表盘旨在提供 TDengine 相关资源使用情况[dnodes, mnodes ### 集群状态 -![tdinsight-mnodes-overview](./assets/TDinsight-1-cluster-status.png) +![tdinsight-mnodes-overview](./assets/TDinsight-1-cluster-status.webp) 这部分包括集群当前信息和状态,告警信息也在此处(从左到右,从上到下)。 @@ -289,7 +289,7 @@ TDinsight 仪表盘旨在提供 TDengine 相关资源使用情况[dnodes, mnodes ### DNodes 状态 -![tdinsight-mnodes-overview](./assets/TDinsight-2-dnodes.png) +![tdinsight-mnodes-overview](./assets/TDinsight-2-dnodes.webp) - **DNodes Status**:`show dnodes` 的简单表格视图。 - **DNodes Lifetime**:从创建 dnode 开始经过的时间。 @@ -298,14 +298,14 @@ TDinsight 仪表盘旨在提供 TDengine 相关资源使用情况[dnodes, mnodes ### MNode 概述 -![tdinsight-mnodes-overview](./assets/TDinsight-3-mnodes.png) +![tdinsight-mnodes-overview](./assets/TDinsight-3-mnodes.webp) 1. **MNodes Status**:`show mnodes` 的简单表格视图。 2. **MNodes Number**:类似于`DNodes Number`,MNodes 数量变化。 ### 请求 -![tdinsight-requests](./assets/TDinsight-4-requests.png) +![tdinsight-requests](./assets/TDinsight-4-requests.webp) 1. **Requests Rate(Inserts per Second)**:平均每秒插入次数。 2. 
**Requests (Selects)**:查询请求数及变化率(count of second)。 @@ -313,7 +313,7 @@ TDinsight 仪表盘旨在提供 TDengine 相关资源使用情况[dnodes, mnodes ### 数据库 -![tdinsight-database](./assets/TDinsight-5-database.png) +![tdinsight-database](./assets/TDinsight-5-database.webp) 数据库使用情况,对变量 `$database` 的每个值即每个数据库进行重复多行展示。 @@ -325,7 +325,7 @@ TDinsight 仪表盘旨在提供 TDengine 相关资源使用情况[dnodes, mnodes ### DNode 资源使用情况 -![dnode-usage](./assets/TDinsight-6-dnode-usage.png) +![dnode-usage](./assets/TDinsight-6-dnode-usage.webp) 数据节点资源使用情况展示,对变量 `$fqdn` 即每个数据节点进行重复多行展示。包括: @@ -346,13 +346,13 @@ TDinsight 仪表盘旨在提供 TDengine 相关资源使用情况[dnodes, mnodes ### 登录历史 -![登录历史](./assets/TDinsight-7-login-history.png) +![登录历史](./assets/TDinsight-7-login-history.webp) 目前只报告每分钟登录次数。 ### 监控 taosAdapter -![taosadapter](./assets/TDinsight-8-taosadapter.png) +![taosadapter](./assets/TDinsight-8-taosadapter.webp) 支持监控 taosAdapter 请求统计和状态详情。包括: diff --git a/docs-cn/14-reference/taosAdapter-architecture.png b/docs-cn/14-reference/taosAdapter-architecture.png deleted file mode 100644 index 08a9018553..0000000000 Binary files a/docs-cn/14-reference/taosAdapter-architecture.png and /dev/null differ diff --git a/docs-cn/14-reference/taosAdapter-architecture.webp b/docs-cn/14-reference/taosAdapter-architecture.webp new file mode 100644 index 0000000000..a4162b0a03 Binary files /dev/null and b/docs-cn/14-reference/taosAdapter-architecture.webp differ diff --git a/docs-cn/20-third-party/01-grafana.mdx b/docs-cn/20-third-party/01-grafana.mdx index 9a4c33d8ac..f9f7a26aa1 100644 --- a/docs-cn/20-third-party/01-grafana.mdx +++ b/docs-cn/20-third-party/01-grafana.mdx @@ -64,15 +64,15 @@ GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=tdengine-datasource 用户可以直接通过 http://localhost:3000 的网址,登录 Grafana 服务器(用户名/密码:admin/admin),通过左侧 `Configuration -> Data Sources` 可以添加数据源,如下图所示: -![img](/img/connections/add_datasource1.jpg) +![img](./add_datasource1.webp) 点击 `Add data source` 可进入新增数据源页面,在查询框中输入 TDengine 可选择添加,如下图所示: -![img](/img/connections/add_datasource2.jpg) +![img](./add_datasource2.webp) 进入数据源配置页面,按照默认提示修改相应配置即可: -![img](/img/connections/add_datasource3.jpg) +![img](./add_datasource3.webp) - Host: TDengine 集群中提供 REST 服务 (在 2.4 之前由 taosd 提供, 从 2.4 开始由 taosAdapter 提供)的组件所在服务器的 IP 地址与 TDengine REST 服务的端口号(6041),默认 http://localhost:6041。 - User:TDengine 用户名。 @@ -80,13 +80,13 @@ GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=tdengine-datasource 点击 `Save & Test` 进行测试,成功会有如下提示: -![img](/img/connections/add_datasource4.jpg) +![img](./add_datasource4.webp) ### 创建 Dashboard 回到主界面创建 Dashboard,点击 Add Query 进入面板查询页面: -![img](/img/connections/create_dashboard1.jpg) +![img](./create_dashboard1.webp) 如上图所示,在 Query 中选中 `TDengine` 数据源,在下方查询框可输入相应 SQL 进行查询,具体说明如下: @@ -96,7 +96,7 @@ GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=tdengine-datasource 按照默认提示查询当前 TDengine 部署所在服务器指定间隔系统内存平均使用量如下: -![img](/img/connections/create_dashboard2.jpg) +![img](./create_dashboard2.webp) > 关于如何使用 Grafana 创建相应的监测界面以及更多有关使用 Grafana 的信息,请参考 Grafana 官方的[文档](https://grafana.com/docs/)。 diff --git a/docs-cn/20-third-party/09-emq-broker.md b/docs-cn/20-third-party/09-emq-broker.md index f57ccb20e6..b9d099c145 100644 --- a/docs-cn/20-third-party/09-emq-broker.md +++ b/docs-cn/20-third-party/09-emq-broker.md @@ -45,25 +45,25 @@ MQTT 是流行的物联网数据传输协议,[EMQX](https://github.com/emqx/em 使用浏览器打开网址 http://IP:18083 并登录 EMQX Dashboard。初次安装用户名为 `admin` 密码为:`public` -![img](./emqx/login-dashboard.png) +![img](./emqx/login-dashboard.webp) ### 创建规则(Rule) 选择左侧“规则引擎(Rule Engine)”中的“规则(Rule)”并点击“创建(Create)”按钮: -![img](./emqx/rule-engine.png) 
+![img](./emqx/rule-engine.webp) ### 编辑 SQL 字段 -![img](./emqx/create-rule.png) +![img](./emqx/create-rule.webp) ### 新增“动作(action handler)” -![img](./emqx/add-action-handler.png) +![img](./emqx/add-action-handler.webp) ### 新增“资源(Resource)” -![img](./emqx/create-resource.png) +![img](./emqx/create-resource.webp) 选择“发送数据到 Web 服务“并点击“新建资源”按钮: @@ -71,13 +71,13 @@ MQTT 是流行的物联网数据传输协议,[EMQX](https://github.com/emqx/em 选择“发送数据到 Web 服务“并填写 请求 URL 为 运行 taosAdapter 的服务器地址和端口(默认为 6041)。其他属性请保持默认值。 -![img](./emqx/edit-resource.png) +![img](./emqx/edit-resource.webp) ### 编辑“动作(action)” 编辑资源配置,增加 Authorization 认证的键/值配对项,相关文档请参考[ TDengine REST API 文档](https://docs.taosdata.com/reference/rest-api/)。在消息体中输入规则引擎替换模板。 -![img](./emqx/edit-action.png) +![img](./emqx/edit-action.webp) ## 编写模拟测试程序 @@ -164,7 +164,7 @@ MQTT 是流行的物联网数据传输协议,[EMQX](https://github.com/emqx/em 注意:代码中 CLIENT_NUM 在开始测试中可以先设置一个较小的值,避免硬件性能不能完全处理较大并发客户端数量。 -![img](./emqx/client-num.png) +![img](./emqx/client-num.webp) ## 执行测试模拟发送 MQTT 数据 @@ -173,19 +173,19 @@ npm install mqtt mockjs --save --registry=https://registry.npm.taobao.org node mock.js ``` -![img](./emqx/run-mock.png) +![img](./emqx/run-mock.webp) ## 验证 EMQX 接收到数据 在 EMQX Dashboard 规则引擎界面进行刷新,可以看到有多少条记录被正确接收到: -![img](./emqx/check-rule-matched.png) +![img](./emqx/check-rule-matched.webp) ## 验证数据写入到 TDengine 使用 TDengine CLI 程序登录并查询相应数据库和表,验证数据是否被正确写入到 TDengine 中: -![img](./emqx/check-result-in-taos.png) +![img](./emqx/check-result-in-taos.webp) TDengine 详细使用方法请参考 [TDengine 官方文档](https://docs.taosdata.com/)。 EMQX 详细使用方法请参考 [EMQX 官方文档](https://www.emqx.io/docs/zh/v4.4/rule/rule-engine.html)。 diff --git a/docs-cn/20-third-party/11-kafka.md b/docs-cn/20-third-party/11-kafka.md index d12d5fab75..beb8f1bd6b 100644 --- a/docs-cn/20-third-party/11-kafka.md +++ b/docs-cn/20-third-party/11-kafka.md @@ -9,11 +9,11 @@ TDengine Kafka Connector 包含两个插件: TDengine Source Connector 和 TDeng Kafka Connect 是 Apache Kafka 的一个组件,用于使其它系统,比如数据库、云服务、文件系统等能方便地连接到 Kafka。数据既可以通过 Kafka Connect 从其它系统流向 Kafka, 也可以通过 Kafka Connect 从 Kafka 流向其它系统。从其它系统读数据的插件称为 Source Connector, 写数据到其它系统的插件称为 Sink Connector。Source Connector 和 Sink Connector 都不会直接连接 Kafka Broker,Source Connector 把数据转交给 Kafka Connect。Sink Connector 从 Kafka Connect 接收数据。 -![](kafka/Kafka_Connect.png) +![](kafka/Kafka_Connect.webp) TDengine Source Connector 用于把数据实时地从 TDengine 读出来发送给 Kafka Connect。TDengine Sink Connector 用于 从 Kafka Connect 接收数据并写入 TDengine。 -![](kafka/streaming-integration-with-kafka-connect.png) +![](kafka/streaming-integration-with-kafka-connect.webp) ## 什么是 Confluent? @@ -26,7 +26,7 @@ Confluent 在 Kafka 的基础上增加很多扩展功能。包括: 5. 
管理和监控 Kafka 的 GUI —— Confluent 控制中心 这些扩展功能有的包含在社区版本的 Confluent 中,有的只有企业版能用。 -![](kafka/confluentPlatform.png) +![](kafka/confluentPlatform.webp) Confluent 企业版提供了 `confluent` 命令行工具管理各个组件。 diff --git a/docs-cn/20-third-party/add_datasource1.webp b/docs-cn/20-third-party/add_datasource1.webp new file mode 100644 index 0000000000..211edc4457 Binary files /dev/null and b/docs-cn/20-third-party/add_datasource1.webp differ diff --git a/docs-cn/20-third-party/add_datasource2.webp b/docs-cn/20-third-party/add_datasource2.webp new file mode 100644 index 0000000000..8ab547231f Binary files /dev/null and b/docs-cn/20-third-party/add_datasource2.webp differ diff --git a/docs-cn/20-third-party/add_datasource3.webp b/docs-cn/20-third-party/add_datasource3.webp new file mode 100644 index 0000000000..d8a733360a Binary files /dev/null and b/docs-cn/20-third-party/add_datasource3.webp differ diff --git a/docs-cn/20-third-party/add_datasource4.webp b/docs-cn/20-third-party/add_datasource4.webp new file mode 100644 index 0000000000..b1e0fc6e2b Binary files /dev/null and b/docs-cn/20-third-party/add_datasource4.webp differ diff --git a/docs-cn/20-third-party/create_dashboard1.webp b/docs-cn/20-third-party/create_dashboard1.webp new file mode 100644 index 0000000000..55eb388833 Binary files /dev/null and b/docs-cn/20-third-party/create_dashboard1.webp differ diff --git a/docs-cn/20-third-party/create_dashboard2.webp b/docs-cn/20-third-party/create_dashboard2.webp new file mode 100644 index 0000000000..bb40e40718 Binary files /dev/null and b/docs-cn/20-third-party/create_dashboard2.webp differ diff --git a/docs-cn/20-third-party/dashboard-15146.webp b/docs-cn/20-third-party/dashboard-15146.webp new file mode 100644 index 0000000000..fae586f5c7 Binary files /dev/null and b/docs-cn/20-third-party/dashboard-15146.webp differ diff --git a/docs-cn/20-third-party/emqx/add-action-handler.png b/docs-cn/20-third-party/emqx/add-action-handler.png deleted file mode 100644 index 97a1f933ec..0000000000 Binary files a/docs-cn/20-third-party/emqx/add-action-handler.png and /dev/null differ diff --git a/docs-cn/20-third-party/emqx/add-action-handler.webp b/docs-cn/20-third-party/emqx/add-action-handler.webp new file mode 100644 index 0000000000..4a8d105f71 Binary files /dev/null and b/docs-cn/20-third-party/emqx/add-action-handler.webp differ diff --git a/docs-cn/20-third-party/emqx/check-result-in-taos.png b/docs-cn/20-third-party/emqx/check-result-in-taos.png deleted file mode 100644 index c17a5c1ea2..0000000000 Binary files a/docs-cn/20-third-party/emqx/check-result-in-taos.png and /dev/null differ diff --git a/docs-cn/20-third-party/emqx/check-result-in-taos.webp b/docs-cn/20-third-party/emqx/check-result-in-taos.webp new file mode 100644 index 0000000000..8fa040a861 Binary files /dev/null and b/docs-cn/20-third-party/emqx/check-result-in-taos.webp differ diff --git a/docs-cn/20-third-party/emqx/check-rule-matched.png b/docs-cn/20-third-party/emqx/check-rule-matched.png deleted file mode 100644 index 9e9a466946..0000000000 Binary files a/docs-cn/20-third-party/emqx/check-rule-matched.png and /dev/null differ diff --git a/docs-cn/20-third-party/emqx/check-rule-matched.webp b/docs-cn/20-third-party/emqx/check-rule-matched.webp new file mode 100644 index 0000000000..e5a6140357 Binary files /dev/null and b/docs-cn/20-third-party/emqx/check-rule-matched.webp differ diff --git a/docs-cn/20-third-party/emqx/client-num.png b/docs-cn/20-third-party/emqx/client-num.png deleted file mode 100644 index fff48cbf3b..0000000000 Binary files 
a/docs-cn/20-third-party/emqx/client-num.png and /dev/null differ diff --git a/docs-cn/20-third-party/emqx/client-num.webp b/docs-cn/20-third-party/emqx/client-num.webp new file mode 100644 index 0000000000..a151b18484 Binary files /dev/null and b/docs-cn/20-third-party/emqx/client-num.webp differ diff --git a/docs-cn/20-third-party/emqx/create-resource.png b/docs-cn/20-third-party/emqx/create-resource.png deleted file mode 100644 index 58da4c391a..0000000000 Binary files a/docs-cn/20-third-party/emqx/create-resource.png and /dev/null differ diff --git a/docs-cn/20-third-party/emqx/create-resource.webp b/docs-cn/20-third-party/emqx/create-resource.webp new file mode 100644 index 0000000000..bf9cccbe49 Binary files /dev/null and b/docs-cn/20-third-party/emqx/create-resource.webp differ diff --git a/docs-cn/20-third-party/emqx/create-rule.png b/docs-cn/20-third-party/emqx/create-rule.png deleted file mode 100644 index 73b0b6ee3e..0000000000 Binary files a/docs-cn/20-third-party/emqx/create-rule.png and /dev/null differ diff --git a/docs-cn/20-third-party/emqx/create-rule.webp b/docs-cn/20-third-party/emqx/create-rule.webp new file mode 100644 index 0000000000..13e8fc83d4 Binary files /dev/null and b/docs-cn/20-third-party/emqx/create-rule.webp differ diff --git a/docs-cn/20-third-party/emqx/edit-action.png b/docs-cn/20-third-party/emqx/edit-action.png deleted file mode 100644 index 2a43ee369a..0000000000 Binary files a/docs-cn/20-third-party/emqx/edit-action.png and /dev/null differ diff --git a/docs-cn/20-third-party/emqx/edit-action.webp b/docs-cn/20-third-party/emqx/edit-action.webp new file mode 100644 index 0000000000..7f6d2e36a8 Binary files /dev/null and b/docs-cn/20-third-party/emqx/edit-action.webp differ diff --git a/docs-cn/20-third-party/emqx/edit-resource.png b/docs-cn/20-third-party/emqx/edit-resource.png deleted file mode 100644 index 0a0b356004..0000000000 Binary files a/docs-cn/20-third-party/emqx/edit-resource.png and /dev/null differ diff --git a/docs-cn/20-third-party/emqx/edit-resource.webp b/docs-cn/20-third-party/emqx/edit-resource.webp new file mode 100644 index 0000000000..fd5d278fab Binary files /dev/null and b/docs-cn/20-third-party/emqx/edit-resource.webp differ diff --git a/docs-cn/20-third-party/emqx/login-dashboard.png b/docs-cn/20-third-party/emqx/login-dashboard.png deleted file mode 100644 index d6c5035c98..0000000000 Binary files a/docs-cn/20-third-party/emqx/login-dashboard.png and /dev/null differ diff --git a/docs-cn/20-third-party/emqx/login-dashboard.webp b/docs-cn/20-third-party/emqx/login-dashboard.webp new file mode 100644 index 0000000000..f84cee668f Binary files /dev/null and b/docs-cn/20-third-party/emqx/login-dashboard.webp differ diff --git a/docs-cn/20-third-party/emqx/rule-engine.png b/docs-cn/20-third-party/emqx/rule-engine.png deleted file mode 100644 index db110a837b..0000000000 Binary files a/docs-cn/20-third-party/emqx/rule-engine.png and /dev/null differ diff --git a/docs-cn/20-third-party/emqx/rule-engine.webp b/docs-cn/20-third-party/emqx/rule-engine.webp new file mode 100644 index 0000000000..c1711c8cc7 Binary files /dev/null and b/docs-cn/20-third-party/emqx/rule-engine.webp differ diff --git a/docs-cn/20-third-party/emqx/rule-header-key-value.png b/docs-cn/20-third-party/emqx/rule-header-key-value.png deleted file mode 100644 index b81b9a9684..0000000000 Binary files a/docs-cn/20-third-party/emqx/rule-header-key-value.png and /dev/null differ diff --git a/docs-cn/20-third-party/emqx/rule-header-key-value.webp 
b/docs-cn/20-third-party/emqx/rule-header-key-value.webp new file mode 100644 index 0000000000..e645b3822d Binary files /dev/null and b/docs-cn/20-third-party/emqx/rule-header-key-value.webp differ diff --git a/docs-cn/20-third-party/emqx/run-mock.png b/docs-cn/20-third-party/emqx/run-mock.png deleted file mode 100644 index 0da2581857..0000000000 Binary files a/docs-cn/20-third-party/emqx/run-mock.png and /dev/null differ diff --git a/docs-cn/20-third-party/emqx/run-mock.webp b/docs-cn/20-third-party/emqx/run-mock.webp new file mode 100644 index 0000000000..ed33f1666d Binary files /dev/null and b/docs-cn/20-third-party/emqx/run-mock.webp differ diff --git a/docs-cn/20-third-party/import_dashboard1.webp b/docs-cn/20-third-party/import_dashboard1.webp new file mode 100644 index 0000000000..d4fb374ce8 Binary files /dev/null and b/docs-cn/20-third-party/import_dashboard1.webp differ diff --git a/docs-cn/20-third-party/import_dashboard2.webp b/docs-cn/20-third-party/import_dashboard2.webp new file mode 100644 index 0000000000..9f74dc96be Binary files /dev/null and b/docs-cn/20-third-party/import_dashboard2.webp differ diff --git a/docs-cn/20-third-party/kafka/Kafka_Connect.png b/docs-cn/20-third-party/kafka/Kafka_Connect.png deleted file mode 100644 index f3dc02ea2a..0000000000 Binary files a/docs-cn/20-third-party/kafka/Kafka_Connect.png and /dev/null differ diff --git a/docs-cn/20-third-party/kafka/Kafka_Connect.webp b/docs-cn/20-third-party/kafka/Kafka_Connect.webp new file mode 100644 index 0000000000..8f2000a749 Binary files /dev/null and b/docs-cn/20-third-party/kafka/Kafka_Connect.webp differ diff --git a/docs-cn/20-third-party/kafka/confluentPlatform.png b/docs-cn/20-third-party/kafka/confluentPlatform.png deleted file mode 100644 index f8e69f2c7f..0000000000 Binary files a/docs-cn/20-third-party/kafka/confluentPlatform.png and /dev/null differ diff --git a/docs-cn/20-third-party/kafka/confluentPlatform.webp b/docs-cn/20-third-party/kafka/confluentPlatform.webp new file mode 100644 index 0000000000..ff03d4e51a Binary files /dev/null and b/docs-cn/20-third-party/kafka/confluentPlatform.webp differ diff --git a/docs-cn/20-third-party/kafka/streaming-integration-with-kafka-connect.png b/docs-cn/20-third-party/kafka/streaming-integration-with-kafka-connect.png deleted file mode 100644 index 26d8a866d7..0000000000 Binary files a/docs-cn/20-third-party/kafka/streaming-integration-with-kafka-connect.png and /dev/null differ diff --git a/docs-cn/20-third-party/kafka/streaming-integration-with-kafka-connect.webp b/docs-cn/20-third-party/kafka/streaming-integration-with-kafka-connect.webp new file mode 100644 index 0000000000..120d534ec1 Binary files /dev/null and b/docs-cn/20-third-party/kafka/streaming-integration-with-kafka-connect.webp differ diff --git a/docs-cn/21-tdinternal/01-arch.md b/docs-cn/21-tdinternal/01-arch.md index 6f479efc1a..456d4bea91 100644 --- a/docs-cn/21-tdinternal/01-arch.md +++ b/docs-cn/21-tdinternal/01-arch.md @@ -11,7 +11,7 @@ TDengine 的设计是基于单个硬件、软件系统不可靠,基于任何 TDengine 分布式架构的逻辑结构图如下: -![TDengine架构示意图](/img/architecture/structure.png) +![TDengine架构示意图](./structure.webp)
图 1 TDengine架构示意图
@@ -63,7 +63,7 @@ TDengine 分布式架构的逻辑结构图如下: 为解释 vnode、mnode、taosc 和应用之间的关系以及各自扮演的角色,下面对写入数据这个典型操作的流程进行剖析。 -![TDengine典型的操作流程](/img/architecture/message.png) +![TDengine典型的操作流程](./message.webp)
图 2 TDengine 典型的操作流程
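To make the flow in 图 2 concrete, the single write below is the kind of request being traced: the application hands the SQL to taosc, taosc looks up which vnode owns the table and forwards the request, and the result is returned once the vnode confirms the write. This is a minimal sketch; table `d1001` is the smart-meter example used elsewhere in these docs, and the database name `power` is assumed here for illustration.

```sql
-- A single write request, as traced in Figure 2: taosc resolves the vnode
-- that owns table d1001, forwards the insert, and returns the result to the
-- application once the vnode acknowledges it.
USE power;
INSERT INTO d1001 VALUES (NOW, 10.3, 219, 0.31);
```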
@@ -135,7 +135,7 @@ TDengine 除 vnode 分片之外,还对时序数据按照时间段进行分区 Master Vnode 遵循下面的写入流程: -![TDengine Master写入流程](/img/architecture/write_master.png) +![TDengine Master写入流程](./write_master.webp)
图 3 TDengine Master 写入流程
@@ -150,7 +150,7 @@ Master Vnode 遵循下面的写入流程: 对于 slave vnode,写入流程是: -![TDengine Slave 写入流程](/img/architecture/write_slave.png) +![TDengine Slave 写入流程](./write_slave.webp)
图 4 TDengine Slave 写入流程
@@ -284,7 +284,7 @@ SELECT COUNT(*) FROM d1001 WHERE ts >= '2017-7-14 00:00:00' AND ts < '2017-7-14 TDengine 对每个数据采集点单独建表,但在实际应用中经常需要对不同的采集点数据进行聚合。为高效的进行聚合操作,TDengine 引入超级表(STable)的概念。超级表用来代表一特定类型的数据采集点,它是包含多张表的表集合,集合里每张表的模式(schema)完全一致,但每张表都带有自己的静态标签,标签可以有多个,可以随时增加、删除和修改。应用可通过指定标签的过滤条件,对一个 STable 下的全部或部分表进行聚合或统计操作,这样大大简化应用的开发。其具体流程如下图所示: -![多表聚合查询原理图](/img/architecture/multi_tables.png) +![多表聚合查询原理图](./multi_tables.webp)
图 5 多表聚合查询原理图
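As a sketch of the aggregation flow in 图 5, assume the smart-meter STable `meters` below (the schema is illustrative, borrowed from the documentation's running example). TDengine first uses the tag filter to select the matching subtables, and only then scans those subtables' time-series data:

```sql
-- Hypothetical smart-meter STable: one subtable per data collection point,
-- each subtable carrying static tags (location, groupId).
CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT)
  TAGS (location BINARY(64), groupId INT);

-- The tag condition (groupId = 2) prunes the set of subtables first; only the
-- surviving subtables' data blocks are scanned for the aggregation.
SELECT AVG(voltage) FROM meters WHERE groupId = 2 INTERVAL(1m);
```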
diff --git a/docs-cn/21-tdinternal/dnode.webp b/docs-cn/21-tdinternal/dnode.webp new file mode 100644 index 0000000000..a56c7e4594 Binary files /dev/null and b/docs-cn/21-tdinternal/dnode.webp differ diff --git a/docs-cn/21-tdinternal/message.webp b/docs-cn/21-tdinternal/message.webp new file mode 100644 index 0000000000..a2a42abff3 Binary files /dev/null and b/docs-cn/21-tdinternal/message.webp differ diff --git a/docs-cn/21-tdinternal/modules.webp b/docs-cn/21-tdinternal/modules.webp new file mode 100644 index 0000000000..718a6abccd Binary files /dev/null and b/docs-cn/21-tdinternal/modules.webp differ diff --git a/docs-cn/21-tdinternal/multi_tables.webp b/docs-cn/21-tdinternal/multi_tables.webp new file mode 100644 index 0000000000..8f649e34a3 Binary files /dev/null and b/docs-cn/21-tdinternal/multi_tables.webp differ diff --git a/docs-cn/21-tdinternal/replica-forward.webp b/docs-cn/21-tdinternal/replica-forward.webp new file mode 100644 index 0000000000..512efd4eba Binary files /dev/null and b/docs-cn/21-tdinternal/replica-forward.webp differ diff --git a/docs-cn/21-tdinternal/replica-master.webp b/docs-cn/21-tdinternal/replica-master.webp new file mode 100644 index 0000000000..57030a11f5 Binary files /dev/null and b/docs-cn/21-tdinternal/replica-master.webp differ diff --git a/docs-cn/21-tdinternal/replica-restore.webp b/docs-cn/21-tdinternal/replica-restore.webp new file mode 100644 index 0000000000..f282c2d4d2 Binary files /dev/null and b/docs-cn/21-tdinternal/replica-restore.webp differ diff --git a/docs-cn/21-tdinternal/structure.webp b/docs-cn/21-tdinternal/structure.webp new file mode 100644 index 0000000000..b77a42c074 Binary files /dev/null and b/docs-cn/21-tdinternal/structure.webp differ diff --git a/docs-cn/21-tdinternal/vnode.webp b/docs-cn/21-tdinternal/vnode.webp new file mode 100644 index 0000000000..fae3104c89 Binary files /dev/null and b/docs-cn/21-tdinternal/vnode.webp differ diff --git a/docs-cn/21-tdinternal/write_master.webp b/docs-cn/21-tdinternal/write_master.webp new file mode 100644 index 0000000000..9624036ed3 Binary files /dev/null and b/docs-cn/21-tdinternal/write_master.webp differ diff --git a/docs-cn/21-tdinternal/write_slave.webp b/docs-cn/21-tdinternal/write_slave.webp new file mode 100644 index 0000000000..7c45dec11b Binary files /dev/null and b/docs-cn/21-tdinternal/write_slave.webp differ diff --git a/docs-cn/25-application/01-telegraf.md b/docs-cn/25-application/01-telegraf.md index f63a6701ee..5bfc94c534 100644 --- a/docs-cn/25-application/01-telegraf.md +++ b/docs-cn/25-application/01-telegraf.md @@ -16,7 +16,7 @@ IT 运维监测数据通常都是对时间特性比较敏感的数据,例如 本文介绍不需要写一行代码,通过简单修改几行配置文件,就可以快速搭建一个基于 TDengine + Telegraf + Grafana 的 IT 运维系统。架构如下图: -![IT-DevOps-Solutions-Telegraf.png](/img/IT-DevOps-Solutions-Telegraf.png) +![IT-DevOps-Solutions-Telegraf.webp](./IT-DevOps-Solutions-Telegraf.webp) ## 安装步骤 @@ -75,7 +75,7 @@ sudo systemctl start telegraf 点击左侧齿轮图标并选择 `Plugins`,应该可以找到 TDengine data source 插件图标。 点击左侧加号图标并选择 `Import`,从 `https://github.com/taosdata/grafanaplugin/blob/master/examples/telegraf/grafana/dashboards/telegraf-dashboard-v0.1.0.json` 下载 dashboard JSON 文件后导入。之后可以看到如下界面的仪表盘: -![IT-DevOps-Solutions-telegraf-dashboard.png](/img/IT-DevOps-Solutions-telegraf-dashboard.png) +![IT-DevOps-Solutions-telegraf-dashboard.webp](./IT-DevOps-Solutions-telegraf-dashboard.webp) ## 总结 diff --git a/docs-cn/25-application/02-collectd.md b/docs-cn/25-application/02-collectd.md index 5e6bc6577b..5966f2d654 100644 --- a/docs-cn/25-application/02-collectd.md +++
b/docs-cn/25-application/02-collectd.md @@ -16,7 +16,7 @@ IT 运维监测数据通常都是对时间特性比较敏感的数据,例如 本文介绍不需要写一行代码,通过简单修改几行配置文件,就可以快速搭建一个基于 TDengine + collectd / statsD + Grafana 的 IT 运维系统。架构如下图: -![IT-DevOps-Solutions-Collectd-StatsD.png](/img/IT-DevOps-Solutions-Collectd-StatsD.png) +![IT-DevOps-Solutions-Collectd-StatsD.webp](./IT-DevOps-Solutions-Collectd-StatsD.webp) ## 安装步骤 @@ -81,12 +81,12 @@ repeater 部分添加 { host:'', port: Figure 1. TDengine Technical Ecosystem -On the left side, there are data collection agents like OPC-UA, MQTT, Telegraf and Kafka. On the right side, visualization/BI tools, HMI, Python/R, and IoT Apps can be connected. TDengine itself provides interactive command-line interface and web interface for management and maintenance. +On the left-hand side, there are data collection agents like OPC-UA, MQTT, Telegraf and Kafka. On the right-hand side, visualization/BI tools, HMI, Python/R, and IoT Apps can be connected. TDengine itself provides an interactive command-line interface and a web interface for management and maintenance. -## Suited Scenarios +## Typical Use Cases -As a high-performance, scalable and SQL supported time-series database, TDengine's typical application scenarios include but are not limited to IoT, Industrial Internet, Connected Vehicles, IT operation and maintenance, energy, financial markets and other fields. TDengine is a purpose-built database optimized for the characteristics of time series data, it cannot be used to process data from web crawlers, social media, e-commerce, ERP, CRM, etc. This section makes a more detailed analysis of the applicable scenarios. +As a high-performance, scalable and SQL supported time-series database, TDengine's typical use cases include but are not limited to IoT, Industrial Internet, Connected Vehicles, IT operation and maintenance, energy, financial markets and other fields. TDengine is a purpose-built database optimized for the characteristics of time series data. As such, it cannot be used to process data from web crawlers, social media, e-commerce, ERP, CRM and so on. More generally, TDengine is not a suitable storage engine for non-time-series data. This section makes a more detailed analysis of the applicable scenarios. ### Characteristics and Requirements of Data Sources diff --git a/docs-en/04-concept/index.md b/docs-en/04-concept/index.md index abc553ab6d..d714bace1d 100644 --- a/docs-en/04-concept/index.md +++ b/docs-en/04-concept/index.md @@ -2,7 +2,7 @@ title: Concepts --- -In order to explain the basic concepts and provide some sample code, the TDengine documentation takes smart meters as a typical time series data scenario. Assuming that each smart meter collects three metrics of current, voltage, and phase, there are multiple smart meters, and each meter has static attributes like location and group ID, the collected data will be similar to the following table: +In order to explain the basic concepts and provide some sample code, the TDengine documentation uses smart meters as a typical time series use case. We assume the following: 1. Each smart meter collects three metrics, i.e. current, voltage, and phase; 2. There are multiple smart meters; and 3. Each meter has static attributes like location and group ID. Based on this, collected data will look similar to the following table:
@@ -29,7 +29,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin - + @@ -38,7 +38,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin - + @@ -47,7 +47,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin - + @@ -56,7 +56,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin - + @@ -65,7 +65,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin - + @@ -74,7 +74,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin - + @@ -83,7 +83,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin - + @@ -92,7 +92,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin - + @@ -112,7 +112,7 @@ Label/Tag refers to the static properties of sensors, equipment or other types o ## Data Collection Point -Data Collection Point (DCP) refers to hardware or software that collects metrics based on preset time periods or triggered by events. A data collection point can collect one or multiple metrics, but these metrics are collected at the same time and have the same time stamp. For some complex equipments, there are often multiple data collection points, and the sampling rate of each collection point may be different, and fully independent. For example, for a car, there could be a data collection point to collect GPS position metrics, a data collection point to collect engine status metrics, and a data collection point to collect the environment metrics inside the car, so in this example the car would have three data collection points. +Data Collection Point (DCP) refers to hardware or software that collects metrics based on preset time periods or triggered by events. A data collection point can collect one or multiple metrics, but these metrics are collected at the same time and have the same time stamp. For some complex equipment, there are often multiple data collection points, and the sampling rate of each collection point may be different, and fully independent. For example, for a car, there could be a data collection point to collect GPS position metrics, a data collection point to collect engine status metrics, and a data collection point to collect the environment metrics inside the car. So in this example the car would have three data collection points. ## Table @@ -122,10 +122,10 @@ To make full use of time-series data characteristics, TDengine adopts a strategy 1. Since the metric data from different DCP are fully independent, the data source of each DCP is unique, and a table has only one writer. In this way, data points can be written in a lock-free manner, and the writing speed can be greatly improved. 2. For a DCP, the metric data generated by DCP is ordered by timestamp, so the write operation can be implemented by simple appending, which further greatly improves the data writing speed. -3. The metric data from a DCP is continuously stored in block by block. If you read data for a period of time, it can greatly reduce random read operations and improve read and query performance by orders of magnitude. -4. Inside a data block for a DCP, columnar storage is used, and different compression algorithms are used for different data types. Metrics generally don't vary as significantly between themselves over a time range as compared to other metrics, this allows for a higher compression rate. +3. 
The metric data from a DCP is continuously stored, block by block. If you read data for a period of time, it can greatly reduce random read operations and improve read and query performance by orders of magnitude. +4. Inside a data block for a DCP, columnar storage is used, and different compression algorithms are used for different data types. Metrics generally don't vary as significantly between themselves over a time range as compared to other metrics, which allows for a higher compression rate. -If the metric data of multiple DCPs are traditionally written into a single table, due to the uncontrollable network delay, the timing of the data from different DCPs arriving at the server cannot be guaranteed, the writing operation must be protected by locks, and the metric data from one DCP cannot be guaranteed to be continuously stored together. **One table for one data collection point can ensure the best performance of insert and query of a single data collection point to the greatest extent.** +If the metric data of multiple DCPs are traditionally written into a single table, due to uncontrollable network delays, the timing of the data from different DCPs arriving at the server cannot be guaranteed, write operations must be protected by locks, and metric data from one DCP cannot be guaranteed to be continuously stored together. **One table for one data collection point can ensure the best performance of insert and query of a single data collection point to the greatest possible extent.** TDengine suggests using DCP ID as the table name (like D1001 in the above table). Each DCP may collect one or multiple metrics (like the current, voltage, phase as above). Each metric has a corresponding column in the table. The data type for a column can be int, float, string and others. In addition, the first column in the table must be a timestamp. TDengine uses the time stamp as the index, and won’t build the index on any metrics stored. Column wise storage is used. @@ -139,7 +139,7 @@ In the design of TDengine, **a table is used to represent a specific data collec ## Subtable -When creating a table for a specific data collection point, the user can use a STable as a template and specifies the tag values of this specific DCP to create it. **The table created by using a STable as the template is called subtable** in TDengine. The difference between regular table and subtable is: +When creating a table for a specific data collection point, the user can use a STable as a template and specify the tag values of this specific DCP to create it. **The table created by using a STable as the template is called subtable** in TDengine. The difference between regular table and subtable is: 1. Subtable is a table, all SQL commands applied on a regular table can be applied on subtable. 2. Subtable is a table with extensions, it has static tags (labels), and these tags can be added, deleted, and updated after it is created. But a regular table does not have tags. 3. A subtable belongs to only one STable, but a STable may have many subtables. Regular tables do not belong to a STable. @@ -151,7 +151,7 @@ The relationship between a STable and the subtables created based on this STable 2. The schema of metrics or labels cannot be adjusted through subtables, and it can only be changed via STable. Changes to the schema of a STable takes effect immediately for all associated subtables. 3. STable defines only one template and does not store any data or label information by itself. 
Therefore, data cannot be written to a STable, only to subtables. -Queries can be executed on both a table (subtable) and a STable. For a query on a STable, TDengine will treat the data in all its subtables as a whole data set for processing. TDengine will first find the subtables that meet the tag filter conditions, then scan the time-series data of these subtables to perform aggregation operation, which can greatly reduce the data sets to be scanned, thus greatly improving the performance of data aggregation across multiple DCPs. +Queries can be executed on both a table (subtable) and a STable. For a query on a STable, TDengine will treat the data in all its subtables as a whole data set for processing. TDengine will first find the subtables that meet the tag filter conditions, then scan the time-series data of these subtables to perform the aggregation operation, which reduces the number of data sets to be scanned and in turn greatly improves the performance of data aggregation across multiple DCPs. In TDengine, it is recommended to use a subtable instead of a regular table for a DCP. @@ -167,4 +167,4 @@ FQDN (Fully Qualified Domain Name) is the full domain name of a specific compute Each node of a TDengine cluster is uniquely identified by an End Point, which consists of an FQDN and a Port, such as h1.tdengine.com:6030. In this way, when the IP changes, we can still use the FQDN to dynamically find the node without changing any configuration of the cluster. In addition, FQDN is used to facilitate unified access to the same cluster from the Intranet and the Internet. -TDengine does not recommend using an IP address to access the cluster, FQDN is recommended for cluster management. +TDengine does not recommend using an IP address to access the cluster. FQDN is recommended for cluster management. diff --git a/docs-en/05-get-started/index.md b/docs-en/05-get-started/index.md index 39b2d02eca..596d42d32f 100644 --- a/docs-en/05-get-started/index.md +++ b/docs-en/05-get-started/index.md @@ -10,7 +10,7 @@ import AptGetInstall from "./\_apt_get_install.mdx"; ## Quick Install -The full package of TDengine includes the server(taosd), taosAdapter for connecting with third-party systems and providing a RESTful interface, client driver(taosc), command-line program(CLI, taos) and some tools. For the current version, the server taosd and taosAdapter can only be installed and run on Linux systems. In the future taosd and taosAdapter will also be supported on Windows, macOS and other systems. The client driver taosc and TDengine CLI can be installed and run on Windows or Linux. In addition to the connectors of multiple languages, [RESTful interface](/reference/rest-api) is also provided by [taosAdapter](/reference/taosadapter) in TDengine. Prior to version 2.4.0.0, however, there is no taosAdapter, the RESTful interface is provided by the built-in HTTP service of taosd. +The full package of TDengine includes the server(taosd), taosAdapter for connecting with third-party systems and providing a RESTful interface, client driver(taosc), command-line program(CLI, taos) and some tools. For the current version, the server taosd and taosAdapter can only be installed and run on Linux systems. In the future taosd and taosAdapter will also be supported on Windows, macOS and other systems. The client driver taosc and TDengine CLI can be installed and run on Windows or Linux.
In addition to connectors for multiple languages, TDengine also provides a [RESTful interface](/reference/rest-api) through [taosAdapter](/reference/taosadapter). Prior to version 2.4.0.0, taosAdapter did not exist and the RESTful interface was provided by the built-in HTTP service of taosd.

TDengine supports X64/ARM64/MIPS64/Alpha64 hardware platforms, and will support ARM32, RISC-V and other CPU architectures in the future.

diff --git a/docs-en/07-develop/01-connect/index.md b/docs-en/07-develop/01-connect/index.md
index 2e886cb892..21b2149f44 100644
--- a/docs-en/07-develop/01-connect/index.md
+++ b/docs-en/07-develop/01-connect/index.md
@@ -1,7 +1,7 @@
---
sidebar_label: Connection
title: Connect to TDengine
-description: "This document explains how to establish connection to TDengine, and briefly introduce how to install and use TDengine connectors."
+description: "This document explains how to establish connections to TDengine, and briefly introduces how to install and use TDengine connectors."
---

import Tabs from "@theme/Tabs";
@@ -19,7 +19,7 @@ import InstallOnLinux from "../../14-reference/03-connector/\_windows_install.md
import VerifyLinux from "../../14-reference/03-connector/\_verify_linux.mdx";
import VerifyWindows from "../../14-reference/03-connector/\_verify_windows.mdx";

-Any application programs running on any kind of platforms can access TDengine through the REST API provided by TDengine. For the details, please refer to [REST API](/reference/rest-api/). Besides, application programs can use the connectors of multiple programming languages to access TDengine, including C/C++, Java, Python, Go, Node.js, C#, and Rust. This chapter describes how to establish connection to TDengine and briefly introduces how to install and use connectors. For details about the connectors, please refer to [Connectors](/reference/connector/)
+Any application program running on any kind of platform can access TDengine through the REST API provided by TDengine. For details, please refer to [REST API](/reference/rest-api/). Additionally, application programs can use the connectors of multiple programming languages including C/C++, Java, Python, Go, Node.js, C#, and Rust to access TDengine. This chapter describes how to establish a connection to TDengine and briefly introduces how to install and use connectors. For details about the connectors, please refer to [Connectors](/reference/connector/).

## Establish Connection

@@ -31,12 +31,12 @@ There are two ways for a connector to establish connections to TDengine:

Key differences:

1. The TDengine client driver (taosc) has the highest performance with all the features of TDengine like [Parameter Binding](/reference/connector/cpp#parameter-binding-api), [Subscription](/reference/connector/cpp#subscription-and-consumption-api), etc.
-2. The TDengine client driver (taosc) is not supported across all platforms, and applications built on taosc may need to be modified when updating taosc to newere versions.
+2. The TDengine client driver (taosc) is not supported across all platforms, and applications built on taosc may need to be modified when updating taosc to newer versions.
3. The REST connection is more accessible with cross-platform support; however, it results in a 30% performance downgrade.

## Install Client Driver taosc

-If you are choosing to use native connection and the application is not on the same host as TDengine server, the TDengine client driver taosc needs to be installed on the application host.
If choosing to use the REST connection or the application is on the same host as TDengine server, this step can be skipped. It's better to use same version of taosc as the server.
+If you are choosing to use the native connection and the application is not on the same host as TDengine server, the TDengine client driver taosc needs to be installed on the application host. If choosing to use the REST connection or the application is on the same host as TDengine server, this step can be skipped. It's better to use the same version of taosc as the TDengine server.

### Install

diff --git a/docs-en/07-develop/05-continuous-query.mdx b/docs-en/07-develop/05-continuous-query.mdx
index 97e32a17ff..6e7ec64307 100644
--- a/docs-en/07-develop/05-continuous-query.mdx
+++ b/docs-en/07-develop/05-continuous-query.mdx
@@ -4,15 +4,15 @@ description: "Continuous query is a query that's executed automatically accordin
title: "Continuous Query"
---

-Continuous query is a query that's executed automatically according to predefined frequency to provide aggregate query capability by time window, it's actually a simplified time driven stream computing. Continuous query can be performed on a table or STable in TDengine. The result of continuous query can be pushed to client or written back to TDengine. Each query is executed on a time window, which moves forward with time. The size of time window and the forward sliding time need to be specified with parameter `INTERVAL` and `SLIDING` respectively.
+Continuous query is a query that's executed automatically according to a predefined frequency to provide aggregate query capability by time window; it is essentially a simplified form of time-driven stream computing. Continuous query can be performed on a table or STable in TDengine. The result of continuous query can be pushed to clients or written back to TDengine. Each query is executed on a time window, which moves forward with time. The size of the time window and the forward sliding time need to be specified with the parameters `INTERVAL` and `SLIDING` respectively.

-Continuous query in TDengine is time driven, and can be defined using TAOS SQL directly without any extra operations. With continuous query, the result can be generated according to time window to achieve down sampling of original data. Once a continuous query is defined using TAOS SQL, the query is automatically executed at the end of each time window and the result is pushed back to client or written to TDengine.
+Continuous query in TDengine is time driven, and can be defined using TAOS SQL directly without any extra operations. With continuous query, the result can be generated according to a time window to achieve down sampling of the original data. Once a continuous query is defined using TAOS SQL, the query is automatically executed at the end of each time window and the result is pushed back to clients or written to TDengine.

There are some differences between continuous query in TDengine and time window computation in stream computing:

- The computation is performed and the result is returned in real time in stream computing, but the computation in continuous query is only started when a time window closes. For example, if the time window is 1 day, then the result will only be generated at 23:59:59.
-- If a historical data row is written in to a time widow for which the computation has been finished, the computation will not be performed again and the result will not be pushed to client again either.
If the result has been written into TDengine, there will be no update for the result.
-- In continuous query, if the result is pushed to client, the client status is not cached on the server side and Exactly-once is not guaranteed by the server either. If the client program crashes, a new time window will be generated from the time where the continuous query is restarted. If the result is written into TDengine, the data written into TDengine can be guaranteed as valid and continuous.
+- If a historical data row is written into a time window for which the computation has already finished, the computation will not be performed again and the result will not be pushed to client applications again. If the results have already been written into TDengine, they will not be updated.
+- In continuous query, if the result is pushed to a client, the client status is not cached on the server side and Exactly-once is not guaranteed by the server. If the client program crashes, a new time window will be generated from the time when the continuous query is restarted. If the result is written into TDengine, the data written into TDengine can be guaranteed as valid and continuous.

## Syntax

```sql
create table {table_name} as select ...
@@ -30,7 +30,7 @@ SLIDING: The time step for which the time window moves forward each time

## How to Use

-In this section the use case of meters will be used to introduce how to use continuous query. Assume the STable and sub tables have been created using below SQL statement.
+In this section the use case of meters will be used to introduce how to use continuous query. Assume the STable and subtables have been created using the SQL statements below.

```sql
create table meters (ts timestamp, current float, voltage int, phase float) tags (location binary(64), groupId int);
@@ -38,7 +38,7 @@ create table D1001 using meters tags ("Beijing.Chaoyang", 2);
create table D1002 using meters tags ("Beijing.Haidian", 2);
```

-The average voltage for each time window of one minute with 30 seconds as the length of moving forward can be retrieved using below SQL statement.
+The SQL statement below retrieves the average voltage for a one minute time window, with each time window moving forward by 30 seconds.

```sql
select avg(voltage) from meters interval(1m) sliding(30s);
```

@@ -50,13 +50,13 @@ Whenever the above SQL statement is executed, all the existing data will be comp

select avg(voltage) from meters where ts > {startTime} interval(1m) sliding(30s);
```

-Another easier way for same purpose is prepend `create table {tableName} as` before the `select`.
+An easier way to achieve this is to prepend `create table {tableName} as` before the `select`.

```sql
create table avg_vol as select avg(voltage) from meters interval(1m) sliding(30s);
```

-A table named as `avg_vol` will be created automatically, then every 30 seconds the `select` statement will be executed automatically on the data in the past 1 minutes, i.e. the latest time window, and the result is written into table `avg_vol`. The client program just needs to query from table `avg_vol`. For example:
+A table named `avg_vol` will be created automatically, then every 30 seconds the `select` statement will be executed automatically on the data in the past 1 minute, i.e. the latest time window, and the result is written into table `avg_vol`. The client program just needs to query from table `avg_vol`.
For example:

```sql
taos> select * from avg_vol;
@@ -68,16 +68,16 @@ taos> select * from avg_vol;
 2020-07-29 13:39:00.000 |      223.0800000 |
```

-Please be noted that the minimum allowed time window is 10 milliseconds, and no upper limit.
+Please note that the minimum allowed time window is 10 milliseconds, and there is no upper limit.

-Besides, it's allowed to specify the start and end time of continuous query. If the start time is not specified, the timestamp of the first original row will be considered as the start time; if the end time is not specified, the continuous will be performed infinitely, otherwise it will be terminated once the end time is reached. For example, the continuous query in below SQL statement will be started from now and terminated one hour later.
+It's possible to specify the start and end time of a continuous query. If the start time is not specified, the timestamp of the first row will be considered as the start time; if the end time is not specified, the continuous query will be performed indefinitely, otherwise it will be terminated once the end time is reached. For example, the continuous query in the SQL statement below will be started from now and terminated one hour later.

```sql
create table avg_vol as select avg(voltage) from meters where ts > now and ts <= now + 1h interval(1m) sliding(30s);
```

-`now` in above SQL statement stands for the time when the continuous query is created, not the time when the computation is actually performed. Besides, to avoid the trouble caused by the delay of original data as much as possible, the actual computation in continuous query is also started with a little delay. That means, once a time window closes, the computation is not started immediately. Normally, the result can only be available a little time later, normally within one minute, after the time window closes.
+`now` in the above SQL statement stands for the time when the continuous query is created, not the time when the computation is actually performed. To avoid the trouble caused by a delay in receiving data as much as possible, the actual computation in a continuous query is started after a little delay. That means, once a time window closes, the computation is not started immediately. Normally, the results are available shortly, usually within one minute, after the time window closes.

## How to Manage

-`show streams` command can be used in TDengine CLI `taos` to show all the continuous queries in the system, and `kill stream` can be used to terminate a continuous query.
+The `show streams` command can be used in the TDengine CLI `taos` to show all the continuous queries in the system, and `kill stream` can be used to terminate a continuous query.
diff --git a/docs-en/07-develop/06-subscribe.mdx b/docs-en/07-develop/06-subscribe.mdx
index 56f4ed83d8..964bb2fd8d 100644
--- a/docs-en/07-develop/06-subscribe.mdx
+++ b/docs-en/07-develop/06-subscribe.mdx
@@ -16,9 +16,9 @@ import CDemo from "./_sub_c.mdx";

## Introduction

-According to the time series nature of the data, data inserting in TDengine is similar to data publishing in message queues, they both can be considered as a new data record with timestamp is inserted into the system. Data is stored in ascending order of timestamp inside TDengine, so essentially each table in TDengine can be considered as a message queue.
+Due to the nature of time series data, inserting data into TDengine is similar to publishing data in message queues.
Data is stored in ascending order of timestamp inside TDengine, so each table in TDengine can essentially be considered as a message queue.

-Lightweight service for data subscription and pushing is built in TDengine. With the API provided by TDengine, client programs can used `select` statement to subscribe the data from one or more tables. The subscription and and state maintenance is performed on the client side, the client programs polls the server to check whether there is new data, and if so the new data will be pushed back to the client side. If the client program is restarted, where to start for retrieving new data is up to the client side.
+A lightweight service for data subscription and pushing is built in TDengine. With the API provided by TDengine, client programs can use `select` statements to subscribe to data from one or more tables. The subscription and state maintenance are performed on the client side; the client programs poll the server to check whether there is new data, and if so the new data will be pushed back to the client side. If the client program is restarted, where to start retrieving new data is up to the client side.

There are 3 major APIs related to subscription provided in the TDengine client driver.

@@ -28,9 +28,9 @@ taos_consume
taos_unsubscribe
```

-For more details about these API please refer to [C/C++ Connector](/reference/connector/cpp). Their usage will be introduced below using the use case of meters, in which the schema of STable and sub tables please refer to the previous section "continuous query". Full sample code can be found [here](https://github.com/taosdata/TDengine/blob/master/examples/c/subscribe.c).
+For more details about these APIs please refer to [C/C++ Connector](/reference/connector/cpp). Their usage will be introduced below using the use case of meters, in which the schema of the STable and subtables from the previous section [Continuous Query](/develop/continuous-query) is used. Full sample code can be found [here](https://github.com/taosdata/TDengine/blob/master/examples/c/subscribe.c).

-If we want to get notification and take some actions if the current exceeds a threshold, like 10A, from some meters, there are two ways:
+If we want to get a notification and take some action when the current from some meters exceeds a threshold, like 10A, there are two ways:

The first way is to query on each sub table and record the last timestamp matching the criteria, then after some time query on the data later than the recorded timestamp and repeat this process. The SQL statements for this way are as below.

```sql
select * from D1001 where ts > {last_timestamp1} and current > 10;
select * from D1002 where ts > {last_timestamp2} and current > 10;
...
```

-The above way works, but the problem is that the number of `select` statements increases with the number of meters grows. Finally the performance of both client side and server side will be unacceptable once the number of meters grows to a big enough number.
+The above way works, but the problem is that the number of `select` statements increases with the number of meters. Additionally, the performance of both client side and server side will be unacceptable once the number of meters grows to a big enough number.
A better way is to query on the STable, only one `select` is enough regardless of the number of meters, like below: @@ -48,7 +48,7 @@ A better way is to query on the STable, only one `select` is enough regardless o select * from meters where ts > {last_timestamp} and current > 10; ``` -However, how to choose `last_timestamp` becomes a new problem if using this way. Firstly, the timestamp when the data is generated is different from the timestamp when the data is inserted into the database, sometimes the difference between them may be very big. Secondly, the time when the data from different meters may arrives at the database may be different too. If the timestamp of the "slowest" meter is used as `last_timestamp` in the query, the data from other meters may be selected repeatedly; but if the timestamp of the "fasted" meters is used as `last_timestamp`, some data from other meters may be missed. +However, this presents a new problem in how to choose `last_timestamp`. First, the timestamp when the data is generated is different from the timestamp when the data is inserted into the database, sometimes the difference between them may be very big. Second, the time when the data from different meters arrives at the database may be different too. If the timestamp of the "slowest" meter is used as `last_timestamp` in the query, the data from other meters may be selected repeatedly; but if the timestamp of the "fastest" meter is used as `last_timestamp`, some data from other meters may be missed. All the problems mentioned above can be resolved thoroughly using subscription provided by TDengine. @@ -75,19 +75,19 @@ The parameter `sql` is a `select` statement in which `where` clause can be used select * from meters where current > 10; ``` -Please be noted that, all the data will be processed because no start time is specified. If only the data from one day ago needs to be processed, a time related condition can be added: +Please note that, all the data will be processed because no start time is specified. If only the data from one day ago needs to be processed, a time related condition can be added: ```sql select * from meters where ts > now - 1d and current > 10; ``` -The parameter `topic` is the name of the subscription, it needs to be guaranteed unique in the client program, but it's not necessary to be globally unique because subscription is implemented in the APIs on client side. +The parameter `topic` is the name of the subscription, it needs to be guaranteed unique in the client program, but it's not necessary to be globally unique because subscription is implemented in the APIs on the client side. -If the subscription named as `topic` doesn't exist, parameter `restart` would be ignored. If the subscription named as `topic` has been created before by the client program which then exited, when the client program is restarted to use this `topic`, parameter `restart` is used to determine retrieving data from beginning or from the last point where the subscription was broken. If the value of `restart` is **true** (i.e. a non-zero value), the data will be retrieved from beginning, or if it is **false** (i.e. zero), the data already consumed before will not be processed again. +If the subscription named as `topic` doesn't exist, the parameter `restart` will be ignored. 
If the subscription named as `topic` has been created before by the client program, when the client program is restarted with the subscription named `topic`, parameter `restart` is used to determine whether to retrieve data from the beginning or from the last point where the subscription was broken. If the value of `restart` is **true** (i.e. a non-zero value), the data will be retrieved from the beginning, or if it is **false** (i.e. zero), the data already consumed before will not be processed again.

-The last parameter of `taos_subscribe` is the polling interval in unit of millisecond. In sync mode, if the time difference between two continuous invocations to `taos_consume` is smaller than the interval specified by `taos_subscribe`, `taos_consume` would be blocked until the interval is reached. In async mode, this interval is the minimum interval between two invocations to the call back function.
+The last parameter of `taos_subscribe` is the polling interval in units of milliseconds. In sync mode, if the time difference between two continuous invocations to `taos_consume` is smaller than the interval specified by `taos_subscribe`, `taos_consume` will be blocked until the interval is reached. In async mode, this interval is the minimum interval between two invocations of the callback function.

-The last second parameter of `taos_subscribe` is used to pass arguments to the call back function. `taos_subscribe` doesn't process this parameter and simply passes it to the call back function. This parameter is simply ignored in sync mode.
+The second to last parameter of `taos_subscribe` is used to pass arguments to the callback function. `taos_subscribe` doesn't process this parameter and simply passes it to the callback function. This parameter is simply ignored in sync mode.

After a subscription is created, its data can be consumed and processed; below is the sample code showing how to consume data in sync mode, in the `else` branch of `if (async)`.

@@ -149,22 +149,22 @@ void subscribe_callback(TAOS_SUB* tsub, TAOS_RES *res, void* param, int code) {
taos_unsubscribe(tsub, keep);
```

-The second parameter `keep` is used to specify whether to keep the subscription progress on the client sde. If it is **false**, i.e. **0**, then subscription will be restarted from beginning regardless of the `restart` parameter's value in when `taos_subscribe` is invoked again. The subscription progress information is stored in _{DataDir}/subscribe/_ , under which there is a file with same name as `topic` for each subscription, the subscription will be restarted from beginning if the corresponding progress file is removed.
+The second parameter `keep` is used to specify whether to keep the subscription progress on the client side. If it is **false**, i.e. **0**, then the subscription will be restarted from the beginning regardless of the `restart` parameter's value when `taos_subscribe` is invoked again. The subscription progress information is stored in _{DataDir}/subscribe/_ , under which there is a file with the same name as `topic` for each subscription; the subscription will be restarted from the beginning if the corresponding progress file is removed.

Now let's see the effect of the above sample code, assuming the prerequisites below have been met.
- The sample code has been downloaded to local system - TDengine has been installed and launched properly on same system -- The database, STable, sub tables required in the sample code have been ready +- The database, STable, and subtables required in the sample code are ready -It's ready to launch below command in the directory where the sample code resides to compile and start the program. +Launch the command below in the directory where the sample code resides to compile and start the program. ```bash make ./subscribe -sql='select * from meters where current > 10;' ``` -After the program is started, open another terminal and launch TDengine CLI `taos`, then use below SQL commands to insert a row whose current is 12A into table **D1001**. +After the program is started, open another terminal and launch TDengine CLI `taos`, then use the below SQL commands to insert a row whose current is 12A into table **D1001**. ```sql use test; @@ -232,7 +232,7 @@ Query OK, 5 row(s) in set (0.004896s) ### Run the Examples -The example programs firstly consume all historical data matching the criteria. +The example programs first consume all historical data matching the criteria. ```bash ts: 1597464000000 current: 12.0 voltage: 220 phase: 1 location: Beijing.Chaoyang groupid : 2 diff --git a/docs-en/07-develop/07-cache.md b/docs-en/07-develop/07-cache.md index 13db6c3638..c99612ecfb 100644 --- a/docs-en/07-develop/07-cache.md +++ b/docs-en/07-develop/07-cache.md @@ -10,9 +10,9 @@ Caching the latest data provides the capability of retrieving data in millisecon The memory space used by TDengine cache is fixed in size, according to the configuration based on application requirement and system resources. Independent memory pool is allocated for and managed by each vnode (virtual node) in TDengine, there is no sharing of memory pools between vnodes. All the tables belonging to a vnode share all the cache memory of the vnode. -Memory pool is divided into blocks and data is stored in row format in memory and each block follows FIFO policy. The size of each block is determined by configuration parameter `cache`, the number of blocks for each vnode is determined by `blocks`. For each vnode, the total cache size is `cache * blocks`. It's better to set the size of each block to hold at least tends of rows. +Memory pool is divided into blocks and data is stored in row format in memory and each block follows FIFO policy. The size of each block is determined by configuration parameter `cache`, the number of blocks for each vnode is determined by `blocks`. For each vnode, the total cache size is `cache * blocks`. A cache block needs to ensure that each table can store at least dozens of records to be efficient. -`last_row` function can be used to retrieve the last row of a table or a STable to quickly show the current state of devices on monitoring screen. For example below SQL statement retrieves the latest voltage of all meters in Chaoyang district of Beijing. +`last_row` function can be used to retrieve the last row of a table or a STable to quickly show the current state of devices on monitoring screen. For example the below SQL statement retrieves the latest voltage of all meters in Chaoyang district of Beijing. 
```sql select last_row(voltage) from meters where location='Beijing.Chaoyang'; diff --git a/docs-en/07-develop/index.md b/docs-en/07-develop/index.md index 122dd0d870..e3f55f2907 100644 --- a/docs-en/07-develop/index.md +++ b/docs-en/07-develop/index.md @@ -2,15 +2,15 @@ title: Developer Guide --- -To develop an application using TDengine to process time-series data, we recommend taking the following steps: +To develop an application to process time-series data using TDengine, we recommend taking the following steps: -1. Choose the way for connection to TDengine. No matter what programming language you use, you can always use the REST interface to access TDengine, but you can also use connectors unique to each programming language. -2. Design the data model based on your own application scenarios. Learn the [concepts](/concept/) of TDengine including "one table for one data collection point" and the "super table" concept; learn about static labels, collected metrics, and subtables. According to the data characteristics, you may decide to create one or more databases, and you should design the STable schema to fit your data. -3. Decide how to insert data. TDengine supports writing using standard SQL, but also supports schemaless writing, so that data can be written directly without creating tables manually. -4. Based on business requirements, find out what SQL query statements need to be written. +1. Choose the method to connect to TDengine. No matter what programming language you use, you can always use the REST interface to access TDengine, but you can also use connectors unique to each programming language. +2. Design the data model based on your own use cases. Learn the [concepts](/concept/) of TDengine including "one table for one data collection point" and the "super table" (STable) concept; learn about static labels, collected metrics, and subtables. Depending on the characteristics of your data and your requirements, you may decide to create one or more databases, and you should design the STable schema to fit your data. +3. Decide how you will insert data. TDengine supports writing using standard SQL, but also supports schemaless writing, so that data can be written directly without creating tables manually. +4. Based on business requirements, find out what SQL query statements need to be written. You may be able to repurpose any existing SQL. 5. If you want to run real-time analysis based on time series data, including various dashboards, it is recommended that you use the TDengine continuous query feature instead of deploying complex streaming processing systems such as Spark or Flink. 6. If your application has modules that need to consume inserted data, and they need to be notified when new data is inserted, it is recommended that you use the data subscription function provided by TDengine without the need to deploy Kafka. -7. In many scenarios (such as fleet management), the application needs to obtain the latest status of each data collection point. It is recommended that you use the cache function of TDengine instead of deploying Redis separately. +7. In many use cases (such as fleet management), the application needs to obtain the latest status of each data collection point. It is recommended that you use the cache function of TDengine instead of deploying Redis separately. 8. If you find that the SQL functions of TDengine cannot meet your requirements, then you can use user-defined functions to solve the problem. This section is organized in the order described above. 
For ease of understanding, TDengine provides sample code for each supported programming language for each function. If you want to learn more about the use of SQL, please read the [SQL manual](/taos-sql/). For a more in-depth understanding of the use of each connector, please read the [Connector Reference Guide](/reference/connector/). If you also want to integrate TDengine with third-party systems, such as Grafana, please refer to the [third-party tools](/third-party/).

diff --git a/docs-en/10-cluster/01-deploy.md b/docs-en/10-cluster/01-deploy.md
index 8c921797ec..844a026ff6 100644
--- a/docs-en/10-cluster/01-deploy.md
+++ b/docs-en/10-cluster/01-deploy.md
@@ -6,15 +6,15 @@ title: Deployment

### Step 1

-The FQDN of all hosts need to be setup properly, all the FQDNs need to be configured in the /etc/hosts of each host. It must be guaranteed that each FQDN can be accessed (by ping, for example) from any other hosts.
+The FQDN of all hosts needs to be set up properly, and all the FQDNs need to be configured in the /etc/hosts of each host. It must be confirmed that each FQDN can be accessed (by ping, for example) from any other host.

-On each host command `hostname -f` can be executed to get the hostname. `ping` command can be executed on each host to check whether any other host is accessible from it. If any host is not accessible, the network configuration, like /etc/hosts or DNS configuration, need to be checked and revised to make any two hosts accessible to each other.
+On each host the command `hostname -f` can be executed to get the hostname. The `ping` command can be executed on each host to check whether any other host is accessible from it. If any host is not accessible, the network configuration, like /etc/hosts or DNS configuration, needs to be checked and revised to make any two hosts accessible to each other.

:::note

-- The host where the client program runs also needs to configured properly for FQDN, to make sure all hosts for client or server can be accessed from any other. In other words, the hosts where the client is running are also considered as a part of the cluster.
+- The host where the client program runs also needs to be configured properly for FQDN, to make sure all hosts for client or server can be accessed from any other. In other words, the hosts where the client is running are also considered as a part of the cluster.

-- It's suggested to disable the firewall for all hosts in the cluster. At least TCP/UDP for port 6030~6042 need to be open if firewall is enabled.
+- It's suggested to disable the firewall for all hosts in the cluster. At least TCP/UDP for ports 6030~6042 need to be open if a firewall is enabled.

:::

@@ -28,7 +28,7 @@ Now it's time to install TDengine on all hosts without starting `taosd`, the ver

### Step 4

-Now each physical node (referred to as `dnode` hereinafter, it's abbreviation for "data node") of TDengine need to be configured properly. Please be noted that one dnode doesn't stand for one host, multiple TDengine nodes can be started on single host as long as they are configured properly without conflicting. More specifically each instance of the configuration file `taos.cfg` stands for a dnode. Assuming the first dnode of TDengine cluster is "h1.taosdata.com:6030", its `taos.cfg` is configured as following.
+Now each physical node (referred to as `dnode` hereinafter, an abbreviation for "data node") of TDengine needs to be configured properly.
Please note that one dnode doesn't stand for one host; multiple TDengine nodes can be started on a single host as long as they are configured properly without conflicting. More specifically, each instance of the configuration file `taos.cfg` stands for a dnode. Assuming the first dnode of the TDengine cluster is "h1.taosdata.com:6030", its `taos.cfg` is configured as follows.

```c
// firstEp is the end point to connect to when any dnode starts
@@ -44,9 +44,9 @@ serverPort 6030
#arbitrator ha.taosdata.com:6042
```

-`firstEp` and `fqdn` must be configured properly. In `taos.cfg` of all dnodes in TDengine cluster, `firstEp` must be configured to point to same address, i.e. the first dnode of the cluster. `fqdn` and `serverPort` compose the address of each node itself. If you want to start multiple TDengine dnodes on a single host, please also make sure all other configurations like `dataDir`, `logDir`, and other resources related parameters are not conflicting.
+`firstEp` and `fqdn` must be configured properly. In `taos.cfg` of all dnodes in the TDengine cluster, `firstEp` must be configured to point to the same address, i.e. the first dnode of the cluster. `fqdn` and `serverPort` compose the address of each node itself. If you want to start multiple TDengine dnodes on a single host, please make sure all other configurations like `dataDir`, `logDir`, and other resource related parameters are not conflicting.

-For all the dnodes in a TDengine cluster, below parameters must be configured as exactly same, any node whose configuration is different from dnodes already in the cluster can't join the cluster.
+For all the dnodes in a TDengine cluster, the below parameters must be configured exactly the same; any node whose configuration is different from dnodes already in the cluster can't join the cluster.

| **#** | **Parameter** | **Definition** |
| ----- | ------------------ | --------------------------------------------------------------------------------- |
@@ -61,7 +61,7 @@ For all the dnodes in a TDengine cluster, below parameters must be configured as
| 9 | maxVgroupsPerDb | Maximum number of vgroups that can be used by each DB |

:::note
-Prior to version 2.0.19.0, besides the above parameters, `locale` and `charset` must be configured as same too for each dnode.
+Prior to version 2.0.19.0, besides the above parameters, `locale` and `charset` must also be configured the same for each dnode.

:::

@@ -92,7 +92,7 @@ From the above output, it is shown that the end point of the started dnode is "h

There are a few steps necessary to add other dnodes in the cluster.

-Firstly, start `taosd` as instructed in [Get Started](/get-started/), assuming it's for the second dnode. Before starting `taosd`, please making sure the configuration is correct, especially `firstEp`, `FQDN` and `serverPort`, `firstEp` must be same as the dnode shown in the section "Start First DNODE", i.e. "h1.taosdata.com" in this example.
+First, start `taosd` as instructed in [Get Started](/get-started/), assuming it's for the second dnode. Before starting `taosd`, please make sure the configuration is correct, especially `firstEp`, `FQDN` and `serverPort`; `firstEp` must be the same as the dnode shown in the section "Start First DNODE", i.e. "h1.taosdata.com" in this example.

Then, on the first dnode, use TDengine CLI `taos` to execute the below command to add the end point of the dnode to the cluster. In the command, "fqdn:port" should be quoted using double quotes.
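For illustration, a sketch of the command, assuming the second dnode's end point is h2.taosdata.com:6030 (a hypothetical address), would be:

```sql
-- "h2.taosdata.com:6030" is an assumed end point for the second dnode
CREATE DNODE "h2.taosdata.com:6030";
```

After it is executed, `SHOW DNODES` can be used to verify that the new end point appears in the list.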
@@ -109,6 +109,6 @@ SHOW DNODES; If the status of the newly added dnode is offline, please check: - Whether the `taosd` process is running properly or not -- In the log file `taosdlog.0` to see whether the fqdn and port are correct or not +- In the log file `taosdlog.0` to see whether the fqdn and port are correct The above process can be repeated to add more dnodes in the cluster. diff --git a/docs-en/10-cluster/02-cluster-mgmt.md b/docs-en/10-cluster/02-cluster-mgmt.md index 3fcd68b29c..9d717be236 100644 --- a/docs-en/10-cluster/02-cluster-mgmt.md +++ b/docs-en/10-cluster/02-cluster-mgmt.md @@ -3,7 +3,7 @@ sidebar_label: Operation title: Manage DNODEs --- -It has been introduced that how to deploy and start a cluster from scratch. Once a cluster is ready, the dnode status in the cluster can be shown at any time, new dnode can be added to scale out the cluster, an existing dnode can be removed, even load balance can be performed manually.\ +The previous section [Deployment](/cluster/deploy) introduced how to deploy and start a cluster from scratch. Once a cluster is ready, the dnode status in the cluster can be shown at any time, new dnode can be added to scale out the cluster, an existing dnode can be removed, even load balance can be performed manually. :::note All the commands to be introduced in this chapter need to be run through TDengine CLI, sometimes it's necessary to use root privilege. @@ -12,7 +12,7 @@ All the commands to be introduced in this chapter need to be run through TDengin ## Show DNODEs -below command can be executed in TDengine CLI `taos` to list all dnodes in the cluster, including ID, end point (fqdn:port), status (ready, offline), number of vnodes, number of free vnodes, etc. It's suggested to execute this command to check after adding or removing a dnode. +The below command can be executed in TDengine CLI `taos` to list all dnodes in the cluster, including ID, end point (fqdn:port), status (ready, offline), number of vnodes, number of free vnodes, etc. It's suggested to execute this command to check after adding or removing a dnode. ```sql SHOW DNODES; @@ -39,7 +39,7 @@ USE SOME_DATABASE; SHOW VGROUPS; ``` -The example output is as below: +The example output is below: ``` taos> show dnodes; @@ -87,7 +87,7 @@ taos> show dnodes; Query OK, 2 row(s) in set (0.001017s) ``` -It can be seen that the status of the new dnode is "offline", once the dnode is started and connects the firstEp of the cluster, execute the command again and get below example output, from which it can be seen that two dnodes are both in "ready" status. +It can be seen that the status of the new dnode is "offline", once the dnode is started and connects the firstEp of the cluster, execute the command again and get the example output below, from which it can be seen that two dnodes are both in "ready" status. ``` taos> show dnodes; @@ -100,7 +100,7 @@ Query OK, 2 row(s) in set (0.001316s) ## Drop DNODE -Launch TDengine CLI `taos` and execute the command below to drop or remove a dnode from the cluster. In the command, `dnodeId` can be gotten from `show dnodes`. +Launch TDengine CLI `taos` and execute the command below to drop or remove a dnode from the cluster. In the command, you can get `dnodeId` from `show dnodes`. 
```sql
DROP DNODE "fqdn:port";
@@ -112,7 +112,7 @@ or
DROP DNODE dnodeId;
```

-The example output is as below:
+The example output is below:

```
taos> show dnodes;
@@ -139,7 +139,7 @@ In the above example, when `show dnodes` is executed the first time, two dnodes

- Once a dnode is dropped, it can't rejoin the cluster. To rejoin, the dnode needs to be deployed again after cleaning up the data directory. Normally, before dropping a dnode, the data belonging to the dnode needs to be migrated to another place.
- Please note that `drop dnode` is different from stopping the `taosd` process. `drop dnode` just removes the dnode out of the TDengine cluster. Only after a dnode is dropped can the corresponding `taosd` process be stopped.
- Once a dnode is dropped, other dnodes in the cluster will be notified of the drop and will not accept the request from the dropped dnode.
-dnodeID is allocated automatically and can't be interfered manually. dnodeID is generated in ascending order without duplication.
+dnodeID is allocated automatically and can't be manually modified. dnodeID is generated in ascending order without duplication.

:::

@@ -155,7 +155,7 @@ ALTER DNODE <source-dnodeId> BALANCE "VNODE:<vgId>-DNODE:<dest-dnodeId>";

In the above command, `source-dnodeId` is the original dnodeId where the vnode resides, `dest-dnodeId` specifies the target dnode. vgId (vgroup ID) can be shown by `SHOW VGROUPS`.

-Firstly `show vgroups` is executed to show the vgroup distribution.
+First `show vgroups` is executed to show the vgroup distribution.

```
taos> show vgroups;
@@ -172,7 +172,7 @@ taos> show vgroups;
 Query OK, 8 row(s) in set (0.001314s)
```

-It can be seen that there are 5 vgroups in dnode 3 and 3 vgroups in node 1, now we want to move vgId 18 from dnode 3 to dnode 1. Execute below command in `taos`
+It can be seen that there are 5 vgroups in dnode 3 and 3 vgroups in dnode 1; now we want to move vgId 18 from dnode 3 to dnode 1. Execute the below command in `taos`.

```
taos> alter dnode 3 balance "vnode:18-dnode:1";
@@ -207,7 +207,7 @@ It can be seen from above output that vgId 18 has been moved from dnode 3 to dno

:::note

- Manual load balancing can only be performed when the automatic load balancing is disabled, i.e. `balance` is set to 0.
-- Only vnode in normal state, i.e. master or slave, can be moved. vnode can't moved when its in status offline, unsynced or syncing.
+- Only a vnode in normal state, i.e. master or slave, can be moved. A vnode can't be moved when it is in offline, unsynced or syncing status.
- Before moving a vnode, it's necessary to make sure the target dnode has enough resources: CPU, memory and disk.

:::

diff --git a/docs-en/10-cluster/03-ha-and-lb.md b/docs-en/10-cluster/03-ha-and-lb.md
index 53c95be9e9..6e0c386abe 100644
--- a/docs-en/10-cluster/03-ha-and-lb.md
+++ b/docs-en/10-cluster/03-ha-and-lb.md
@@ -7,19 +7,19 @@ title: High Availability and Load Balancing

High availability of vnode and mnode can be achieved through replicas in TDengine.

-The number of vnodes is associated with each DB, there can be multiple DBs in a TDengine cluster. For the purpose of operation, different number of replicas can be configured properly for each DB. When creating a database, the parameter `replica` is used to specify the number of replicas, the default value is 1. With single replica, the high availability of the system can't be guaranteed. Whenever one node is down, data service would be unavailable.
The number of dnodes in the cluster must NOT be lower than the number of replicas set for any DB, otherwise the `create table` operation would fail with error "more dnodes are needed". Below SQL statement is used to create a database named as "demo" with 3 replicas. +The number of vnodes is associated with each DB, there can be multiple DBs in a TDengine cluster. A different number of replicas can be configured for each DB. When creating a database, the parameter `replica` is used to specify the number of replicas, the default value is 1. With single replica, the high availability of the system can't be guaranteed. Whenever one node is down, the data service will be unavailable. The number of dnodes in the cluster must NOT be lower than the number of replicas set for any DB, otherwise the `create table` operation would fail with error "more dnodes are needed". The SQL statement below is used to create a database named "demo" with 3 replicas. ```sql CREATE DATABASE demo replica 3; ``` -The data in a DB is divided into multiple shards and stored in multiple vgroups. The number of vnodes in each group is determined by the number of replicas set for the DB. The vnodes in each vgroups store exactly same data. For the purpose of high availability, the vnodes in a vgroup must be located in different dnodes on different hosts. As long as over half of the vnodes in a vgroup are in online state, the vgroup is able to serve data access. Otherwise the vgroup can't handle any data access for reading or inserting data. +The data in a DB is divided into multiple shards and stored in multiple vgroups. The number of vnodes in each vgroup is determined by the number of replicas set for the DB. The vnodes in each vgroup store exactly the same data. For the purpose of high availability, the vnodes in a vgroup must be located in different dnodes on different hosts. As long as over half of the vnodes in a vgroup are in an online state, the vgroup is able to provide data access. Otherwise the vgroup can't provide data access for reading or inserting data. There may be data for multiple DBs in a dnode. Once a dnode is down, multiple DBs may be affected. However, it's hard to say the cluster is guaranteed to work properly as long as over half of dnodes are online because vnodes are introduced and there may be complex mapping between vnodes and dnodes. ## High Availability of Mnode -Each TDengine cluster is managed by `mnode`, which is a module of `taosd`. For the high availability of mnode, multiple mnodes can be configured using system parameter `numOfMNodes`, the valid time range is [1,3]. To make sure the data consistency between mnodes, the data replication between mnodes is performed in synchronous way. +Each TDengine cluster is managed by `mnode`, which is a module of `taosd`. For the high availability of mnode, multiple mnodes can be configured using system parameter `numOfMNodes`, the valid time range is [1,3]. To make sure the data consistency between mnodes, the data replication between mnodes is performed in a synchronous way. There may be multiple dnodes in a cluster, but only one mnode can be started in each dnode. Which one or ones of the dnodes will be designated as mnodes is automatically determined by TDengine according to the cluster configuration and system resources. Command `show mnodes` can be executed in TDengine `taos` to show the mnodes in the cluster. 
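As a minimal illustration (the output columns are omitted here), checking the current mnodes is a single statement in the TDengine CLI:

```sql
SHOW MNODES;
```

Its output lists each mnode together with its role, which is useful when verifying that the `numOfMNodes` setting has taken effect.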
@@ -32,19 +32,19 @@ The end point and role/status (master, slave, unsynced, or offline) of all mnode For the high availability of mnode, `numOfMnodes` needs to be configured to 2 or a higher value. Because the data consistency between mnodes must be guaranteed, the replica confirmation parameter `quorum` is set to 2 automatically if `numOfMNodes` is set to 2 or higher. :::note -If high availability is important for your system, both vnode and mnode must be configured to have multiple replicas. How to configure for them are different and have been described. +If high availability is important for your system, both vnode and mnode must be configured to have multiple replicas. ::: ## Load Balance -Load balance will be triggered in 3 cades without manual intervention. +Load balance will be triggered in 3 cases without manual intervention. - When a new dnode is joined in the cluster, automatic load balancing may be triggered, some data from some dnodes may be transferred to the new dnode automatically. - When a dnode is removed from the cluster, the data from this dnode will be transferred to other dnodes automatically. - When a dnode is too hot, i.e. too much data has been stored in it, automatic load balancing may be triggered to migrate some vnodes from this dnode to other dnodes. -- :::tip - Automatic load balancing is controlled by parameter `balance`, 0 means disabled and 1 means enabled. +:::tip +Automatic load balancing is controlled by parameter `balance`, 0 means disabled and 1 means enabled. ::: @@ -54,7 +54,7 @@ When a dnode is offline, it can be detected by the TDengine cluster. There are t - The dnode becomes online again before the threshold configured in `offlineThreshold` is reached, it is still in the cluster and data replication is started automatically. The dnode can work properly after the data syncup is finished. -- If the dnode has been offline over the threshold configured in `offlineThreshold` in `taos.cfg`, the dnode will be removed from the cluster automatically. System alert will be generated and automatic load balancing will be triggered too if `balance` is set to 1. When the removed dnode is restarted and becomes online, it will not be joined in the cluster automatically, it can only be joined manually by the system operator. +- If the dnode has been offline over the threshold configured in `offlineThreshold` in `taos.cfg`, the dnode will be removed from the cluster automatically. A system alert will be generated and automatic load balancing will be triggered if `balance` is set to 1. When the removed dnode is restarted and becomes online, it will not join in the cluster automatically, it can only be joined manually by the system operator. :::note If all the vnodes in a vgroup (or mnodes in mnode group) are in offline or unsynced status, the master node can only be voted after all the vnodes or mnodes in the group become online and can exchange status, then the vgroup (or mnode group) is able to provide service. @@ -63,15 +63,15 @@ If all the vnodes in a vgroup (or mnodes in mnode group) are in offline or unsyn ## Arbitrator -If the number of replicas is set to an even number like 2, when half of the vnodes in a vgroup don't work master node can't be voted. Similar case is also applicable to mnode if the number of mnodes is set to an even number like 2. +If the number of replicas is set to an even number like 2, when half of the vnodes in a vgroup don't work a master node can't be voted. 
A similar case is also applicable to mnode if the number of mnodes is set to an even number like 2.

-To resolve this problem, a new arbitrator component named `tarbitrator`, abbreviated for TDengine Arbitrator, was introduced. Arbitrator simulates a vnode or mnode but it's only responsible for network communication and doesn't handle any actual data access. With Arbitrator, any vgroup or mnode group can be considered as having number of member nodes and master node can be selected.
+To resolve this problem, a new arbitrator component named `tarbitrator`, an abbreviation of TDengine Arbitrator, was introduced. The Arbitrator simulates a vnode or mnode but is only responsible for network communication and doesn't handle any actual data access. As long as more than half of the vnodes or mnodes, including the Arbitrator, are available, the vnode group or mnode group can provide data insertion or query services normally.

-Normally, it's suggested to configure replica number of each DB or system parameter `numOfMNodes` to an odd number. However, if a user is very sensitive to storage space, replica number of 2 plus arbitrator component can be used to achieve both lower cost of storage space and high availability.
+Normally, it's suggested to configure the replica number of each DB or the system parameter `numOfMNodes` to an odd number. However, if a user is very sensitive to storage space, a replica number of 2 plus the arbitrator component can be used to achieve both lower cost of storage space and high availability.

The Arbitrator component is installed with the server package. For details about how to install, please refer to [Install](/operation/pkg-install). The `-p` parameter of `tarbitrator` can be used to specify the port on which it provides service.

-In the configuration file `taos.cfg` of each dnode, parameter `arbitrator` needs to be configured to the end point of the `tarbitrator` process. arbitrator component will be used automatically if the replica is configured to an even number and will be ignored if the replica is configured to an odd number.
+In the configuration file `taos.cfg` of each dnode, the parameter `arbitrator` needs to be configured to the end point of the `tarbitrator` process. The Arbitrator component will be used automatically if the replica is configured to an even number and will be ignored if the replica is configured to an odd number.

The Arbitrator can be shown by executing a command in TDengine CLI `taos`, with its role shown as "arb".

diff --git a/docs-en/10-cluster/index.md b/docs-en/10-cluster/index.md
index a19a54e01d..5a45a2ce7b 100644
--- a/docs-en/10-cluster/index.md
+++ b/docs-en/10-cluster/index.md
@@ -3,7 +3,7 @@ title: Cluster
keywords: ["cluster", "high availability", "load balance", "scale out"]
---

-TDengine has a native distributed design and provides the ability to scale out. A few of nodes can form a TDengine cluster. If you need to get higher processing power, you just need to add more nodes into the cluster. TDengine uses virtual node technology to virtualize a node into multiple virtual nodes to achieve load balancing. At the same time, TDengine can group virtual nodes on different nodes into virtual node groups, and use the replication mechanism to ensure the high availability of the system. The cluster feature of TDengine is completely open source.
+TDengine has a native distributed design and provides the ability to scale out. A few nodes can form a TDengine cluster. If you need higher processing power, you just need to add more nodes into the cluster.
TDengine uses virtual node technology to virtualize a node into multiple virtual nodes to achieve load balancing. At the same time, TDengine can group virtual nodes on different nodes into virtual node groups, and use the replication mechanism to ensure the high availability of the system. The cluster feature of TDengine is completely open source.

This chapter mainly introduces cluster deployment, maintenance, and how to achieve high availability and load balancing.

diff --git a/docs-en/12-taos-sql/08-interval.md b/docs-en/12-taos-sql/08-interval.md
index 5cc3fa8cb4..2044ff4f61 100644
--- a/docs-en/12-taos-sql/08-interval.md
+++ b/docs-en/12-taos-sql/08-interval.md
@@ -10,7 +10,7 @@ Window related clauses are used to divide the data set to be queried into subset

`INTERVAL` clause is used to generate time windows of the same time interval, `SLIDING` is used to specify the time step for which the time window moves forward. The query is performed on one time window each time, and the time window moves forward with time. When defining a continuous query, both the size of the time window and the step of forward sliding time need to be specified. As shown in the figure below, [t0s, t0e], [t1s, t1e], [t2s, t2e] are respectively the time range of three time windows on which continuous queries are executed. The time step for which the time window moves forward is marked by `sliding time`. Query, filter and aggregate operations are executed on each time window respectively. When the time step specified by `SLIDING` is the same as the time interval specified by `INTERVAL`, the sliding time window is actually a flip time window.

-![Time Window](/img/sql/timewindow-1.png)
+![Time Window](./timewindow-1.webp)

`INTERVAL` and `SLIDING` should be used with aggregate functions and selection functions. Below SQL statement is illegal because no aggregate or selection function is used with `INTERVAL`.

@@ -30,7 +30,7 @@ When the time length specified by `SLIDING` is same as that specified by `INTERV

In case of using integer, bool, or string to represent the device status at a moment, the continuous rows with the same status belong to the same status window. Once the status changes, the status window closes. As shown in the following figure, there are two status windows according to status, [2019-04-28 14:22:07,2019-04-28 14:22:10] and [2019-04-28 14:22:11,2019-04-28 14:22:12]. Status window is not applicable to STable for now.

-![Status Window](/img/sql/timewindow-3.png)
+![Status Window](./timewindow-3.webp)

`STATE_WINDOW` is used to specify the column based on which to define status window, for example:

@@ -46,7 +46,7 @@ SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, tol_val);

The primary key, i.e. timestamp, is used to determine which session window a row belongs to. If the time interval between two adjacent rows is within the time range specified by `tol_val`, they belong to the same session window; otherwise they belong to two different time windows. As shown in the figure below, if the limit of time interval for the session window is specified as 12 seconds, then the 6 rows in the figure constitute 2 time windows, [2019-04-28 14:22:10,2019-04-28 14:22:30] and [2019-04-28 14:23:10,2019-04-28 14:23:30], because the time difference between 2019-04-28 14:22:30 and 2019-04-28 14:23:10 is 40 seconds, which exceeds the time interval limit of 12 seconds.
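To make the worked example concrete, a session-window query with a 12 second gap threshold, reusing the `temp_tb_1` table from the example above, could be sketched as:

```sql
-- 12s is the gap threshold from the example; temp_tb_1 is an illustrative table
SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, 12s);
```

Rows whose timestamps are within 12 seconds of the previous row fall into the same session window; a larger gap starts a new window, producing the two windows described above.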
-![Session Window](/img/sql/timewindow-2.png)
+![Session Window](./timewindow-2.webp)

If the time interval between two continuous rows is within the time interval specified by `tol_val`, they belong to the same session window; otherwise a new session window is started automatically. Session windows are not supported on STables for now.

diff --git a/docs-en/12-taos-sql/index.md b/docs-en/12-taos-sql/index.md
index 611f2bf75e..32850e8c4b 100644
--- a/docs-en/12-taos-sql/index.md
+++ b/docs-en/12-taos-sql/index.md
@@ -3,9 +3,9 @@ title: TDengine SQL
description: "The syntax supported by TDengine SQL "
---

-This section explains the syntax about operating database, table, STable, inserting data, selecting data, functions and some tips that can be used in TDengine SQL. It would be easier to understand with some fundamental knowledge of SQL.
+This section explains the syntax for operating databases, tables, and STables, for inserting and selecting data, for functions, and some tips that can be used in TDengine SQL. It would be easier to understand with some fundamental knowledge of SQL.

-TDengine SQL is the major interface for users to write data into or query from TDengine. For users to easily use, syntax similar to standard SQL is provided. However, please be noted that TDengine SQL is not standard SQL. Besides, because TDengine doesn't provide the functionality of deleting time series data, corresponding statements are not provided in TDengine SQL.
+TDengine SQL is the major interface for users to write data into or query from TDengine. For ease of use, syntax similar to standard SQL is provided. However, please note that TDengine SQL is not standard SQL. For instance, TDengine doesn't provide the functionality of deleting time series data, thus the corresponding statements are not provided in TDengine SQL.

TDengine SQL doesn't support abbreviated keywords; for example, `DESCRIBE` can't be abbreviated as `DESC`.

@@ -16,7 +16,7 @@ Syntax Specifications used in this chapter:

- | means one of a few options, excluding | itself.
- … means the item prior to it can be repeated multiple times.

-To better demonstrate the syntax, usage and rules of TAOS SQL, hereinafter it's assumed that there is a data set of meters. Assuming each meter collects 3 data: current, voltage, phase. The data model is as below:
+To better demonstrate the syntax, usage and rules of TAOS SQL, hereinafter it's assumed that there is a data set of meters. Each meter is assumed to collect 3 measurements: current, voltage, and phase.
The data model is shown below:

```sql
taos> DESCRIBE meters;

diff --git a/docs-en/12-taos-sql/timewindow-1.webp b/docs-en/12-taos-sql/timewindow-1.webp
new file mode 100644
index 0000000000..82747558e9
Binary files /dev/null and b/docs-en/12-taos-sql/timewindow-1.webp differ
diff --git a/docs-en/12-taos-sql/timewindow-2.webp b/docs-en/12-taos-sql/timewindow-2.webp
new file mode 100644
index 0000000000..8f1314ae34
Binary files /dev/null and b/docs-en/12-taos-sql/timewindow-2.webp differ
diff --git a/docs-en/12-taos-sql/timewindow-3.webp b/docs-en/12-taos-sql/timewindow-3.webp
new file mode 100644
index 0000000000..5bd16e68e7
Binary files /dev/null and b/docs-en/12-taos-sql/timewindow-3.webp differ
diff --git a/docs-en/14-reference/03-connector/03-connector.mdx b/docs-en/14-reference/03-connector/03-connector.mdx
index 6be914bdb4..38eba73d09 100644
--- a/docs-en/14-reference/03-connector/03-connector.mdx
+++ b/docs-en/14-reference/03-connector/03-connector.mdx
@@ -4,7 +4,7 @@ title: Connector

TDengine provides a rich set of APIs (application development interfaces). To facilitate users to develop their applications quickly, TDengine supports connectors for multiple programming languages, including official connectors for C/C++, Java, Python, Go, Node.js, C#, and Rust. These connectors support connecting to TDengine clusters using both native interfaces (taosc) and REST interfaces (not supported in a few languages yet). Community developers have also contributed several unofficial connectors, such as the ADO.NET connector, the Lua connector, and the PHP connector.

-![image-connector](/img/connector.png)
+![image-connector](./connector.webp)

## Supported platforms

diff --git a/docs-en/14-reference/03-connector/connector.webp b/docs-en/14-reference/03-connector/connector.webp
new file mode 100644
index 0000000000..040cf5c26c
Binary files /dev/null and b/docs-en/14-reference/03-connector/connector.webp differ
diff --git a/docs-en/14-reference/03-connector/java.mdx b/docs-en/14-reference/03-connector/java.mdx
index 328907c4d7..0a1960be51 100644
--- a/docs-en/14-reference/03-connector/java.mdx
+++ b/docs-en/14-reference/03-connector/java.mdx
@@ -11,7 +11,7 @@ import TabItem from '@theme/TabItem';

'taos-jdbcdriver' is TDengine's official Java language connector, which allows Java developers to develop applications that access the TDengine database. 'taos-jdbcdriver' implements the interface of the JDBC driver standard and provides two forms of connectors. One is to connect to a TDengine instance natively through the TDengine client driver (taosc), which supports functions including data writing, querying, subscription, schemaless writing, and the bind interface. The other is to connect to a TDengine instance through the REST interface provided by taosAdapter (2.4.0.0 and later). REST connections differ slightly from native connections in the set of features they implement.
-![tdengine-connector](tdengine-jdbc-connector.png) +![tdengine-connector](tdengine-jdbc-connector.webp) The preceding diagram shows two ways for a Java app to access TDengine via connector: diff --git a/docs-en/14-reference/03-connector/tdengine-jdbc-connector.png b/docs-en/14-reference/03-connector/tdengine-jdbc-connector.png deleted file mode 100644 index 7541aaf98a..0000000000 Binary files a/docs-en/14-reference/03-connector/tdengine-jdbc-connector.png and /dev/null differ diff --git a/docs-en/14-reference/03-connector/tdengine-jdbc-connector.webp b/docs-en/14-reference/03-connector/tdengine-jdbc-connector.webp new file mode 100644 index 0000000000..37cf6d90a5 Binary files /dev/null and b/docs-en/14-reference/03-connector/tdengine-jdbc-connector.webp differ diff --git a/docs-en/14-reference/04-taosadapter.md b/docs-en/14-reference/04-taosadapter.md index 85fd2923b0..de42e8a883 100644 --- a/docs-en/14-reference/04-taosadapter.md +++ b/docs-en/14-reference/04-taosadapter.md @@ -24,7 +24,7 @@ taosAdapter provides the following features. ## taosAdapter architecture diagram -![taosAdapter Architecture](taosAdapter-architecture.png) +![taosAdapter Architecture](taosAdapter-architecture.webp) ## taosAdapter Deployment Method diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.png deleted file mode 100644 index 4708f836fe..0000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.webp new file mode 100644 index 0000000000..a78e18028a Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.png deleted file mode 100644 index f2684e6eed..0000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.webp new file mode 100644 index 0000000000..b152418d09 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.png deleted file mode 100644 index 74686691e4..0000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.webp new file mode 100644 index 0000000000..f58f48b7f1 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-4-requests.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-4-requests.png deleted file mode 100644 index 2796421556..0000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-4-requests.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-4-requests.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-4-requests.webp new file mode 100644 index 
0000000000..00afcce013 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-4-requests.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-5-database.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-5-database.png deleted file mode 100644 index b0d3abbf21..0000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-5-database.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-5-database.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-5-database.webp new file mode 100644 index 0000000000..567e5694f9 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-5-database.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.png deleted file mode 100644 index 2b54cbeb83..0000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.webp new file mode 100644 index 0000000000..cc8a912810 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-7-login-history.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-7-login-history.png deleted file mode 100644 index eb3848657f..0000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-7-login-history.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-7-login-history.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-7-login-history.webp new file mode 100644 index 0000000000..651b716bc5 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-7-login-history.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.png deleted file mode 100644 index d94b2e02ac..0000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.webp new file mode 100644 index 0000000000..8666193f59 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-full.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-full.png deleted file mode 100644 index 654df29345..0000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-full.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-full.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-full.webp new file mode 100644 index 0000000000..7f38a76a2b Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-full.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-manager-status.png b/docs-en/14-reference/07-tdinsight/assets/alert-manager-status.png deleted file mode 100644 index e3afa22c03..0000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/alert-manager-status.png and /dev/null differ diff --git 
a/docs-en/14-reference/07-tdinsight/assets/alert-manager-status.webp b/docs-en/14-reference/07-tdinsight/assets/alert-manager-status.webp new file mode 100644 index 0000000000..3d7fe932a2 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/alert-manager-status.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-notification-channel.png b/docs-en/14-reference/07-tdinsight/assets/alert-notification-channel.png deleted file mode 100644 index 198bf37141..0000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/alert-notification-channel.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-notification-channel.webp b/docs-en/14-reference/07-tdinsight/assets/alert-notification-channel.webp new file mode 100644 index 0000000000..517123954e Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/alert-notification-channel.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-query-demo.png b/docs-en/14-reference/07-tdinsight/assets/alert-query-demo.png deleted file mode 100644 index ace3aa3c2f..0000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/alert-query-demo.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-query-demo.webp b/docs-en/14-reference/07-tdinsight/assets/alert-query-demo.webp new file mode 100644 index 0000000000..6666296ac1 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/alert-query-demo.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.png b/docs-en/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.png deleted file mode 100644 index 7082e49f6b..0000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.webp b/docs-en/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.webp new file mode 100644 index 0000000000..6f74bc3a47 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-rule-test.png b/docs-en/14-reference/07-tdinsight/assets/alert-rule-test.png deleted file mode 100644 index ffd4911b53..0000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/alert-rule-test.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-rule-test.webp b/docs-en/14-reference/07-tdinsight/assets/alert-rule-test.webp new file mode 100644 index 0000000000..acda3b24a6 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/alert-rule-test.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-button.png b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-button.png deleted file mode 100644 index 802c7366f9..0000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-button.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-button.webp b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-button.webp new file mode 100644 index 0000000000..903e236e2a Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-button.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.png 
b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.png deleted file mode 100644 index 019ec921b6..0000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.webp b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.webp new file mode 100644 index 0000000000..14fcfe9d18 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-test.png b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-test.png deleted file mode 100644 index 3963abb4ea..0000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-test.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-test.webp b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-test.webp new file mode 100644 index 0000000000..00b50cc619 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-test.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource.png b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource.png deleted file mode 100644 index 837100464b..0000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource.webp b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource.webp new file mode 100644 index 0000000000..06d0ff6ed5 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-display.png b/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-display.png deleted file mode 100644 index 98223df254..0000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-display.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-display.webp b/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-display.webp new file mode 100644 index 0000000000..e2ec052b91 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-display.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-import-options.png b/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-import-options.png deleted file mode 100644 index 07aba348f0..0000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-import-options.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-import-options.webp b/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-import-options.webp new file mode 100644 index 0000000000..665c035f97 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-import-options.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-import-dashboard.png b/docs-en/14-reference/07-tdinsight/assets/howto-import-dashboard.png deleted file mode 100644 index 7e28939ead..0000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/howto-import-dashboard.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-import-dashboard.webp 
b/docs-en/14-reference/07-tdinsight/assets/howto-import-dashboard.webp new file mode 100644 index 0000000000..7dc42eeba9 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/howto-import-dashboard.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/import-dashboard-15167.png b/docs-en/14-reference/07-tdinsight/assets/import-dashboard-15167.png deleted file mode 100644 index 981f640b14..0000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/import-dashboard-15167.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/import-dashboard-15167.webp b/docs-en/14-reference/07-tdinsight/assets/import-dashboard-15167.webp new file mode 100644 index 0000000000..7ef081900f Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/import-dashboard-15167.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.png b/docs-en/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.png deleted file mode 100644 index 94ef4fa5fe..0000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.webp b/docs-en/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.webp new file mode 100644 index 0000000000..602452fc4c Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.png b/docs-en/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.png deleted file mode 100644 index 670cacc377..0000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.webp b/docs-en/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.webp new file mode 100644 index 0000000000..35a3ebba78 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/import_dashboard.png b/docs-en/14-reference/07-tdinsight/assets/import_dashboard.png deleted file mode 100644 index d74cd36c96..0000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/import_dashboard.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/import_dashboard.webp b/docs-en/14-reference/07-tdinsight/assets/import_dashboard.webp new file mode 100644 index 0000000000..fb7958f1b9 Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/import_dashboard.webp differ diff --git a/docs-en/14-reference/07-tdinsight/assets/tdengine_dashboard.png b/docs-en/14-reference/07-tdinsight/assets/tdengine_dashboard.png deleted file mode 100644 index 0101e7430c..0000000000 Binary files a/docs-en/14-reference/07-tdinsight/assets/tdengine_dashboard.png and /dev/null differ diff --git a/docs-en/14-reference/07-tdinsight/assets/tdengine_dashboard.webp b/docs-en/14-reference/07-tdinsight/assets/tdengine_dashboard.webp new file mode 100644 index 0000000000..49f1d88f4a Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/tdengine_dashboard.webp differ diff --git a/docs-en/14-reference/07-tdinsight/index.md b/docs-en/14-reference/07-tdinsight/index.md index 4850cecb33..dc337bf9ff 100644 --- a/docs-en/14-reference/07-tdinsight/index.md +++ b/docs-en/14-reference/07-tdinsight/index.md @@ 
-233,33 +233,33 @@ The default username/password is `admin`. Grafana will require a password change

Point to the **Configurations** -> **Data Sources** menu, and click the **Add data source** button.

-![Add data source button](./assets/howto-add-datasource-button.png)
+![Add data source button](./assets/howto-add-datasource-button.webp)

Search for and select **TDengine**.

-![Add datasource](./assets/howto-add-datasource-tdengine.png)
+![Add datasource](./assets/howto-add-datasource-tdengine.webp)

Configure the TDengine datasource.

-![Datasource Configuration](./assets/howto-add-datasource.png)
+![Datasource Configuration](./assets/howto-add-datasource.webp)

Save and test. It will report 'TDengine Data source is working' under normal circumstances.

-![datasource test](./assets/howto-add-datasource-test.png)
+![datasource test](./assets/howto-add-datasource-test.webp)

### Importing dashboards

Point to **+** / **Create** - **Import** (or the `/dashboard/import` URL).

-![Import Dashboard and Configuration](./assets/import_dashboard.png)
+![Import Dashboard and Configuration](./assets/import_dashboard.webp)

Type the dashboard ID `15167` in the **Import via grafana.com** field and click **Load**.

-![Import via grafana.com](./assets/import-dashboard-15167.png)
+![Import via grafana.com](./assets/import-dashboard-15167.webp)

Once the import is complete, the full page view of TDinsight is shown below.

-![show](./assets/TDinsight-full.png)
+![show](./assets/TDinsight-full.webp)

## TDinsight dashboard details

@@ -269,7 +269,7 @@ Details of the metrics are as follows.

### Cluster Status

-![tdinsight-mnodes-overview](./assets/TDinsight-1-cluster-status.png)
+![tdinsight-mnodes-overview](./assets/TDinsight-1-cluster-status.webp)

This section contains the current information and status of the cluster; the alert information is also shown here (from left to right, top to bottom).

@@ -289,7 +289,7 @@ This section contains the current information and status of the cluster, the ale

### DNodes Status

-![tdinsight-mnodes-overview](./assets/TDinsight-2-dnodes.png)
+![tdinsight-mnodes-overview](./assets/TDinsight-2-dnodes.webp)

- **DNodes Status**: simple table view of `show dnodes`.
- **DNodes Lifetime**: the time elapsed since the dnode was created.

@@ -298,14 +298,14 @@ This section contains the current information and status of the cluster, the ale

### MNode Overview

-![tdinsight-mnodes-overview](./assets/TDinsight-3-mnodes.png)
+![tdinsight-mnodes-overview](./assets/TDinsight-3-mnodes.webp)

1. **MNodes Status**: a simple table view of `show mnodes`.
2. **MNodes Number**: similar to `DNodes Number`, the change in the number of MNodes.

### Request

-![tdinsight-requests](./assets/TDinsight-4-requests.png)
+![tdinsight-requests](./assets/TDinsight-4-requests.webp)

1. **Requests Rate (Inserts per Second)**: average number of inserts per second.
2. **Requests (Selects)**: number of query requests and their rate of change (count per second).

@@ -313,7 +313,7 @@ This section contains the current information and status of the cluster, the ale

### Database

-![tdinsight-database](./assets/TDinsight-5-database.png)
+![tdinsight-database](./assets/TDinsight-5-database.webp)

Database usage, repeated for each value of the variable `$database`, i.e. multiple rows per database.

@@ -325,7 +325,7 @@ Database usage, repeated for each value of the variable `$database` i.e.
multipl

### DNode Resource Usage

-![dnode-usage](./assets/TDinsight-6-dnode-usage.png)
+![dnode-usage](./assets/TDinsight-6-dnode-usage.webp)

Data node resource usage display, with multiple rows repeated for the variable `$fqdn`, i.e. one set per data node. It includes:

@@ -346,13 +346,13 @@ Data node resource usage display with repeated multiple rows for the variable `$

### Login History

-![Login History](./assets/TDinsight-7-login-history.png)
+![Login History](./assets/TDinsight-7-login-history.webp)

Currently, only the number of logins per minute is reported.

### Monitoring taosAdapter

-![taosadapter](./assets/TDinsight-8-taosadapter.png)
+![taosadapter](./assets/TDinsight-8-taosadapter.webp)

Supports monitoring taosAdapter request statistics and status details. It includes:

diff --git a/docs-en/14-reference/taosAdapter-architecture.png b/docs-en/14-reference/taosAdapter-architecture.png
deleted file mode 100644
index 08a9018553..0000000000
Binary files a/docs-en/14-reference/taosAdapter-architecture.png and /dev/null differ
diff --git a/docs-en/14-reference/taosAdapter-architecture.webp b/docs-en/14-reference/taosAdapter-architecture.webp
new file mode 100644
index 0000000000..a4162b0a03
Binary files /dev/null and b/docs-en/14-reference/taosAdapter-architecture.webp differ
diff --git a/docs-en/20-third-party/01-grafana.mdx b/docs-en/20-third-party/01-grafana.mdx
index c1bfd4a96a..7239710e0a 100644
--- a/docs-en/20-third-party/01-grafana.mdx
+++ b/docs-en/20-third-party/01-grafana.mdx
@@ -62,15 +62,15 @@ GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=tdengine-datasource

Users can log in to the Grafana server (username/password: admin/admin) directly through the URL `http://localhost:3000` and add a datasource through `Configuration -> Data Sources` on the left side, as shown in the following figure.

-![img](./grafana/add_datasource1.jpg)
+![img](./grafana/add_datasource1.webp)

Click `Add data source` to enter the Add data source page, and enter TDengine in the query box to add it, as shown in the following figure.

-![img](./grafana/add_datasource2.jpg)
+![img](./grafana/add_datasource2.webp)

Enter the datasource configuration page, and follow the default prompts to modify the corresponding configuration.

-![img](./grafana/add_datasource3.jpg)
+![img](./grafana/add_datasource3.webp)

- Host: IP address of the server where the components of the TDengine cluster provide the REST service (offered by taosd before 2.4 and by taosAdapter since 2.4) and the port number of the TDengine REST service (6041); by default, use `http://localhost:6041`.
- User: TDengine user name.

@@ -78,13 +78,13 @@ Enter the datasource configuration page, and follow the default prompts to modif

Click `Save & Test` to test. A success message is reported under normal circumstances, as shown below.

-![img](./grafana/add_datasource4.jpg)
+![img](./grafana/add_datasource4.webp)

### Create Dashboard

Go back to the main interface to create the Dashboard, and click Add Query to enter the panel query page:

-![img](./grafana/create_dashboard1.jpg)
+![img](./grafana/create_dashboard1.webp)

As shown above, select the `TDengine` data source in the `Query` and enter the corresponding SQL in the query box below to run the query.

@@ -94,7 +94,7 @@ As shown above, select the `TDengine` data source in the `Query` and enter the c

Follow the default prompt to query the average system memory usage for the specified interval on the server where the current TDengine deployment is located, as follows.
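For example, a sketch of such a query; the table `log.dn` and the column `mem_system` are assumptions here based on TDengine's own monitoring database, and may need to be adapted to the tables actually present in your deployment:

```sql
-- Hypothetical sketch: average system memory usage over the last hour,
-- bucketed into 1-minute windows for plotting in Grafana.
SELECT AVG(mem_system) FROM log.dn WHERE ts >= now - 1h INTERVAL(1m);
```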
-![img](./grafana/create_dashboard2.jpg)
+![img](./grafana/create_dashboard2.webp)

> For more information on how to use Grafana to create the appropriate monitoring interface and for more details on using Grafana, refer to the official Grafana [documentation](https://grafana.com/docs/).

diff --git a/docs-en/20-third-party/09-emq-broker.md b/docs-en/20-third-party/09-emq-broker.md
index 13562ba7f7..560c6463b5 100644
--- a/docs-en/20-third-party/09-emq-broker.md
+++ b/docs-en/20-third-party/09-emq-broker.md
@@ -44,25 +44,25 @@ Since the configuration interface of EMQX differs from version to version, here

Use your browser to open the URL `http://IP:18083` and log in to the EMQX Dashboard. The initial installation username is `admin` and the password is `public`.

-![img](./emqx/login-dashboard.png)
+![img](./emqx/login-dashboard.webp)

### Creating Rule

Select "Rule" in the "Rule Engine" on the left and click the "Create" button:

-![img](./emqx/rule-engine.png)
+![img](./emqx/rule-engine.webp)

### Edit SQL fields

-![img](./emqx/create-rule.png)
+![img](./emqx/create-rule.webp)

### Add "action handler"

-![img](./emqx/add-action-handler.png)
+![img](./emqx/add-action-handler.webp)

### Add "Resource"

-![img](./emqx/create-resource.png)
+![img](./emqx/create-resource.webp)

Select "Data to Web Service" and click the "New Resource" button.

@@ -70,13 +70,13 @@ Select "Data to Web Service" and click the "New Resource" button.

Select "Data to Web Service" and fill in the request URL as the address and port of the server running taosAdapter (default is 6041). Leave the other properties at their default values.

-![img](./emqx/edit-resource.png)
+![img](./emqx/edit-resource.webp)

### Edit "action"

Edit the resource configuration to add the key/value pair for Authorization. Please refer to the [TDengine REST API documentation](https://docs.taosdata.com/reference/rest-api/) for details on authorization. Enter the rule engine replacement template in the message body.

-![img](./emqx/edit-action.png)
+![img](./emqx/edit-action.webp)

## Compose program to mock data

@@ -163,7 +163,7 @@ Edit the resource configuration to add the key/value pairing for Authorization.

Note: `CLIENT_NUM` in the code can be set to a smaller value at the beginning of the test, in case the hardware is not capable of handling a larger number of concurrent clients.

-![img](./emqx/client-num.png)
+![img](./emqx/client-num.webp)

## Execute tests to simulate sending MQTT data

```
npm install mqtt mockjs --save --registry=https://registry.npm.taobao.org
node mock.js
```

-![img](./emqx/run-mock.png)
+![img](./emqx/run-mock.webp)

## Verify that EMQX is receiving data

Refresh the EMQX Dashboard rules engine interface to see how many records were received correctly:

-![img](./emqx/check-rule-matched.png)
+![img](./emqx/check-rule-matched.webp)

## Verify that data is written to TDengine

Use the TDengine CLI program to log in and query the appropriate databases and tables to verify that the data is being written to TDengine correctly (a sketch of such a check follows at the end of this section):

-![img](./emqx/check-result-in-taos.png)
+![img](./emqx/check-result-in-taos.webp)

Please refer to the [TDengine official documentation](https://docs.taosdata.com/) for more details on how to use TDengine. Please refer to the [EMQX official documentation](https://www.emqx.io/docs/en/v4.4/rule/rule-engine.html) for details on how to use EMQX.
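As a minimal sketch of that verification step, the statements below can be run in the TDengine CLI; the database and table names (`test`, `sensor_data`) are hypothetical placeholders for whatever the rule engine's replacement template actually writes to:

```sql
-- Hypothetical names: replace `test` and `sensor_data` with the
-- database and table your EMQX rule writes to.
USE test;
SHOW TABLES;
SELECT COUNT(*) FROM sensor_data;
```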
diff --git a/docs-en/20-third-party/11-kafka.md b/docs-en/20-third-party/11-kafka.md
index b9c7a3814a..5aee6e044d 100644
--- a/docs-en/20-third-party/11-kafka.md
+++ b/docs-en/20-third-party/11-kafka.md
@@ -9,11 +9,11 @@ TDengine Kafka Connector contains two plugins: TDengine Source Connector and TDe

Kafka Connect is a component of Apache Kafka that enables other systems, such as databases, cloud services, file systems, etc., to connect to Kafka easily. Data can flow from other software to Kafka via Kafka Connect and from Kafka to other systems via Kafka Connect. Plugins that read data from other software are called Source Connectors, and plugins that write data to other software are called Sink Connectors. Neither Source Connectors nor Sink Connectors connect directly to the Kafka Broker; the Source Connector transfers data to Kafka Connect, and the Sink Connector receives data from Kafka Connect.

-![](kafka/Kafka_Connect.png)
+![](kafka/Kafka_Connect.webp)

The TDengine Source Connector is used to read data from TDengine in real time and send it to Kafka Connect. Users can use the TDengine Sink Connector to receive data from Kafka Connect and write it to TDengine.

-![](kafka/streaming-integration-with-kafka-connect.png)
+![](kafka/streaming-integration-with-kafka-connect.webp)

## What is Confluent?

@@ -26,7 +26,7 @@ Confluent adds many extensions to Kafka. include:

5. GUI for managing and monitoring Kafka - Confluent Control Center

Some of these extensions are available in the community version of Confluent. Some are only available in the enterprise version.
-![](kafka/confluentPlatform.png)
+![](kafka/confluentPlatform.webp)

Confluent Enterprise Edition provides the `confluent` command-line tool to manage various components.

diff --git a/docs-en/20-third-party/emqx/add-action-handler.png b/docs-en/20-third-party/emqx/add-action-handler.png
deleted file mode 100644
index 97a1f933ec..0000000000
Binary files a/docs-en/20-third-party/emqx/add-action-handler.png and /dev/null differ
diff --git a/docs-en/20-third-party/emqx/add-action-handler.webp b/docs-en/20-third-party/emqx/add-action-handler.webp
new file mode 100644
index 0000000000..4a8d105f71
Binary files /dev/null and b/docs-en/20-third-party/emqx/add-action-handler.webp differ
diff --git a/docs-en/20-third-party/emqx/check-result-in-taos.png b/docs-en/20-third-party/emqx/check-result-in-taos.png
deleted file mode 100644
index c17a5c1ea2..0000000000
Binary files a/docs-en/20-third-party/emqx/check-result-in-taos.png and /dev/null differ
diff --git a/docs-en/20-third-party/emqx/check-result-in-taos.webp b/docs-en/20-third-party/emqx/check-result-in-taos.webp
new file mode 100644
index 0000000000..8fa040a861
Binary files /dev/null and b/docs-en/20-third-party/emqx/check-result-in-taos.webp differ
diff --git a/docs-en/20-third-party/emqx/check-rule-matched.png b/docs-en/20-third-party/emqx/check-rule-matched.png
deleted file mode 100644
index 9e9a466946..0000000000
Binary files a/docs-en/20-third-party/emqx/check-rule-matched.png and /dev/null differ
diff --git a/docs-en/20-third-party/emqx/check-rule-matched.webp b/docs-en/20-third-party/emqx/check-rule-matched.webp
new file mode 100644
index 0000000000..e5a6140357
Binary files /dev/null and b/docs-en/20-third-party/emqx/check-rule-matched.webp differ
diff --git a/docs-en/20-third-party/emqx/client-num.png b/docs-en/20-third-party/emqx/client-num.png
deleted file mode 100644
index fff48cbf3b..0000000000
Binary files a/docs-en/20-third-party/emqx/client-num.png and /dev/null differ
diff --git
a/docs-en/20-third-party/emqx/client-num.webp b/docs-en/20-third-party/emqx/client-num.webp new file mode 100644 index 0000000000..a151b18484 Binary files /dev/null and b/docs-en/20-third-party/emqx/client-num.webp differ diff --git a/docs-en/20-third-party/emqx/create-resource.png b/docs-en/20-third-party/emqx/create-resource.png deleted file mode 100644 index 58da4c391a..0000000000 Binary files a/docs-en/20-third-party/emqx/create-resource.png and /dev/null differ diff --git a/docs-en/20-third-party/emqx/create-resource.webp b/docs-en/20-third-party/emqx/create-resource.webp new file mode 100644 index 0000000000..bf9cccbe49 Binary files /dev/null and b/docs-en/20-third-party/emqx/create-resource.webp differ diff --git a/docs-en/20-third-party/emqx/create-rule.png b/docs-en/20-third-party/emqx/create-rule.png deleted file mode 100644 index 73b0b6ee3e..0000000000 Binary files a/docs-en/20-third-party/emqx/create-rule.png and /dev/null differ diff --git a/docs-en/20-third-party/emqx/create-rule.webp b/docs-en/20-third-party/emqx/create-rule.webp new file mode 100644 index 0000000000..13e8fc83d4 Binary files /dev/null and b/docs-en/20-third-party/emqx/create-rule.webp differ diff --git a/docs-en/20-third-party/emqx/edit-action.png b/docs-en/20-third-party/emqx/edit-action.png deleted file mode 100644 index 2a43ee369a..0000000000 Binary files a/docs-en/20-third-party/emqx/edit-action.png and /dev/null differ diff --git a/docs-en/20-third-party/emqx/edit-action.webp b/docs-en/20-third-party/emqx/edit-action.webp new file mode 100644 index 0000000000..7f6d2e36a8 Binary files /dev/null and b/docs-en/20-third-party/emqx/edit-action.webp differ diff --git a/docs-en/20-third-party/emqx/edit-resource.png b/docs-en/20-third-party/emqx/edit-resource.png deleted file mode 100644 index 0a0b356004..0000000000 Binary files a/docs-en/20-third-party/emqx/edit-resource.png and /dev/null differ diff --git a/docs-en/20-third-party/emqx/edit-resource.webp b/docs-en/20-third-party/emqx/edit-resource.webp new file mode 100644 index 0000000000..fd5d278fab Binary files /dev/null and b/docs-en/20-third-party/emqx/edit-resource.webp differ diff --git a/docs-en/20-third-party/emqx/login-dashboard.png b/docs-en/20-third-party/emqx/login-dashboard.png deleted file mode 100644 index d6c5035c98..0000000000 Binary files a/docs-en/20-third-party/emqx/login-dashboard.png and /dev/null differ diff --git a/docs-en/20-third-party/emqx/login-dashboard.webp b/docs-en/20-third-party/emqx/login-dashboard.webp new file mode 100644 index 0000000000..f84cee668f Binary files /dev/null and b/docs-en/20-third-party/emqx/login-dashboard.webp differ diff --git a/docs-en/20-third-party/emqx/rule-engine.png b/docs-en/20-third-party/emqx/rule-engine.png deleted file mode 100644 index db110a837b..0000000000 Binary files a/docs-en/20-third-party/emqx/rule-engine.png and /dev/null differ diff --git a/docs-en/20-third-party/emqx/rule-engine.webp b/docs-en/20-third-party/emqx/rule-engine.webp new file mode 100644 index 0000000000..c1711c8cc7 Binary files /dev/null and b/docs-en/20-third-party/emqx/rule-engine.webp differ diff --git a/docs-en/20-third-party/emqx/rule-header-key-value.png b/docs-en/20-third-party/emqx/rule-header-key-value.png deleted file mode 100644 index b81b9a9684..0000000000 Binary files a/docs-en/20-third-party/emqx/rule-header-key-value.png and /dev/null differ diff --git a/docs-en/20-third-party/emqx/rule-header-key-value.webp b/docs-en/20-third-party/emqx/rule-header-key-value.webp new file mode 100644 index 
0000000000..e645b3822d Binary files /dev/null and b/docs-en/20-third-party/emqx/rule-header-key-value.webp differ diff --git a/docs-en/20-third-party/emqx/run-mock.png b/docs-en/20-third-party/emqx/run-mock.png deleted file mode 100644 index 0da2581857..0000000000 Binary files a/docs-en/20-third-party/emqx/run-mock.png and /dev/null differ diff --git a/docs-en/20-third-party/emqx/run-mock.webp b/docs-en/20-third-party/emqx/run-mock.webp new file mode 100644 index 0000000000..ed33f1666d Binary files /dev/null and b/docs-en/20-third-party/emqx/run-mock.webp differ diff --git a/docs-en/20-third-party/grafana/add_datasource1.jpg b/docs-en/20-third-party/grafana/add_datasource1.jpg deleted file mode 100644 index 1f0f5110f3..0000000000 Binary files a/docs-en/20-third-party/grafana/add_datasource1.jpg and /dev/null differ diff --git a/docs-en/20-third-party/grafana/add_datasource1.webp b/docs-en/20-third-party/grafana/add_datasource1.webp new file mode 100644 index 0000000000..211edc4457 Binary files /dev/null and b/docs-en/20-third-party/grafana/add_datasource1.webp differ diff --git a/docs-en/20-third-party/grafana/add_datasource2.jpg b/docs-en/20-third-party/grafana/add_datasource2.jpg deleted file mode 100644 index fa7a83e00e..0000000000 Binary files a/docs-en/20-third-party/grafana/add_datasource2.jpg and /dev/null differ diff --git a/docs-en/20-third-party/grafana/add_datasource2.webp b/docs-en/20-third-party/grafana/add_datasource2.webp new file mode 100644 index 0000000000..8ab547231f Binary files /dev/null and b/docs-en/20-third-party/grafana/add_datasource2.webp differ diff --git a/docs-en/20-third-party/grafana/add_datasource3.jpg b/docs-en/20-third-party/grafana/add_datasource3.jpg deleted file mode 100644 index fc850ad08f..0000000000 Binary files a/docs-en/20-third-party/grafana/add_datasource3.jpg and /dev/null differ diff --git a/docs-en/20-third-party/grafana/add_datasource3.webp b/docs-en/20-third-party/grafana/add_datasource3.webp new file mode 100644 index 0000000000..d8a733360a Binary files /dev/null and b/docs-en/20-third-party/grafana/add_datasource3.webp differ diff --git a/docs-en/20-third-party/grafana/add_datasource4.jpg b/docs-en/20-third-party/grafana/add_datasource4.jpg deleted file mode 100644 index 3ba73e50d4..0000000000 Binary files a/docs-en/20-third-party/grafana/add_datasource4.jpg and /dev/null differ diff --git a/docs-en/20-third-party/grafana/add_datasource4.webp b/docs-en/20-third-party/grafana/add_datasource4.webp new file mode 100644 index 0000000000..b1e0fc6e2b Binary files /dev/null and b/docs-en/20-third-party/grafana/add_datasource4.webp differ diff --git a/docs-en/20-third-party/grafana/create_dashboard1.jpg b/docs-en/20-third-party/grafana/create_dashboard1.jpg deleted file mode 100644 index 3b83c3a171..0000000000 Binary files a/docs-en/20-third-party/grafana/create_dashboard1.jpg and /dev/null differ diff --git a/docs-en/20-third-party/grafana/create_dashboard1.webp b/docs-en/20-third-party/grafana/create_dashboard1.webp new file mode 100644 index 0000000000..55eb388833 Binary files /dev/null and b/docs-en/20-third-party/grafana/create_dashboard1.webp differ diff --git a/docs-en/20-third-party/grafana/create_dashboard2.jpg b/docs-en/20-third-party/grafana/create_dashboard2.jpg deleted file mode 100644 index fe5d768ac5..0000000000 Binary files a/docs-en/20-third-party/grafana/create_dashboard2.jpg and /dev/null differ diff --git a/docs-en/20-third-party/grafana/create_dashboard2.webp b/docs-en/20-third-party/grafana/create_dashboard2.webp new file 
mode 100644 index 0000000000..bb40e40718
Binary files /dev/null and b/docs-en/20-third-party/grafana/create_dashboard2.webp differ
diff --git a/docs-en/20-third-party/kafka/Kafka_Connect.png b/docs-en/20-third-party/kafka/Kafka_Connect.png
deleted file mode 100644
index f3dc02ea2a..0000000000
Binary files a/docs-en/20-third-party/kafka/Kafka_Connect.png and /dev/null differ
diff --git a/docs-en/20-third-party/kafka/Kafka_Connect.webp b/docs-en/20-third-party/kafka/Kafka_Connect.webp
new file mode 100644
index 0000000000..8f2000a749
Binary files /dev/null and b/docs-en/20-third-party/kafka/Kafka_Connect.webp differ
diff --git a/docs-en/20-third-party/kafka/confluentPlatform.png b/docs-en/20-third-party/kafka/confluentPlatform.png
deleted file mode 100644
index f8e69f2c7f..0000000000
Binary files a/docs-en/20-third-party/kafka/confluentPlatform.png and /dev/null differ
diff --git a/docs-en/20-third-party/kafka/confluentPlatform.webp b/docs-en/20-third-party/kafka/confluentPlatform.webp
new file mode 100644
index 0000000000..ff03d4e51a
Binary files /dev/null and b/docs-en/20-third-party/kafka/confluentPlatform.webp differ
diff --git a/docs-en/20-third-party/kafka/streaming-integration-with-kafka-connect.png b/docs-en/20-third-party/kafka/streaming-integration-with-kafka-connect.png
deleted file mode 100644
index 26d8a866d7..0000000000
Binary files a/docs-en/20-third-party/kafka/streaming-integration-with-kafka-connect.png and /dev/null differ
diff --git a/docs-en/20-third-party/kafka/streaming-integration-with-kafka-connect.webp b/docs-en/20-third-party/kafka/streaming-integration-with-kafka-connect.webp
new file mode 100644
index 0000000000..120d534ec1
Binary files /dev/null and b/docs-en/20-third-party/kafka/streaming-integration-with-kafka-connect.webp differ
diff --git a/docs-en/21-tdinternal/01-arch.md b/docs-en/21-tdinternal/01-arch.md
index 9607c9b387..2c430908e4 100644
--- a/docs-en/21-tdinternal/01-arch.md
+++ b/docs-en/21-tdinternal/01-arch.md
@@ -11,7 +11,7 @@ The design of TDengine is based on the assumption that any hardware or software

The logical structure diagram of TDengine's distributed architecture is as follows:

-![TDengine architecture diagram](structure.png)
+![TDengine architecture diagram](structure.webp)
Figure 1: TDengine architecture diagram
A complete TDengine system runs on one or more physical nodes. Logically, it includes the data node (dnode), the TDengine client driver (TAOSC), and the application (app). There are one or more data nodes in the system, which form a cluster. The application interacts with the TDengine cluster through TAOSC's API. The following is a brief introduction to each logical unit.

@@ -54,7 +54,7 @@ A complete TDengine system runs on one or more physical nodes. Logically, it inc

To explain the relationship between vnode, mnode, TAOSC, and application, and their respective roles, the following is an analysis of a typical data writing process.

-![typical process of TDengine](message.png)
+![typical process of TDengine](message.webp)
Figure 2: Typical process of TDengine
1. The application initiates a request to insert data through JDBC, ODBC, or other APIs.

@@ -123,7 +123,7 @@ If a database has N replicas, thus a virtual node group has N virtual nodes, but

The master vnode uses the following writing process:

-![TDengine Master Writing Process](write_master.png)
+![TDengine Master Writing Process](write_master.webp)
Figure 3: TDengine Master writing process
1. The master vnode receives the application data insertion request, verifies it, and moves to the next step;

@@ -137,7 +137,7 @@ Master Vnode uses a writing process as follows:

For a slave vnode, the write process is as follows:

-![TDengine Slave Writing Process](write_slave.png)
+![TDengine Slave Writing Process](write_slave.webp)
Figure 4: TDengine Slave Writing Process
1. The slave vnode receives a data insertion request forwarded by the master vnode;

@@ -267,7 +267,7 @@ For the data collected by device D1001, the number of records per hour is counte

TDengine creates a separate table for each data collection point, but in practical applications, it is often necessary to aggregate data from different data collection points. In order to perform aggregation operations efficiently, TDengine introduces the concept of STable. A STable is used to represent a specific type of data collection point. It is a table set containing multiple tables. The schema of each table in the set is the same, but each table has its own static tags. There can be multiple tags, and they can be added, deleted, and modified at any time. Applications can aggregate or perform statistical operations on all or a subset of the tables under a STable by specifying tag filters, thus greatly simplifying the development of applications. The process is shown in the following figure:

-![Diagram of multi-table aggregation query](multi_tables.png)
+![Diagram of multi-table aggregation query](multi_tables.webp)
Figure 5: Diagram of multi-table aggregation query
1. The application sends a query condition to the system;

diff --git a/docs-en/21-tdinternal/dnode.png b/docs-en/21-tdinternal/dnode.png
deleted file mode 100644
index cea87dcccb..0000000000
Binary files a/docs-en/21-tdinternal/dnode.png and /dev/null differ
diff --git a/docs-en/21-tdinternal/dnode.webp b/docs-en/21-tdinternal/dnode.webp
new file mode 100644
index 0000000000..a56c7e4594
Binary files /dev/null and b/docs-en/21-tdinternal/dnode.webp differ
diff --git a/docs-en/21-tdinternal/message.png b/docs-en/21-tdinternal/message.png
deleted file mode 100644
index 715a8bd37e..0000000000
Binary files a/docs-en/21-tdinternal/message.png and /dev/null differ
diff --git a/docs-en/21-tdinternal/message.webp b/docs-en/21-tdinternal/message.webp
new file mode 100644
index 0000000000..a2a42abff3
Binary files /dev/null and b/docs-en/21-tdinternal/message.webp differ
diff --git a/docs-en/21-tdinternal/modules.png b/docs-en/21-tdinternal/modules.png
deleted file mode 100644
index 10ae4703a6..0000000000
Binary files a/docs-en/21-tdinternal/modules.png and /dev/null differ
diff --git a/docs-en/21-tdinternal/modules.webp b/docs-en/21-tdinternal/modules.webp
new file mode 100644
index 0000000000..718a6abccd
Binary files /dev/null and b/docs-en/21-tdinternal/modules.webp differ
diff --git a/docs-en/21-tdinternal/multi_tables.png b/docs-en/21-tdinternal/multi_tables.png
deleted file mode 100644
index 0cefaab6a9..0000000000
Binary files a/docs-en/21-tdinternal/multi_tables.png and /dev/null differ
diff --git a/docs-en/21-tdinternal/multi_tables.webp b/docs-en/21-tdinternal/multi_tables.webp
new file mode 100644
index 0000000000..8f649e34a3
Binary files /dev/null and b/docs-en/21-tdinternal/multi_tables.webp differ
diff --git a/docs-en/21-tdinternal/replica-forward.png b/docs-en/21-tdinternal/replica-forward.png
deleted file mode 100644
index bf616e030b..0000000000
Binary files a/docs-en/21-tdinternal/replica-forward.png and /dev/null differ
diff --git a/docs-en/21-tdinternal/replica-forward.webp b/docs-en/21-tdinternal/replica-forward.webp
new file mode 100644
index 0000000000..512efd4eba
Binary files /dev/null and b/docs-en/21-tdinternal/replica-forward.webp differ
diff --git a/docs-en/21-tdinternal/replica-master.png b/docs-en/21-tdinternal/replica-master.png
deleted file mode 100644
index cb33f1ce98..0000000000
Binary files a/docs-en/21-tdinternal/replica-master.png and /dev/null differ
diff --git a/docs-en/21-tdinternal/replica-master.webp b/docs-en/21-tdinternal/replica-master.webp
new file mode 100644
index 0000000000..57030a11f5
Binary files /dev/null and b/docs-en/21-tdinternal/replica-master.webp differ
diff --git a/docs-en/21-tdinternal/replica-restore.png b/docs-en/21-tdinternal/replica-restore.png
deleted file mode 100644
index 1558e5ed01..0000000000
Binary files a/docs-en/21-tdinternal/replica-restore.png and /dev/null differ
diff --git a/docs-en/21-tdinternal/replica-restore.webp b/docs-en/21-tdinternal/replica-restore.webp
new file mode 100644
index 0000000000..f282c2d4d2
Binary files /dev/null and b/docs-en/21-tdinternal/replica-restore.webp differ
diff --git a/docs-en/21-tdinternal/structure.png b/docs-en/21-tdinternal/structure.png
deleted file mode 100644
index 4fc8f47ab0..0000000000
Binary files a/docs-en/21-tdinternal/structure.png and /dev/null differ
diff --git a/docs-en/21-tdinternal/structure.webp b/docs-en/21-tdinternal/structure.webp
new file mode 100644
index 0000000000..b77a42c074
Binary files /dev/null and b/docs-en/21-tdinternal/structure.webp differ
diff --git
a/docs-en/21-tdinternal/vnode.png b/docs-en/21-tdinternal/vnode.png
deleted file mode 100644
index e6148d4907..0000000000
Binary files a/docs-en/21-tdinternal/vnode.png and /dev/null differ
diff --git a/docs-en/21-tdinternal/vnode.webp b/docs-en/21-tdinternal/vnode.webp
new file mode 100644
index 0000000000..fae3104c89
Binary files /dev/null and b/docs-en/21-tdinternal/vnode.webp differ
diff --git a/docs-en/21-tdinternal/write_master.png b/docs-en/21-tdinternal/write_master.png
deleted file mode 100644
index ff2dfc20bf..0000000000
Binary files a/docs-en/21-tdinternal/write_master.png and /dev/null differ
diff --git a/docs-en/21-tdinternal/write_master.webp b/docs-en/21-tdinternal/write_master.webp
new file mode 100644
index 0000000000..9624036ed3
Binary files /dev/null and b/docs-en/21-tdinternal/write_master.webp differ
diff --git a/docs-en/21-tdinternal/write_slave.png b/docs-en/21-tdinternal/write_slave.png
deleted file mode 100644
index cacb2cb6bc..0000000000
Binary files a/docs-en/21-tdinternal/write_slave.png and /dev/null differ
diff --git a/docs-en/21-tdinternal/write_slave.webp b/docs-en/21-tdinternal/write_slave.webp
new file mode 100644
index 0000000000..7c45dec11b
Binary files /dev/null and b/docs-en/21-tdinternal/write_slave.webp differ
diff --git a/docs-en/25-application/01-telegraf.md b/docs-en/25-application/01-telegraf.md
index 718e04ecd3..07ab289ac2 100644
--- a/docs-en/25-application/01-telegraf.md
+++ b/docs-en/25-application/01-telegraf.md
@@ -16,7 +16,7 @@ Current mainstream IT DevOps system usually include a data collection module, a

This article introduces how to quickly build a TDengine + Telegraf + Grafana based IT DevOps visualization system without writing even a single line of code, simply by modifying a few lines of configuration files. The architecture is as follows.

-![IT-DevOps-Solutions-Telegraf.png](/img/IT-DevOps-Solutions-Telegraf.png)
+![IT-DevOps-Solutions-Telegraf.webp](./IT-DevOps-Solutions-Telegraf.webp)

## Installation steps

@@ -75,7 +75,7 @@ Log in to the Grafana interface using a web browser at `IP:3000`, with the syste

Click on the gear icon on the left and select `Plugins`; you should find the TDengine data source plugin icon. Click on the plus icon on the left and select `Import` to get the data from `https://github.com/taosdata/grafanaplugin/blob/master/examples/telegraf/grafana/dashboards/telegraf-dashboard-v0.1.0.json`, download the dashboard JSON file, and import it. You will then see the dashboard in the following screen.

-![IT-DevOps-Solutions-telegraf-dashboard.png](/img/IT-DevOps-Solutions-telegraf-dashboard.png)
+![IT-DevOps-Solutions-telegraf-dashboard.webp](./IT-DevOps-Solutions-telegraf-dashboard.webp)

## Wrap-up

diff --git a/docs-en/25-application/02-collectd.md b/docs-en/25-application/02-collectd.md
index 2ac37618fa..0ddea28554 100644
--- a/docs-en/25-application/02-collectd.md
+++ b/docs-en/25-application/02-collectd.md
@@ -17,7 +17,7 @@ The new version of TDengine supports multiple data protocols and can accept data

This article introduces how to quickly build an IT DevOps visualization system based on TDengine + collectd / StatsD + Grafana without writing even a single line of code, simply by modifying a few lines of configuration files. The architecture is shown in the following figure.
-![IT-DevOps-Solutions-Collectd-StatsD.png](/img/IT-DevOps-Solutions-Collectd-StatsD.png)
+![IT-DevOps-Solutions-Collectd-StatsD.webp](./IT-DevOps-Solutions-Collectd-StatsD.webp)

## Installation Steps

@@ -83,19 +83,19 @@ Click on the gear icon on the left and select `Plugins`, you should find the TDe

Download the dashboard JSON file from `https://github.com/taosdata/grafanaplugin/blob/master/examples/collectd/grafana/dashboards/collect-metrics-with-tdengine-v0.1.0.json`, click the plus icon on the left and select Import, and follow the instructions to import the JSON file. After that, the dashboard can be seen in the following screen.

-![IT-DevOps-Solutions-collectd-dashboard.png](/img/IT-DevOps-Solutions-collectd-dashboard.png)
+![IT-DevOps-Solutions-collectd-dashboard.webp](./IT-DevOps-Solutions-collectd-dashboard.webp)

#### Importing the collectd dashboard

Download the dashboard JSON file from `https://github.com/taosdata/grafanaplugin/blob/master/examples/collectd/grafana/dashboards/collect-metrics-with-tdengine-v0.1.0.json`, click the plus icon on the left side and select `Import`, and follow the interface prompts to select the JSON file to import. After that, you can see the dashboard with the following interface.

-![IT-DevOps-Solutions-collectd-dashboard.png](/img/IT-DevOps-Solutions-collectd-dashboard.png)
+![IT-DevOps-Solutions-collectd-dashboard.webp](./IT-DevOps-Solutions-collectd-dashboard.webp)

#### Importing the StatsD dashboard

Download the dashboard JSON from `https://github.com/taosdata/grafanaplugin/blob/master/examples/statsd/dashboards/statsd-with-tdengine-v0.1.0.json`. Click on the plus icon on the left and select `Import`, and follow the interface prompts to import the JSON file. You will then see the dashboard in the following screen.

-![IT-DevOps-Solutions-statsd-dashboard.png](/img/IT-DevOps-Solutions-statsd-dashboard.png)
+![IT-DevOps-Solutions-statsd-dashboard.webp](./IT-DevOps-Solutions-statsd-dashboard.webp)

## Wrap-up

diff --git a/docs-en/25-application/03-immigrate.md b/docs-en/25-application/03-immigrate.md
index 4cfeb892d8..68d8a2b8cc 100644
--- a/docs-en/25-application/03-immigrate.md
+++ b/docs-en/25-application/03-immigrate.md
@@ -32,7 +32,7 @@ We will explain how to migrate OpenTSDB applications to TDengine quickly, secure

The following figure (Figure 1) shows the system's overall architecture for a typical DevOps application scenario.

**Figure 1. Typical architecture in a DevOps scenario**
-![IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch](/img/IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch.jpg "Figure 1. Typical architecture in a DevOps scenario")
+![IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch](./IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch.webp "Figure 1. Typical architecture in a DevOps scenario")

In this application scenario, there are Agent tools deployed in the application environment to collect machine metrics, network metrics, and application metrics; data collectors to aggregate the information collected by the agents; systems for persistent data storage and management; and tools for visualizing monitoring data (e.g., Grafana).

@@ -75,7 +75,7 @@ After writing the data to TDengine properly, you can adapt Grafana to visualize

TDengine provides two sets of Dashboard templates by default, and users only need to import the templates from the Grafana directory into Grafana to activate their use.

**Figure 2. Importing Grafana Templates**
-![](/img/IT-DevOps-Solutions-Immigrate-OpenTSDB-Dashboard.jpg "Figure 2.
Importing a Grafana Template") +![](./IT-DevOps-Solutions-Immigrate-OpenTSDB-Dashboard.webp "Figure 2. Importing a Grafana Template") After the above steps, you completed the migration to replace OpenTSDB with TDengine. You can see that the whole process is straightforward, there is no need to write any code, and only some configuration files need to be adjusted to meet the migration work. @@ -88,7 +88,7 @@ In most DevOps scenarios, if you have a small OpenTSDB cluster (3 or fewer nodes Suppose your application is particularly complex, or the application domain is not a DevOps scenario. You can continue reading subsequent chapters for a more comprehensive and in-depth look at the advanced topics of migrating an OpenTSDB application to TDengine. **Figure 3. System architecture after migration** -![IT-DevOps-Solutions-Immigrate-TDengine-Arch](/img/IT-DevOps-Solutions-Immigrate-TDengine-Arch.jpg "Figure 3. System architecture after migration completion") +![IT-DevOps-Solutions-Immigrate-TDengine-Arch](./IT-DevOps-Solutions-Immigrate-TDengine-Arch.webp "Figure 3. System architecture after migration completion") ## Migration evaluation and strategy for other scenarios diff --git a/docs-en/25-application/IT-DevOps-Solutions-Collectd-StatsD.webp b/docs-en/25-application/IT-DevOps-Solutions-Collectd-StatsD.webp new file mode 100644 index 0000000000..147a65b17b Binary files /dev/null and b/docs-en/25-application/IT-DevOps-Solutions-Collectd-StatsD.webp differ diff --git a/docs-en/25-application/IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch.webp b/docs-en/25-application/IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch.webp new file mode 100644 index 0000000000..3ca99c835b Binary files /dev/null and b/docs-en/25-application/IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch.webp differ diff --git a/docs-en/25-application/IT-DevOps-Solutions-Immigrate-OpenTSDB-Dashboard.webp b/docs-en/25-application/IT-DevOps-Solutions-Immigrate-OpenTSDB-Dashboard.webp new file mode 100644 index 0000000000..04811f61b9 Binary files /dev/null and b/docs-en/25-application/IT-DevOps-Solutions-Immigrate-OpenTSDB-Dashboard.webp differ diff --git a/docs-en/25-application/IT-DevOps-Solutions-Immigrate-TDengine-Arch.webp b/docs-en/25-application/IT-DevOps-Solutions-Immigrate-TDengine-Arch.webp new file mode 100644 index 0000000000..3693006875 Binary files /dev/null and b/docs-en/25-application/IT-DevOps-Solutions-Immigrate-TDengine-Arch.webp differ diff --git a/docs-en/25-application/IT-DevOps-Solutions-Telegraf.webp b/docs-en/25-application/IT-DevOps-Solutions-Telegraf.webp new file mode 100644 index 0000000000..fd5461ec9b Binary files /dev/null and b/docs-en/25-application/IT-DevOps-Solutions-Telegraf.webp differ diff --git a/docs-en/25-application/IT-DevOps-Solutions-collectd-dashboard.webp b/docs-en/25-application/IT-DevOps-Solutions-collectd-dashboard.webp new file mode 100644 index 0000000000..879c27a1a5 Binary files /dev/null and b/docs-en/25-application/IT-DevOps-Solutions-collectd-dashboard.webp differ diff --git a/docs-en/25-application/IT-DevOps-Solutions-statsd-dashboard.webp b/docs-en/25-application/IT-DevOps-Solutions-statsd-dashboard.webp new file mode 100644 index 0000000000..1d4c655970 Binary files /dev/null and b/docs-en/25-application/IT-DevOps-Solutions-statsd-dashboard.webp differ diff --git a/docs-en/25-application/IT-DevOps-Solutions-telegraf-dashboard.webp b/docs-en/25-application/IT-DevOps-Solutions-telegraf-dashboard.webp new file mode 100644 index 0000000000..105afcdb83 Binary files /dev/null and 
b/docs-en/25-application/IT-DevOps-Solutions-telegraf-dashboard.webp differ diff --git a/include/libs/function/function.h b/include/libs/function/function.h index 7d3e969c41..21b7309055 100644 --- a/include/libs/function/function.h +++ b/include/libs/function/function.h @@ -39,6 +39,7 @@ typedef bool (*FExecInit)(struct SqlFunctionCtx *pCtx, struct SResultRowEntryInf typedef int32_t (*FExecProcess)(struct SqlFunctionCtx *pCtx); typedef int32_t (*FExecFinalize)(struct SqlFunctionCtx *pCtx, SSDataBlock* pBlock); typedef int32_t (*FScalarExecProcess)(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput); +typedef int32_t (*FExecCombine)(struct SqlFunctionCtx *pDestCtx, struct SqlFunctionCtx *pSourceCtx); typedef struct SScalarFuncExecFuncs { FExecGetEnv getEnv; @@ -50,6 +51,7 @@ typedef struct SFuncExecFuncs { FExecInit init; FExecProcess process; FExecFinalize finalize; + FExecCombine combine; } SFuncExecFuncs; typedef struct SFileBlockInfo { diff --git a/include/libs/nodes/nodes.h b/include/libs/nodes/nodes.h index b9cb708c9c..3429d838de 100644 --- a/include/libs/nodes/nodes.h +++ b/include/libs/nodes/nodes.h @@ -212,6 +212,7 @@ typedef enum ENodeType { QUERY_NODE_PHYSICAL_PLAN_STREAM_FINAL_INTERVAL, QUERY_NODE_PHYSICAL_PLAN_FILL, QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW, + QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW, QUERY_NODE_PHYSICAL_PLAN_STATE_WINDOW, QUERY_NODE_PHYSICAL_PLAN_PARTITION, QUERY_NODE_PHYSICAL_PLAN_DISPATCH, diff --git a/include/libs/nodes/plannodes.h b/include/libs/nodes/plannodes.h index 6c4d14ffa1..d28da1f608 100644 --- a/include/libs/nodes/plannodes.h +++ b/include/libs/nodes/plannodes.h @@ -296,6 +296,8 @@ typedef struct SSessionWinodwPhysiNode { int64_t gap; } SSessionWinodwPhysiNode; +typedef SSessionWinodwPhysiNode SStreamSessionWinodwPhysiNode; + typedef struct SStateWinodwPhysiNode { SWinodwPhysiNode window; SNode* pStateKey; diff --git a/source/client/src/clientHb.c b/source/client/src/clientHb.c index d01ec501ba..1ae6fa405f 100644 --- a/source/client/src/clientHb.c +++ b/source/client/src/clientHb.c @@ -141,7 +141,9 @@ static int32_t hbQueryHbRspHandle(SAppHbMgr *pAppHbMgr, SClientHbRsp *pRsp) { if (NULL == pTscObj) { tscDebug("tscObj rid %" PRIx64 " not exist", pRsp->connKey.tscRid); } else { - updateEpSet_s(&pTscObj->pAppInfo->mgmtEp, &pRsp->query->epSet); + if (pRsp->query->totalDnodes > 1 && !isEpsetEqual(&pTscObj->pAppInfo->mgmtEp.epSet, &pRsp->query->epSet)) { + updateEpSet_s(&pTscObj->pAppInfo->mgmtEp, &pRsp->query->epSet); + } pTscObj->connId = pRsp->query->connId; if (pRsp->query->killRid) { diff --git a/source/client/src/clientSml.c b/source/client/src/clientSml.c index 68c47c2d13..7d623072d6 100644 --- a/source/client/src/clientSml.c +++ b/source/client/src/clientSml.c @@ -24,7 +24,6 @@ #define EQUAL '=' #define QUOTE '"' #define SLASH '\\' -#define tsMaxSQLStringLen (1024*1024) #define JUMP_SPACE(sql) while (*sql != '\0'){if(*sql == SPACE) sql++;else break;} // comma , @@ -63,12 +62,11 @@ for (int i = 1; i < keyLen; ++i) { \ #define TS "_ts" #define TS_LEN 3 -#define VALUE "value" -#define VALUE_LEN 5 +#define VALUE "_value" +#define VALUE_LEN 6 #define BINARY_ADD_LEN 2 // "binary" 2 means " " #define NCHAR_ADD_LEN 3 // L"nchar" 3 means L" " -#define CHAR_SAVE_LENGTH 8 //================================================================================================= typedef TSDB_SML_PROTOCOL_TYPE SMLProtocolType; @@ -253,12 +251,20 @@ static int32_t smlGenerateSchemaAction(SSchema* colField, SHashObj* colHash, SSm return 0; } +static 
int32_t smlFindNearestPowerOf2(int32_t length){ + int32_t result = 1; + while(result <= length){ + result *= 2; + } + return result; +} + static int32_t smlBuildColumnDescription(SSmlKv* field, char* buf, int32_t bufSize, int32_t* outBytes) { uint8_t type = field->type; char tname[TSDB_TABLE_NAME_LEN] = {0}; memcpy(tname, field->key, field->keyLen); if (type == TSDB_DATA_TYPE_BINARY || type == TSDB_DATA_TYPE_NCHAR) { - int32_t bytes = field->length > CHAR_SAVE_LENGTH ? (2*field->length) : CHAR_SAVE_LENGTH; + int32_t bytes = smlFindNearestPowerOf2(field->length); int out = snprintf(buf, bufSize, "`%s` %s(%d)", tname, tDataTypes[field->type].name, bytes); *outBytes = out; @@ -273,8 +279,8 @@ static int32_t smlBuildColumnDescription(SSmlKv* field, char* buf, int32_t bufSi static int32_t smlApplySchemaAction(SSmlHandle* info, SSchemaAction* action) { int32_t code = 0; int32_t outBytes = 0; - char *result = (char *)taosMemoryCalloc(1, tsMaxSQLStringLen+1); - int32_t capacity = tsMaxSQLStringLen + 1; + char *result = (char *)taosMemoryCalloc(1, TSDB_MAX_ALLOWED_SQL_LEN); + int32_t capacity = TSDB_MAX_ALLOWED_SQL_LEN; uDebug("SML:0x%"PRIx64" apply schema action. action: %d", info->id, action->action); switch (action->action) { @@ -398,7 +404,7 @@ static int32_t smlApplySchemaAction(SSmlHandle* info, SSchemaAction* action) { } if(taosArrayGetSize(cols) == 0){ outBytes = snprintf(pos, freeBytes,"`%s` %s(%d)", - tsSmlTagName, tDataTypes[TSDB_DATA_TYPE_NCHAR].name, CHAR_SAVE_LENGTH); + tsSmlTagName, tDataTypes[TSDB_DATA_TYPE_NCHAR].name, 1); pos += outBytes; freeBytes -= outBytes; *pos = ','; ++pos; --freeBytes; } @@ -508,6 +514,11 @@ static int32_t smlModifyDBSchemas(SSmlHandle* info) { if (code != TSDB_CODE_SUCCESS) { return code; } + + code = catalogRefreshTableMeta(info->pCatalog, info->taos->pAppInfo->pTransporter, &ep, &pName, -1); + if (code != TSDB_CODE_SUCCESS) { + return code; + } } else { uError("SML:0x%"PRIx64" load table meta error: %s", info->id, tstrerror(code)); return code; diff --git a/source/dnode/mnode/impl/src/mndProfile.c b/source/dnode/mnode/impl/src/mndProfile.c index b9ac82d890..c9c52af0fe 100644 --- a/source/dnode/mnode/impl/src/mndProfile.c +++ b/source/dnode/mnode/impl/src/mndProfile.c @@ -379,7 +379,7 @@ static int32_t mndProcessQueryHeartBeat(SMnode *pMnode, SRpcMsg *pMsg, SClientHb } rspBasic->connId = pConn->id; - rspBasic->totalDnodes = 1; // TODO + rspBasic->totalDnodes = mndGetDnodeSize(pMnode); rspBasic->onlineDnodes = 1; // TODO mndGetMnodeEpSet(pMnode, &rspBasic->epSet); mndReleaseConn(pMnode, pConn); diff --git a/source/libs/executor/inc/executorimpl.h b/source/libs/executor/inc/executorimpl.h index 8ac320b9aa..2aad17d515 100644 --- a/source/libs/executor/inc/executorimpl.h +++ b/source/libs/executor/inc/executorimpl.h @@ -361,6 +361,18 @@ typedef struct SCatchSupporter { int64_t* pKeyBuf; } SCatchSupporter; +typedef struct SStreamAggSupporter { + SArray* pResultRows; // SResultWindowInfo + int32_t keySize; + char* pKeyBuf; // window key buffer + SDiskbasedBuf* pResultBuf; // query result buffer based on blocked-wised disk file + int32_t resultRowSize; // the result buffer size for each result row, with the meta data size for each row +} SStreamAggSupporter; + +typedef struct SessionWindowSupporter { + SStreamAggSupporter* pStreamAggSup; + int64_t gap; +} SessionWindowSupporter; typedef struct SStreamBlockScanInfo { SArray* pBlockLists; // multiple SSDatablock. 
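The schemaless change above stops pinning binary/nchar columns at a fixed 8-byte width and instead grows them along powers of two: `smlFindNearestPowerOf2` returns the smallest power of two strictly greater than the incoming value length, so a column only needs an ALTER when the data outgrows its current bucket. A minimal standalone sketch of that sizing rule (illustrative names, not the client API):

```c
#include <stdint.h>
#include <stdio.h>

/* Toy analogue of the smlFindNearestPowerOf2 helper in the hunk above:
 * smallest power of two strictly greater than length. */
static int32_t nextPowerOf2(int32_t length) {
  int32_t result = 1;
  while (result <= length) {
    result *= 2;
  }
  return result;
}

int main(void) {
  /* A 5-byte value gets a width-8 column; growing to 8 bytes forces a
   * resize to 16; 21 bytes maps to 32, and so on. */
  int32_t samples[] = {1, 5, 8, 21};
  for (int i = 0; i < 4; ++i) {
    printf("len %2d -> column width %2d\n", (int)samples[i], (int)nextPowerOf2(samples[i]));
  }
  return 0;
}
```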
SSDataBlock* pRes; // result SSDataBlock @@ -385,6 +397,7 @@ typedef struct SStreamBlockScanInfo { SInterval interval; // if the upstream is an interval operator, the interval info is also kept here. SCatchSupporter childAggSup; SArray* childIds; + SessionWindowSupporter sessionSup; } SStreamBlockScanInfo; typedef struct SSysTableScanInfo { @@ -550,6 +563,27 @@ typedef struct SSessionAggOperatorInfo { STimeWindowAggSupp twAggSup; } SSessionAggOperatorInfo; +typedef struct SResultWindowInfo { + SResultRowPosition pos; + STimeWindow win; + bool isOutput; +} SResultWindowInfo; + +typedef struct SStreamSessionAggOperatorInfo { + SOptrBasicInfo binfo; + SStreamAggSupporter streamAggSup; + SGroupResInfo groupResInfo; + int64_t gap; // session window gap + int32_t primaryTsIndex; // primary timestamp slot id + int32_t order; // current SSDataBlock scan order + STimeWindowAggSupp twAggSup; + SSDataBlock* pWinBlock; // window result + SqlFunctionCtx* pDummyCtx; // for combine + SSDataBlock* pDelRes; + SHashObj* pStDeleted; + void* pDelIterator; +} SStreamSessionAggOperatorInfo; + typedef struct STimeSliceOperatorInfo { SOptrBasicInfo binfo; SInterval interval; @@ -727,6 +761,9 @@ SOperatorInfo* createTimeSliceOperatorInfo(SOperatorInfo* downstream, SExprInfo* SOperatorInfo* createMergeJoinOperatorInfo(SOperatorInfo** pDownstream, int32_t numOfDownstream, SExprInfo* pExprInfo, int32_t numOfCols, SSDataBlock* pResBlock, SNode* pOnCondition, SExecTaskInfo* pTaskInfo); SOperatorInfo* createTagScanOperatorInfo(SReadHandle* pReadHandle, SExprInfo* pExpr, int32_t numOfOutput, SSDataBlock* pResBlock, SArray* pColMatchInfo, STableGroupInfo* pTableGroupInfo, SExecTaskInfo* pTaskInfo); +SOperatorInfo* createStreamSessionAggOperatorInfo(SOperatorInfo* downstream, + SExprInfo* pExprInfo, int32_t numOfCols, SSDataBlock* pResBlock, int64_t gap, + int32_t tsSlotId, STimeWindowAggSupp* pTwAggSupp, SExecTaskInfo* pTaskInfo); #if 0 SOperatorInfo* createTableSeqScanOperatorInfo(void* pTsdbReadHandle, STaskRuntimeEnv* pRuntimeEnv); #endif @@ -761,13 +798,19 @@ void aggEncodeResultRow(SOperatorInfo* pOperator, SAggSupporter* pSup, SOptrBasi int32_t* length); STimeWindow getActiveTimeWindow(SDiskbasedBuf* pBuf, SResultRowInfo* pResultRowInfo, int64_t ts, SInterval* pInterval, int32_t precision, STimeWindow* win); -int32_t getNumOfRowsInTimeWindow(SDataBlockInfo* pDataBlockInfo, TSKEY* pPrimaryColumn, int32_t startPos, - TSKEY ekey, __block_search_fn_t searchFn, STableQueryInfo* item, - int32_t order); +int32_t getNumOfRowsInTimeWindow(SDataBlockInfo* pDataBlockInfo, TSKEY* pPrimaryColumn, + int32_t startPos, TSKEY ekey, __block_search_fn_t searchFn, STableQueryInfo* item, + int32_t order); int32_t binarySearchForKey(char* pValue, int num, TSKEY key, int order); -int32_t initCatchSupporter(SCatchSupporter* pCatchSup, size_t rowSize, size_t keyBufSize, - const char* pKey, const char* pDir); - +int32_t initCatchSupporter(SCatchSupporter* pCatchSup, size_t rowSize, const char* pKey, + const char* pDir); +int32_t initStreamAggSupporter(SStreamAggSupporter* pSup, const char* pKey); +SResultRow* getNewResultRow_rv(SDiskbasedBuf* pResultBuf, int64_t tableGroupId, int32_t interBufSize); +SResultWindowInfo* getSessionTimeWindow(SArray* pWinInfos, TSKEY ts, int64_t gap, + int32_t* pIndex); +int32_t updateSessionWindowInfo(SResultWindowInfo* pWinInfo, TSKEY* pTs, int32_t rows, + int32_t start, int64_t gap, SHashObj* pStDeleted); +bool functionNeedToExecute(SqlFunctionCtx* pCtx); #ifdef __cplusplus } #endif diff --git 
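The executorimpl.h declarations above keep a stream's open session windows in an `SArray` of `SResultWindowInfo` ordered by time, and `SessionWindowSupporter` hands that state to the scanner so it can re-derive window ranges during reprocessing. A self-contained sketch of the core grouping rule, assuming timestamps arrive in ascending order (types and names here are illustrative, not the engine's):

```c
#include <stdint.h>
#include <stdio.h>

typedef int64_t TSKEY;

/* Toy analogue of SResultWindowInfo: only the window bounds. */
typedef struct {
  TSKEY skey;
  TSKEY ekey;
} SessionWin;

/* A row joins the open session while ts - ekey <= gap; otherwise a new
 * session starts. Input timestamps are assumed sorted ascending. */
static int assignSessions(const TSKEY* ts, int rows, int64_t gap,
                          SessionWin* out, int cap) {
  int n = 0;
  for (int i = 0; i < rows; ++i) {
    if (n > 0 && ts[i] - out[n - 1].ekey <= gap) {
      out[n - 1].ekey = ts[i];           /* extend the open session */
    } else if (n < cap) {
      out[n].skey = out[n].ekey = ts[i]; /* open a new session */
      ++n;
    }
  }
  return n;
}

int main(void) {
  /* With gap = 12, the hole between 30 and 70 splits two sessions. */
  TSKEY ts[] = {10, 20, 30, 70, 80, 90};
  SessionWin win[8];
  int n = assignSessions(ts, 6, 12, win, 8);
  for (int i = 0; i < n; ++i) {
    printf("session %d: [%lld, %lld]\n", i,
           (long long)win[i].skey, (long long)win[i].ekey);
  }
  return 0;
}
```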
a/source/libs/executor/src/executorimpl.c b/source/libs/executor/src/executorimpl.c index e16b60e58b..581cf5cacd 100644 --- a/source/libs/executor/src/executorimpl.c +++ b/source/libs/executor/src/executorimpl.c @@ -98,7 +98,6 @@ static int32_t getExprFunctionId(SExprInfo* pExprInfo) { } static void doSetTagValueToResultBuf(char* output, const char* val, int16_t type, int16_t bytes); -static bool functionNeedToExecute(SqlFunctionCtx* pCtx); static void setBlockStatisInfo(SqlFunctionCtx* pCtx, SExprInfo* pExpr, SSDataBlock* pSDataBlock); @@ -937,7 +936,7 @@ int32_t setGroupResultOutputBuf(SOptrBasicInfo* binfo, int32_t numOfCols, char* return TSDB_CODE_SUCCESS; } -static bool functionNeedToExecute(SqlFunctionCtx* pCtx) { +bool functionNeedToExecute(SqlFunctionCtx* pCtx) { struct SResultRowEntryInfo* pResInfo = GET_RES_INFO(pCtx); // in case of timestamp column, always generated results. @@ -4660,6 +4659,19 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo pOptr = createSessionAggOperatorInfo(ops[0], pExprInfo, num, pResBlock, pSessionNode->gap, tsSlotId, &as, pTaskInfo); + } else if (QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW == type) { + SSessionWinodwPhysiNode* pSessionNode = (SSessionWinodwPhysiNode*)pPhyNode; + + STimeWindowAggSupp as = {.waterMark = pSessionNode->window.watermark, + .calTrigger = pSessionNode->window.triggerType}; + + SExprInfo* pExprInfo = createExprInfo(pSessionNode->window.pFuncs, NULL, &num); + SSDataBlock* pResBlock = createResDataBlock(pPhyNode->pOutputDataBlockDesc); + int32_t tsSlotId = ((SColumnNode*)pSessionNode->window.pTspk)->slotId; + + pOptr = + createStreamSessionAggOperatorInfo(ops[0], pExprInfo, num, pResBlock, pSessionNode->gap, tsSlotId, &as, pTaskInfo); + } else if (QUERY_NODE_PHYSICAL_PLAN_PARTITION == type) { SPartitionPhysiNode* pPartNode = (SPartitionPhysiNode*)pPhyNode; SArray* pColList = extractPartitionColInfo(pPartNode->pPartitionKeys); @@ -5151,15 +5163,37 @@ int32_t getOperatorExplainExecInfo(SOperatorInfo* operatorInfo, SExplainExecInfo return TSDB_CODE_SUCCESS; } -int32_t initCatchSupporter(SCatchSupporter* pCatchSup, size_t rowSize, size_t keyBufSize, const char* pKey, - const char* pDir) { +int32_t initCatchSupporter(SCatchSupporter* pCatchSup, size_t rowSize, const char* pKey, + const char* pDir) { pCatchSup->keySize = sizeof(int64_t) + sizeof(int64_t) + sizeof(TSKEY); pCatchSup->pKeyBuf = taosMemoryCalloc(1, pCatchSup->keySize); - int32_t pageSize = rowSize * 32; - int32_t bufSize = pageSize * 4096; - createDiskbasedBuf(&pCatchSup->pDataBuf, pageSize, bufSize, pKey, pDir); _hash_fn_t hashFn = taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY); pCatchSup->pWindowHashTable = taosHashInit(10000, hashFn, true, HASH_NO_LOCK); - ; - return TSDB_CODE_SUCCESS; + if (pCatchSup->pKeyBuf == NULL || pCatchSup->pWindowHashTable == NULL) { + return TSDB_CODE_OUT_OF_MEMORY; + } + + int32_t pageSize = rowSize * 32; + int32_t bufSize = pageSize * 4096; + return createDiskbasedBuf(&pCatchSup->pDataBuf, pageSize, bufSize, pKey, pDir); +} + +int32_t initStreamAggSupporter(SStreamAggSupporter* pSup, const char* pKey) { + pSup->keySize = sizeof(int64_t) + sizeof(TSKEY); + pSup->pKeyBuf = taosMemoryCalloc(1, pSup->keySize); + pSup->pResultRows = taosArrayInit(1024, sizeof(SResultWindowInfo)); + if (pSup->pKeyBuf == NULL || pSup->pResultRows == NULL) { + return TSDB_CODE_OUT_OF_MEMORY; + } + + int32_t pageSize = 4096; + while (pageSize < pSup->resultRowSize * 4) { + pageSize <<= 1u; + } + // at least four pages need to 
be in buffer + int32_t bufSize = 4096 * 256; + if (bufSize <= pageSize) { + bufSize = pageSize * 4; + } + return createDiskbasedBuf(&pSup->pResultBuf, pageSize, bufSize, pKey, "/tmp/"); } diff --git a/source/libs/executor/src/scanoperator.c b/source/libs/executor/src/scanoperator.c index f77b80c533..6200e1dcb0 100644 --- a/source/libs/executor/src/scanoperator.c +++ b/source/libs/executor/src/scanoperator.c @@ -645,6 +645,10 @@ static void doClearBufferedBlocks(SStreamBlockScanInfo* pInfo) { taosArrayClear(pInfo->pBlockLists); } +static bool isSessionWindow(SStreamBlockScanInfo* pInfo) { + return pInfo->sessionSup.pStreamAggSup != NULL; +} + static bool prepareDataScan(SStreamBlockScanInfo* pInfo) { SSDataBlock* pSDB = pInfo->pUpdateRes; if (pInfo->updateResIndex < pSDB->info.rows) { @@ -652,13 +656,25 @@ static bool prepareDataScan(SStreamBlockScanInfo* pInfo) { TSKEY *tsCols = (TSKEY*)pColDataInfo->pData; SResultRowInfo dumyInfo; dumyInfo.cur.pageId = -1; - STimeWindow win = getActiveTimeWindow(NULL, &dumyInfo, tsCols[pInfo->updateResIndex], &pInfo->interval, - pInfo->interval.precision, NULL); + STimeWindow win; + if (isSessionWindow(pInfo)) { + SStreamAggSupporter* pAggSup = pInfo->sessionSup.pStreamAggSup; + int64_t gap = pInfo->sessionSup.gap; + int32_t winIndex = 0; + SResultWindowInfo* pCurWin = getSessionTimeWindow(pAggSup->pResultRows, + tsCols[pInfo->updateResIndex], gap, &winIndex); + win = pCurWin->win; + pInfo->updateResIndex += updateSessionWindowInfo(pCurWin, tsCols, pSDB->info.rows, + pInfo->updateResIndex, gap, NULL); + } else { + win = getActiveTimeWindow(NULL, &dumyInfo, tsCols[pInfo->updateResIndex], + &pInfo->interval, pInfo->interval.precision, NULL); + pInfo->updateResIndex += getNumOfRowsInTimeWindow(&pSDB->info, tsCols, pInfo->updateResIndex, + win.ekey, binarySearchForKey, NULL, TSDB_ORDER_ASC); + } STableScanInfo* pTableScanInfo = pInfo->pOperatorDumy->info; pTableScanInfo->cond.twindow = win; tsdbResetReadHandle(pTableScanInfo->dataReader, &pTableScanInfo->cond); - pInfo->updateResIndex += getNumOfRowsInTimeWindow(&pSDB->info, tsCols, pInfo->updateResIndex, - win.ekey, binarySearchForKey, NULL, TSDB_ORDER_ASC); pTableScanInfo->scanTimes = 0; return true; } else { @@ -848,6 +864,7 @@ static SSDataBlock* doStreamBlockScan(SOperatorInfo* pOperator) { } else if (pInfo->scanMode == STREAM_SCAN_FROM_UPDATERES) { blockDataCleanup(pInfo->pRes); pInfo->scanMode = STREAM_SCAN_FROM_DATAREADER; + prepareDataScan(pInfo); return pInfo->pUpdateRes; } else if (pInfo->scanMode == STREAM_SCAN_FROM_DATAREADER) { SSDataBlock* pSDB = doDataScan(pInfo); @@ -924,13 +941,12 @@ static SSDataBlock* doStreamBlockScan(SOperatorInfo* pOperator) { if (rows == 0) { pOperator->status = OP_EXEC_DONE; - } else if (pInfo->interval.interval > 0) { + } else if (pInfo->pUpdateInfo) { SSDataBlock* upRes = getUpdateDataBlock(pInfo, true); //TODO(liuyao) get invertible from plan if (upRes) { pInfo->pUpdateRes = upRes; if (upRes->info.type == STREAM_REPROCESS) { pInfo->updateResIndex = 0; - prepareDataScan(pInfo); pInfo->scanMode = STREAM_SCAN_FROM_UPDATERES; } else if (upRes->info.type == STREAM_INVERT) { pInfo->scanMode = STREAM_SCAN_FROM_RES; @@ -1001,10 +1017,9 @@ SOperatorInfo* createStreamScanOperatorInfo(void* streamReadHandle, void* pDataR pInfo->scanMode = STREAM_SCAN_FROM_READERHANDLE; pInfo->pOperatorDumy = pOperatorDumy; pInfo->interval = pSTInfo->interval; + pInfo->sessionSup = (SessionWindowSupporter){.pStreamAggSup = NULL, .gap = -1}; - size_t childKeyBufSize = sizeof(int64_t) + 
sizeof(int64_t) + sizeof(TSKEY); - initCatchSupporter(&pInfo->childAggSup, 1024, childKeyBufSize, - "StreamFinalInterval", TD_TMP_DIR_PATH); // TODO(liuyao) get row size from phy plan + initCatchSupporter(&pInfo->childAggSup, 1024, "StreamFinalInterval", "/tmp/"); // TODO(liuyao) get row size from phy plan pOperator->name = "StreamBlockScanOperator"; pOperator->operatorType = QUERY_NODE_PHYSICAL_PLAN_STREAM_SCAN; diff --git a/source/libs/executor/src/timewindowoperator.c b/source/libs/executor/src/timewindowoperator.c index deca2f3804..9346dbf54a 100644 --- a/source/libs/executor/src/timewindowoperator.c +++ b/source/libs/executor/src/timewindowoperator.c @@ -9,6 +9,7 @@ typedef enum SResultTsInterpType { } SResultTsInterpType; static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator); +static SSDataBlock* doStreamSessionWindowAgg(SOperatorInfo* pOperator); /* * There are two cases to handle: @@ -1039,13 +1040,9 @@ static void setInverFunction(SqlFunctionCtx* pCtx, int32_t num, EStreamType type } } -void doClearWindow(SAggSupporter* pSup, SOptrBasicInfo* pBinfo, char* pData, - int16_t bytes, uint64_t groupId, int32_t numOfOutput) { - SET_RES_WINDOW_KEY(pSup->keyBuf, pData, bytes, groupId); - SResultRowPosition* p1 = - (SResultRowPosition*)taosHashGet(pSup->pResultRowHashTable, pSup->keyBuf, - GET_RES_WINDOW_KEY_LEN(bytes)); - SResultRow* pResult = getResultRowByPos(pSup->pResultBuf, p1); +void doClearWindowImpl(SResultRowPosition* p1, SDiskbasedBuf* pResultBuf, + SOptrBasicInfo* pBinfo, int32_t numOfOutput) { + SResultRow* pResult = getResultRowByPos(pResultBuf, p1); SqlFunctionCtx* pCtx = pBinfo->pCtx; for (int32_t i = 0; i < numOfOutput; ++i) { pCtx[i].resultInfo = getResultCell(pResult, i, pBinfo->rowCellInfoOffset); @@ -1060,6 +1057,15 @@ void doClearWindow(SAggSupporter* pSup, SOptrBasicInfo* pBinfo, char* pData, } } +void doClearWindow(SAggSupporter* pSup, SOptrBasicInfo* pBinfo, char* pData, + int16_t bytes, uint64_t groupId, int32_t numOfOutput) { + SET_RES_WINDOW_KEY(pSup->keyBuf, pData, bytes, groupId); + SResultRowPosition* p1 = + (SResultRowPosition*)taosHashGet(pSup->pResultRowHashTable, pSup->keyBuf, + GET_RES_WINDOW_KEY_LEN(bytes)); + doClearWindowImpl(p1, pSup->pResultBuf, pBinfo, numOfOutput); +} + static void doClearWindows(SAggSupporter* pSup, SOptrBasicInfo* pBinfo, SInterval* pIntrerval, int32_t tsIndex, int32_t numOfOutput, SSDataBlock* pBlock) { SColumnInfoData* pColDataInfo = taosArrayGet(pBlock->pDataBlock, tsIndex); @@ -1112,8 +1118,8 @@ static SSDataBlock* doStreamIntervalAgg(SOperatorInfo* pOperator) { } if (pBlock->info.type == STREAM_REPROCESS) { - doClearWindows(&pInfo->aggSup, &pInfo->binfo, &pInfo->interval, - pInfo->primaryTsIndex, pOperator->numOfExprs, pBlock); + doClearWindows(&pInfo->aggSup, &pInfo->binfo, &pInfo->interval, 0, + pOperator->numOfExprs, pBlock); qDebug("%s clear existed time window results for updates checked", GET_TASKID(pTaskInfo)); continue; } @@ -1644,9 +1650,10 @@ _error: return NULL; } -static SArray* doHashInterval(SOperatorInfo* pOperatorInfo, SResultRowInfo* pResultRowInfo, SSDataBlock* pSDataBlock, +static SArray* doHashInterval(SOperatorInfo* pOperatorInfo, SSDataBlock* pSDataBlock, int32_t tableGroupId) { SStreamFinalIntervalOperatorInfo* pInfo = (SStreamFinalIntervalOperatorInfo*)pOperatorInfo->info; + SResultRowInfo* pResultRowInfo = &(pInfo->binfo.resultRowInfo); SExecTaskInfo* pTaskInfo = pOperatorInfo->pTaskInfo; int32_t numOfOutput = pOperatorInfo->numOfExprs; SArray* pUpdated = taosArrayInit(4, 
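For context on the buffer arithmetic in `initStreamAggSupporter` above: it picks the smallest power-of-two page that can hold at least four result rows, then sizes the whole disk-based buffer to a 1 MiB default but never fewer than four pages. A standalone sketch of that calculation (illustrative names; the real routine then hands these values to `createDiskbasedBuf`):

```c
#include <stdint.h>
#include <stdio.h>

/* Mirrors the sizing rule in initStreamAggSupporter: page size is a
 * power of two holding >= 4 result rows; total buffer >= 4 pages. */
static void chooseBufSizes(int32_t resultRowSize, int32_t* pageSize, int32_t* bufSize) {
  *pageSize = 4096;
  while (*pageSize < resultRowSize * 4) {
    *pageSize <<= 1;
  }
  *bufSize = 4096 * 256;        /* 1 MiB default */
  if (*bufSize <= *pageSize) {
    *bufSize = *pageSize * 4;   /* keep at least four pages buffered */
  }
}

int main(void) {
  int32_t page = 0, buf = 0;
  chooseBufSizes(300000, &page, &buf);  /* oversized rows force page growth */
  printf("pageSize=%d bufSize=%d pages=%d\n",
         (int)page, (int)buf, (int)(buf / page));  /* 2097152 8388608 4 */
  return 0;
}
```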
POINTER_BYTES); @@ -1659,7 +1666,10 @@ static SArray* doHashInterval(SOperatorInfo* pOperatorInfo, SResultRowInfo* pRes if (pSDataBlock->pDataBlock != NULL) { SColumnInfoData* pColDataInfo = taosArrayGet(pSDataBlock->pDataBlock, pInfo->primaryTsIndex); tsCols = (int64_t*)pColDataInfo->pData; + } else { + return pUpdated; } + int32_t startPos = ascScan ? 0 : (pSDataBlock->info.rows - 1); TSKEY ts = getStartTsKey(&pSDataBlock->info.window, tsCols, pSDataBlock->info.rows, ascScan); STimeWindow nextWin = getActiveTimeWindow(pInfo->aggSup.pResultBuf, pResultRowInfo, ts, @@ -1720,7 +1730,7 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) { pInfo->primaryTsIndex, pOperator->numOfExprs, pBlock); continue; } - pUpdated = doHashInterval(pOperator, &pInfo->binfo.resultRowInfo, pBlock, 0); + pUpdated = doHashInterval(pOperator, pBlock, 0); } finalizeUpdatedResult(pOperator->numOfExprs, pInfo->aggSup.pResultBuf, pUpdated, pInfo->binfo.rowCellInfoOffset); @@ -1730,3 +1740,534 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) { pOperator->status = OP_RES_TO_RETURN; return pInfo->binfo.pRes->info.rows == 0 ? NULL : pInfo->binfo.pRes; } + +void destroyStreamAggSupporter(SStreamAggSupporter* pSup) { + taosArrayDestroy(pSup->pResultRows); + taosMemoryFreeClear(pSup->pKeyBuf); + destroyDiskbasedBuf(pSup->pResultBuf); +} + +void destroyStreamSessionAggOperatorInfo(void* param, int32_t numOfOutput) { + SStreamSessionAggOperatorInfo* pInfo = (SStreamSessionAggOperatorInfo*)param; + doDestroyBasicInfo(&pInfo->binfo, numOfOutput); + destroyStreamAggSupporter(&pInfo->streamAggSup); + cleanupGroupResInfo(&pInfo->groupResInfo); +} + +int32_t initBiasicInfo(SOptrBasicInfo* pBasicInfo, SExprInfo* pExprInfo, + int32_t numOfCols, SSDataBlock* pResultBlock, SDiskbasedBuf* pResultBuf) { + pBasicInfo->pCtx = createSqlFunctionCtx(pExprInfo, numOfCols, &pBasicInfo->rowCellInfoOffset); + pBasicInfo->pRes = pResultBlock; + for (int32_t i = 0; i < numOfCols; ++i) { + pBasicInfo->pCtx[i].pBuf = pResultBuf; + } + return TSDB_CODE_SUCCESS; +} + +void initDummyFunction(SqlFunctionCtx* pDummy, SqlFunctionCtx* pCtx, int32_t nums) { + for (int i = 0; i < nums; i++) { + pDummy[i].functionId = pCtx[i].functionId; + } +} +void initDownStream(SOperatorInfo* downstream, SStreamSessionAggOperatorInfo* pInfo) { + ASSERT(downstream->operatorType == QUERY_NODE_PHYSICAL_PLAN_STREAM_SCAN); + SStreamBlockScanInfo* pScanInfo = downstream->info; + pScanInfo->sessionSup = + (SessionWindowSupporter){.pStreamAggSup = &pInfo->streamAggSup, .gap = pInfo->gap}; + pScanInfo->pUpdateInfo = updateInfoInit(60000, TSDB_TIME_PRECISION_MILLI, 60000 * 60 * 6); +} + +SOperatorInfo* createStreamSessionAggOperatorInfo(SOperatorInfo* downstream, + SExprInfo* pExprInfo, int32_t numOfCols, SSDataBlock* pResBlock, int64_t gap, + int32_t tsSlotId, STimeWindowAggSupp* pTwAggSupp, SExecTaskInfo* pTaskInfo) { + SStreamSessionAggOperatorInfo* pInfo = + taosMemoryCalloc(1, sizeof(SStreamSessionAggOperatorInfo)); + SOperatorInfo* pOperator = taosMemoryCalloc(1, sizeof(SOperatorInfo)); + if (pInfo == NULL || pOperator == NULL) { + goto _error; + } + + initResultSizeInfo(pOperator, 4096); + + int32_t code = initStreamAggSupporter(&pInfo->streamAggSup, "StreamSessionAggOperatorInfo"); + if (code != TSDB_CODE_SUCCESS) { + goto _error; + } + + code = initBiasicInfo(&pInfo->binfo, pExprInfo, numOfCols, pResBlock, + pInfo->streamAggSup.pResultBuf); + if (code != TSDB_CODE_SUCCESS) { + goto _error; + } + 
pInfo->streamAggSup.resultRowSize = getResultRowSize(pInfo->binfo.pCtx, numOfCols); + + pInfo->pDummyCtx = (SqlFunctionCtx*)taosMemoryCalloc(numOfCols, sizeof(SqlFunctionCtx)); + if (pInfo->pDummyCtx == NULL) { + goto _error; + } + initDummyFunction(pInfo->pDummyCtx, pInfo->binfo.pCtx, numOfCols); + + pInfo->twAggSup = *pTwAggSupp; + initResultRowInfo(&pInfo->binfo.resultRowInfo, 8); + initExecTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &pTaskInfo->window); + + pInfo->primaryTsIndex = tsSlotId; + pInfo->gap = gap; + pInfo->binfo.pRes = pResBlock; + pInfo->order = TSDB_ORDER_ASC; + _hash_fn_t hashFn = taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY); + pInfo->pStDeleted = taosHashInit(64, hashFn, true, HASH_NO_LOCK); + pInfo->pDelIterator = NULL; + pInfo->pDelRes = createOneDataBlock(pResBlock, false); + blockDataEnsureCapacity(pInfo->pDelRes, 64); + + pOperator->name = "StreamSessionWindowAggOperator"; + pOperator->operatorType = QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW; + pOperator->blocking = true; + pOperator->status = OP_NOT_OPENED; + pOperator->pExpr = pExprInfo; + pOperator->numOfExprs = numOfCols; + pOperator->info = pInfo; + pOperator->fpSet = createOperatorFpSet(operatorDummyOpenFn, doStreamSessionWindowAgg, + NULL, NULL, destroyStreamSessionAggOperatorInfo, aggEncodeResultRow, + aggDecodeResultRow, NULL); + pOperator->pTaskInfo = pTaskInfo; + initDownStream(downstream, pInfo); + code = appendDownstream(pOperator, &downstream, 1); + return pOperator; + +_error: + if (pInfo != NULL) { + destroyStreamSessionAggOperatorInfo(pInfo, numOfCols); + } + + taosMemoryFreeClear(pInfo); + taosMemoryFreeClear(pOperator); + pTaskInfo->code = code; + return NULL; +} + +typedef int64_t (*__get_value_fn_t)(void* data, int32_t index); + +int32_t binarySearch(void* keyList, int num, TSKEY key, int order, + __get_value_fn_t getValuefn) { + int firstPos = 0, lastPos = num - 1, midPos = -1; + int numOfRows = 0; + + if (num <= 0) return -1; + if (order == TSDB_ORDER_DESC) { + // find the first position which is smaller than the key + while (1) { + if (key >= getValuefn(keyList, lastPos)) return lastPos; + if (key == getValuefn(keyList, firstPos)) return firstPos; + if (key < getValuefn(keyList, firstPos)) return firstPos - 1; + + numOfRows = lastPos - firstPos + 1; + midPos = (numOfRows >> 1) + firstPos; + + if (key < getValuefn(keyList, midPos)) { + lastPos = midPos - 1; + } else if (key > getValuefn(keyList, midPos)) { + firstPos = midPos + 1; + } else { + break; + } + } + + } else { + // find the first position which is bigger than the key + while (1) { + if (key <= getValuefn(keyList, firstPos)) return firstPos; + if (key == getValuefn(keyList, lastPos)) return lastPos; + + if (key > getValuefn(keyList, lastPos)) { + lastPos = lastPos + 1; + if (lastPos >= num) + return -1; + else + return lastPos; + } + + numOfRows = lastPos - firstPos + 1; + midPos = (numOfRows >> 1) + firstPos; + + if (key < getValuefn(keyList, midPos)) { + lastPos = midPos - 1; + } else if (key > getValuefn(keyList, midPos)) { + firstPos = midPos + 1; + } else { + break; + } + } + } + + return midPos; +} + +int64_t getSessionWindowEndkey(void* data, int32_t index) { + SArray* pWinInfos = (SArray*) data; + SResultWindowInfo* pWin = taosArrayGet(pWinInfos, index); + return pWin->win.ekey; +} +static bool isInWindow(SResultWindowInfo* pWin, TSKEY ts, int64_t gap) { + int64_t sGap = ts - pWin->win.skey; + int64_t eGap = pWin->win.ekey - ts; + if ( (sGap < 0 && sGap >= -gap) || (eGap < 0 && eGap >= -gap) || (sGap >= 0 && 
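The `binarySearch` added above is generalized through a value-accessor callback so the same routine can probe an `SArray` of windows by end key. In the `TSDB_ORDER_DESC` branch it effectively returns the last position whose value is less than or equal to the key, or -1 when the key precedes every element. An equivalent, simplified formulation of that contract (not the exact control flow above):

```c
#include <stdint.h>
#include <stdio.h>

typedef int64_t TSKEY;
typedef int64_t (*get_value_fn)(const void* data, int index);

/* Largest index whose value is <= key, or -1 if none. The accessor
 * callback decouples the search from the container layout. */
static int searchLastLE(const void* data, int num, TSKEY key, get_value_fn get) {
  int lo = 0, hi = num - 1, ans = -1;
  while (lo <= hi) {
    int mid = lo + ((hi - lo) >> 1);
    if (get(data, mid) <= key) {
      ans = mid;      /* candidate; keep looking to the right */
      lo = mid + 1;
    } else {
      hi = mid - 1;
    }
  }
  return ans;
}

static int64_t getAt(const void* data, int index) {
  return ((const int64_t*)data)[index];
}

int main(void) {
  int64_t ekeys[] = {10, 20, 30};  /* window end keys, ascending */
  printf("%d\n", searchLastLE(ekeys, 3, 25, getAt)); /* 1  */
  printf("%d\n", searchLastLE(ekeys, 3, 5, getAt));  /* -1 */
  return 0;
}
```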
eGap >= 0) ) { + return true; + } + return false; +} + +static SResultWindowInfo* insertNewSessionWindow(SArray* pWinInfos, TSKEY ts, + int32_t index) { + SResultWindowInfo win = + {.pos.offset = -1, .pos.pageId = -1, .win.skey = ts, .win.ekey = ts, .isOutput = false}; + return taosArrayInsert(pWinInfos, index, &win); +} + +static SResultWindowInfo* addNewSessionWindow(SArray* pWinInfos, TSKEY ts) { + SResultWindowInfo win = + {.pos.offset = -1, .pos.pageId = -1, .win.skey = ts, .win.ekey = ts, .isOutput = false}; + return taosArrayPush(pWinInfos, &win); +} + +SResultWindowInfo* getSessionTimeWindow(SArray* pWinInfos, TSKEY ts, int64_t gap, + int32_t* pIndex) { + int32_t size = taosArrayGetSize(pWinInfos); + if (size == 0) { + return addNewSessionWindow(pWinInfos, ts); + } + // find the first position which is smaller than the key + int32_t index = binarySearch(pWinInfos, size, ts, TSDB_ORDER_DESC, + getSessionWindowEndkey); + SResultWindowInfo* pWin = NULL; + if (index >= 0) { + pWin = taosArrayGet(pWinInfos, index); + if (isInWindow(pWin, ts, gap)) { + *pIndex = index; + return pWin; + } + } + + if (index + 1 < size) { + pWin = taosArrayGet(pWinInfos, index + 1); + if (isInWindow(pWin, ts, gap)) { + *pIndex = index + 1; + return pWin; + } + } + + if (index == size - 1) { + *pIndex = taosArrayGetSize(pWinInfos); + return addNewSessionWindow(pWinInfos, ts); + } + *pIndex = index; + return insertNewSessionWindow(pWinInfos, ts, index); +} + +int32_t updateSessionWindowInfo(SResultWindowInfo* pWinInfo, TSKEY* pTs, int32_t rows, + int32_t start, int64_t gap, SHashObj* pStDeleted) { + for (int32_t i = start; i < rows; ++i) { + if (!isInWindow(pWinInfo, pTs[i], gap)) { + return i - start; + } + if (pWinInfo->win.skey > pTs[i]) { + if (pStDeleted && pWinInfo->isOutput) { + taosHashPut(pStDeleted, &pWinInfo->pos, sizeof(SResultRowPosition), &pWinInfo->win.skey, sizeof(TSKEY)); + pWinInfo->isOutput = false; + } + pWinInfo->win.skey = pTs[i]; + } + pWinInfo->win.ekey = TMAX(pWinInfo->win.ekey, pTs[i]); + } + return rows - start; +} + +static int32_t setWindowOutputBuf(SResultWindowInfo* pWinInfo, SResultRow** pResult, + SqlFunctionCtx* pCtx, int32_t groupId, int32_t numOfOutput, + int32_t* rowCellInfoOffset, SStreamAggSupporter* pAggSup, SExecTaskInfo* pTaskInfo) { + assert(pWinInfo->win.skey <= pWinInfo->win.ekey); + // too many time window in query + int32_t size = taosArrayGetSize(pAggSup->pResultRows); + if (size > MAX_INTERVAL_TIME_WINDOW) { + longjmp(pTaskInfo->env, TSDB_CODE_QRY_TOO_MANY_TIMEWINDOW); + } + + if (pWinInfo->pos.pageId == -1) { + *pResult = getNewResultRow_rv(pAggSup->pResultBuf, groupId, pAggSup->resultRowSize); + if (*pResult == NULL) { + return TSDB_CODE_OUT_OF_MEMORY; + } + initResultRow(*pResult); + + // add a new result set for a new group + pWinInfo->pos.pageId = (*pResult)->pageId; + pWinInfo->pos.offset = (*pResult)->offset; + } else { + *pResult = getResultRowByPos(pAggSup->pResultBuf, &pWinInfo->pos); + if (!(*pResult)) { + qError("getResultRowByPos return NULL, TID:%s", GET_TASKID(pTaskInfo)); + return TSDB_CODE_FAILED; + } + } + + // set time window for current result + (*pResult)->win = pWinInfo->win; + setResultRowInitCtx(*pResult, pCtx, numOfOutput, rowCellInfoOffset); + return TSDB_CODE_SUCCESS; +} + +static int32_t doOneWindowAgg(SStreamSessionAggOperatorInfo* pInfo, + SSDataBlock* pSDataBlock, SResultWindowInfo* pCurWin, SResultRow** pResult, + int32_t startIndex, int32_t winRows, int32_t numOutput, SExecTaskInfo* pTaskInfo ) { + SColumnInfoData* pColDataInfo 
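`isInWindow` above folds three cases into one membership test: a timestamp belongs to an existing session window when it lies within `gap` of either edge, that is, inside `[skey - gap, ekey + gap]`. A compact equivalent of the predicate, ignoring integer overflow at the extremes (illustrative names):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef int64_t TSKEY;
typedef struct { TSKEY skey, ekey; } Win;

/* Equivalent form of the three-branch isInWindow test above. */
static bool inWindow(const Win* w, TSKEY ts, int64_t gap) {
  return ts >= w->skey - gap && ts <= w->ekey + gap;
}

int main(void) {
  Win w = {.skey = 100, .ekey = 120};
  int64_t gap = 12;
  printf("%d %d %d\n",
         (int)inWindow(&w, 90, gap),    /* 1: within gap before skey      */
         (int)inWindow(&w, 132, gap),   /* 1: within gap after ekey       */
         (int)inWindow(&w, 133, gap));  /* 0: too far, opens a new window */
  return 0;
}
```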
= + taosArrayGet(pSDataBlock->pDataBlock, pInfo->primaryTsIndex); + TSKEY* tsCols = (int64_t*)pColDataInfo->pData; + int32_t code = setWindowOutputBuf(pCurWin, pResult, pInfo->binfo.pCtx, pSDataBlock->info.groupId, + numOutput, pInfo->binfo.rowCellInfoOffset, &pInfo->streamAggSup, pTaskInfo); + if (code != TSDB_CODE_SUCCESS || (*pResult) == NULL) { + return TSDB_CODE_QRY_OUT_OF_MEMORY; + } + updateTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &pCurWin->win, true); + doApplyFunctions(pTaskInfo, pInfo->binfo.pCtx, &pCurWin->win, + &pInfo->twAggSup.timeWindowData, startIndex, winRows, tsCols, pSDataBlock->info.rows, + numOutput, TSDB_ORDER_ASC); + return TSDB_CODE_SUCCESS; +} + +int32_t copyWinInfoToDataBlock(SSDataBlock* pBlock, SStreamAggSupporter* pAggSup, + int32_t start, int32_t num, int32_t numOfExprs, SOptrBasicInfo* pBinfo) { + for (int32_t i = start; i < num; i += 1) { + SResultWindowInfo* pWinInfo = taosArrayGet(pAggSup->pResultRows, i); // index with the loop variable, not the fixed start offset + SFilePage* bufPage = getBufPage(pAggSup->pResultBuf, pWinInfo->pos.pageId); + SResultRow* pRow = (SResultRow*)((char*)bufPage + pWinInfo->pos.offset); + for (int32_t j = 0; j < numOfExprs; ++j) { + SResultRowEntryInfo* pResultInfo = getResultCell(pRow, j, pBinfo->rowCellInfoOffset); + SColumnInfoData* pColInfoData = taosArrayGet(pBlock->pDataBlock, j); + char* in = GET_ROWCELL_INTERBUF(pBinfo->pCtx[j].resultInfo); + colDataAppend(pColInfoData, pBlock->info.rows, in, pResultInfo->isNullRes); + } + pBlock->info.rows += pRow->numOfRows; + releaseBufPage(pAggSup->pResultBuf, bufPage); + } + blockDataUpdateTsWindow(pBlock, -1); + return TSDB_CODE_SUCCESS; +} + +int32_t getNumCompactWindow(SArray* pWinInfos, int32_t startIndex, int64_t gap) { + SResultWindowInfo* pCurWin = taosArrayGet(pWinInfos, startIndex); + int32_t size = taosArrayGetSize(pWinInfos); + // only examine the windows after startIndex + for (int32_t i = startIndex + 1; i < size; i++) { + SResultWindowInfo* pWinInfo = taosArrayGet(pWinInfos, i); + if (!isInWindow(pCurWin, pWinInfo->win.skey, gap)) { + return i - startIndex - 1; + } + } + + return size - startIndex - 1; +} + +void compactFunctions(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx, + int32_t numOfOutput, SExecTaskInfo* pTaskInfo) { + for (int32_t k = 0; k < numOfOutput; ++k) { + if (fmIsWindowPseudoColumnFunc(pDestCtx[k].functionId)) { + continue; + } + int32_t code = TSDB_CODE_SUCCESS; + if (functionNeedToExecute(&pDestCtx[k]) && pDestCtx[k].fpSet.combine != NULL) { + code = pDestCtx[k].fpSet.combine(&pDestCtx[k], &pSourceCtx[k]); + if (code != TSDB_CODE_SUCCESS) { + qError("%s apply functions error, code: %s", GET_TASKID(pTaskInfo), tstrerror(code)); + pTaskInfo->code = code; + longjmp(pTaskInfo->env, code); + } + } + } +} + +void compactTimeWindow(SStreamSessionAggOperatorInfo* pInfo, int32_t startIndex, int32_t num, + int32_t groupId, int32_t numOfOutput, SExecTaskInfo* pTaskInfo, SHashObj* pStUpdated, SHashObj* pStDeleted) { + SResultWindowInfo* pCurWin = taosArrayGet(pInfo->streamAggSup.pResultRows, startIndex); + SResultRow* pCurResult = NULL; + setWindowOutputBuf(pCurWin, &pCurResult, pInfo->binfo.pCtx, groupId, + numOfOutput, pInfo->binfo.rowCellInfoOffset, &pInfo->streamAggSup, pTaskInfo); + num += startIndex + 1; + ASSERT(num <= taosArrayGetSize(pInfo->streamAggSup.pResultRows)); + // only examine the windows after startIndex + for (int32_t i = startIndex + 1; i < num; i++) { + SResultWindowInfo* pWinInfo = taosArrayGet(pInfo->streamAggSup.pResultRows, i); + SResultRow* pWinResult = NULL; + 
setWindowOutputBuf(pWinInfo, &pWinResult, pInfo->pDummyCtx, groupId, + numOfOutput, pInfo->binfo.rowCellInfoOffset, &pInfo->streamAggSup, pTaskInfo); + pCurWin->win.ekey = TMAX(pCurWin->win.ekey, pWinInfo->win.ekey); + compactFunctions(pInfo->binfo.pCtx, pInfo->pDummyCtx, numOfOutput, pTaskInfo); + taosHashRemove(pStUpdated, &pWinInfo->pos, sizeof(SResultRowPosition)); + if (pWinInfo->isOutput) { + taosHashPut(pStDeleted, &pWinInfo->pos, sizeof(SResultRowPosition), &pWinInfo->win.skey, sizeof(TSKEY)); + pWinInfo->isOutput = false; + } + taosArrayRemove(pInfo->streamAggSup.pResultRows, i); + } +} + +static void doStreamSessionWindowAggImpl(SOperatorInfo* pOperator, + SSDataBlock* pSDataBlock, SHashObj* pStUpdated, SHashObj* pStDeleted) { + SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo; + SStreamSessionAggOperatorInfo* pInfo = pOperator->info; + bool masterScan = true; + int32_t numOfOutput = pOperator->numOfExprs; + int64_t groupId = pSDataBlock->info.groupId; + int64_t gap = pInfo->gap; + int64_t code = TSDB_CODE_SUCCESS; + + int32_t step = 1; + bool ascScan = true; + TSKEY* tsCols = NULL; + SResultRow* pResult = NULL; + int32_t winRows = 0; + + if (pSDataBlock->pDataBlock != NULL) { + SColumnInfoData* pColDataInfo = + taosArrayGet(pSDataBlock->pDataBlock, pInfo->primaryTsIndex); + tsCols = (int64_t*)pColDataInfo->pData; + } else { + return ; + } + + SStreamAggSupporter* pAggSup = &pInfo->streamAggSup; + for(int32_t i = 0; i < pSDataBlock->info.rows; ) { + int32_t winIndex = 0; + SResultWindowInfo* pCurWin = + getSessionTimeWindow(pAggSup->pResultRows, tsCols[i], gap, &winIndex); + winRows = + updateSessionWindowInfo(pCurWin, tsCols, pSDataBlock->info.rows, i, pInfo->gap, pStDeleted); + code = doOneWindowAgg(pInfo, pSDataBlock, pCurWin, &pResult, i, winRows, numOfOutput, pTaskInfo); + if (code != TSDB_CODE_SUCCESS || pResult == NULL) { + longjmp(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); + } + // window start(end) key interpolation + // doWindowBorderInterpolation(pOperatorInfo, pSDataBlock, pInfo->binfo.pCtx, pResult, &nextWin, startPos, forwardStep, + // pInfo->order, false); + int32_t winNum = getNumCompactWindow(pAggSup->pResultRows, winIndex, gap); + if (winNum > 0) { + compactTimeWindow(pInfo, winIndex, winNum, groupId, numOfOutput, pTaskInfo, pStUpdated, pStDeleted); + } + + code = taosHashPut(pStUpdated, &pCurWin->pos, sizeof(SResultRowPosition), &(pCurWin->win.skey), sizeof(TSKEY)); + if (code != TSDB_CODE_SUCCESS) { + longjmp(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY); + } + pCurWin->isOutput = true; + i += winRows; + } +} + +static void doClearSessionWindows(SStreamAggSupporter* pAggSup, SOptrBasicInfo* pBinfo, + SSDataBlock* pBlock, int32_t tsIndex, int32_t numOfOutput, int64_t gap) { + SColumnInfoData* pColDataInfo = taosArrayGet(pBlock->pDataBlock, tsIndex); + TSKEY *tsCols = (TSKEY*)pColDataInfo->pData; + int32_t step = 0; + for (int32_t i = 0; i < pBlock->info.rows; i += step) { + int32_t winIndex = 0; + SResultWindowInfo* pCurWin = + getSessionTimeWindow(pAggSup->pResultRows, tsCols[i], gap, &winIndex); + step = updateSessionWindowInfo(pCurWin, tsCols, pBlock->info.rows, i, gap, NULL); + doClearWindowImpl(&pCurWin->pos, pAggSup->pResultBuf, pBinfo, numOfOutput); + } +} + +static int32_t copyUpdateResult(SHashObj* pStUpdated, SArray* pUpdated, int32_t groupId) { + void* pData = NULL; + size_t keyLen = 0; + while((pData = taosHashIterate(pStUpdated, pData)) != NULL) { + void* key = taosHashGetKey(pData, &keyLen); + ASSERT(keyLen == sizeof(SResultRowPosition)); + 
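`compactTimeWindow` above covers the case where reprocessed rows bridge two sessions that used to be separate: the later windows are folded into the first by widening its end key and invoking each function's `combine` on the partial results, after which the absorbed windows are dropped and recorded as deleted. A toy sketch of the merge step with a single count accumulator (illustrative, not the operator's real state layout):

```c
#include <stdint.h>
#include <stdio.h>

typedef int64_t TSKEY;

typedef struct {
  TSKEY   skey, ekey;
  int64_t count;   /* stand-in for one function's intermediate state */
} SessionState;

/* Fold src into dst: widen the time range and combine partials. */
static void mergeSessions(SessionState* dst, SessionState* src) {
  if (src->ekey > dst->ekey) {
    dst->ekey = src->ekey;
  }
  dst->count += src->count;  /* the per-function "combine" step */
  src->count = 0;            /* src is now absorbed */
}

int main(void) {
  SessionState a = {100, 110, 3};
  SessionState b = {115, 130, 2};  /* within gap of a after an update */
  mergeSessions(&a, &b);
  printf("[%lld, %lld] count=%lld\n", (long long)a.skey,
         (long long)a.ekey, (long long)a.count);  /* [100, 130] count=5 */
  return 0;
}
```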
SResKeyPos* pos = taosMemoryMalloc(sizeof(SResKeyPos) + sizeof(uint64_t)); + if (pos == NULL) { + return TSDB_CODE_QRY_OUT_OF_MEMORY; + } + pos->groupId = groupId; + pos->pos = *(SResultRowPosition*)key; + *(int64_t*)pos->key = *(uint64_t*)pData; + taosArrayPush(pUpdated, &pos); + } + return TSDB_CODE_SUCCESS; +} + +void doBuildDeleteDataBlock(SHashObj* pStDeleted, SSDataBlock* pBlock, void** Ite) { + blockDataCleanup(pBlock); + size_t keyLen = 0; + while(( (*Ite) = taosHashIterate(pStDeleted, *Ite)) != NULL) { + SColumnInfoData* pColInfoData = taosArrayGet(pBlock->pDataBlock, 0); + colDataAppend(pColInfoData, pBlock->info.rows, *Ite, false); + for (int32_t i = 1; i < pBlock->info.numOfCols; i++) { + pColInfoData = taosArrayGet(pBlock->pDataBlock, i); + colDataAppendNULL(pColInfoData, pBlock->info.rows); + } + pBlock->info.rows += 1; + if (pBlock->info.rows + 1 >= pBlock->info.capacity) { + break; + } + } + if ((*Ite) == NULL) { + taosHashClear(pStDeleted); + } +} + +static SSDataBlock* doStreamSessionWindowAgg(SOperatorInfo* pOperator) { + if (pOperator->status == OP_EXEC_DONE) { + return NULL; + } + + SStreamSessionAggOperatorInfo* pInfo = pOperator->info; + SOptrBasicInfo* pBInfo = &pInfo->binfo; + if (pOperator->status == OP_RES_TO_RETURN) { + doBuildDeleteDataBlock(pInfo->pStDeleted, pInfo->pDelRes, &pInfo->pDelIterator); + if (pInfo->pDelRes->info.rows > 0) { + return pInfo->pDelRes; + } + doBuildResultDatablock(pOperator, pBInfo, &pInfo->groupResInfo, + pInfo->streamAggSup.pResultBuf); + if (pBInfo->pRes->info.rows == 0 || + !hashRemainDataInGroupInfo(&pInfo->groupResInfo)) { + doSetOperatorCompleted(pOperator); + } + return pBInfo->pRes->info.rows == 0 ? NULL : pBInfo->pRes; + } + + _hash_fn_t hashFn = taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY); + SHashObj* pStUpdated = taosHashInit(64, hashFn, true, HASH_NO_LOCK); + SOperatorInfo* downstream = pOperator->pDownstream[0]; + while (1) { + SSDataBlock* pBlock = downstream->fpSet.getNextFn(downstream); + if (pBlock == NULL) { + break; + } + // the pDataBlock are always the same one, no need to call this again + setInputDataBlock(pOperator, pBInfo->pCtx, pBlock, TSDB_ORDER_ASC, MAIN_SCAN, true); + if (pBlock->info.type == STREAM_REPROCESS) { + doClearSessionWindows(&pInfo->streamAggSup, &pInfo->binfo, pBlock, 0, + pOperator->numOfExprs, pInfo->gap); + continue; + } + doStreamSessionWindowAggImpl(pOperator, pBlock, pStUpdated, pInfo->pStDeleted); + } + + // restore the value + pOperator->status = OP_RES_TO_RETURN; + SArray* pUpdated = taosArrayInit(16, POINTER_BYTES); + copyUpdateResult(pStUpdated, pUpdated, pBInfo->pRes->info.groupId); + taosHashCleanup(pStUpdated); + finalizeUpdatedResult(pOperator->numOfExprs, pInfo->streamAggSup.pResultBuf, pUpdated, + pInfo->binfo.rowCellInfoOffset); + initMultiResInfoFromArrayList(&pInfo->groupResInfo, pUpdated); + blockDataEnsureCapacity(pInfo->binfo.pRes, pOperator->resultInfo.capacity); + doBuildDeleteDataBlock(pInfo->pStDeleted, pInfo->pDelRes, &pInfo->pDelIterator); + if (pInfo->pDelRes->info.rows > 0) { + return pInfo->pDelRes; + } + doBuildResultDatablock(pOperator, &pInfo->binfo, &pInfo->groupResInfo, + pInfo->streamAggSup.pResultBuf); + return pBInfo->pRes->info.rows == 0 ? 
NULL : pBInfo->pRes; +} diff --git a/source/libs/function/inc/builtins.h b/source/libs/function/inc/builtins.h index 3a753325bd..3bd0f35bf5 100644 --- a/source/libs/function/inc/builtins.h +++ b/source/libs/function/inc/builtins.h @@ -37,6 +37,7 @@ typedef struct SBuiltinFuncDefinition { FScalarExecProcess sprocessFunc; FExecFinalize finalizeFunc; FExecProcess invertFunc; + FExecCombine combineFunc; } SBuiltinFuncDefinition; extern const SBuiltinFuncDefinition funcMgtBuiltins[]; diff --git a/source/libs/function/inc/builtinsimpl.h b/source/libs/function/inc/builtinsimpl.h index 3e2ccbc6b8..d041e08d35 100644 --- a/source/libs/function/inc/builtinsimpl.h +++ b/source/libs/function/inc/builtinsimpl.h @@ -27,6 +27,7 @@ bool functionSetup(SqlFunctionCtx *pCtx, SResultRowEntryInfo* pResultInfo); int32_t functionFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock); int32_t dummyProcess(SqlFunctionCtx* UNUSED_PARAM(pCtx)); int32_t functionFinalizeWithResultBuf(SqlFunctionCtx* pCtx, SSDataBlock* pBlock, char* finalResult); +int32_t combineFunction(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx); EFuncDataRequired countDataRequired(SFunctionNode* pFunc, STimeWindow* pTimeWindow); bool getCountFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv); @@ -37,24 +38,29 @@ EFuncDataRequired statisDataRequired(SFunctionNode* pFunc, STimeWindow* pTimeWin bool getSumFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv); int32_t sumFunction(SqlFunctionCtx *pCtx); int32_t sumInvertFunction(SqlFunctionCtx *pCtx); +int32_t sumCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx); bool minmaxFunctionSetup(SqlFunctionCtx *pCtx, SResultRowEntryInfo* pResultInfo); bool getMinmaxFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv); int32_t minFunction(SqlFunctionCtx* pCtx); int32_t maxFunction(SqlFunctionCtx *pCtx); int32_t minmaxFunctionFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock); +int32_t minCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx); +int32_t maxCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx); bool getAvgFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv); bool avgFunctionSetup(SqlFunctionCtx *pCtx, SResultRowEntryInfo* pResultInfo); int32_t avgFunction(SqlFunctionCtx* pCtx); int32_t avgFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock); int32_t avgInvertFunction(SqlFunctionCtx* pCtx); +int32_t avgCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx); bool getStddevFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv); bool stddevFunctionSetup(SqlFunctionCtx *pCtx, SResultRowEntryInfo* pResultInfo); int32_t stddevFunction(SqlFunctionCtx* pCtx); int32_t stddevFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock); int32_t stddevInvertFunction(SqlFunctionCtx* pCtx); +int32_t stddevCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx); bool getLeastSQRFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv); bool leastSQRFunctionSetup(SqlFunctionCtx *pCtx, SResultRowEntryInfo* pResultInfo); @@ -73,8 +79,10 @@ int32_t diffFunction(SqlFunctionCtx *pCtx); bool getFirstLastFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv); int32_t firstFunction(SqlFunctionCtx *pCtx); +int32_t firstCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx); int32_t lastFunction(SqlFunctionCtx *pCtx); int32_t lastFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock); +int32_t lastCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx); bool getTopBotFuncEnv(SFunctionNode* UNUSED_PARAM(pFunc), SFuncExecEnv* pEnv); int32_t 
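The `combineFunc` slot added to `SBuiltinFuncDefinition` above is optional: `compactFunctions` only calls it when the pointer is non-NULL, so builtins without a meaningful partial merge simply leave it unset. A minimal sketch of that table-plus-NULL-check dispatch pattern (hypothetical entries, not the real builtins table):

```c
#include <stdint.h>
#include <stdio.h>

typedef struct Ctx { int64_t state; } Ctx;
typedef int32_t (*CombineFn)(Ctx* dst, const Ctx* src);

static int32_t countCombine(Ctx* dst, const Ctx* src) {
  dst->state += src->state;  /* partial counts just add */
  return 0;
}

/* combine == NULL means the function's partials cannot be merged. */
typedef struct {
  const char* name;
  CombineFn   combine;
} FuncDef;

static const FuncDef kFuncs[] = {
  {"count", countCombine},
  {"percentile", NULL},  /* e.g. no cheap partial merge exists */
};

int main(void) {
  Ctx a = {3}, b = {2};
  for (int i = 0; i < 2; ++i) {
    if (kFuncs[i].combine != NULL) {  /* same guard as compactFunctions */
      kFuncs[i].combine(&a, &b);
    }
  }
  printf("count=%lld\n", (long long)a.state);  /* 5 */
  return 0;
}
```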
topFunction(SqlFunctionCtx *pCtx); diff --git a/source/libs/function/src/builtins.c b/source/libs/function/src/builtins.c index 2cec75c8d3..b76ca1ec05 100644 --- a/source/libs/function/src/builtins.c +++ b/source/libs/function/src/builtins.c @@ -745,7 +745,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = { .initFunc = functionSetup, .processFunc = countFunction, .finalizeFunc = functionFinalize, - .invertFunc = countInvertFunction + .invertFunc = countInvertFunction, + .combineFunc = combineFunction, }, { .name = "sum", @@ -757,7 +758,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = { .initFunc = functionSetup, .processFunc = sumFunction, .finalizeFunc = functionFinalize, - .invertFunc = sumInvertFunction + .invertFunc = sumInvertFunction, + .combineFunc = sumCombine, }, { .name = "min", @@ -768,7 +770,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = { .getEnvFunc = getMinmaxFuncEnv, .initFunc = minmaxFunctionSetup, .processFunc = minFunction, - .finalizeFunc = minmaxFunctionFinalize + .finalizeFunc = minmaxFunctionFinalize, + .combineFunc = minCombine }, { .name = "max", @@ -779,7 +782,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = { .getEnvFunc = getMinmaxFuncEnv, .initFunc = minmaxFunctionSetup, .processFunc = maxFunction, - .finalizeFunc = minmaxFunctionFinalize + .finalizeFunc = minmaxFunctionFinalize, + .combineFunc = maxCombine }, { .name = "stddev", @@ -790,7 +794,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = { .initFunc = stddevFunctionSetup, .processFunc = stddevFunction, .finalizeFunc = stddevFinalize, - .invertFunc = stddevInvertFunction + .invertFunc = stddevInvertFunction, + .combineFunc = stddevCombine, }, { .name = "leastsquares", @@ -801,7 +806,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = { .initFunc = leastSQRFunctionSetup, .processFunc = leastSQRFunction, .finalizeFunc = leastSQRFinalize, - .invertFunc = leastSQRInvertFunction + .invertFunc = leastSQRInvertFunction, }, { .name = "avg", @@ -812,7 +817,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = { .initFunc = avgFunctionSetup, .processFunc = avgFunction, .finalizeFunc = avgFinalize, - .invertFunc = avgInvertFunction + .invertFunc = avgInvertFunction, + .combineFunc = avgCombine, }, { .name = "percentile", @@ -894,7 +900,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = { .getEnvFunc = getFirstLastFuncEnv, .initFunc = functionSetup, .processFunc = firstFunction, - .finalizeFunc = functionFinalize + .finalizeFunc = functionFinalize, + .combineFunc = firstCombine, }, { .name = "last", @@ -904,7 +911,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = { .getEnvFunc = getFirstLastFuncEnv, .initFunc = functionSetup, .processFunc = lastFunction, - .finalizeFunc = lastFinalize + .finalizeFunc = lastFinalize, + .combineFunc = lastCombine, }, { .name = "histogram", diff --git a/source/libs/function/src/builtinsimpl.c b/source/libs/function/src/builtinsimpl.c index ad92d095d5..da842877dc 100644 --- a/source/libs/function/src/builtinsimpl.c +++ b/source/libs/function/src/builtinsimpl.c @@ -292,6 +292,24 @@ int32_t functionFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) { return pResInfo->numOfRes; } +int32_t firstCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) { + SResultRowEntryInfo* pDResInfo = GET_RES_INFO(pDestCtx); + char* pDBuf = GET_ROWCELL_INTERBUF(pDResInfo); + int32_t type = pDestCtx->input.pData[0]->info.type; + int32_t bytes = pDestCtx->input.pData[0]->info.bytes; + + SResultRowEntryInfo* pSResInfo = GET_RES_INFO(pSourceCtx); + char* pSBuf = 
GET_ROWCELL_INTERBUF(pSResInfo); + + if (pSResInfo->numOfRes != 0 && + (pDResInfo->numOfRes == 0 || *(TSKEY*)(pDBuf + bytes) > *(TSKEY*)(pSBuf + bytes)) ) { + memcpy(pDBuf, pSBuf, bytes); + *(TSKEY*)(pDBuf + bytes) = *(TSKEY*)(pSBuf + bytes); + pDResInfo->numOfRes = 1; + } + return TSDB_CODE_SUCCESS; +} + int32_t dummyProcess(SqlFunctionCtx* UNUSED_PARAM(pCtx)) { return 0; } @@ -388,6 +406,18 @@ int32_t countInvertFunction(SqlFunctionCtx* pCtx) { return TSDB_CODE_SUCCESS; } +int32_t combineFunction(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) { + SResultRowEntryInfo* pDResInfo = GET_RES_INFO(pDestCtx); + char* pDBuf = GET_ROWCELL_INTERBUF(pDResInfo); + + SResultRowEntryInfo* pSResInfo = GET_RES_INFO(pSourceCtx); + char* pSBuf = GET_ROWCELL_INTERBUF(pSResInfo); + *((int64_t*)pDBuf) += *((int64_t*)pSBuf); + + SET_VAL(pDResInfo, *((int64_t*)pDBuf), 1); + return TSDB_CODE_SUCCESS; +} + #define LIST_ADD_N(_res, _col, _start, _rows, _t, numOfElem) \ do { \ _t* d = (_t*)(_col->pData); \ @@ -537,6 +567,26 @@ int32_t sumInvertFunction(SqlFunctionCtx* pCtx) { return TSDB_CODE_SUCCESS; } +int32_t sumCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) { + SResultRowEntryInfo* pDResInfo = GET_RES_INFO(pDestCtx); + SSumRes* pDBuf = GET_ROWCELL_INTERBUF(pDResInfo); + int32_t type = pDestCtx->input.pData[0]->info.type; + + SResultRowEntryInfo* pSResInfo = GET_RES_INFO(pSourceCtx); + SSumRes* pSBuf = GET_ROWCELL_INTERBUF(pSResInfo); + + if (IS_SIGNED_NUMERIC_TYPE(type) || type == TSDB_DATA_TYPE_BOOL) { + pDBuf->isum += pSBuf->isum; + } else if (IS_UNSIGNED_NUMERIC_TYPE(type)) { + pDBuf->usum += pSBuf->usum; + } else if (type == TSDB_DATA_TYPE_DOUBLE || type == TSDB_DATA_TYPE_FLOAT) { + pDBuf->dsum += pSBuf->dsum; + } + + SET_VAL(pDResInfo, *((int64_t*)pDBuf), 1); + return TSDB_CODE_SUCCESS; +} + bool getSumFuncEnv(SFunctionNode* UNUSED_PARAM(pFunc), SFuncExecEnv* pEnv) { pEnv->calcMemSize = sizeof(SSumRes); return true; @@ -738,6 +788,24 @@ int32_t avgInvertFunction(SqlFunctionCtx* pCtx) { return TSDB_CODE_SUCCESS; } +int32_t avgCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) { + SResultRowEntryInfo* pDResInfo = GET_RES_INFO(pDestCtx); + SAvgRes* pDBuf = GET_ROWCELL_INTERBUF(pDResInfo); + int32_t type = pDestCtx->input.pData[0]->info.type; + + SResultRowEntryInfo* pSResInfo = GET_RES_INFO(pSourceCtx); + SAvgRes* pSBuf = GET_ROWCELL_INTERBUF(pSResInfo); + + if (IS_INTEGER_TYPE(type)) { + pDBuf->sum.isum += pSBuf->sum.isum; + } else { + pDBuf->sum.dsum += pSBuf->sum.dsum; + } + pDBuf->count += pSBuf->count; + + return TSDB_CODE_SUCCESS; +} + int32_t avgFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) { SInputColumnInfoData* pInput = &pCtx->input; int32_t type = pInput->pData[0]->info.type; @@ -1273,6 +1341,34 @@ void setSelectivityValue(SqlFunctionCtx* pCtx, SSDataBlock* pBlock, const STuple } } +int32_t minMaxCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx, int32_t isMinFunc) { + SResultRowEntryInfo* pDResInfo = GET_RES_INFO(pDestCtx); + SMinmaxResInfo* pDBuf = GET_ROWCELL_INTERBUF(pDResInfo); + int32_t type = pDestCtx->input.pData[0]->info.type; + + SResultRowEntryInfo* pSResInfo = GET_RES_INFO(pSourceCtx); + SMinmaxResInfo* pSBuf = GET_ROWCELL_INTERBUF(pSResInfo); + if (IS_FLOAT_TYPE(type)) { + if (pSBuf->assign && + ( (((*(double*)&pDBuf->v) < (*(double*)&pSBuf->v)) ^ isMinFunc) || !pDBuf->assign ) ) { + *(double*) &pDBuf->v = *(double*) &pSBuf->v; + } + } else { + if ( pSBuf->assign && ( ((pDBuf->v < pSBuf->v) ^ isMinFunc) || !pDBuf->assign ) ) { + 
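`sumCombine` and `avgCombine` above follow the standard partial-aggregation rule: merge the intermediate state (sums and row counts), never the finished values; the average is derived only at finalize time. A worked standalone example of why that matters (illustrative names, not the engine's state structs):

```c
#include <stdint.h>
#include <stdio.h>

/* Shaped like the avg intermediate state: running sum plus row count. */
typedef struct {
  double  sum;
  int64_t count;
} AvgState;

static void avgCombine(AvgState* dst, const AvgState* src) {
  dst->sum   += src->sum;    /* component-wise addition only */
  dst->count += src->count;
}

static double avgFinalize(const AvgState* s) {
  return s->count != 0 ? s->sum / (double)s->count : 0.0;
}

int main(void) {
  AvgState a = {10.0, 4};  /* average 2.5 over 4 rows */
  AvgState b = {20.0, 1};  /* average 20 over 1 row   */
  avgCombine(&a, &b);
  /* 30/5 = 6.00 -- not the (2.5 + 20) / 2 that averaging averages gives */
  printf("avg=%.2f\n", avgFinalize(&a));
  return 0;
}
```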
pDBuf->v = pSBuf->v; + } + } + SET_VAL(pDResInfo, *((int64_t*)pDBuf), 1); + return TSDB_CODE_SUCCESS; +} + +int32_t minCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) { + return minMaxCombine(pDestCtx, pSourceCtx, 1); +} +int32_t maxCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) { + return minMaxCombine(pDestCtx, pSourceCtx, 0); +} + bool getStddevFuncEnv(SFunctionNode* pFunc, SFuncExecEnv* pEnv) { pEnv->calcMemSize = sizeof(SStddevRes); return true; @@ -1491,6 +1587,25 @@ int32_t stddevFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) { return functionFinalize(pCtx, pBlock); } +int32_t stddevCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) { + SResultRowEntryInfo* pDResInfo = GET_RES_INFO(pDestCtx); + SStddevRes* pDBuf = GET_ROWCELL_INTERBUF(pDResInfo); + int32_t type = pDestCtx->input.pData[0]->info.type; + + SResultRowEntryInfo* pSResInfo = GET_RES_INFO(pSourceCtx); + SStddevRes* pSBuf = GET_ROWCELL_INTERBUF(pSResInfo); + + if (IS_INTEGER_TYPE(type)) { + pDBuf->isum += pSBuf->isum; + pDBuf->quadraticISum += pSBuf->quadraticISum; + } else { + pDBuf->dsum += pSBuf->dsum; + pDBuf->quadraticDSum += pSBuf->quadraticDSum; + } + pDBuf->count += pSBuf->count; + return TSDB_CODE_SUCCESS; +} + bool getLeastSQRFuncEnv(SFunctionNode* pFunc, SFuncExecEnv* pEnv) { pEnv->calcMemSize = sizeof(SLeastSQRInfo); return true; @@ -1979,6 +2094,24 @@ int32_t lastFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) { return pResInfo->numOfRes; } +int32_t lastCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) { + SResultRowEntryInfo* pDResInfo = GET_RES_INFO(pDestCtx); + char* pDBuf = GET_ROWCELL_INTERBUF(pDResInfo); + int32_t type = pDestCtx->input.pData[0]->info.type; + int32_t bytes = pDestCtx->input.pData[0]->info.bytes; + + SResultRowEntryInfo* pSResInfo = GET_RES_INFO(pSourceCtx); + char* pSBuf = GET_ROWCELL_INTERBUF(pSResInfo); + + if (pSResInfo->numOfRes != 0 && + (pDResInfo->numOfRes == 0 || *(TSKEY*)(pDBuf + bytes) < *(TSKEY*)(pSBuf + bytes)) ) { + memcpy(pDBuf, pSBuf, bytes); + *(TSKEY*)(pDBuf + bytes) = *(TSKEY*)(pSBuf + bytes); + pDResInfo->numOfRes = 1; + } + return TSDB_CODE_SUCCESS; +} + bool getDiffFuncEnv(SFunctionNode* UNUSED_PARAM(pFunc), SFuncExecEnv* pEnv) { pEnv->calcMemSize = sizeof(SDiffInfo); return true; diff --git a/source/libs/function/src/functionMgt.c b/source/libs/function/src/functionMgt.c index 49b20ebc85..506b0eb8da 100644 --- a/source/libs/function/src/functionMgt.c +++ b/source/libs/function/src/functionMgt.c @@ -118,6 +118,7 @@ int32_t fmGetFuncExecFuncs(int32_t funcId, SFuncExecFuncs* pFpSet) { pFpSet->init = funcMgtBuiltins[funcId].initFunc; pFpSet->process = funcMgtBuiltins[funcId].processFunc; pFpSet->finalize = funcMgtBuiltins[funcId].finalizeFunc; + pFpSet->combine = funcMgtBuiltins[funcId].combineFunc; return TSDB_CODE_SUCCESS; } diff --git a/source/libs/nodes/src/nodesCodeFuncs.c b/source/libs/nodes/src/nodesCodeFuncs.c index f28885aad5..8887b9841a 100644 --- a/source/libs/nodes/src/nodesCodeFuncs.c +++ b/source/libs/nodes/src/nodesCodeFuncs.c @@ -230,6 +230,8 @@ const char* nodesNodeName(ENodeType type) { return "PhysiFill"; case QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW: return "PhysiSessionWindow"; + case QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW: + return "PhysiStreamSessionWindow"; case QUERY_NODE_PHYSICAL_PLAN_STATE_WINDOW: return "PhysiStateWindow"; case QUERY_NODE_PHYSICAL_PLAN_PARTITION: @@ -2528,6 +2530,29 @@ static int32_t jsonToOrderByExprNode(const SJson* pJson, void* pObj) { return code; } 
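`stddevCombine` above relies on the same decomposition: count, sum, and sum of squares all merge by addition, and the deviation is recovered only at finalize as `sqrt(qsum/n - (sum/n)^2)`. A runnable sketch of the full accumulate/combine/finalize cycle (population standard deviation, illustrative names):

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* (count, sum, sum of squares): everything needed for stddev merges
 * by plain addition across partials. */
typedef struct {
  int64_t count;
  double  sum;
  double  qsum;
} StdState;

static void stdAccumulate(StdState* s, double v) {
  s->count += 1;
  s->sum   += v;
  s->qsum  += v * v;
}

static void stdCombine(StdState* dst, const StdState* src) {
  dst->count += src->count;
  dst->sum   += src->sum;
  dst->qsum  += src->qsum;
}

static double stdFinalize(const StdState* s) {
  double mean = s->sum / (double)s->count;
  return sqrt(s->qsum / (double)s->count - mean * mean);
}

int main(void) {
  StdState a = {0}, b = {0};
  stdAccumulate(&a, 1); stdAccumulate(&a, 2);  /* one partial     */
  stdAccumulate(&b, 3); stdAccumulate(&b, 4);  /* another partial */
  stdCombine(&a, &b);
  printf("stddev=%.4f\n", stdFinalize(&a));    /* 1.1180 for 1..4 */
  return 0;
}
```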
+static const char* jkSessionWindowTsPrimaryKey = "TsPrimaryKey"; +static const char* jkSessionWindowGap = "Gap"; + +static int32_t sessionWindowNodeToJson(const void* pObj, SJson* pJson) { + const SSessionWindowNode * pNode = (const SSessionWindowNode*)pObj; + + int32_t code = tjsonAddObject(pJson, jkSessionWindowTsPrimaryKey, nodeToJson, pNode->pCol); + if (TSDB_CODE_SUCCESS == code) { + code = tjsonAddObject(pJson, jkSessionWindowGap, nodeToJson, pNode->pGap); + } + return code; +} + +static int32_t jsonToSessionWindowNode(const SJson* pJson, void* pObj) { + SSessionWindowNode* pNode = (SSessionWindowNode*)pObj; + + int32_t code = jsonToNodeObject(pJson, jkSessionWindowTsPrimaryKey, (SNode **)&pNode->pCol); + if (TSDB_CODE_SUCCESS == code) { + code = jsonToNodeObject(pJson, jkSessionWindowGap, (SNode **)&pNode->pGap); + } + return code; +} + static const char* jkIntervalWindowInterval = "Interval"; static const char* jkIntervalWindowOffset = "Offset"; static const char* jkIntervalWindowSliding = "Sliding"; @@ -3015,8 +3040,9 @@ static int32_t specificNodeToJson(const void* pObj, SJson* pJson) { return orderByExprNodeToJson(pObj, pJson); case QUERY_NODE_LIMIT: case QUERY_NODE_STATE_WINDOW: - case QUERY_NODE_SESSION_WINDOW: break; + case QUERY_NODE_SESSION_WINDOW: + return sessionWindowNodeToJson(pObj, pJson); case QUERY_NODE_INTERVAL_WINDOW: return intervalWindowNodeToJson(pObj, pJson); case QUERY_NODE_NODE_LIST: @@ -3096,6 +3122,7 @@ static int32_t specificNodeToJson(const void* pObj, SJson* pJson) { case QUERY_NODE_PHYSICAL_PLAN_FILL: return physiFillNodeToJson(pObj, pJson); case QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW: + case QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW: return physiSessionWindowNodeToJson(pObj, pJson); case QUERY_NODE_PHYSICAL_PLAN_STATE_WINDOW: return physiStateWindowNodeToJson(pObj, pJson); @@ -3134,6 +3161,8 @@ static int32_t jsonToSpecificNode(const SJson* pJson, void* pObj) { return jsonToTempTableNode(pJson, pObj); case QUERY_NODE_ORDER_BY_EXPR: return jsonToOrderByExprNode(pJson, pObj); + case QUERY_NODE_SESSION_WINDOW: + return jsonToSessionWindowNode(pJson, pObj); case QUERY_NODE_INTERVAL_WINDOW: return jsonToIntervalWindowNode(pJson, pObj); case QUERY_NODE_NODE_LIST: @@ -3196,6 +3225,7 @@ static int32_t jsonToSpecificNode(const SJson* pJson, void* pObj) { case QUERY_NODE_PHYSICAL_PLAN_FILL: return jsonToPhysiFillNode(pJson, pObj); case QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW: + case QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW: return jsonToPhysiSessionWindowNode(pJson, pObj); case QUERY_NODE_PHYSICAL_PLAN_STATE_WINDOW: return jsonToPhysiStateWindowNode(pJson, pObj); diff --git a/source/libs/nodes/src/nodesTraverseFuncs.c b/source/libs/nodes/src/nodesTraverseFuncs.c index e8274c3c8e..ae1ff5744b 100644 --- a/source/libs/nodes/src/nodesTraverseFuncs.c +++ b/source/libs/nodes/src/nodesTraverseFuncs.c @@ -517,6 +517,7 @@ static EDealRes dispatchPhysiPlan(SNode* pNode, ETraversalOrder order, FNodeWalk res = walkWindowPhysi((SWinodwPhysiNode*)pNode, order, walker, pContext); break; case QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW: + case QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW: res = walkWindowPhysi((SWinodwPhysiNode*)pNode, order, walker, pContext); break; case QUERY_NODE_PHYSICAL_PLAN_STATE_WINDOW: { diff --git a/source/libs/nodes/src/nodesUtilFuncs.c b/source/libs/nodes/src/nodesUtilFuncs.c index 3f7003dfa3..c9c78b3912 100644 --- a/source/libs/nodes/src/nodesUtilFuncs.c +++ b/source/libs/nodes/src/nodesUtilFuncs.c @@ -251,6 +251,8 @@ int32_t 
nodesNodeSize(ENodeType type) { return sizeof(SFillPhysiNode); case QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW: return sizeof(SSessionWinodwPhysiNode); + case QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW: + return sizeof(SStreamSessionWinodwPhysiNode); case QUERY_NODE_PHYSICAL_PLAN_STATE_WINDOW: return sizeof(SStateWinodwPhysiNode); case QUERY_NODE_PHYSICAL_PLAN_PARTITION: @@ -664,6 +666,7 @@ void nodesDestroyNode(SNodeptr pNode) { destroyWinodwPhysiNode((SWinodwPhysiNode*)pNode); break; case QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW: + case QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW: destroyWinodwPhysiNode((SWinodwPhysiNode*)pNode); break; case QUERY_NODE_PHYSICAL_PLAN_DISPATCH: diff --git a/source/libs/planner/src/planPhysiCreater.c b/source/libs/planner/src/planPhysiCreater.c index fcba2aa2d3..0f88a54e91 100644 --- a/source/libs/planner/src/planPhysiCreater.c +++ b/source/libs/planner/src/planPhysiCreater.c @@ -945,7 +945,8 @@ static int32_t createIntervalPhysiNode(SPhysiPlanContext* pCxt, SNodeList* pChil static int32_t createSessionWindowPhysiNode(SPhysiPlanContext* pCxt, SNodeList* pChildren, SWindowLogicNode* pWindowLogicNode, SPhysiNode** pPhyNode) { SSessionWinodwPhysiNode* pSession = (SSessionWinodwPhysiNode*)makePhysiNode( - pCxt, getPrecision(pChildren), (SLogicNode*)pWindowLogicNode, QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW); + pCxt, getPrecision(pChildren), (SLogicNode*)pWindowLogicNode, + (pCxt->pPlanCxt->streamQuery ? QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW : QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW)); if (NULL == pSession) { return TSDB_CODE_OUT_OF_MEMORY; } diff --git a/source/libs/stream/src/tstreamUpdate.c b/source/libs/stream/src/tstreamUpdate.c index d21dadfe55..75319a2354 100644 --- a/source/libs/stream/src/tstreamUpdate.c +++ b/source/libs/stream/src/tstreamUpdate.c @@ -127,7 +127,10 @@ static SScalableBf *getSBf(SUpdateInfo *pInfo, TSKEY ts) { if (pInfo->minTS < 0) { pInfo->minTS = (TSKEY)(ts / pInfo->interval * pInfo->interval); } - uint64_t index = (uint64_t)((ts - pInfo->minTS) / pInfo->interval); + int64_t index = (int64_t)((ts - pInfo->minTS) / pInfo->interval); + if (index < 0) { + return NULL; + } if (index >= pInfo->numSBFs) { uint64_t count = index + 1 - pInfo->numSBFs; windowSBfDelete(pInfo, count); diff --git a/source/util/src/terror.c b/source/util/src/terror.c index 7c4f0fa2dd..a4e3926037 100644 --- a/source/util/src/terror.c +++ b/source/util/src/terror.c @@ -272,6 +272,10 @@ TAOS_DEFINE_ERROR(TSDB_CODE_MND_CONSUMER_NOT_EXIST, "Consumer not exist") TAOS_DEFINE_ERROR(TSDB_CODE_MND_CONSUMER_NOT_READY, "Consumer waiting for rebalance") TAOS_DEFINE_ERROR(TSDB_CODE_MND_TOPIC_SUBSCRIBED, "Topic subscribed cannot be dropped") +TAOS_DEFINE_ERROR(TSDB_CODE_MND_STREAM_ALREADY_EXIST, "Stream already exists") +TAOS_DEFINE_ERROR(TSDB_CODE_MND_STREAM_NOT_EXIST, "Stream not exist") +TAOS_DEFINE_ERROR(TSDB_CODE_MND_INVALID_STREAM_OPTION, "Invalid stream option") + // mnode-sma TAOS_DEFINE_ERROR(TSDB_CODE_MND_SMA_ALREADY_EXIST, "SMA already exists") TAOS_DEFINE_ERROR(TSDB_CODE_MND_SMA_NOT_EXIST, "SMA does not exist") diff --git a/tests/script/jenkins/basic.txt b/tests/script/jenkins/basic.txt index 7f32407b29..1cc8b97f6f 100644 --- a/tests/script/jenkins/basic.txt +++ b/tests/script/jenkins/basic.txt @@ -67,6 +67,8 @@ # ---- stream ./test.sh -f tsim/stream/basic0.sim ./test.sh -f tsim/stream/basic1.sim +./test.sh -f tsim/stream/session0.sim +./test.sh -f tsim/stream/session1.sim # ---- transaction ./test.sh -f tsim/trans/lossdata1.sim diff --git 
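On the `getSBf` change in tstreamUpdate.c above: with the old unsigned index, a timestamp older than `pInfo->minTS` makes `(ts - minTS) / interval` negative, and the cast wraps that into an enormous slot number that the `index >= pInfo->numSBFs` branch would then misread as a request to slide the scalable-bloom-filter window forward by a near-2^64 count. A standalone demonstration of the wraparound, with made-up values:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
  /* made-up values: a late row arrives with ts older than minTS */
  int64_t minTS = 1000, interval = 100, ts = 900;

  uint64_t wrapped = (uint64_t)((ts - minTS) / interval);
  int64_t checked = (int64_t)((ts - minTS) / interval);

  printf("as uint64_t: %llu\n", (unsigned long long)wrapped); /* 18446744073709551615 */
  printf("as int64_t:  %lld\n", (long long)checked);          /* -1 */

  if (checked < 0) {
    printf("negative slot rejected, as the patched getSBf now does\n");
  }
  return 0;
}
```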
a/tests/script/tsim/stream/session0.sim b/tests/script/tsim/stream/session0.sim new file mode 100644 index 0000000000..46b343632a --- /dev/null +++ b/tests/script/tsim/stream/session0.sim @@ -0,0 +1,162 @@ +system sh/stop_dnodes.sh +system sh/deploy.sh -n dnode1 -i 1 +system sh/exec.sh -n dnode1 -s start +sleep 50 +sql connect + +print =============== create database +sql create database test vgroups 1 +sql show databases +if $rows != 3 then + return -1 +endi + +print $data00 $data01 $data02 + +sql use test + + +sql create table t1(ts timestamp, a int, b int , c int, d double,id int); +sql create stream streams2 trigger at_once into streamt as select _wstartts, count(*) c1, sum(a), max(a), min(d), stddev(a), last(a), first(d), max(id) s from t1 session(ts,10s); +sql insert into t1 values(1648791213000,NULL,NULL,NULL,NULL,1); +sql insert into t1 values(1648791223001,10,2,3,1.1,2); +sql insert into t1 values(1648791233002,3,2,3,2.1,3); +sql insert into t1 values(1648791243003,NULL,NULL,NULL,NULL,4); +sql insert into t1 values(1648791213002,NULL,NULL,NULL,NULL,5) (1648791233012,NULL,NULL,NULL,NULL,6); + +sql select * from streamt order by s desc; + +# row 0 +if $data01 != 3 then + print ======$data01 + return -1 +endi + +if $data02 != 3 then + print ======$data02 + return -1 +endi + +if $data03 != 3 then + print ======$data03 + return -1 +endi + +if $data04 != 2.100000000 then + print ======$data04 + return -1 +endi + +if $data05 != 0.000000000 then + print ======$data05 + return -1 +endi + +if $data06 != 3 then + print ======$data06 + return -1 +endi + +if $data07 != 2.100000000 then + print ======$data07 + return -1 +endi + +if $data08 != 6 then + print ======$data08 + return -1 +endi + +# row 1 + +if $data11 != 3 then + print ======$data11 + return -1 +endi + +if $data12 != 10 then + print ======$data12 + return -1 +endi + +if $data13 != 10 then + print ======$data13 + return -1 +endi + +if $data14 != 1.100000000 then + print ======$data14 + return -1 +endi + +if $data15 != 0.000000000 then + print ======$data15 + return -1 +endi + +if $data16 != 10 then + print ======$data16 + return -1 +endi + +if $data17 != 1.100000000 then + print ======$data17 + return -1 +endi + +if $data18 != 5 then + print ======$data18 + return -1 +endi + +sql insert into t1 values(1648791213000,1,2,3,1.0,7); +sql insert into t1 values(1648791223001,2,2,3,1.1,8); +sql insert into t1 values(1648791233002,3,2,3,2.1,9); +sql insert into t1 values(1648791243003,4,2,3,3.1,10); +sql insert into t1 values(1648791213002,4,2,3,4.1,11) ; +sql insert into t1 values(1648791213002,4,2,3,4.1,12) (1648791223009,4,2,3,4.1,13); + +sql select * from streamt order by s desc ; + +# row 0 +if $data01 != 7 then + print ======$data01 + return -1 +endi + +if $data02 != 9 then + print ======$data02 + return -1 +endi + +if $data03 != 4 then + print ======$data03 + return -1 +endi + +if $data04 != 1.100000000 then + print ======$data04 + return -1 +endi + +if $data05 != 0.816496581 then + print ======$data05 + return -1 +endi + +if $data06 != 3 then + print ======$data06 + return -1 +endi + +if $data07 != 1.100000000 then + print ======$data07 + return -1 +endi + +if $data08 != 13 then + print ======$data08 + return -1 +endi + +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/script/tsim/stream/session1.sim b/tests/script/tsim/stream/session1.sim new file mode 100644 index 0000000000..a44639ba7a --- /dev/null +++ b/tests/script/tsim/stream/session1.sim @@ -0,0 +1,190 @@ +system sh/stop_dnodes.sh +system sh/deploy.sh -n 
dnode1 -i 1 +system sh/exec.sh -n dnode1 -s start +sleep 50 +sql connect + +print =============== create database +sql create database test vgroups 1 +sql show databases +if $rows != 3 then + return -1 +endi + +print $data00 $data01 $data02 + +sql use test + + +sql create table t1(ts timestamp, a int, b int , c int, d double,id int); +sql create stream streams2 trigger at_once into streamt as select _wstartts, count(*) c1, sum(a), min(b), max(id) s from t1 session(ts,10s); +sql insert into t1 values(1648791210000,1,1,1,1.1,1); +sql insert into t1 values(1648791220000,2,2,2,2.1,2); +sql insert into t1 values(1648791230000,3,3,3,3.1,3); +sql insert into t1 values(1648791240000,4,4,4,4.1,4); + +sql select * from streamt order by s desc; + +# row 0 +if $data01 != 4 then + print ======$data01 + return -1 +endi + +if $data02 != 10 then + print ======$data02 + return -1 +endi + +if $data03 != 1 then + print ======$data03 + return -1 +endi + +if $data04 != 4 then + print ======$data04 + return -1 +endi + +sql insert into t1 values(1648791250005,5,5,5,5.1,5); +sql insert into t1 values(1648791260006,6,6,6,6.1,6); +sql insert into t1 values(1648791270007,7,7,7,7.1,7); +sql insert into t1 values(1648791240005,5,5,5,5.1,8) (1648791250006,6,6,6,6.1,9); + +sql select * from streamt order by s desc; + +# row 0 +if $data01 != 8 then + print ======$data01 + return -1 +endi + +if $data02 != 32 then + print ======$data02 + return -1 +endi + +if $data03 != 1 then + print ======$data03 + return -1 +endi + +if $data04 != 9 then + print ======$data04 + return -1 +endi + +# row 1 +if $data11 != 1 then + print ======$data11 + return -1 +endi + +if $data12 != 7 then + print ======$data12 + return -1 +endi + +if $data13 != 7 then + print ======$data13 + return -1 +endi + +if $data14 != 7 then + print ======$data14 + return -1 +endi + +sql insert into t1 values(1648791280008,7,7,7,7.1,10) (1648791300009,8,8,8,8.1,11); +sql insert into t1 values(1648791260007,7,7,7,7.1,12) (1648791290008,7,7,7,7.1,13) (1648791290009,8,8,8,8.1,14); +sql insert into t1 values(1648791500000,7,7,7,7.1,15) (1648791520000,8,8,8,8.1,16) (1648791540000,8,8,8,8.1,17); +sql insert into t1 values(1648791530000,8,8,8,8.1,18); +sql insert into t1 values(1648791220000,10,10,10,10.1,19) (1648791290008,2,2,2,2.1,20) (1648791540000,17,17,17,17.1,21) (1648791500001,22,22,22,22.1,22); + +sql select * from streamt order by s desc; + +# row 0 +if $data01 != 2 then + print ======$data01 + return -1 +endi + +if $data02 != 29 then + print ======$data02 + return -1 +endi + +if $data03 != 7 then + print ======$data03 + return -1 +endi + +if $data04 != 22 then + print ======$data04 + return -1 +endi + +# row 1 +if $data11 != 3 then + print ======$data11 + return -1 +endi + +if $data12 != 33 then + print ======$data12 + return -1 +endi + +if $data13 != 8 then + print ======$data13 + return -1 +endi + +if $data14 != 21 then + print ======$data14 + return -1 +endi + +# row 2 +if $data21 != 4 then + print ======$data21 + return -1 +endi + +if $data22 != 25 then + print ======$data22 + return -1 +endi + +if $data23 != 2 then + print ======$data23 + return -1 +endi + +if $data24 != 20 then + print ======$data24 + return -1 +endi + +# row 3 +if $data31 != 10 then + print ======$data31 + return -1 +endi + +if $data32 != 54 then + print ======$data32 + return -1 +endi + +if $data33 != 1 then + print ======$data33 + return -1 +endi + +if $data34 != 19 then + print ======$data34 + return -1 +endi + +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git 
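The two stream tests above (session0.sim, session1.sim) both exercise `SESSION(ts, 10s)`: a row joins the current window when its timestamp is within the gap of the window's edge, and the out-of-order inserts verify that a late row can even bridge two previously separate windows into one. As a compact reference for the basic rule only, here is a sketch that sessionizes an already-sorted timestamp series; the window-merging that the sims force with out-of-order rows is deliberately left out:

```c
#include <stdint.h>
#include <stdio.h>

typedef int64_t TSKEY;

/* Assign a window id to each timestamp of a sorted series: a new
 * session opens whenever the gap to the previous row exceeds `gap`. */
static int sessionize(const TSKEY* ts, int n, TSKEY gap, int* winId) {
  int win = 0;
  for (int i = 0; i < n; i++) {
    if (i > 0 && ts[i] - ts[i - 1] > gap) {
      win++; /* gap exceeded: close this session, open the next */
    }
    winId[i] = win;
  }
  return n > 0 ? win + 1 : 0;
}

int main(void) {
  /* illustrative timestamps in seconds; gap = 10 mirrors SESSION(ts, 10s) */
  TSKEY ts[] = {0, 5, 12, 40, 45};
  int winId[5];
  int nWin = sessionize(ts, 5, 10, winId);
  printf("%d sessions:", nWin);
  for (int i = 0; i < 5; i++) printf(" %d", winId[i]);
  printf("\n"); /* prints: 2 sessions: 0 0 0 1 1 */
  return 0;
}
```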
a/tests/script/tsim/sync/insertDataByRunBack.sim b/tests/script/tsim/sync/insertDataByRunBack.sim index c86cd3844b..00f0643b61 100644 --- a/tests/script/tsim/sync/insertDataByRunBack.sim +++ b/tests/script/tsim/sync/insertDataByRunBack.sim @@ -20,6 +20,8 @@ print $data[1][0] $data[1][1] $data[1][2] $data[1][3] if $rows == 2 then if $data[1][1] == stop then goto end_insert + elif $data[0][1] == stop then + goto end_insert endi endi @@ -47,6 +49,9 @@ endw if $loop_cnt == 0 then print ====> notify main to working for insert data sql insert into interaction values (now, 'working', 0, 0); + sql select * from interaction + print $data[0][0] $data[0][1] $data[0][2] $data[0][3] + print $data[1][0] $data[1][1] $data[1][2] $data[1][3] endi $loop_cnt = $loop_cnt + 1 goto loop_insert diff --git a/tests/script/tsim/sync/threeReplica1VgElectWihtInsert.sim b/tests/script/tsim/sync/threeReplica1VgElectWihtInsert.sim index f568008a82..fc501096e6 100644 --- a/tests/script/tsim/sync/threeReplica1VgElectWihtInsert.sim +++ b/tests/script/tsim/sync/threeReplica1VgElectWihtInsert.sim @@ -155,28 +155,13 @@ while $i < $tbNum sql create table $ctb using stb tags( $i ) $ntb = $ntbPrefix . $i sql create table $ntb (ts timestamp, c1 int, c2 float, c3 binary(10)) - -# $x = 0 -# while $x < $rowNum -# $binary = ' . binary -# $binary = $binary . $i -# $binary = $binary . ' -# -# sql insert into $ctb values ($tstart , $i , $x , $binary ) -# sql insert into $ntb values ($tstart , 999 , 999 , 'binary-ntb' ) -# $tstart = $tstart + 1 -# $x = $x + 1 -# endw - -# print ====> insert rows: $rowNum into $ctb and $ntb - $i = $i + 1 -# $tstart = 1640966400000 endw $totalTblNum = $tbNum * 2 -print ====>totalTblNum:$totalTblNum +sleep 1000 sql show tables +print ====> expect $totalTblNum and infinsert $rows in fact if $rows != $totalTblNum then return -1 endi @@ -222,6 +207,9 @@ endi $dnodeId = dnode . 
$dnodeId print ====> stop $dnodeId system sh/exec.sh -n $dnodeId -s stop -x SIGINT +sleep 1000 +print ====> start $dnodeId +system sh/exec.sh -n $dnodeId -s start $loop_cnt = 0 check_vg_ready_2: @@ -245,7 +233,7 @@ if $data[0][4] == LEADER then if $data[0][8] != FOLLOWER then goto check_vg_ready_2 endi - print ---- vgroup $data[0][0] leader switch to dnode $data[0][3] + print ---- vgroup $dnodeId leader switch to dnode $data[0][3] goto vg_ready_2 elif $data[0][6] == LEADER then if $data[0][4] != FOLLOWER then @@ -254,7 +242,7 @@ elif $data[0][6] == LEADER then if $data[0][8] != FOLLOWER then goto check_vg_ready_2 endi - print ---- vgroup $data[0][0] leader switch to dnode $data[0][5] + print ---- vgroup $dnodeId leader switch to dnode $data[0][5] goto vg_ready_2 elif $data[0][8] == LEADER then if $data[0][4] != FOLLOWER then @@ -263,7 +251,7 @@ elif $data[0][8] == LEADER then if $data[0][6] != FOLLOWER then goto check_vg_ready_2 endi - print ---- vgroup $data[0][0] leader switch to dnode $data[0][7] + print ---- vgroup $dnodeId leader switch to dnode $data[0][7] goto vg_ready_2 else goto check_vg_ready_2 @@ -272,8 +260,6 @@ vg_ready_2: $switch_loop_cnt = $switch_loop_cnt + 1 if $switch_loop_cnt < 3 then - print ====> start $dnodeId - system sh/exec.sh -n $dnodeId -s start goto switch_leader_loop endi diff --git a/tests/system-test/1-insert/insertWithMoreVgroup.py b/tests/system-test/1-insert/insertWithMoreVgroup.py index f0f35831db..8d2870fc2c 100644 --- a/tests/system-test/1-insert/insertWithMoreVgroup.py +++ b/tests/system-test/1-insert/insertWithMoreVgroup.py @@ -294,7 +294,7 @@ class TDTestCase: return def test_case3(self): - self.taosBenchCreate("127.0.0.1","no","db1", "stb1", 1, 8, 1*10000) + self.taosBenchCreate("127.0.0.1","no","db1", "stb1", 1, 1, 1*10) # self.taosBenchCreate("test209","no","db2", "stb2", 1, 8, 1*10000) # self.taosBenchCreate("chenhaoran02","no","db1", "stb1", 1, 8, 1*10000) @@ -349,17 +349,17 @@ class TDTestCase: # run case def run(self): - # create database and tables。 - self.test_case1() - tdLog.debug(" LIMIT test_case1 ............ [OK]") + # # create database and tables。 + # self.test_case1() + # tdLog.debug(" LIMIT test_case1 ............ [OK]") # # taosBenchmark : create database and table # self.test_case2() # tdLog.debug(" LIMIT test_case2 ............ [OK]") - # # taosBenchmark:create database/table and insert data - # self.test_case3() - # tdLog.debug(" LIMIT test_case3 ............ [OK]") + # taosBenchmark:create database/table and insert data + self.test_case3() + tdLog.debug(" LIMIT test_case3 ............ 
[OK]") # # test qnode diff --git a/tests/system-test/1-insert/manyVgroups.json b/tests/system-test/1-insert/manyVgroups.json index 1c9aa1f28c..5dea41476c 100644 --- a/tests/system-test/1-insert/manyVgroups.json +++ b/tests/system-test/1-insert/manyVgroups.json @@ -10,7 +10,7 @@ "result_file": "./insert_res.txt", "confirm_parameter_prompt": "no", "insert_interval": 0, - "interlace_rows": 100000, + "interlace_rows": 0, "num_of_records_per_req": 100, "databases": [ { @@ -29,8 +29,8 @@ "batch_create_tbl_num": 50000, "data_source": "rand", "insert_mode": "taosc", - "insert_rows": 10, - "interlace_rows": 100000, + "insert_rows": 1, + "interlace_rows": 0, "insert_interval": 0, "max_sql_len": 10000000, "disorder_ratio": 0, diff --git a/tests/system-test/2-query/check_tsdb.py b/tests/system-test/2-query/check_tsdb.py new file mode 100644 index 0000000000..33bf351207 --- /dev/null +++ b/tests/system-test/2-query/check_tsdb.py @@ -0,0 +1,106 @@ +import taos +import sys +import datetime +import inspect + +from util.log import * +from util.sql import * +from util.cases import * +from util.dnodes import * + +class TDTestCase: + updatecfgDict = {'debugFlag': 143 ,"cDebugFlag":143,"uDebugFlag":143 ,"rpcDebugFlag":143 , "tmrDebugFlag":143 , + "jniDebugFlag":143 ,"simDebugFlag":143,"dDebugFlag":143, "dDebugFlag":143,"vDebugFlag":143,"mDebugFlag":143,"qDebugFlag":143, + "wDebugFlag":143,"sDebugFlag":143,"tsdbDebugFlag":143,"tqDebugFlag":143 ,"fsDebugFlag":143 ,"fnDebugFlag":143} + def init(self, conn, logSql): + tdLog.debug(f"start to excute {__file__}") + tdSql.init(conn.cursor(), True) + + def prepare_datas(self): + tdSql.execute( + '''create table stb1 + (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp) + tags (t1 int) + ''' + ) + + tdSql.execute( + ''' + create table t1 + (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp) + ''' + ) + for i in range(4): + tdSql.execute(f'create table ct{i+1} using stb1 tags ( {i+1} )') + + for i in range(9): + tdSql.execute( + f"insert into ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )" + ) + tdSql.execute( + f"insert into ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )" + ) + tdSql.execute("insert into ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar0', now()+8a )") + tdSql.execute("insert into ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )") + tdSql.execute("insert into ct1 values (now()+15s, 9, -99999, -999, -99, -9.99, NULL, 1, 'binary9', 'nchar9', now()+9a )") + tdSql.execute("insert into ct1 values (now()+20s, 9, -99999, -999, NULL, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )") + + tdSql.execute("insert into ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ") + tdSql.execute("insert into ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ") + tdSql.execute("insert into ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ") + + tdSql.execute( + f'''insert into t1 values + ( '2020-04-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) + ( '2020-10-21 01:01:01.000', 1, 11111, 111, 11, 1.11, 11.11, 1, "binary1", "nchar1", now()+1a ) + ( '2020-12-31 
01:01:01.000', 2, 22222, 222, 22, 2.22, 22.22, 0, "binary2", "nchar2", now()+2a ) + ( '2021-01-01 01:01:06.000', 3, 33333, 333, 33, 3.33, 33.33, 0, "binary3", "nchar3", now()+3a ) + ( '2021-05-07 01:01:10.000', 4, 44444, 444, 44, 4.44, 44.44, 1, "binary4", "nchar4", now()+4a ) + ( '2021-07-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) + ( '2021-09-30 01:01:16.000', 5, 55555, 555, 55, 5.55, 55.55, 0, "binary5", "nchar5", now()+5a ) + ( '2022-02-01 01:01:20.000', 6, 66666, 666, 66, 6.66, 66.66, 1, "binary6", "nchar6", now()+6a ) + ( '2022-10-28 01:01:26.000', 7, 00000, 000, 00, 0.00, 00.00, 1, "binary7", "nchar7", "1970-01-01 08:00:00.000" ) + ( '2022-12-01 01:01:30.000', 8, -88888, -888, -88, -8.88, -88.88, 0, "binary8", "nchar8", "1969-01-01 01:00:00.000" ) + ( '2022-12-31 01:01:36.000', 9, -99999999999999999, -999, -99, -9.99, -999999999999999999999.99, 1, "binary9", "nchar9", "1900-01-01 00:00:00.000" ) + ( '2023-02-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) + ''' + ) + + + def restart_taosd_query_sum(self): + + for i in range(5): + tdLog.info(" this is %d_th restart taosd " %i) + os.system("taos -s ' use db ;select c6 from stb1 ; '") + tdSql.execute("use db ") + tdSql.query("select count(*) from stb1") + tdSql.checkRows(1) + tdSql.query("select sum(c1),sum(c2),sum(c3),sum(c4),sum(c5),sum(c6) from stb1;") + tdSql.checkData(0,0,99) + tdSql.checkData(0,1,499995) + tdSql.checkData(0,2,4995) + tdSql.checkData(0,3,594) + tdSql.checkData(0,4,49.950001001) + tdSql.checkData(0,5,599.940000000) + tdDnodes.stop(1) + tdDnodes.start(1) + time.sleep(2) + + + + def run(self): # sourcery skip: extract-duplicate-method, remove-redundant-fstring + tdSql.prepare() + + tdLog.printNoPrefix("==========step1:create table ==============") + + self.prepare_datas() + + os.system("taos -s ' select c6 from stb1 ; '") + self.restart_taosd_query_sum() + + def stop(self): + tdSql.close() + tdLog.success(f"{__file__} successfully executed") + +tdCases.addLinux(__file__, TDTestCase()) +tdCases.addWindows(__file__, TDTestCase()) diff --git a/tests/system-test/7-tmq/subscribeStb.py b/tests/system-test/7-tmq/subscribeStb.py index fe05d2e223..a0b3668d47 100644 --- a/tests/system-test/7-tmq/subscribeStb.py +++ b/tests/system-test/7-tmq/subscribeStb.py @@ -1377,9 +1377,9 @@ class TDTestCase: self.tmqCase1(cfgPath, buildPath) self.tmqCase2(cfgPath, buildPath) - self.tmqCase3(cfgPath, buildPath) - self.tmqCase4(cfgPath, buildPath) - self.tmqCase5(cfgPath, buildPath) + # self.tmqCase3(cfgPath, buildPath) + # self.tmqCase4(cfgPath, buildPath) + # self.tmqCase5(cfgPath, buildPath) def stop(self): tdSql.close() diff --git a/tests/system-test/7-tmq/subscribeStb0.py b/tests/system-test/7-tmq/subscribeStb0.py new file mode 100644 index 0000000000..1d56103059 --- /dev/null +++ b/tests/system-test/7-tmq/subscribeStb0.py @@ -0,0 +1,1391 @@ + +import taos +import sys +import time +import socket +import os +import threading +from enum import Enum + +from util.log import * +from util.sql import * +from util.cases import * +from util.dnodes import * + +class actionType(Enum): + CREATE_DATABASE = 0 + CREATE_STABLE = 1 + CREATE_CTABLE = 2 + INSERT_DATA = 3 + +class TDTestCase: + hostname = socket.gethostname() + #rpcDebugFlagVal = '143' + #clientCfgDict = {'serverPort': '', 'firstEp': '', 'secondEp':'', 'rpcDebugFlag':'135', 'fqdn':''} + #clientCfgDict["rpcDebugFlag"] = rpcDebugFlagVal + #updatecfgDict = {'clientCfg': {}, 'serverPort': '', 'firstEp': '', 'secondEp':'', 
'rpcDebugFlag':'135', 'fqdn':''} + #updatecfgDict["rpcDebugFlag"] = rpcDebugFlagVal + #print ("===================: ", updatecfgDict) + + def init(self, conn, logSql): + tdLog.debug(f"start to excute {__file__}") + #tdSql.init(conn.cursor()) + tdSql.init(conn.cursor(), logSql) # output sql.txt file + + def getBuildPath(self): + selfPath = os.path.dirname(os.path.realpath(__file__)) + + if ("community" in selfPath): + projPath = selfPath[:selfPath.find("community")] + else: + projPath = selfPath[:selfPath.find("tests")] + + for root, dirs, files in os.walk(projPath): + if ("taosd" in files): + rootRealPath = os.path.dirname(os.path.realpath(root)) + if ("packaging" not in rootRealPath): + buildPath = root[:len(root) - len("/build/bin")] + break + return buildPath + + def newcur(self,cfg,host,port): + user = "root" + password = "taosdata" + con=taos.connect(host=host, user=user, password=password, config=cfg ,port=port) + cur=con.cursor() + print(cur) + return cur + + def initConsumerTable(self,cdbName='cdb'): + tdLog.info("create consume database, and consume info table, and consume result table") + tdSql.query("create database if not exists %s vgroups 1"%(cdbName)) + tdSql.query("drop table if exists %s.consumeinfo "%(cdbName)) + tdSql.query("drop table if exists %s.consumeresult "%(cdbName)) + + tdSql.query("create table %s.consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)"%cdbName) + tdSql.query("create table %s.consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)"%cdbName) + + def initConsumerInfoTable(self,cdbName='cdb'): + tdLog.info("drop consumeinfo table") + tdSql.query("drop table if exists %s.consumeinfo "%(cdbName)) + tdSql.query("create table %s.consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)"%cdbName) + + def insertConsumerInfo(self,consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifmanualcommit,cdbName='cdb'): + sql = "insert into %s.consumeinfo values "%cdbName + sql += "(now, %d, '%s', '%s', %d, %d, %d)"%(consumerId, topicList, keyList, expectrowcnt, ifcheckdata, ifmanualcommit) + tdLog.info("consume info sql: %s"%sql) + tdSql.query(sql) + + def selectConsumeResult(self,expectRows,cdbName='cdb'): + resultList=[] + while 1: + tdSql.query("select * from %s.consumeresult"%cdbName) + #tdLog.info("row: %d, %l64d, %l64d"%(tdSql.getData(0, 1),tdSql.getData(0, 2),tdSql.getData(0, 3)) + if tdSql.getRows() == expectRows: + break + else: + time.sleep(5) + + for i in range(expectRows): + tdLog.info ("consume id: %d, consume msgs: %d, consume rows: %d"%(tdSql.getData(i , 1), tdSql.getData(i , 2), tdSql.getData(i , 3))) + resultList.append(tdSql.getData(i , 3)) + + return resultList + + def startTmqSimProcess(self,buildPath,cfgPath,pollDelay,dbName,showMsg=1,showRow=1,cdbName='cdb',valgrind=0): + shellCmd = 'nohup ' + if valgrind == 1: + logFile = cfgPath + '/../log/valgrind-tmq.log' + shellCmd = 'nohup valgrind --log-file=' + logFile + shellCmd += '--tool=memcheck --leak-check=full --show-reachable=no --track-origins=yes --show-leak-kinds=all --num-callers=20 -v --workaround-gcc296-bugs=yes ' + + shellCmd += buildPath + '/build/bin/tmq_sim -c ' + cfgPath + shellCmd += " -y %d -d %s -g %d -r %d -w %s "%(pollDelay, dbName, showMsg, showRow, cdbName) + shellCmd += "> /dev/null 2>&1 &" + tdLog.info(shellCmd) + os.system(shellCmd) + + def 
create_database(self,tsql, dbName,dropFlag=1,vgroups=4,replica=1): + if dropFlag == 1: + tsql.execute("drop database if exists %s"%(dbName)) + + tsql.execute("create database if not exists %s vgroups %d replica %d"%(dbName, vgroups, replica)) + tdLog.debug("complete to create database %s"%(dbName)) + return + + def create_stable(self,tsql, dbName,stbName): + tsql.execute("create table if not exists %s.%s (ts timestamp, c1 bigint, c2 binary(16)) tags(t1 int)"%(dbName, stbName)) + tdLog.debug("complete to create %s.%s" %(dbName, stbName)) + return + + def create_ctables(self,tsql, dbName,stbName,ctbNum): + tsql.execute("use %s" %dbName) + pre_create = "create table" + sql = pre_create + #tdLog.debug("doing create one stable %s and %d child table in %s ..." %(stbname, count ,dbname)) + for i in range(ctbNum): + sql += " %s_%d using %s tags(%d)"%(stbName,i,stbName,i+1) + if (i > 0) and (i%100 == 0): + tsql.execute(sql) + sql = pre_create + if sql != pre_create: + tsql.execute(sql) + + tdLog.debug("complete to create %d child tables in %s.%s" %(ctbNum, dbName, stbName)) + return + + def insert_data(self,tsql,dbName,stbName,ctbNum,rowsPerTbl,batchNum,startTs=0): + tdLog.debug("start to insert data ............") + tsql.execute("use %s" %dbName) + pre_insert = "insert into " + sql = pre_insert + + if startTs == 0: + t = time.time() + startTs = int(round(t * 1000)) + + #tdLog.debug("doing insert data into stable:%s rows:%d ..."%(stbName, allRows)) + rowsOfSql = 0 + for i in range(ctbNum): + sql += " %s_%d values "%(stbName,i) + for j in range(rowsPerTbl): + sql += "(%d, %d, 'tmqrow_%d') "%(startTs + j, j, j) + rowsOfSql += 1 + if (j > 0) and ((rowsOfSql == batchNum) or (j == rowsPerTbl - 1)): + tsql.execute(sql) + rowsOfSql = 0 + if j < rowsPerTbl - 1: + sql = "insert into %s_%d values " %(stbName,i) + else: + sql = "insert into " + #end sql + if sql != pre_insert: + #print("insert sql:%s"%sql) + tsql.execute(sql) + tdLog.debug("insert data ............ 
[OK]") + return + + def prepareEnv(self, **parameterDict): + # create new connector for my thread + tsql=self.newcur(parameterDict['cfg'], 'localhost', 6030) + + if parameterDict["actionType"] == actionType.CREATE_DATABASE: + self.create_database(tsql, parameterDict["dbName"]) + elif parameterDict["actionType"] == actionType.CREATE_STABLE: + self.create_stable(tsql, parameterDict["dbName"], parameterDict["stbName"]) + elif parameterDict["actionType"] == actionType.CREATE_CTABLE: + self.create_ctables(tsql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"]) + elif parameterDict["actionType"] == actionType.INSERT_DATA: + self.insert_data(tsql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"],\ + parameterDict["rowsPerTbl"],parameterDict["batchNum"]) + else: + tdLog.exit("not support's action: ", parameterDict["actionType"]) + + return + + def tmqCase1(self, cfgPath, buildPath): + tdLog.printNoPrefix("======== test case 1: ") + + self.initConsumerTable() + + # create and start thread + parameterDict = {'cfg': '', \ + 'actionType': 0, \ + 'dbName': 'db1', \ + 'dropFlag': 1, \ + 'vgroups': 4, \ + 'replica': 1, \ + 'stbName': 'stb1', \ + 'ctbNum': 10, \ + 'rowsPerTbl': 10000, \ + 'batchNum': 100, \ + 'startTs': 1640966400000} # 2022-01-01 00:00:00.000 + parameterDict['cfg'] = cfgPath + + self.create_database(tdSql, parameterDict["dbName"]) + self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"]) + + tdLog.info("create topics from stb1") + topicFromStb1 = 'topic_stb1' + + tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName'])) + consumerId = 0 + expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + topicList = topicFromStb1 + ifcheckdata = 0 + ifManualCommit = 0 + keyList = 'group.id:cgrp1,\ + enable.auto.commit:false,\ + auto.commit.interval.ms:6000,\ + auto.offset.reset:earliest' + self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("start consume processor") + pollDelay = 100 + showMsg = 1 + showRow = 1 + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + time.sleep(5) + self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"]) + self.insert_data(tdSql,\ + parameterDict["dbName"],\ + parameterDict["stbName"],\ + parameterDict["ctbNum"],\ + parameterDict["rowsPerTbl"],\ + parameterDict["batchNum"]) + + tdLog.info("insert process end, and start to check consume result") + expectRows = 1 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != expectrowcnt: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt)) + tdLog.exit("tmq consume rows error!") + + tdSql.query("drop topic %s"%topicFromStb1) + + tdLog.printNoPrefix("======== test case 1 end ...... 
") + + def tmqCase2(self, cfgPath, buildPath): + tdLog.printNoPrefix("======== test case 2: ") + + self.initConsumerTable() + + # create and start thread + parameterDict = {'cfg': '', \ + 'actionType': 0, \ + 'dbName': 'db2', \ + 'dropFlag': 1, \ + 'vgroups': 4, \ + 'replica': 1, \ + 'stbName': 'stb1', \ + 'ctbNum': 10, \ + 'rowsPerTbl': 10000, \ + 'batchNum': 100, \ + 'startTs': 1640966400000} # 2022-01-01 00:00:00.000 + parameterDict['cfg'] = cfgPath + + parameterDict2 = {'cfg': '', \ + 'actionType': 0, \ + 'dbName': 'db2', \ + 'dropFlag': 1, \ + 'vgroups': 4, \ + 'replica': 1, \ + 'stbName': 'stb2', \ + 'ctbNum': 10, \ + 'rowsPerTbl': 10000, \ + 'batchNum': 100, \ + 'startTs': 1640966400000} # 2022-01-01 00:00:00.000 + parameterDict2['cfg'] = cfgPath + + self.create_database(tdSql, parameterDict["dbName"]) + self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"]) + self.create_stable(tdSql, parameterDict2["dbName"], parameterDict2["stbName"]) + + tdLog.info("create topics from stb1") + topicFromStb1 = 'topic_stb1' + + tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName'])) + consumerId = 0 + expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + topicList = topicFromStb1 + ifcheckdata = 0 + ifManualCommit = 0 + keyList = 'group.id:cgrp1,\ + enable.auto.commit:false,\ + auto.commit.interval.ms:6000,\ + auto.offset.reset:earliest' + self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("start consume processor") + pollDelay = 100 + showMsg = 1 + showRow = 1 + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("start create child tables of stb1 and stb2") + parameterDict['actionType'] = actionType.CREATE_CTABLE + parameterDict2['actionType'] = actionType.CREATE_CTABLE + + prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict) + prepareEnvThread.start() + prepareEnvThread2 = threading.Thread(target=self.prepareEnv, kwargs=parameterDict2) + prepareEnvThread2.start() + + prepareEnvThread.join() + prepareEnvThread2.join() + + tdLog.info("start insert data into child tables of stb1 and stb2") + parameterDict['actionType'] = actionType.INSERT_DATA + parameterDict2['actionType'] = actionType.INSERT_DATA + + prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict) + prepareEnvThread.start() + prepareEnvThread2 = threading.Thread(target=self.prepareEnv, kwargs=parameterDict2) + prepareEnvThread2.start() + + prepareEnvThread.join() + prepareEnvThread2.join() + + tdLog.info("insert process end, and start to check consume result") + expectRows = 1 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != expectrowcnt: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt)) + tdLog.exit("tmq consume rows error!") + + tdSql.query("drop topic %s"%topicFromStb1) + + tdLog.printNoPrefix("======== test case 2 end ...... 
") + + def tmqCase3(self, cfgPath, buildPath): + tdLog.printNoPrefix("======== test case 3: ") + + self.initConsumerTable() + + # create and start thread + parameterDict = {'cfg': '', \ + 'actionType': 0, \ + 'dbName': 'db3', \ + 'dropFlag': 1, \ + 'vgroups': 4, \ + 'replica': 1, \ + 'stbName': 'stb1', \ + 'ctbNum': 10, \ + 'rowsPerTbl': 20000, \ + 'batchNum': 50, \ + 'startTs': 1640966400000} # 2022-01-01 00:00:00.000 + parameterDict['cfg'] = cfgPath + + self.create_database(tdSql, parameterDict["dbName"]) + self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"]) + self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"]) + self.insert_data(tdSql,parameterDict["dbName"],parameterDict["stbName"],parameterDict["ctbNum"],parameterDict["rowsPerTbl"],parameterDict["batchNum"]) + + tdLog.info("create topics from stb1") + topicFromStb1 = 'topic_stb1' + + tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName'])) + consumerId = 0 + expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + topicList = topicFromStb1 + ifcheckdata = 0 + ifManualCommit = 0 + keyList = 'group.id:cgrp1,\ + enable.auto.commit:false,\ + auto.commit.interval.ms:6000,\ + auto.offset.reset:earliest' + self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("start consume processor") + pollDelay = 5 + showMsg = 1 + showRow = 1 + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + time.sleep(3) + tdLog.info("drop som child table of stb1") + dropTblNum = 4 + tdSql.query("drop table if exists %s.%s_1"%(parameterDict["dbName"], parameterDict["stbName"])) + tdSql.query("drop table if exists %s.%s_2"%(parameterDict["dbName"], parameterDict["stbName"])) + tdSql.query("drop table if exists %s.%s_3"%(parameterDict["dbName"], parameterDict["stbName"])) + tdSql.query("drop table if exists %s.%s_4"%(parameterDict["dbName"], parameterDict["stbName"])) + + tdLog.info("drop some child tables, then start to check consume result") + expectRows = 1 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + remaindrowcnt = parameterDict["rowsPerTbl"] * (parameterDict["ctbNum"] - dropTblNum) + + if not (totalConsumeRows < expectrowcnt and totalConsumeRows > remaindrowcnt): + tdLog.info("act consume rows: %d, expect consume rows: between %d and %d"%(totalConsumeRows, remaindrowcnt, expectrowcnt)) + tdLog.exit("tmq consume rows error!") + + tdSql.query("drop topic %s"%topicFromStb1) + + tdLog.printNoPrefix("======== test case 3 end ...... 
") + + def tmqCase4(self, cfgPath, buildPath): + tdLog.printNoPrefix("======== test case 4: ") + + self.initConsumerTable() + + # create and start thread + parameterDict = {'cfg': '', \ + 'actionType': 0, \ + 'dbName': 'db4', \ + 'dropFlag': 1, \ + 'vgroups': 4, \ + 'replica': 1, \ + 'stbName': 'stb1', \ + 'ctbNum': 10, \ + 'rowsPerTbl': 10000, \ + 'batchNum': 100, \ + 'startTs': 1640966400000} # 2022-01-01 00:00:00.000 + parameterDict['cfg'] = cfgPath + + self.create_database(tdSql, parameterDict["dbName"]) + self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"]) + self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"]) + self.insert_data(tdSql,\ + parameterDict["dbName"],\ + parameterDict["stbName"],\ + parameterDict["ctbNum"],\ + parameterDict["rowsPerTbl"],\ + parameterDict["batchNum"]) + + tdLog.info("create topics from stb1") + topicFromStb1 = 'topic_stb1' + + tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName'])) + consumerId = 0 + expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + topicList = topicFromStb1 + ifcheckdata = 0 + ifManualCommit = 1 + keyList = 'group.id:cgrp1,\ + enable.auto.commit:false,\ + auto.commit.interval.ms:6000,\ + auto.offset.reset:earliest' + self.insertConsumerInfo(consumerId, expectrowcnt/4,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("start consume processor") + pollDelay = 5 + showMsg = 1 + showRow = 1 + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("start to check consume result") + expectRows = 1 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != expectrowcnt/4: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt/4)) + tdLog.exit("tmq consume rows error!") + + self.initConsumerInfoTable() + consumerId = 1 + self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("again start consume processor") + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("again check consume result") + expectRows = 2 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != expectrowcnt: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt)) + tdLog.exit("tmq consume rows error!") + + tdSql.query("drop topic %s"%topicFromStb1) + + tdLog.printNoPrefix("======== test case 4 end ...... 
") + + def tmqCase5(self, cfgPath, buildPath): + tdLog.printNoPrefix("======== test case 5: ") + + self.initConsumerTable() + + # create and start thread + parameterDict = {'cfg': '', \ + 'actionType': 0, \ + 'dbName': 'db5', \ + 'dropFlag': 1, \ + 'vgroups': 4, \ + 'replica': 1, \ + 'stbName': 'stb1', \ + 'ctbNum': 10, \ + 'rowsPerTbl': 10000, \ + 'batchNum': 100, \ + 'startTs': 1640966400000} # 2022-01-01 00:00:00.000 + parameterDict['cfg'] = cfgPath + + self.create_database(tdSql, parameterDict["dbName"]) + self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"]) + self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"]) + self.insert_data(tdSql,\ + parameterDict["dbName"],\ + parameterDict["stbName"],\ + parameterDict["ctbNum"],\ + parameterDict["rowsPerTbl"],\ + parameterDict["batchNum"]) + + tdLog.info("create topics from stb1") + topicFromStb1 = 'topic_stb1' + + tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName'])) + consumerId = 0 + expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + topicList = topicFromStb1 + ifcheckdata = 0 + ifManualCommit = 0 + keyList = 'group.id:cgrp1,\ + enable.auto.commit:false,\ + auto.commit.interval.ms:6000,\ + auto.offset.reset:earliest' + self.insertConsumerInfo(consumerId, expectrowcnt/4,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("start consume processor") + pollDelay = 5 + showMsg = 1 + showRow = 1 + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("start to check consume result") + expectRows = 1 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != expectrowcnt/4: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt/4)) + tdLog.exit("tmq consume rows error!") + + self.initConsumerInfoTable() + consumerId = 1 + self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("again start consume processor") + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("again check consume result") + expectRows = 2 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != (expectrowcnt * (1 + 1/4)): + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt)) + tdLog.exit("tmq consume rows error!") + + tdSql.query("drop topic %s"%topicFromStb1) + + tdLog.printNoPrefix("======== test case 5 end ...... 
") + + def tmqCase6(self, cfgPath, buildPath): + tdLog.printNoPrefix("======== test case 6: ") + + self.initConsumerTable() + + # create and start thread + parameterDict = {'cfg': '', \ + 'actionType': 0, \ + 'dbName': 'db6', \ + 'dropFlag': 1, \ + 'vgroups': 4, \ + 'replica': 1, \ + 'stbName': 'stb1', \ + 'ctbNum': 10, \ + 'rowsPerTbl': 10000, \ + 'batchNum': 100, \ + 'startTs': 1640966400000} # 2022-01-01 00:00:00.000 + parameterDict['cfg'] = cfgPath + + self.create_database(tdSql, parameterDict["dbName"]) + self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"]) + self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"]) + self.insert_data(tdSql,\ + parameterDict["dbName"],\ + parameterDict["stbName"],\ + parameterDict["ctbNum"],\ + parameterDict["rowsPerTbl"],\ + parameterDict["batchNum"]) + + tdLog.info("create topics from stb1") + topicFromStb1 = 'topic_stb1' + + tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName'])) + consumerId = 0 + expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + topicList = topicFromStb1 + ifcheckdata = 0 + ifManualCommit = 1 + keyList = 'group.id:cgrp1,\ + enable.auto.commit:false,\ + auto.commit.interval.ms:6000,\ + auto.offset.reset:earliest' + self.insertConsumerInfo(consumerId, expectrowcnt/4,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("start consume processor") + pollDelay = 5 + showMsg = 1 + showRow = 1 + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("start to check consume result") + expectRows = 1 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != expectrowcnt/4: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt/4)) + tdLog.exit("tmq consume rows error!") + + self.initConsumerInfoTable() + consumerId = 1 + keyList = 'group.id:cgrp1,\ + enable.auto.commit:false,\ + auto.commit.interval.ms:6000,\ + auto.offset.reset:latest' + self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("again start consume processor") + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("again check consume result") + expectRows = 2 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != expectrowcnt: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt)) + tdLog.exit("tmq consume rows error!") + + tdSql.query("drop topic %s"%topicFromStb1) + + tdLog.printNoPrefix("======== test case 6 end ...... 
") + + def tmqCase7(self, cfgPath, buildPath): + tdLog.printNoPrefix("======== test case 7: ") + + self.initConsumerTable() + + # create and start thread + parameterDict = {'cfg': '', \ + 'actionType': 0, \ + 'dbName': 'db7', \ + 'dropFlag': 1, \ + 'vgroups': 4, \ + 'replica': 1, \ + 'stbName': 'stb1', \ + 'ctbNum': 10, \ + 'rowsPerTbl': 10000, \ + 'batchNum': 100, \ + 'startTs': 1640966400000} # 2022-01-01 00:00:00.000 + parameterDict['cfg'] = cfgPath + + self.create_database(tdSql, parameterDict["dbName"]) + self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"]) + self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"]) + self.insert_data(tdSql,\ + parameterDict["dbName"],\ + parameterDict["stbName"],\ + parameterDict["ctbNum"],\ + parameterDict["rowsPerTbl"],\ + parameterDict["batchNum"]) + + tdLog.info("create topics from stb1") + topicFromStb1 = 'topic_stb1' + + tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName'])) + consumerId = 0 + expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + topicList = topicFromStb1 + ifcheckdata = 0 + ifManualCommit = 1 + keyList = 'group.id:cgrp1,\ + enable.auto.commit:false,\ + auto.commit.interval.ms:6000,\ + auto.offset.reset:latest' + self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("start consume processor") + pollDelay = 5 + showMsg = 1 + showRow = 1 + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("start to check consume result") + expectRows = 1 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != 0: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0)) + tdLog.exit("tmq consume rows error!") + + self.initConsumerInfoTable() + consumerId = 1 + self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("again start consume processor") + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("again check consume result") + expectRows = 2 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != 0: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0)) + tdLog.exit("tmq consume rows error!") + + tdSql.query("drop topic %s"%topicFromStb1) + + tdLog.printNoPrefix("======== test case 7 end ...... 
") + + def tmqCase8(self, cfgPath, buildPath): + tdLog.printNoPrefix("======== test case 8: ") + + self.initConsumerTable() + + # create and start thread + parameterDict = {'cfg': '', \ + 'actionType': 0, \ + 'dbName': 'db8', \ + 'dropFlag': 1, \ + 'vgroups': 4, \ + 'replica': 1, \ + 'stbName': 'stb1', \ + 'ctbNum': 10, \ + 'rowsPerTbl': 10000, \ + 'batchNum': 100, \ + 'startTs': 1640966400000} # 2022-01-01 00:00:00.000 + parameterDict['cfg'] = cfgPath + + self.create_database(tdSql, parameterDict["dbName"]) + self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"]) + self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"]) + self.insert_data(tdSql,\ + parameterDict["dbName"],\ + parameterDict["stbName"],\ + parameterDict["ctbNum"],\ + parameterDict["rowsPerTbl"],\ + parameterDict["batchNum"]) + + tdLog.info("create topics from stb1") + topicFromStb1 = 'topic_stb1' + + tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName'])) + consumerId = 0 + expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + topicList = topicFromStb1 + ifcheckdata = 0 + ifManualCommit = 1 + keyList = 'group.id:cgrp1,\ + enable.auto.commit:false,\ + auto.commit.interval.ms:6000,\ + auto.offset.reset:latest' + self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("start consume 0 processor") + pollDelay = 10 + showMsg = 1 + showRow = 1 + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("start to check consume 0 result") + expectRows = 1 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != 0: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0)) + tdLog.exit("tmq consume rows error!") + + tdLog.info("start consume 1 processor") + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("start one new thread to insert data") + parameterDict['actionType'] = actionType.INSERT_DATA + prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict) + prepareEnvThread.start() + prepareEnvThread.join() + + tdLog.info("start to check consume 0 and 1 result") + expectRows = 2 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != expectrowcnt: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt)) + tdLog.exit("tmq consume rows error!") + + tdLog.info("start consume 2 processor") + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("start one new thread to insert data") + parameterDict['actionType'] = actionType.INSERT_DATA + prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict) + prepareEnvThread.start() + prepareEnvThread.join() + + tdLog.info("start to check consume 0 and 1 and 2 result") + expectRows = 3 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != expectrowcnt*2: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt*2)) + tdLog.exit("tmq consume rows 
error!") + + tdSql.query("drop topic %s"%topicFromStb1) + + tdLog.printNoPrefix("======== test case 8 end ...... ") + + def tmqCase9(self, cfgPath, buildPath): + tdLog.printNoPrefix("======== test case 9: ") + + self.initConsumerTable() + + # create and start thread + parameterDict = {'cfg': '', \ + 'actionType': 0, \ + 'dbName': 'db9', \ + 'dropFlag': 1, \ + 'vgroups': 4, \ + 'replica': 1, \ + 'stbName': 'stb1', \ + 'ctbNum': 10, \ + 'rowsPerTbl': 10000, \ + 'batchNum': 100, \ + 'startTs': 1640966400000} # 2022-01-01 00:00:00.000 + parameterDict['cfg'] = cfgPath + + self.create_database(tdSql, parameterDict["dbName"]) + self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"]) + self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"]) + self.insert_data(tdSql,\ + parameterDict["dbName"],\ + parameterDict["stbName"],\ + parameterDict["ctbNum"],\ + parameterDict["rowsPerTbl"],\ + parameterDict["batchNum"]) + + tdLog.info("create topics from stb1") + topicFromStb1 = 'topic_stb1' + + tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName'])) + consumerId = 0 + expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + topicList = topicFromStb1 + ifcheckdata = 0 + ifManualCommit = 1 + keyList = 'group.id:cgrp1,\ + enable.auto.commit:false,\ + auto.commit.interval.ms:6000,\ + auto.offset.reset:latest' + self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("start consume 0 processor") + pollDelay = 10 + showMsg = 1 + showRow = 1 + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("start to check consume 0 result") + expectRows = 1 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != 0: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0)) + tdLog.exit("tmq consume rows error!") + + tdLog.info("start consume 1 processor") + self.initConsumerInfoTable() + consumerId = 1 + ifManualCommit = 0 + self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit) + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("start one new thread to insert data") + parameterDict['actionType'] = actionType.INSERT_DATA + prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict) + prepareEnvThread.start() + prepareEnvThread.join() + + tdLog.info("start to check consume 0 and 1 result") + expectRows = 2 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != expectrowcnt: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt)) + tdLog.exit("tmq consume rows error!") + + tdLog.info("start consume 2 processor") + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("start one new thread to insert data") + parameterDict['actionType'] = actionType.INSERT_DATA + prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict) + prepareEnvThread.start() + prepareEnvThread.join() + + tdLog.info("start to check consume 0 and 1 and 2 result") + expectRows = 3 + resultList = 
+    def tmqCase10(self, cfgPath, buildPath):
+        tdLog.printNoPrefix("======== test case 10: ")
+
+        self.initConsumerTable()
+
+        # create and start thread
+        parameterDict = {'cfg':        '',     \
+                         'actionType': 0,      \
+                         'dbName':     'db10', \
+                         'dropFlag':   1,      \
+                         'vgroups':    4,      \
+                         'replica':    1,      \
+                         'stbName':    'stb1', \
+                         'ctbNum':     10,     \
+                         'rowsPerTbl': 10000,  \
+                         'batchNum':   100,    \
+                         'startTs':    1640966400000}  # 2022-01-01 00:00:00.000
+        parameterDict['cfg'] = cfgPath
+
+        self.create_database(tdSql, parameterDict["dbName"])
+        self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
+        self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
+        self.insert_data(tdSql,\
+                         parameterDict["dbName"],\
+                         parameterDict["stbName"],\
+                         parameterDict["ctbNum"],\
+                         parameterDict["rowsPerTbl"],\
+                         parameterDict["batchNum"])
+
+        tdLog.info("create topics from stb1")
+        topicFromStb1 = 'topic_stb1'
+
+        tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
+        consumerId     = 0
+        expectrowcnt   = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
+        topicList      = topicFromStb1
+        ifcheckdata    = 0
+        ifManualCommit = 1
+        keyList        = 'group.id:cgrp1,\
+                        enable.auto.commit:false,\
+                        auto.commit.interval.ms:6000,\
+                        auto.offset.reset:latest'
+        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
+
+        tdLog.info("start consume 0 processor")
+        pollDelay = 10
+        showMsg   = 1
+        showRow   = 1
+        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+        tdLog.info("start to check consume 0 result")
+        expectRows = 1
+        resultList = self.selectConsumeResult(expectRows)
+        totalConsumeRows = 0
+        for i in range(expectRows):
+            totalConsumeRows += resultList[i]
+
+        if totalConsumeRows != 0:
+            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0))
+            tdLog.exit("tmq consume rows error!")
+
+        tdLog.info("start consume 1 processor")
+        self.initConsumerInfoTable()
+        consumerId = 1
+        ifManualCommit = 1
+        self.insertConsumerInfo(consumerId, expectrowcnt-10000,topicList,keyList,ifcheckdata,ifManualCommit)
+        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+        tdLog.info("start one new thread to insert data")
+        parameterDict['actionType'] = actionType.INSERT_DATA
+        prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
+        prepareEnvThread.start()
+        prepareEnvThread.join()
+
+        tdLog.info("start to check consume 0 and 1 result")
+        expectRows = 2
+        resultList = self.selectConsumeResult(expectRows)
+        totalConsumeRows = 0
+        for i in range(expectRows):
+            totalConsumeRows += resultList[i]
+
+        if totalConsumeRows != expectrowcnt-10000:
+            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt-10000))
+            tdLog.exit("tmq consume rows error!")
+
+        tdLog.info("start consume 2 processor")
+        self.initConsumerInfoTable()
+        consumerId = 2
+        ifManualCommit = 1
+        self.insertConsumerInfo(consumerId, expectrowcnt+10000,topicList,keyList,ifcheckdata,ifManualCommit)
+        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+        tdLog.info("start one new thread to insert data")
+        parameterDict['actionType'] = actionType.INSERT_DATA
+        prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
+        prepareEnvThread.start()
+        prepareEnvThread.join()
+
+        tdLog.info("start to check consume 0 and 1 and 2 result")
+        expectRows = 3
+        resultList = self.selectConsumeResult(expectRows)
+        totalConsumeRows = 0
+        for i in range(expectRows):
+            totalConsumeRows += resultList[i]
+
+        if totalConsumeRows != expectrowcnt*2:
+            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt*2))
+            tdLog.exit("tmq consume rows error!")
+
+        tdSql.query("drop topic %s"%topicFromStb1)
+
+        tdLog.printNoPrefix("======== test case 10 end ...... ")
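Case 10's expected counts only balance because consumer 1's shortfall is picked up by consumer 2: with `auto.offset.reset:latest`, consumer 0 sees nothing, consumer 1 stops 10000 rows short of the first insert round, and consumer 2 resumes from the committed offset and also reads the second round. A sketch of the arithmetic, assuming consumer 1 commits exactly the rows it consumed:

```python
expectrowcnt = 10000 * 10            # rowsPerTbl * ctbNum, rows per insert round

consumed_0 = 0                        # auto.offset.reset:latest, no new rows yet
consumed_1 = expectrowcnt - 10000     # reads round 1 but stops 10000 rows early
consumed_2 = 10000 + expectrowcnt     # leftover of round 1 plus all of round 2

# matches the final check: totalConsumeRows == expectrowcnt*2
assert consumed_0 + consumed_1 + consumed_2 == expectrowcnt * 2
```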
+    def tmqCase11(self, cfgPath, buildPath):
+        tdLog.printNoPrefix("======== test case 11: ")
+
+        self.initConsumerTable()
+
+        # create and start thread
+        parameterDict = {'cfg':        '',     \
+                         'actionType': 0,      \
+                         'dbName':     'db11', \
+                         'dropFlag':   1,      \
+                         'vgroups':    4,      \
+                         'replica':    1,      \
+                         'stbName':    'stb1', \
+                         'ctbNum':     10,     \
+                         'rowsPerTbl': 10000,  \
+                         'batchNum':   100,    \
+                         'startTs':    1640966400000}  # 2022-01-01 00:00:00.000
+        parameterDict['cfg'] = cfgPath
+
+        self.create_database(tdSql, parameterDict["dbName"])
+        self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"])
+        self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"])
+        self.insert_data(tdSql,\
+                         parameterDict["dbName"],\
+                         parameterDict["stbName"],\
+                         parameterDict["ctbNum"],\
+                         parameterDict["rowsPerTbl"],\
+                         parameterDict["batchNum"])
+
+        tdLog.info("create topics from stb1")
+        topicFromStb1 = 'topic_stb1'
+
+        tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName']))
+        consumerId     = 0
+        expectrowcnt   = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
+        topicList      = topicFromStb1
+        ifcheckdata    = 0
+        ifManualCommit = 1
+        keyList        = 'group.id:cgrp1,\
+                        enable.auto.commit:false,\
+                        auto.commit.interval.ms:6000,\
+                        auto.offset.reset:none'
+        self.insertConsumerInfo(consumerId, expectrowcnt/4,topicList,keyList,ifcheckdata,ifManualCommit)
+
+        tdLog.info("start consume processor")
+        pollDelay = 5
+        showMsg   = 1
+        showRow   = 1
+        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+        tdLog.info("start to check consume result")
+        expectRows = 1
+        resultList = self.selectConsumeResult(expectRows)
+        totalConsumeRows = 0
+        for i in range(expectRows):
+            totalConsumeRows += resultList[i]
+
+        if totalConsumeRows != 0:
+            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0))
+            tdLog.exit("tmq consume rows error!")
+
+        self.initConsumerInfoTable()
+        consumerId = 1
+        keyList = 'group.id:cgrp1,\
+                   enable.auto.commit:false,\
+                   auto.commit.interval.ms:6000,\
+                   auto.offset.reset:none'
+        self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
+
+        tdLog.info("again start consume processor")
+        self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
+
+        tdLog.info("again check consume result")
+        expectRows = 2
+        resultList = self.selectConsumeResult(expectRows)
+        totalConsumeRows = 0
+        for i in range(expectRows):
+            totalConsumeRows += resultList[i]
+
+        if totalConsumeRows != 0:
+            tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0))
+            tdLog.exit("tmq consume rows error!")
+
+        tdSql.query("drop topic %s"%topicFromStb1)
+
+        tdLog.printNoPrefix("======== test case 11 end ...... ")
tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, 0)) + tdLog.exit("tmq consume rows error!") + + tdSql.query("drop topic %s"%topicFromStb1) + + tdLog.printNoPrefix("======== test case 11 end ...... ") + + def tmqCase12(self, cfgPath, buildPath): + tdLog.printNoPrefix("======== test case 12: ") + + self.initConsumerTable() + + # create and start thread + parameterDict = {'cfg': '', \ + 'actionType': 0, \ + 'dbName': 'db12', \ + 'dropFlag': 1, \ + 'vgroups': 4, \ + 'replica': 1, \ + 'stbName': 'stb1', \ + 'ctbNum': 10, \ + 'rowsPerTbl': 10000, \ + 'batchNum': 100, \ + 'startTs': 1640966400000} # 2022-01-01 00:00:00.000 + parameterDict['cfg'] = cfgPath + + self.create_database(tdSql, parameterDict["dbName"]) + self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"]) + self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"]) + self.insert_data(tdSql,\ + parameterDict["dbName"],\ + parameterDict["stbName"],\ + parameterDict["ctbNum"],\ + parameterDict["rowsPerTbl"],\ + parameterDict["batchNum"]) + + tdLog.info("create topics from stb1") + topicFromStb1 = 'topic_stb1' + + tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName'])) + consumerId = 0 + expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + topicList = topicFromStb1 + ifcheckdata = 0 + ifManualCommit = 0 + keyList = 'group.id:cgrp1,\ + enable.auto.commit:false,\ + auto.commit.interval.ms:6000,\ + auto.offset.reset:earliest' + self.insertConsumerInfo(consumerId, expectrowcnt/4,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("start consume processor") + pollDelay = 5 + showMsg = 1 + showRow = 1 + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("start to check consume result") + expectRows = 1 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != expectrowcnt/4: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt/4)) + tdLog.exit("tmq consume rows error!") + + self.initConsumerInfoTable() + consumerId = 1 + keyList = 'group.id:cgrp1,\ + enable.auto.commit:false,\ + auto.commit.interval.ms:6000,\ + auto.offset.reset:none' + self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("again start consume processor") + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("again check consume result") + expectRows = 2 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != expectrowcnt/4: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt/4)) + tdLog.exit("tmq consume rows error!") + + tdSql.query("drop topic %s"%topicFromStb1) + + tdLog.printNoPrefix("======== test case 12 end ...... 
") + + def tmqCase13(self, cfgPath, buildPath): + tdLog.printNoPrefix("======== test case 13: ") + + self.initConsumerTable() + + # create and start thread + parameterDict = {'cfg': '', \ + 'actionType': 0, \ + 'dbName': 'db13', \ + 'dropFlag': 1, \ + 'vgroups': 4, \ + 'replica': 1, \ + 'stbName': 'stb1', \ + 'ctbNum': 10, \ + 'rowsPerTbl': 10000, \ + 'batchNum': 100, \ + 'startTs': 1640966400000} # 2022-01-01 00:00:00.000 + parameterDict['cfg'] = cfgPath + + self.create_database(tdSql, parameterDict["dbName"]) + self.create_stable(tdSql, parameterDict["dbName"], parameterDict["stbName"]) + self.create_ctables(tdSql, parameterDict["dbName"], parameterDict["stbName"], parameterDict["ctbNum"]) + self.insert_data(tdSql,\ + parameterDict["dbName"],\ + parameterDict["stbName"],\ + parameterDict["ctbNum"],\ + parameterDict["rowsPerTbl"],\ + parameterDict["batchNum"]) + + tdLog.info("create topics from stb1") + topicFromStb1 = 'topic_stb1' + + tdSql.execute("create topic %s as select ts, c1, c2 from %s.%s" %(topicFromStb1, parameterDict['dbName'], parameterDict['stbName'])) + consumerId = 0 + expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + topicList = topicFromStb1 + ifcheckdata = 0 + ifManualCommit = 1 + keyList = 'group.id:cgrp1,\ + enable.auto.commit:false,\ + auto.commit.interval.ms:6000,\ + auto.offset.reset:earliest' + self.insertConsumerInfo(consumerId, expectrowcnt/4,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("start consume processor") + pollDelay = 5 + showMsg = 1 + showRow = 1 + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("start to check consume result") + expectRows = 1 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != expectrowcnt/4: + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt/4)) + tdLog.exit("tmq consume rows error!") + + self.initConsumerInfoTable() + consumerId = 1 + ifManualCommit = 1 + keyList = 'group.id:cgrp1,\ + enable.auto.commit:false,\ + auto.commit.interval.ms:6000,\ + auto.offset.reset:none' + self.insertConsumerInfo(consumerId, expectrowcnt/2,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("again start consume processor") + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("again check consume result") + expectRows = 2 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != expectrowcnt*(1/2+1/4): + tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt*(1/2+1/4))) + tdLog.exit("tmq consume rows error!") + + self.initConsumerInfoTable() + consumerId = 2 + ifManualCommit = 1 + keyList = 'group.id:cgrp1,\ + enable.auto.commit:false,\ + auto.commit.interval.ms:6000,\ + auto.offset.reset:none' + self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit) + + tdLog.info("again start consume processor") + self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow) + + tdLog.info("again check consume result") + expectRows = 3 + resultList = self.selectConsumeResult(expectRows) + totalConsumeRows = 0 + for i in range(expectRows): + totalConsumeRows += resultList[i] + + if totalConsumeRows != expectrowcnt: + tdLog.info("act 
diff --git a/tests/system-test/fulltest.sh b/tests/system-test/fulltest.sh
index ee650b778e..01b8b56903 100755
--- a/tests/system-test/fulltest.sh
+++ b/tests/system-test/fulltest.sh
@@ -27,6 +27,7 @@ python3 ./test.py -f 2-query/join.py
 python3 ./test.py -f 2-query/cast.py
 python3 ./test.py -f 2-query/concat.py
 python3 ./test.py -f 2-query/concat_ws.py
+python3 ./test.py -f 2-query/check_tsdb.py
 # python3 ./test.py -f 2-query/union.py
 # python3 ./test.py -f 2-query/union2.py
 # python3 ./test.py -f 2-query/union3.py
@@ -62,11 +63,13 @@ python3 ./test.py -f 2-query/arccos.py
 python3 ./test.py -f 2-query/arctan.py
 python3 ./test.py -f 2-query/query_cols_tags_and_or.py
 python3 ./test.py -f 2-query/nestedQuery.py
+
 python3 ./test.py -f 7-tmq/basic5.py
 python3 ./test.py -f 7-tmq/subscribeDb.py
 python3 ./test.py -f 7-tmq/subscribeDb1.py
 python3 ./test.py -f 7-tmq/subscribeStb.py
+python3 ./test.py -f 7-tmq/subscribeStb0.py
 python3 ./test.py -f 7-tmq/subscribeStb1.py
 python3 ./test.py -f 7-tmq/subscribeStb2.py
diff --git a/tests/test/c/tmqSim.c b/tests/test/c/tmqSim.c
index e0f58d052f..accd1dd080 100644
--- a/tests/test/c/tmqSim.c
+++ b/tests/test/c/tmqSim.c
@@ -321,9 +321,16 @@ int32_t saveConsumeResult(SThreadInfo* pInfo) {
   TAOS* pConn = taos_connect(NULL, "root", "taosdata", NULL, 0);
   assert(pConn != NULL);
 
+  int64_t now = taosGetTimestampMs();
+
   // schema: ts timestamp, consumerid int, consummsgcnt bigint, checkresult int
-  sprintf(sqlStr, "insert into %s.consumeresult values (now, %d, %" PRId64 ", %" PRId64 ", %d)", g_stConfInfo.cdbName,
-          pInfo->consumerId, pInfo->consumeMsgCnt, pInfo->consumeRowCnt, pInfo->checkresult);
+  sprintf(sqlStr, "insert into %s.consumeresult values (%"PRId64", %d, %" PRId64 ", %" PRId64 ", %d)",
+          g_stConfInfo.cdbName,
+          now,
+          pInfo->consumerId,
+          pInfo->consumeMsgCnt,
+          pInfo->consumeRowCnt,
+          pInfo->checkresult);
 
   char tmpString[128];
   taosFprintfFile(g_fp, "%s, consume id %d result: %s\n", getCurrentTimeString(tmpString), pInfo->consumerId ,sqlStr);
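The tmqSim.c hunk replaces the server-evaluated `now` in the result INSERT with an explicit millisecond timestamp from `taosGetTimestampMs()`. A plausible motivation, since `ts` is the first column of `consumeresult`: an explicit client-side value makes the written timestamp deterministic for the process and visible in the SQL that is logged just below. The analogous client-side millisecond timestamp in Python, for reference (the sample values are hypothetical):

```python
import time

# Millisecond Unix timestamp, analogous to taosGetTimestampMs() in tmqSim.c.
now_ms = time.time_ns() // 1_000_000
sql = "insert into %s.consumeresult values (%d, %d, %d, %d, %d)" % (
    "cdb", now_ms, 0, 100, 100000, 0)
print(sql)
```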
-| 10.3 | 219 | 0.31 | Beijing.Chaoyang | 2 |
-| 10.2 | 220 | 0.23 | Beijing.Chaoyang | 3 |
-| 11.5 | 221 | 0.35 | Beijing.Haidian | 3 |
-| 13.4 | 223 | 0.29 | Beijing.Haidian | 2 |
-| 12.6 | 218 | 0.33 | Beijing.Chaoyang | 2 |
-| 11.8 | 221 | 0.28 | Beijing.Haidian | 2 |
-| 10.3 | 218 | 0.25 | Beijing.Chaoyang | 3 |
-| 12.3 | 221 | 0.31 | Beijing.Chaoyang | 2 |
+| 10.3 | 219 | 0.31 | San Jose | 2 |
+| 10.2 | 220 | 0.23 | San Jose | 3 |
+| 11.5 | 221 | 0.35 | Mountain View | 3 |
+| 13.4 | 223 | 0.29 | Mountain View | 2 |
+| 12.6 | 218 | 0.33 | San Jose | 2 |
+| 11.8 | 221 | 0.28 | Mountain View | 2 |
+| 10.3 | 218 | 0.25 | San Jose | 3 |
+| 12.3 | 221 | 0.31 | San Jose | 2 |