Merge branch '3.0' into fix/3_liaohj
This commit is contained in:
commit
aeea699c9c
|
@ -2,7 +2,7 @@
|
|||
IF (DEFINED VERNUMBER)
|
||||
SET(TD_VER_NUMBER ${VERNUMBER})
|
||||
ELSE ()
|
||||
SET(TD_VER_NUMBER "3.1.1.0.alpha")
|
||||
SET(TD_VER_NUMBER "3.2.0.0.alpha")
|
||||
ENDIF ()
|
||||
|
||||
IF (DEFINED VERCOMPATIBLE)
|
||||
|
|
|
@ -1,7 +1,7 @@
|
|||
# cos
|
||||
ExternalProject_Add(mxml
|
||||
GIT_REPOSITORY https://github.com/michaelrsweet/mxml.git
|
||||
GIT_TAG release-2.10
|
||||
GIT_TAG release-2.12
|
||||
SOURCE_DIR "${TD_CONTRIB_DIR}/mxml"
|
||||
#BINARY_DIR ""
|
||||
BUILD_IN_SOURCE TRUE
|
||||
|
|
|
@ -887,4 +887,4 @@ The `pycumsum` function finds the cumulative sum for all data in the input colum
|
|||
|
||||
</details>
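The cumulative-sum computation that `pycumsum` performs can be sketched in plain Python. This is only the core numpy logic, not the full TDengine UDF scaffolding, and the NULL (None) handling shown is an assumption:

```python
import numpy as np

def pycumsum_core(values):
    # Core of a cumulative-sum aggregate: skip NULLs (None),
    # accumulate the rest, and return the final running total.
    arr = np.array([v for v in values if v is not None], dtype=float)
    return float(np.cumsum(arr)[-1]) if arr.size else None
```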
|
||||
## Manage and Use UDF
|
||||
You need to add UDF to TDengine before using it in SQL queries. For more information about how to manage UDF and how to invoke UDF, please see [Manage and Use UDF](../12-taos-sql/26-udf.md).
|
||||
You need to add UDF to TDengine before using it in SQL queries. For more information about how to manage UDF and how to invoke UDF, please see [Manage and Use UDF](../taos-sql/udf/).
|
||||
|
|
|
@ -49,3 +49,5 @@ You can also add filter conditions to limit the results.
|
|||
6. You cannot create an index on a normal table or a child table.
|
||||
|
||||
7. If a tag column has very few unique values, it's better not to create an index on it; the benefit would be minimal.
|
||||
|
||||
8. For a newly created supertable, an index name is randomly generated for the first tag column. It is composed of the tag0 column name plus 23 random bytes, and the index can be rebuilt or dropped.
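As a hypothetical illustration of the naming rule in item 8 (the actual random bytes are generated server-side; this only mirrors the described shape):

```python
import secrets
import string

def make_tag_index_name(tag0_name: str) -> str:
    # Hypothetical sketch: the first tag column's name followed by
    # 23 random characters, mirroring the rule described above.
    suffix = "".join(secrets.choice(string.ascii_lowercase + string.digits)
                     for _ in range(23))
    return tag0_name + suffix
```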
|
||||
|
|
|
@ -13,7 +13,7 @@ After TDengine starts, it automatically writes many metrics in specific interval
|
|||
To deploy TDinsight, we need
|
||||
- a single-node TDengine server or a multi-node TDengine cluster, plus a [Grafana] server. This dashboard requires TDengine 3.0.1.0 or above, with the monitoring feature enabled. For detailed configuration, please refer to [TDengine monitoring configuration](../config/#monitoring-parameters).
|
||||
- taosAdapter has been installed and running, please refer to [taosAdapter](../taosadapter).
|
||||
- taosKeeper has been installed and running, please refer to [taosKeeper](../taosKeeper).
|
||||
- taosKeeper has been installed and running. Note that the monitoring-related items in the taos.cfg file need to be configured. Refer to [taosKeeper](../taosKeeper) for details.
|
||||
|
||||
Please record
|
||||
- The endpoint of taosAdapter REST service, for example `http://tdengine.local:6041`
|
||||
|
@ -80,7 +80,7 @@ chmod +x TDinsight.sh
|
|||
./TDinsight.sh
|
||||
```
|
||||
|
||||
This script will automatically download the latest [Grafana TDengine data source plugin](https://github.com/taosdata/grafanaplugin/releases/latest) and [TDinsight dashboard](https://github.com/taosdata/grafanaplugin/blob/master/dashboards/TDinsightV3.json), and convert the configurable parameters from the command-line options into a [Grafana Provisioning](https://grafana.com/docs/grafana/latest/administration/provisioning/) configuration file to automate deployment and updates. With the alert setting options provided by this script, you can also get built-in support for AliCloud SMS alert notifications.
|
||||
This script will automatically download the latest [Grafana TDengine data source plugin](https://github.com/taosdata/grafanaplugin/releases/latest) and [TDinsight dashboard](https://github.com/taosdata/grafanaplugin/blob/master/dashboards/TDinsightV3.json), and convert the configurable parameters from the command-line options into a [Grafana Provisioning](https://grafana.com/docs/grafana/latest/administration/provisioning/) configuration file to automate deployment and updates.
|
||||
|
||||
Assume you use TDengine and Grafana's default services on the same host. Run `./TDinsight.sh` and open the Grafana browser window to see the TDinsight dashboard.
|
||||
|
||||
|
@ -112,9 +112,6 @@ Install and configure TDinsight dashboard in Grafana on Ubuntu 18.04/20.04 syste
|
|||
-i, --tdinsight-uid <string> Replace with a non-space ASCII code as the dashboard id. [default: tdinsight]
|
||||
-t, --tdinsight-title <string> Dashboard title. [default: TDinsight]
|
||||
-e, --tdinsight-editable If the provisioning dashboard could be editable. [default: false]
|
||||
|
||||
-E, --external-notifier <string> Apply external notifier uid to TDinsight dashboard.
|
||||
|
||||
```
|
||||
|
||||
Most command-line options can also be set through environment variables, with the same effect.
|
||||
|
@ -132,7 +129,10 @@ Most command-line options can take effect the same as environment variables.
|
|||
| -i | --tdinsight-uid | TDINSIGHT_DASHBOARD_UID | TDinsight `uid` of the dashboard. [default: tdinsight] |
|
||||
| -t | --tdinsight-title | TDINSIGHT_DASHBOARD_TITLE | TDinsight dashboard title. [Default: TDinsight] |
|
||||
| -e | --tdinsight-editable | TDINSIGHT_DASHBOARD_EDITABLE | If the dashboard is configured to be editable. [Default: false] |
|
||||
| -E | --external-notifier | EXTERNAL_NOTIFIER | Apply the external notifier uid to the TDinsight dashboard. |
|
||||
|
||||
:::note
|
||||
The `-E` option is deprecated; use the Grafana unified alerting function instead.
|
||||
:::
|
||||
|
||||
Suppose you start a TDengine database on host `tdengine` with HTTP API port `6041`, user `root1`, and password `pass5ord`. Execute the script.
|
||||
|
||||
|
@ -140,18 +140,6 @@ Suppose you start a TDengine database on host `tdengine` with HTTP API port `604
|
|||
sudo ./TDinsight.sh -a http://tdengine:6041 -u root1 -p pass5ord
|
||||
```
|
||||
|
||||
We provide a "-E" option to configure TDinsight to use the existing Notification Channel from the command line. Assuming your Grafana user and password is `admin:admin`, use the following command to get the `uid` of an existing notification channel.
|
||||
|
||||
```bash
|
||||
curl --no-progress-meter -u admin:admin http://localhost:3000/api/alert-notifications | jq
|
||||
```
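If `jq` is not available, the `uid` can also be pulled out of the JSON response with a few lines of Python. The payload shape below is an abridged assumption based on Grafana's legacy alert-notifications API, for illustration only:

```python
import json

def first_notifier_uid(api_response: str):
    # Parse the /api/alert-notifications response and return the uid
    # of the first notification channel, or None if there is none.
    channels = json.loads(api_response)
    return channels[0].get("uid") if channels else None

# Abridged example payload (field names are an assumption):
sample = '[{"id": 1, "uid": "existing-notifier", "name": "sms"}]'
```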
|
||||
|
||||
Use the `uid` value obtained above as `-E` input.
|
||||
|
||||
```bash
|
||||
./TDinsight.sh -a http://tdengine:6041 -u root1 -p pass5ord -E existing-notifier
|
||||
```
|
||||
|
||||
If you want to monitor multiple TDengine clusters, you need to set up a separate TDinsight dashboard for each. Setting up a non-default TDinsight requires some changes: the `-n` `-i` `-t` options need to be changed to non-default names, and `-N` and `-L` should also be changed if using the built-in SMS alerting feature.
|
||||
|
||||
```bash
|
||||
|
|
|
@ -218,11 +218,11 @@ The example to query the average system memory usage for the specified interval
|
|||
|
||||
### Importing the Dashboard
|
||||
|
||||
You can install the TDinsight dashboard from the data source configuration page (for example `http://localhost:3000/datasources/edit/1/dashboards`) as a monitoring visualization tool for your TDengine cluster. Ensure that you use TDinsight for 3.x.
|
||||
You can install the TDinsight dashboard from the data source configuration page (for example `http://localhost:3000/datasources/edit/1/dashboards`) as a monitoring visualization tool for your TDengine cluster. Ensure that you use TDinsight for 3.x. Please note that TDinsight for 3.x requires taosKeeper to be configured and running correctly. Check the [TDinsight User Manual](/reference/tdinsight/) for details.
|
||||
|
||||

|
||||
|
||||
A dashboard for TDengine 2.x has been published on Grafana: [Dashboard 15167 - TDinsight](https://grafana.com/grafana/dashboards/15167). Check the [TDinsight User Manual](/reference/tdinsight/) for the details.
|
||||
A dashboard for TDengine 2.x has been published on Grafana: [Dashboard 15167 - TDinsight](https://grafana.com/grafana/dashboards/15167).
|
||||
|
||||
For more dashboards using TDengine data source, [search here in Grafana](https://grafana.com/grafana/dashboards/?dataSource=tdengine-datasource). Here is a partial list:
|
||||
|
||||
|
|
|
@ -10,6 +10,10 @@ For TDengine 2.x installation packages by version, please visit [here](https://t
|
|||
|
||||
import Release from "/components/ReleaseV3";
|
||||
|
||||
## 3.1.1.0
|
||||
|
||||
<Release type="tdengine" version="3.1.1.0" />
|
||||
|
||||
## 3.1.0.3
|
||||
|
||||
<Release type="tdengine" version="3.1.0.3" />
|
||||
|
|
|
@ -10,7 +10,7 @@ TDengine 充分利用了时序数据的特点,提出了“一个数据采集
|
|||
|
||||
如果你是开发工程师,请一定仔细阅读[开发指南](./develop)一章,该部分对数据库连接、建模、插入数据、查询、流式计算、缓存、数据订阅、用户自定义函数等功能都做了详细介绍,并配有各种编程语言的示例代码。大部分情况下,你只要复制粘贴示例代码,针对自己的应用稍作改动,就能跑起来。
|
||||
|
||||
我们已经生活在大数据时代,纵向扩展已经无法满足日益增长的业务需求,任何系统都必须具有水平扩展的能力,集群成为大数据以及 Database 系统的不可缺失功能。TDengine 团队不仅实现了集群功能,而且将这一重要核心功能开源。怎么部署、管理和维护 TDengine 集群,请仔细参考[部署集群](./deployment)一章。
|
||||
我们已经生活在大数据时代,纵向扩展已经无法满足日益增长的业务需求,任何系统都必须具有水平扩展的能力,集群成为大数据以及 Database 系统的不可缺失功能。TDengine 团队不仅实现了集群功能,而且将这一重要核心功能开源。怎么部署、管理和维护 TDengine 集群,请仔细参考[部署集群]一章。
|
||||
|
||||
TDengine 采用 SQL 作为查询语言,大大降低学习成本、降低迁移成本,但同时针对时序数据场景,又做了一些扩展,以支持插值、降采样、时间加权平均等操作。[SQL 手册](./taos-sql)一章详细描述了 SQL 语法、详细列出了各种支持的命令和函数。
|
||||
|
||||
|
@ -18,8 +18,6 @@ TDengine 采用 SQL 作为查询语言,大大降低学习成本、降低迁移
|
|||
|
||||
如果你对 TDengine 的外围工具、REST API、各种编程语言的连接器(Connector)想做更多详细了解,请看[参考指南](./reference)一章。
|
||||
|
||||
如果你对 TDengine 的内部架构设计很有兴趣,欢迎仔细阅读[技术内幕](./tdinternal)一章,里面对集群的设计、数据分区、分片、写入、读出、查询、聚合查询的流程都做了详细的介绍。如果你想研读 TDengine 代码甚至贡献代码,请一定仔细读完这一章。
|
||||
|
||||
最后,作为一个开源软件,欢迎大家的参与。如果发现文档有任何错误、描述不清晰的地方,请在每个页面的最下方,点击“编辑本文档”直接进行修改。
|
||||
|
||||
Together, we make a difference!
|
||||
|
|
|
@ -296,7 +296,7 @@ ldconfig
|
|||
3. 如果 Python UDF 程序执行时,通过 PYTHONPATH 引用其它的包,可以设置 taos.cfg 的 UdfdLdLibPath 变量为PYTHONPATH的内容
|
||||
|
||||
4. 启动 `taosd` 服务
|
||||
细节请参考 [快速开始](../../get-started)
|
||||
细节请参考 [立即开始](../../get-started)
|
||||
|
||||
### 接口定义
|
||||
|
||||
|
@ -883,5 +883,5 @@ pycumsum 使用 numpy 计算输入列所有数据的累积和。
|
|||
|
||||
</details>
|
||||
## 管理和使用 UDF
|
||||
在使用 UDF 之前需要先将其加入到 TDengine 系统中。关于如何管理和使用 UDF,请参考[管理和使用 UDF](../12-taos-sql/26-udf.md)
|
||||
在使用 UDF 之前需要先将其加入到 TDengine 系统中。关于如何管理和使用 UDF,请参考[管理和使用 UDF](../../taos-sql/udf)
|
||||
|
||||
|
|
|
@ -4,8 +4,6 @@ import PkgListV3 from "/components/PkgListV3";
|
|||
|
||||
<PkgListV3 type={1} sys="Linux" />
|
||||
|
||||
[所有下载](../../releases/tdengine)
|
||||
|
||||
2. 解压缩软件包
|
||||
|
||||
将软件包放置在当前用户可读写的任意目录下,然后执行下面的命令:`tar -xzvf TDengine-client-VERSION.tar.gz`
|
||||
|
|
|
@ -4,8 +4,6 @@ import PkgListV3 from "/components/PkgListV3";
|
|||
|
||||
<PkgListV3 type={8} sys="macOS" />
|
||||
|
||||
[所有下载](../../releases/tdengine)
|
||||
|
||||
2. 执行安装程序,按提示选择默认值,完成安装。如果安装被阻止,可以右键或者按 Ctrl 点击安装包,选择 `打开`。
|
||||
3. 配置 taos.cfg
|
||||
|
||||
|
|
|
@ -4,8 +4,6 @@ import PkgListV3 from "/components/PkgListV3";
|
|||
|
||||
<PkgListV3 type={4} sys="Windows" />
|
||||
|
||||
[所有下载](../../releases/tdengine)
|
||||
|
||||
2. 执行安装程序,按提示选择默认值,完成安装
|
||||
3. 安装路径
|
||||
|
||||
|
|
|
@ -49,3 +49,5 @@ SELECT * FROM information_schema.INS_INDEXES
|
|||
6. 不支持对普通和子表建立索引。
|
||||
|
||||
7. 如果某个 tag 列的唯一值较少,不建议对其建立索引,这种情况下收效甚微。
|
||||
|
||||
8. 新建立的超级表,会给第一列 tag 随机生成一个索引名,生成规则是:tag0 的 name + 23 个 byte。该索引可在系统表中查询,也可以按需要 drop,行为和其他 tag 列的索引一样。
|
||||
|
|
|
@ -201,7 +201,6 @@ TDengine 对于修改数据提供两种处理方式,由 IGNORE UPDATE 选项
|
|||
对于已经存在的超级表,检查列的schema信息
|
||||
1. 检查列的 schema 信息是否匹配,对于不匹配的,则自动进行类型转换,当前只有数据长度大于 4096 byte 时才报错,其余场景都能进行类型转换。
|
||||
2. 检查列的个数是否相同,如果不同,需要显式地指定超级表与 subquery 的列的对应关系,否则报错;如果相同,可以指定对应关系,也可以不指定,不指定则按位置顺序对应。
|
||||
3. 至少自定义一个 tag,否则报错。详见“自定义 TAG”一节。
|
||||
|
||||
## 自定义TAG
|
||||
|
||||
|
|
|
@ -180,7 +180,7 @@ AllowWebSockets
|
|||
node_export 是一个机器指标的导出器。请访问 [https://github.com/prometheus/node_exporter](https://github.com/prometheus/node_exporter) 了解更多信息。
|
||||
- 支持 Prometheus remote_read 和 remote_write
|
||||
remote_read 和 remote_write 是 Prometheus 数据读写分离的集群方案。请访问[https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis](https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis) 了解更多信息。
|
||||
- 获取 table 所在的虚拟节点组(VGroup)的 VGroup ID。关于虚拟节点组(VGroup)的更多信息,请访问[整体架构文档](/tdinternal/arch/#主要逻辑单元) 。
|
||||
- 获取 table 所在的虚拟节点组(VGroup)的 VGroup ID。
|
||||
|
||||
## 接口
|
||||
|
||||
|
@ -245,7 +245,7 @@ Prometheus 使用的由 \*NIX 内核暴露的硬件和操作系统指标的输
|
|||
|
||||
### 获取 table 的 VGroup ID
|
||||
|
||||
可以访问 http 接口 `http://<fqdn>:6041/rest/vgid?db=<db>&table=<table>` 获取 table 的 VGroup ID。关于虚拟节点组(VGroup)的更多信息,请访问[整体架构文档](/tdinternal/arch/#主要逻辑单元) 。
|
||||
可以访问 http 接口 `http://<fqdn>:6041/rest/vgid?db=<db>&table=<table>` 获取 table 的 VGroup ID。
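上述 REST 接口的地址可以按如下方式拼接(仅为示意;实际发起请求需要 taosAdapter 正常运行):

```python
def vgid_url(fqdn: str, db: str, table: str, port: int = 6041) -> str:
    # 按上文描述拼接获取 table 所在 VGroup ID 的 REST 接口地址;
    # 实际调用(例如使用 urllib.request)需要 taosAdapter 在运行中
    return f"http://{fqdn}:{port}/rest/vgid?db={db}&table={table}"
```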
|
||||
|
||||
## 内存使用优化方法
|
||||
|
||||
|
|
|
@ -11,11 +11,7 @@ taosBenchmark (曾用名 taosdemo ) 是一个用于测试 TDengine 产品性能
|
|||
|
||||
## 安装
|
||||
|
||||
taosBenchmark 有两种安装方式:
|
||||
|
||||
- 安装 TDengine 官方安装包的同时会自动安装 taosBenchmark, 详情请参考[ TDengine 安装](../../operation/pkg-install)。
|
||||
|
||||
- 单独编译 taos-tools 并安装, 详情请参考 [taos-tools](https://github.com/taosdata/taos-tools) 仓库。
|
||||
- 安装 TDengine 官方安装包的同时会自动安装 taosBenchmark
|
||||
|
||||
## 运行
|
||||
|
||||
|
|
|
@ -15,7 +15,7 @@ TDengine 通过 [taosKeeper](../taosKeeper) 将服务器的 CPU、内存、硬
|
|||
|
||||
- 单节点的 TDengine 服务器或多节点的 [TDengine] 集群,以及一个[Grafana]服务器。此仪表盘需要 TDengine 3.0.0.0 及以上,并开启监控服务,具体配置请参考:[TDengine 监控配置](../config/#监控相关)。
|
||||
- taosAdapter 已经安装并正常运行。具体细节请参考:[taosAdapter 使用手册](../taosadapter)
|
||||
- taosKeeper 已安装并正常运行。具体细节请参考:[taosKeeper 使用手册](../taosKeeper)
|
||||
- taosKeeper 已安装并正常运行。注意需要 taos.cfg 文件中打开 monitor 相关配置项,具体细节请参考:[taosKeeper 使用手册](../taosKeeper)
|
||||
|
||||
记录以下信息:
|
||||
|
||||
|
@ -120,7 +120,7 @@ chmod +x TDinsight.sh
|
|||
./TDinsight.sh
|
||||
```
|
||||
|
||||
这个脚本会自动下载最新的[Grafana TDengine 数据源插件](https://github.com/taosdata/grafanaplugin/releases/latest) 和 [TDinsight 仪表盘](https://github.com/taosdata/grafanaplugin/blob/master/dashboards/TDinsightV3.json) ,将命令行选项中的可配置参数转为 [Grafana Provisioning](https://grafana.com/docs/grafana/latest/administration/provisioning/) 配置文件,以进行自动化部署及更新等操作。利用该脚本提供的告警设置选项,你还可以获得内置的阿里云短信告警通知支持。
|
||||
这个脚本会自动下载最新的[Grafana TDengine 数据源插件](https://github.com/taosdata/grafanaplugin/releases/latest) 和 [TDinsight 仪表盘](https://github.com/taosdata/grafanaplugin/blob/master/dashboards/TDinsightV3.json) ,将命令行选项中的可配置参数转为 [Grafana Provisioning](https://grafana.com/docs/grafana/latest/administration/provisioning/) 配置文件,以进行自动化部署及更新等操作。
|
||||
|
||||
假设您在同一台主机上使用 TDengine 和 Grafana 的默认服务。运行 `./TDinsight.sh` 并打开 Grafana 浏览器窗口就可以看到 TDinsight 仪表盘了。
|
||||
|
||||
|
@ -152,9 +152,6 @@ Install and configure TDinsight dashboard in Grafana on Ubuntu 18.04/20.04 syste
|
|||
-i, --tdinsight-uid <string> Replace with a non-space ASCII code as the dashboard id. [default: tdinsight]
|
||||
-t, --tdinsight-title <string> Dashboard title. [default: TDinsight]
|
||||
-e, --tdinsight-editable If the provisioning dashboard could be editable. [default: false]
|
||||
|
||||
-E, --external-notifier <string> Apply external notifier uid to TDinsight dashboard.
|
||||
|
||||
```
|
||||
|
||||
大多数命令行选项都可以通过环境变量获得同样的效果。
|
||||
|
@ -172,7 +169,10 @@ Install and configure TDinsight dashboard in Grafana on Ubuntu 18.04/20.04 syste
|
|||
| -i | --tdinsight-uid | TDINSIGHT_DASHBOARD_UID | TDinsight 仪表盘`uid`。 [默认值:tdinsight] |
|
||||
| -t | --tdinsight-title | TDINSIGHT_DASHBOARD_TITLE | TDinsight 仪表盘标题。 [默认:TDinsight] |
|
||||
| -e | --tdinsight-editable | TDINSIGHT_DASHBOARD_EDITABLE | 如果配置仪表盘可以编辑。 [默认值:false] |
|
||||
| -E | --external-notifier | EXTERNAL_NOTIFIER | 将外部通知程序 uid 应用于 TDinsight 仪表盘。 |
|
||||
|
||||
:::note
|
||||
新版本插件使用 Grafana unified alerting 功能,`-E` 选项不再支持。
|
||||
:::
|
||||
|
||||
假设您在主机 `tdengine` 上启动 TDengine 数据库,HTTP API 端口为 `6041`,用户为 `root1`,密码为 `pass5ord`。执行脚本:
|
||||
|
||||
|
@ -180,18 +180,6 @@ Install and configure TDinsight dashboard in Grafana on Ubuntu 18.04/20.04 syste
|
|||
./TDinsight.sh -a http://tdengine:6041 -u root1 -p pass5ord
|
||||
```
|
||||
|
||||
我们提供了一个“-E”选项,用于从命令行配置 TDinsight 使用现有的通知通道(Notification Channel)。假设你的 Grafana 用户和密码是 `admin:admin`,使用以下命令获取已有的通知通道的`uid`:
|
||||
|
||||
```bash
|
||||
curl --no-progress-meter -u admin:admin http://localhost:3000/api/alert-notifications | jq
|
||||
```
|
||||
|
||||
使用上面获取的 `uid` 值作为 `-E` 输入。
|
||||
|
||||
```bash
|
||||
./TDinsight.sh -a http://tdengine:6041 -u root1 -p pass5ord -E existing-notifier
|
||||
```
|
||||
|
||||
如果要监控多个 TDengine 集群,则需要设置多个 TDinsight 仪表盘。设置非默认 TDinsight 需要进行一些更改: `-n` `-i` `-t` 选项需要更改为非默认名称,如果使用 内置短信告警功能,`-N` 和 `-L` 也应该改变。
|
||||
|
||||
```bash
|
||||
|
|
|
@ -13,12 +13,7 @@ taosKeeper 是 TDengine 3.0 版本监控指标的导出工具,通过简单的
|
|||
|
||||
## 安装
|
||||
|
||||
taosKeeper 有两种安装方式:
|
||||
taosKeeper 安装方式:
|
||||
|
||||
- 安装 TDengine 官方安装包的同时会自动安装 taosKeeper, 详情请参考[ TDengine 安装](../../operation/pkg-install)。
|
||||
|
||||
- 单独编译 taosKeeper 并安装,详情请参考 [taosKeeper](https://github.com/taosdata/taoskeeper) 仓库。
|
||||
- 安装 TDengine 官方安装包的同时会自动安装 taosKeeper
|
||||
|
||||
## 配置和运行方式
|
||||
|
||||
|
|
|
@ -1,207 +0,0 @@
|
|||
---
|
||||
title: 安装和卸载
|
||||
description: 安装、卸载、启动、停止和升级
|
||||
---
|
||||
|
||||
import Tabs from "@theme/Tabs";
|
||||
import TabItem from "@theme/TabItem";
|
||||
|
||||
本节将介绍一些关于安装和卸载更深层次的内容,以及升级的注意事项。
|
||||
|
||||
## 安装
|
||||
|
||||
关于安装,请参考 [使用安装包立即开始](../../get-started/package)
|
||||
|
||||
|
||||
|
||||
## 安装目录说明
|
||||
|
||||
TDengine 成功安装后,主安装目录是 /usr/local/taos,目录内容如下:
|
||||
|
||||
```
|
||||
$ cd /usr/local/taos
|
||||
$ ll
|
||||
total 28
|
||||
drwxr-xr-x 7 root root 4096 Feb 22 09:34 ./
|
||||
drwxr-xr-x 12 root root 4096 Feb 22 09:34 ../
|
||||
drwxr-xr-x 2 root root 4096 Feb 22 09:34 bin/
|
||||
drwxr-xr-x 2 root root 4096 Feb 22 09:34 cfg/
|
||||
lrwxrwxrwx 1 root root 13 Feb 22 09:34 data -> /var/lib/taos/
|
||||
drwxr-xr-x 2 root root 4096 Feb 22 09:34 driver/
|
||||
drwxr-xr-x 10 root root 4096 Feb 22 09:34 examples/
|
||||
drwxr-xr-x 2 root root 4096 Feb 22 09:34 include/
|
||||
lrwxrwxrwx 1 root root 13 Feb 22 09:34 log -> /var/log/taos/
|
||||
```
|
||||
|
||||
- 自动生成配置文件目录、数据库目录、日志目录。
|
||||
- 配置文件缺省目录:/etc/taos/taos.cfg, 软链接到 /usr/local/taos/cfg/taos.cfg;
|
||||
- 数据库缺省目录:/var/lib/taos, 软链接到 /usr/local/taos/data;
|
||||
- 日志缺省目录:/var/log/taos, 软链接到 /usr/local/taos/log;
|
||||
- /usr/local/taos/bin 目录下的可执行文件,会软链接到 /usr/bin 目录下;
|
||||
- /usr/local/taos/driver 目录下的动态库文件,会软链接到 /usr/lib 目录下;
|
||||
- /usr/local/taos/include 目录下的头文件,会软链接到到 /usr/include 目录下;
|
||||
|
||||
## 卸载
|
||||
|
||||
<Tabs>
|
||||
<TabItem label="apt-get 卸载" value="aptremove">
|
||||
|
||||
TDengine 卸载命令如下:
|
||||
|
||||
```
|
||||
$ sudo apt-get remove tdengine
|
||||
Reading package lists... Done
|
||||
Building dependency tree
|
||||
Reading state information... Done
|
||||
The following packages will be REMOVED:
|
||||
tdengine
|
||||
0 upgraded, 0 newly installed, 1 to remove and 18 not upgraded.
|
||||
After this operation, 68.3 MB disk space will be freed.
|
||||
Do you want to continue? [Y/n] y
|
||||
(Reading database ... 135625 files and directories currently installed.)
|
||||
Removing tdengine (3.0.0.0) ...
|
||||
TDengine is removed successfully!
|
||||
|
||||
```
|
||||
|
||||
taosTools 卸载命令如下:
|
||||
|
||||
```
|
||||
$ sudo apt remove taostools
|
||||
Reading package lists... Done
|
||||
Building dependency tree
|
||||
Reading state information... Done
|
||||
The following packages will be REMOVED:
|
||||
taostools
|
||||
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
|
||||
After this operation, 68.3 MB disk space will be freed.
|
||||
Do you want to continue? [Y/n]
|
||||
(Reading database ... 147973 files and directories currently installed.)
|
||||
Removing taostools (2.1.2) ...
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem label="Deb 卸载" value="debuninst">
|
||||
|
||||
TDengine 卸载命令如下:
|
||||
|
||||
```
|
||||
$ sudo dpkg -r tdengine
|
||||
(Reading database ... 120119 files and directories currently installed.)
|
||||
Removing tdengine (3.0.0.0) ...
|
||||
TDengine is removed successfully!
|
||||
|
||||
```
|
||||
|
||||
taosTools 卸载命令如下:
|
||||
|
||||
```
|
||||
$ sudo dpkg -r taostools
|
||||
(Reading database ... 147973 files and directories currently installed.)
|
||||
Removing taostools (2.1.2) ...
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
|
||||
<TabItem label="RPM 卸载" value="rpmuninst">
|
||||
|
||||
卸载 TDengine 命令如下:
|
||||
|
||||
```
|
||||
$ sudo rpm -e tdengine
|
||||
TDengine is removed successfully!
|
||||
```
|
||||
|
||||
卸载 taosTools 命令如下:
|
||||
|
||||
```
|
||||
sudo rpm -e taostools
|
||||
taosToole is removed successfully!
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
|
||||
<TabItem label="tar.gz 卸载" value="taruninst">
|
||||
|
||||
卸载 TDengine 命令如下:
|
||||
|
||||
```
|
||||
$ rmtaos
|
||||
TDengine is removed successfully!
|
||||
```
|
||||
|
||||
卸载 taosTools 命令如下:
|
||||
|
||||
```
|
||||
$ rmtaostools
|
||||
Start to uninstall taos tools ...
|
||||
|
||||
taos tools is uninstalled successfully!
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
|
||||
<TabItem label="Windows 卸载" value="windows">
|
||||
在 C:\TDengine 目录下,通过运行 unins000.exe 卸载程序来卸载 TDengine。
|
||||
</TabItem>
|
||||
|
||||
<TabItem label="Mac 卸载" value="mac">
|
||||
|
||||
卸载 TDengine 命令如下:
|
||||
|
||||
```
|
||||
$ rmtaos
|
||||
TDengine is removed successfully!
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
:::info
|
||||
|
||||
- TDengine 提供了多种安装包,但最好不要在一个系统上同时使用 tar.gz 安装包和 deb 或 rpm 安装包。否则会相互影响,导致在使用时出现问题。
|
||||
|
||||
- 对于 deb 包安装后,如果安装目录被手工误删了部分,出现卸载、或重新安装不能成功。此时,需要清除 TDengine 包的安装信息,执行如下命令:
|
||||
|
||||
```
|
||||
$ sudo rm -f /var/lib/dpkg/info/tdengine*
|
||||
```
|
||||
|
||||
然后再重新进行安装就可以了。
|
||||
|
||||
- 对于 rpm 包安装后,如果安装目录被手工误删了部分,出现卸载、或重新安装不能成功。此时,需要清除 TDengine 包的安装信息,执行如下命令:
|
||||
|
||||
```
|
||||
$ sudo rpm -e --noscripts tdengine
|
||||
```
|
||||
|
||||
然后再重新进行安装就可以了。
|
||||
|
||||
:::
|
||||
|
||||
## 卸载和更新文件说明
|
||||
|
||||
卸载安装包的时候,将保留配置文件、数据库文件和日志文件,即 /etc/taos/taos.cfg 、 /var/lib/taos 、 /var/log/taos 。如果用户确认后不需保留,可以手工删除,但一定要慎重,因为删除后,数据将永久丢失,不可以恢复!
|
||||
|
||||
如果是更新安装,当缺省配置文件( /etc/taos/taos.cfg )存在时,仍然使用已有的配置文件,安装包中携带的配置文件修改为 taos.cfg.orig 保存在 /usr/local/taos/cfg/ 目录,可以作为设置配置参数的参考样例;如果不存在配置文件,就使用安装包中自带的配置文件。
|
||||
|
||||
## 升级
|
||||
升级分为两个层面:升级安装包 和 升级运行中的实例。
|
||||
|
||||
升级安装包请遵循前述安装和卸载的步骤先卸载旧版本再安装新版本。
|
||||
|
||||
升级运行中的实例则要复杂得多,首先请注意版本号,TDengine 的版本号目前分为四段,如 2.4.0.14 和 2.4.0.16,只有前三段版本号一致(即只有第四段版本号不同)才能把一个运行中的实例进行升级。升级步骤如下:
|
||||
- 停止数据写入
|
||||
- 确保所有数据落盘,即写入时序数据库
|
||||
- 停止 TDengine 集群
|
||||
- 卸载旧版本并安装新版本
|
||||
- 重新启动 TDengine 集群
|
||||
- 进行简单的查询操作确认旧数据没有丢失
|
||||
- 进行简单的写入操作确认 TDengine 集群可用
|
||||
- 重新恢复业务数据的写入
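上述“四段式版本号,仅第四段不同才能对运行中的实例升级”的规则,可以用一段简单的 Python 检查示意(仅为示意,并非 TDengine 内部实现):

```python
def can_upgrade_in_place(old: str, new: str) -> bool:
    # 版本号分为四段,只有前三段一致(即仅第四段不同)
    # 才允许把一个运行中的实例直接升级
    return old.split(".")[:3] == new.split(".")[:3]
```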
|
||||
|
||||
:::warning
|
||||
TDengine 不保证低版本能够兼容高版本的数据,所以任何时候都不推荐降级
|
||||
|
||||
:::
|
|
@ -1,80 +0,0 @@
|
|||
---
|
||||
title: 集群运维
|
||||
description: TDengine 提供了多种集群运维手段以使集群运行更健康更高效
|
||||
---
|
||||
|
||||
为了使集群运行更健康更高效,TDengine 企业版提供了一些运维手段来帮助系统管理员更好地运维集群。
|
||||
|
||||
## 数据重整
|
||||
|
||||
TDengine 面向多种写入场景,在有些写入场景下,TDengine 的存储会导致数据存储的放大或数据文件的空洞等。这一方面影响数据的存储效率,另一方面也会影响查询效率。为了解决上述问题,TDengine 企业版提供了对数据的重整功能,即 DATA COMPACT 功能,将存储的数据文件重新整理,删除文件空洞和无效数据,提高数据的组织度,从而提高存储和查询的效率。
|
||||
|
||||
**语法**
|
||||
|
||||
```sql
|
||||
COMPACT DATABASE db_name [start with 'XXXX'] [end with 'YYYY'];
|
||||
```
|
||||
|
||||
**效果**
|
||||
|
||||
- 扫描并压缩指定的 DB 中所有 VGROUP 中 VNODE 的所有数据文件
|
||||
- COMPACT 会删除被删除数据以及被删除的表的数据
|
||||
- COMPACT 会合并多个 STT 文件
|
||||
- 可通过 start with 关键字指定 COMPACT 数据的起始时间
|
||||
- 可通过 end with 关键字指定 COMPACT 数据的终止时间
|
||||
|
||||
**补充说明**
|
||||
|
||||
- COMPACT 为异步,执行 COMPACT 命令后不会等 COMPACT 结束就会返回。如果上一个 COMPACT 尚未完成时再发起一个 COMPACT 任务,则会等上一个任务完成后再返回。
|
||||
- COMPACT 可能阻塞写入,但不阻塞查询
|
||||
- COMPACT 的进度不可观测
|
||||
|
||||
## 集群负载再平衡
|
||||
|
||||
当多副本集群中的一个或多个节点因为升级或其它原因而重启后,有可能出现集群中各个 dnode 负载不均衡的现象,极端情况下会出现所有 vgroup 的 leader 都位于同一个 dnode 的情况。为了解决这个问题,可以使用下面的命令
|
||||
|
||||
```sql
|
||||
balance vgroup leader;
|
||||
```
|
||||
|
||||
**功能**
|
||||
|
||||
让所有的 vgroup 的 leader 在各自的 replica 节点上均匀分布。这个命令会让 vgroup 强制重新选举,在选举的过程中变换 vgroup 的 leader,通过这个方式最终让 leader 均匀分布。
|
||||
|
||||
**注意**
|
||||
|
||||
Raft 选举本身带有随机性,所以通过重新选举产生的均匀分布也带有一定的概率性,不会完全均匀。**该命令的副作用是影响查询和写入**:在 vgroup 重新选举时,从开始选举到选举出新的 leader 这段时间,该 vgroup 无法写入和查询。选举过程一般在秒级完成。所有的 vgroup 会依次逐个重新选举。
|
||||
|
||||
## 恢复数据节点
|
||||
|
||||
在多节点三副本的集群环境中,如果某个 dnode 的磁盘损坏,该 dnode 会自动退出,但集群中其它的 dnode 仍然能够继续提供写入和查询服务。
|
||||
|
||||
在更换了损坏的磁盘后,如果想要让曾经主动退出的 dnode 重新加入集群提供服务,可以通过 `restore dnode` 命令来恢复该数据节点上的部分或全部逻辑节点,该功能依赖多副本中的其它副本进行数据复制,所以只在集群中 dnode 数量大于等于 3 且副本数为 3 的情况下能够工作。
|
||||
|
||||
|
||||
```sql
|
||||
restore dnode <dnode_id>; # 恢复 dnode 上的 mnode,所有 vnode 和 qnode
|
||||
restore mnode on dnode <dnode_id>; # 恢复 dnode 上的 mnode
|
||||
restore vnode on dnode <dnode_id>; # 恢复 dnode 上的所有 vnode
|
||||
restore qnode on dnode <dnode_id>; # 恢复 dnode 上的 qnode
|
||||
```
|
||||
|
||||
**限制**
|
||||
- 该功能是基于已有的复制功能的恢复,不是灾难恢复或者备份恢复,所以对于要恢复的 mnode 和 vnode 来说,使用该命令的前提是该 mnode 或 vnode 的其它两个副本仍然能够正常工作。
|
||||
- 该命令不能修复数据目录中的个别文件的损坏或者丢失。例如,如果某个 mnode 或者 vnode 中的个别文件或数据损坏,无法单独恢复损坏的某个文件或者某块数据。此时,可以选择将该 mnode/vnode 的数据全部清空再进行恢复。
|
||||
|
||||
|
||||
## 虚拟组分裂 (Scale Out)
|
||||
|
||||
当一个 vgroup 因为子表数过多而导致 CPU 或 Disk 资源使用量负载过高时,增加 dnode 节点后,可通过 `split vgroup` 命令把该 vgroup 分裂为两个虚拟组。分裂完成后,新产生的两个 vgroup 承担原来由一个 vgroup 提供的读写服务。这也是 TDengine 为企业版用户提供的 scale out 集群的能力。
|
||||
|
||||
```sql
|
||||
split vgroup <vgroup_id>
|
||||
```
|
||||
|
||||
**注意**
|
||||
- 单副本库虚拟组,在分裂完成后,历史时序数据总磁盘空间使用量,可能会翻倍。所以,在执行该操作之前,通过增加 dnode 节点方式,确保集群中有足够的 CPU 和磁盘资源,避免资源不足现象发生。
|
||||
- 该命令为 DB 级事务;执行过程,当前DB的其它管理事务将会被拒绝。集群中,其它DB不受影响。
|
||||
- 分裂任务执行过程中,可持续提供读写服务;期间,可能存在可感知的短暂的读写业务中断。
|
||||
- 在分裂过程中,不支持流和订阅。分裂结束后,历史 WAL 会清空。
|
||||
- 分裂过程中,可支持节点宕机重启容错;但不支持节点磁盘故障容错。
|
|
@ -1,56 +0,0 @@
|
|||
---
|
||||
title: 多级存储
|
||||
---
|
||||
|
||||
## 多级存储
|
||||
|
||||
说明:多级存储功能仅企业版支持。
|
||||
|
||||
在默认配置下,TDengine 会将所有数据保存在 /var/lib/taos 目录下,而且每个 vnode 的数据文件保存在该目录下的不同目录。为扩大存储空间,尽量减少文件读取的瓶颈,提高数据吞吐率,TDengine 可通过配置系统参数 dataDir 让多个挂载的硬盘被系统同时使用。
|
||||
|
||||
除此之外,TDengine 也提供了数据分级存储的功能,将不同时间段的数据存储在挂载的不同介质上的目录里,从而实现不同“热度”的数据存储在不同的存储介质上,充分利用存储,节约成本。比如,最新采集的数据需要经常访问,对硬盘的读取性能要求高,那么用户可以配置将这些数据存储在 SSD 盘上。超过一定期限的数据,查询需求量没有那么高,那么可以存储在相对便宜的 HDD 盘上。
|
||||
|
||||
多级存储支持 3 级,每级最多可配置 16 个挂载点。
|
||||
|
||||
TDengine 多级存储配置方式如下(在配置文件/etc/taos/taos.cfg 中):
|
||||
|
||||
```
|
||||
dataDir [path] <level> <primary>
|
||||
```
|
||||
|
||||
- path: 挂载点的文件夹路径
|
||||
- level: 介质存储等级,取值为 0,1,2。
|
||||
0 级存储最新的数据,1 级存储次新的数据,2 级存储最老的数据,省略默认为 0。
|
||||
各级存储之间的数据流向:0 级存储 -> 1 级存储 -> 2 级存储。
|
||||
同一存储等级可挂载多个硬盘,同一存储等级上的数据文件分布在该存储等级的所有硬盘上。
|
||||
需要说明的是,数据在不同级别的存储介质上的移动,是由系统自动完成的,用户无需干预。
|
||||
- primary: 是否为主挂载点,0(否)或 1(是),省略默认为 1。
|
||||
|
||||
在配置中,只允许一个主挂载点的存在(level=0,primary=1),例如采用如下的配置方式:
|
||||
|
||||
```
|
||||
dataDir /mnt/data1 0 1
|
||||
dataDir /mnt/data2 0 0
|
||||
dataDir /mnt/data3 1 0
|
||||
dataDir /mnt/data4 1 0
|
||||
dataDir /mnt/data5 2 0
|
||||
dataDir /mnt/data6 2 0
|
||||
```
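上述配置约束(层级从 0 开始连续、最多 3 级、有且只有一个 level=0 的主挂载点、每级最多 16 个挂载点)可以用如下 Python 草稿校验示意(仅为示意,并非 TDengine 的配置解析实现):

```python
def validate_datadirs(entries):
    # entries 为 (path, level, primary) 三元组列表,对应 taos.cfg
    # 中的 dataDir 配置行。按上文规则校验:层级从 0 开始连续且
    # 最多 3 级;有且只有一个 level=0 的主挂载点;每级最多 16 个。
    levels = sorted({lvl for _, lvl, _ in entries})
    if len(levels) > 3 or levels != list(range(len(levels))):
        return False
    if [lvl for _, lvl, p in entries if p == 1] != [0]:
        return False
    return all(sum(1 for _, l, _ in entries if l == lvl) <= 16
               for lvl in levels)

# 上文示例配置,表示为三元组:
example = [("/mnt/data1", 0, 1), ("/mnt/data2", 0, 0),
           ("/mnt/data3", 1, 0), ("/mnt/data4", 1, 0),
           ("/mnt/data5", 2, 0), ("/mnt/data6", 2, 0)]
```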
|
||||
|
||||
:::note
|
||||
|
||||
1. 多级存储不允许跨级配置,合法的配置方案有:仅 0 级,仅 0 级+ 1 级,以及 0 级+ 1 级+ 2 级。而不允许只配置 level=0 和 level=2,而不配置 level=1。
|
||||
2. 禁止手动移除使用中的挂载盘,挂载盘目前不支持非本地的网络盘。
|
||||
3. 多级存储目前不支持删除已经挂载的硬盘的功能。
|
||||
|
||||
:::
|
||||
|
||||
## 0 级负载均衡
|
||||
|
||||
在多级存储中,有且只有一个主挂载点,主挂载点承担了系统中最重要的元数据的存储,同时各个 vnode 的主目录均存在于当前 dnode 主挂载点上,从而导致该 dnode 的写入性能受限于单个磁盘的 IO 吞吐能力。
|
||||
|
||||
从 TDengine 3.1.0.0 开始,如果一个 dnode 配置了多个 0 级挂载点,我们将该 dnode 上所有 vnode 的主目录均衡分布在所有的 0 级挂载点上,由这些 0 级挂载点共同承担写入负荷。在网络 I/O 及其它处理资源不成为瓶颈的情况下,通过优化集群配置,测试结果证明整个系统的写入能力和 0 级挂载点的数量呈现线性关系,即随着 0 级挂载点数量的增加,整个系统的写入能力也成倍增加。
|
||||
|
||||
## 同级挂载点选择策略
|
||||
|
||||
一般情况下,当 TDengine 要从同级挂载点中选择一个用于生成新的数据文件时,采用 round robin 策略进行选择。但现实中有可能每个磁盘的容量不相同,或者容量相同但写入的数据量不相同,这就导致会出现每个磁盘上的可用空间不均衡,在实际进行选择时有可能会选择到一个剩余空间已经很小的磁盘。为了解决这个问题,从 3.1.1.0 开始引入了一个新的配置 `minDiskFreeSize`,当某块磁盘上的可用空间小于等于这个阈值时,该磁盘将不再被选择用于生成新的数据文件。该配置项的单位为字节,其值应该大于 2GB,即会跳过可用空间小于 2GB 的挂载点。
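上述“round robin 选择 + minDiskFreeSize 阈值过滤”的策略可以用如下 Python 草稿示意(仅为策略示意,并非 TDengine 内部实现):

```python
def pick_mount(mounts, free_bytes, min_free, start=0):
    # 从同级挂载点中按 round robin 顺序选择下一个用于生成
    # 数据文件的挂载点,跳过可用空间不高于阈值(对应
    # minDiskFreeSize)的磁盘;全部不满足时返回 None
    n = len(mounts)
    for i in range(n):
        m = mounts[(start + i) % n]
        if free_bytes[m] > min_free:
            return m
    return None
```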
|
|
@ -218,11 +218,11 @@ docker run -d \
|
|||
|
||||
### 导入 Dashboard
|
||||
|
||||
在数据源配置页面,您可以为该数据源导入 TDinsight 面板,作为 TDengine 集群的监控可视化工具。如果 TDengine 服务端为 3.0 版本请选择 `TDinsight for 3.x` 导入。
|
||||
在数据源配置页面,您可以为该数据源导入 TDinsight 面板,作为 TDengine 集群的监控可视化工具。如果 TDengine 服务端为 3.0 版本请选择 `TDinsight for 3.x` 导入。注意 TDinsight for 3.x 需要运行和配置 taosKeeper,相关使用说明请见 [TDinsight 用户手册](/reference/tdinsight/)。
|
||||
|
||||

|
||||
|
||||
其中适配 TDengine 2.* 的 Dashboard 已发布在 Grafana:[Dashboard 15167 - TDinsight](https://grafana.com/grafana/dashboards/15167)。其他安装方式和相关使用说明请见 [TDinsight 用户手册](/reference/tdinsight/)。
|
||||
其中适配 TDengine 2.* 的 Dashboard 已发布在 Grafana:[Dashboard 15167 - TDinsight](https://grafana.com/grafana/dashboards/15167)。
|
||||
|
||||
使用 TDengine 作为数据源的其他面板,可以[在此搜索](https://grafana.com/grafana/dashboards/?dataSource=tdengine-datasource)。以下是一份不完全列表:
|
||||
|
||||
|
|
|
@ -23,7 +23,7 @@ TDengine Source Connector 用于把数据实时地从 TDengine 读出来发送
|
|||
1. Linux 操作系统
|
||||
2. 已安装 Java 8 和 Maven
|
||||
3. 已安装 Git、curl、vi
|
||||
4. 已安装并启动 TDengine。如果还没有可参考[安装和卸载](../../operation/pkg-install)
|
||||
4. 已安装并启动 TDengine。
|
||||
|
||||
## 安装 Kafka
|
||||
|
||||
|
|
|
@ -10,6 +10,10 @@ TDengine 2.x 各版本安装包请访问[这里](https://www.taosdata.com/all-do
|
|||
|
||||
import Release from "/components/ReleaseV3";
|
||||
|
||||
## 3.1.1.0
|
||||
|
||||
<Release type="tdengine" version="3.1.1.0" />
|
||||
|
||||
## 3.1.0.3
|
||||
|
||||
<Release type="tdengine" version="3.1.0.3" />
|
||||
|
|
|
@ -416,6 +416,7 @@ void tFreeStreamTask(SStreamTask* pTask);
|
|||
int32_t streamTaskInit(SStreamTask* pTask, SStreamMeta* pMeta, SMsgCb* pMsgCb, int64_t ver);
|
||||
|
||||
int32_t tDecodeStreamTaskChkInfo(SDecoder* pDecoder, SCheckpointInfo* pChkpInfo);
|
||||
int32_t tDecodeStreamTaskId(SDecoder* pDecoder, SStreamTaskId* pTaskId);
|
||||
|
||||
int32_t streamTaskPutDataIntoInputQ(SStreamTask* pTask, SStreamQueueItem* pItem);
|
||||
int32_t streamTaskPutTranstateIntoInputQ(SStreamTask* pTask);
|
||||
|
|
|
@ -184,6 +184,7 @@ typedef enum ELogicConditionType {
|
|||
#define TSDB_UNI_LEN 24
|
||||
#define TSDB_USER_LEN TSDB_UNI_LEN
|
||||
|
||||
#define TSDB_POINTER_PRINT_BYTES 18 // 0x1122334455667788
|
||||
// ACCOUNT is a 32 bit positive integer
|
||||
// this is the length of its string representation, including the terminator zero
|
||||
#define TSDB_ACCT_ID_LEN 11
|
||||
|
@ -223,6 +224,7 @@ typedef enum ELogicConditionType {
|
|||
#define TSDB_SUBSCRIBE_KEY_LEN (TSDB_CGROUP_LEN + TSDB_TOPIC_FNAME_LEN + 2)
|
||||
#define TSDB_PARTITION_KEY_LEN (TSDB_SUBSCRIBE_KEY_LEN + 20)
|
||||
#define TSDB_COL_NAME_LEN 65
|
||||
#define TSDB_COL_FNAME_LEN (TSDB_TABLE_NAME_LEN + TSDB_COL_NAME_LEN + TSDB_NAME_DELIMITER_LEN)
|
||||
#define TSDB_MAX_SAVED_SQL_LEN TSDB_MAX_COLUMNS * 64
|
||||
#define TSDB_MAX_SQL_LEN TSDB_PAYLOAD_SIZE
|
||||
#define TSDB_MAX_SQL_SHOW_LEN 1024
|
||||
|
|
|
@ -79,6 +79,20 @@ static FORCE_INLINE void taosEncryptPass_c(uint8_t *inBuf, size_t len, char *tar
|
|||
memcpy(target, buf, TSDB_PASSWORD_LEN);
|
||||
}
|
||||
|
||||
static FORCE_INLINE int32_t taosCreateMD5Hash(char *pBuf, int32_t len) {
|
||||
T_MD5_CTX ctx;
|
||||
tMD5Init(&ctx);
|
||||
tMD5Update(&ctx, (uint8_t*)pBuf, len);
|
||||
tMD5Final(&ctx);
|
||||
char* p = pBuf;
|
||||
int32_t resLen = 0;
|
||||
for (uint8_t i = 0; i < tListLen(ctx.digest); ++i) {
|
||||
resLen += snprintf(p, 3, "%02x", ctx.digest[i]);
|
||||
p += 2;
|
||||
}
|
||||
return resLen;
|
||||
}
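For reference, the C helper above renders an MD5 digest as 32 lowercase hex characters (two per digest byte); the same result can be reproduced in Python with the standard library:

```python
import hashlib

def md5_hex(buf: bytes) -> str:
    # Same output format as the snprintf("%02x", ...) loop in the
    # C helper: the MD5 digest as 32 lowercase hex characters.
    return hashlib.md5(buf).hexdigest()
```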
|
||||
|
||||
static FORCE_INLINE int32_t taosGetTbHashVal(const char *tbname, int32_t tblen, int32_t method, int32_t prefix,
|
||||
int32_t suffix) {
|
||||
if ((prefix == 0 && suffix == 0) || (tblen <= (prefix + suffix)) || (tblen <= -1 * (prefix + suffix)) ||
|
||||
|
|
|
@@ -240,7 +240,7 @@ int32_t tsTtlBatchDropNum = 10000;  // number of tables dropped per batch
// internal
int32_t tsTransPullupInterval = 2;
int32_t tsMqRebalanceInterval = 2;
int32_t tsStreamCheckpointTickInterval = 20;
int32_t tsStreamCheckpointTickInterval = 600;
int32_t tsStreamNodeCheckInterval = 10;
int32_t tsTtlUnit = 86400;
int32_t tsTtlPushIntervalSec = 10;

@@ -2612,9 +2612,9 @@ static int32_t mndProcessDropStbReq(SRpcMsg *pReq) {
          dropReq.igNotExists, dropReq.source);

  SName name = {0};
  tNameFromString(&name, pDb->name, T_NAME_ACCT | T_NAME_DB);
  tNameFromString(&name, dropReq.name, T_NAME_ACCT | T_NAME_DB | T_NAME_TABLE);

  auditRecord(pReq, pMnode->clusterId, "dropStb", name.dbname, dropReq.name, detail);
  auditRecord(pReq, pMnode->clusterId, "dropStb", name.dbname, name.tname, detail);

_OVER:
  if (code != 0 && code != TSDB_CODE_ACTION_IN_PROGRESS) {

@@ -884,7 +884,11 @@ static int32_t mndProcessCreateStreamReq(SRpcMsg *pReq) {
          createStreamReq.lastTs, createStreamReq.maxDelay, createStreamReq.numOfTags, createStreamReq.sourceDB,
          createStreamReq.targetStbFullName, createStreamReq.triggerType, createStreamReq.watermark);

  auditRecord(pReq, pMnode->clusterId, "createStream", createStreamReq.name, "", detail);
  SName name = {0};
  tNameFromString(&name, createStreamReq.name, T_NAME_ACCT | T_NAME_DB);
  // reuse this function for stream

  auditRecord(pReq, pMnode->clusterId, "createStream", name.dbname, "", detail);

_OVER:
  if (code != 0 && code != TSDB_CODE_ACTION_IN_PROGRESS) {

@@ -1318,7 +1322,11 @@ static int32_t mndProcessDropStreamReq(SRpcMsg *pReq) {
  char detail[100] = {0};
  sprintf(detail, "igNotExists:%d", dropReq.igNotExists);

  auditRecord(pReq, pMnode->clusterId, "dropStream", dropReq.name, "", detail);
  SName name = {0};
  tNameFromString(&name, dropReq.name, T_NAME_ACCT | T_NAME_DB);
  // reuse this function for stream

  auditRecord(pReq, pMnode->clusterId, "dropStream", name.dbname, "", detail);

  sdbRelease(pMnode->pSdb, pStream);
  mndTransDrop(pTrans);

@@ -853,7 +853,11 @@ end:
  char detail[100] = {0};
  sprintf(detail, "igNotExists:%d", dropReq.igNotExists);

  auditRecord(pReq, pMnode->clusterId, "dropTopic", dropReq.name, "", detail);
  SName name = {0};
  tNameFromString(&name, dropReq.name, T_NAME_ACCT | T_NAME_DB);
  // reuse this function for topic

  auditRecord(pReq, pMnode->clusterId, "dropTopic", name.dbname, "", detail);

  return TSDB_CODE_ACTION_IN_PROGRESS;
}

@@ -1039,11 +1039,14 @@ static int32_t mndProcessAlterUserReq(SRpcMsg *pReq) {
  if (code == 0) code = TSDB_CODE_ACTION_IN_PROGRESS;

  char detail[1000] = {0};
  sprintf(detail, "alterType:%s, enable:%d, superUser:%d, sysInfo:%d, tabName:%s",
  sprintf(detail, "alterType:%s, enable:%d, superUser:%d, sysInfo:%d, tabName:%s, password:",
          mndUserAuditTypeStr(alterReq.alterType), alterReq.enable, alterReq.superUser, alterReq.sysInfo, alterReq.tabName);

  if (alterReq.alterType == TSDB_ALTER_USER_PASSWD) {
    auditRecord(pReq, pMnode->clusterId, "changePassword", alterReq.user, "", detail);
    sprintf(detail, "alterType:%s, enable:%d, superUser:%d, sysInfo:%d, tabName:%s, password:xxx",
            mndUserAuditTypeStr(alterReq.alterType), alterReq.enable, alterReq.superUser, alterReq.sysInfo,
            alterReq.tabName);
    auditRecord(pReq, pMnode->clusterId, "alterUser", alterReq.user, "", detail);
  } else if (alterReq.alterType == TSDB_ALTER_USER_SUPERUSER ||
             alterReq.alterType == TSDB_ALTER_USER_ENABLE ||

@@ -295,86 +295,6 @@ end:
  rsp.code = code;
  tmsgSendRsp(&rsp);
  return 0;

  //  SMqVgOffset vgOffset = {0};
  //  int32_t     vgId = TD_VID(pTq->pVnode);
  //
  //  SDecoder decoder;
  //  tDecoderInit(&decoder, (uint8_t*)msg, msgLen);
  //  if (tDecodeMqVgOffset(&decoder, &vgOffset) < 0) {
  //    tqError("vgId:%d failed to decode seek msg", vgId);
  //    return -1;
  //  }
  //
  //  tDecoderClear(&decoder);
  //
  //  tqDebug("topic:%s, vgId:%d process offset seek by consumer:0x%" PRIx64 ", req offset:%" PRId64,
  //          vgOffset.offset.subKey, vgId, vgOffset.consumerId, vgOffset.offset.val.version);
  //
  //  STqOffset* pOffset = &vgOffset.offset;
  //  if (pOffset->val.type != TMQ_OFFSET__LOG) {
  //    tqError("vgId:%d, subKey:%s invalid seek offset type:%d", vgId, pOffset->subKey, pOffset->val.type);
  //    return -1;
  //  }
  //
  //  STqHandle* pHandle = taosHashGet(pTq->pHandle, pOffset->subKey, strlen(pOffset->subKey));
  //  if (pHandle == NULL) {
  //    tqError("tmq seek: consumer:0x%" PRIx64 " vgId:%d subkey %s not found", vgOffset.consumerId, vgId,
  //            pOffset->subKey); terrno = TSDB_CODE_INVALID_MSG; return -1;
  //  }
  //
  //  // 2. check consumer-vg assignment status
  //  taosRLockLatch(&pTq->lock);
  //  if (pHandle->consumerId != vgOffset.consumerId) {
  //    tqDebug("ERROR tmq seek: consumer:0x%" PRIx64 " vgId:%d, subkey %s, mismatch for saved handle consumer:0x%"
  //    PRIx64,
  //            vgOffset.consumerId, vgId, pOffset->subKey, pHandle->consumerId);
  //    terrno = TSDB_CODE_TMQ_CONSUMER_MISMATCH;
  //    taosRUnLockLatch(&pTq->lock);
  //    return -1;
  //  }
  //  taosRUnLockLatch(&pTq->lock);
  //
  //  // 3. check the offset info
  //  STqOffset* pSavedOffset = tqOffsetRead(pTq->pOffsetStore, pOffset->subKey);
  //  if (pSavedOffset != NULL) {
  //    if (pSavedOffset->val.type != TMQ_OFFSET__LOG) {
  //      tqError("invalid saved offset type, vgId:%d sub:%s", vgId, pOffset->subKey);
  //      return 0;  // no need to update the offset value
  //    }
  //
  //    if (pSavedOffset->val.version == pOffset->val.version) {
  //      tqDebug("vgId:%d subKey:%s no need to seek to %" PRId64 " prev offset:%" PRId64, vgId, pOffset->subKey,
  //              pOffset->val.version, pSavedOffset->val.version);
  //      return 0;
  //    }
  //  }
  //
  //  int64_t sver = 0, ever = 0;
  //  walReaderValidVersionRange(pHandle->execHandle.pTqReader->pWalReader, &sver, &ever);
  //  if (pOffset->val.version < sver) {
  //    pOffset->val.version = sver;
  //  } else if (pOffset->val.version > ever) {
  //    pOffset->val.version = ever;
  //  }
  //
  //  // save the new offset value
  //  if (pSavedOffset != NULL) {
  //    tqDebug("vgId:%d sub:%s seek to:%" PRId64 " prev offset:%" PRId64, vgId, pOffset->subKey, pOffset->val.version,
  //            pSavedOffset->val.version);
  //  } else {
  //    tqDebug("vgId:%d sub:%s seek to:%" PRId64 " not saved yet", vgId, pOffset->subKey, pOffset->val.version);
  //  }
  //
  //  if (tqOffsetWrite(pTq->pOffsetStore, pOffset) < 0) {
  //    tqError("failed to save offset, vgId:%d sub:%s seek to %" PRId64, vgId, pOffset->subKey, pOffset->val.version);
  //    return -1;
  //  }
  //
  //  tqDebug("topic:%s, vgId:%d consumer:0x%" PRIx64 " offset is update to:%" PRId64, vgOffset.offset.subKey, vgId,
  //          vgOffset.consumerId, vgOffset.offset.val.version);
  //
  //  return 0;
}

int32_t tqCheckColModifiable(STQ* pTq, int64_t tbUid, int32_t colId) {

@@ -492,7 +412,7 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg) {
      taosWUnLockLatch(&pTq->lock);

      tqDebug("tmq poll: consumer:0x%" PRIx64
              "vgId:%d, topic:%s, subscription is executing, wait for 10ms and retry, pHandle:%p",
              " vgId:%d, topic:%s, subscription is executing, wait for 10ms and retry, pHandle:%p",
              consumerId, vgId, req.subKey, pHandle);
      taosMsleep(10);
    }

@@ -592,10 +512,10 @@ int32_t tqProcessVgWalInfoReq(STQ* pTq, SRpcMsg* pMsg) {
    taosRUnLockLatch(&pTq->lock);
    return -1;
  }
  taosRUnLockLatch(&pTq->lock);

  int64_t sver = 0, ever = 0;
  walReaderValidVersionRange(pHandle->execHandle.pTqReader->pWalReader, &sver, &ever);
  taosRUnLockLatch(&pTq->lock);

  SMqDataRsp dataRsp = {0};
  tqInitDataRsp(&dataRsp, req.reqOffset);

@@ -653,15 +573,19 @@ int32_t tqProcessDeleteSubReq(STQ* pTq, int64_t sversion, char* msg, int32_t msg
  tqInfo("vgId:%d, tq process delete sub req %s", vgId, pReq->subKey);
  int32_t code = 0;

  taosWLockLatch(&pTq->lock);
  STqHandle* pHandle = taosHashGet(pTq->pHandle, pReq->subKey, strlen(pReq->subKey));
  if (pHandle) {
    while (tqIsHandleExec(pHandle)) {
      tqDebug("vgId:%d, topic:%s, subscription is executing, wait for 10ms and retry, pHandle:%p", vgId,
              pHandle->subKey, pHandle);
      taosMsleep(10);
    }
    while (1) {
      taosWLockLatch(&pTq->lock);
      bool exec = tqIsHandleExec(pHandle);

      if (exec) {
        tqInfo("vgId:%d, topic:%s, subscription is executing, wait for 10ms and retry, pHandle:%p", vgId,
               pHandle->subKey, pHandle);
        taosWUnLockLatch(&pTq->lock);
        taosMsleep(10);
        continue;
      }
      if (pHandle->pRef) {
        walCloseRef(pTq->pVnode->pWal, pHandle->pRef->refId);
      }

@@ -672,8 +596,12 @@ int32_t tqProcessDeleteSubReq(STQ* pTq, int64_t sversion, char* msg, int32_t msg
      if (code != 0) {
        tqError("cannot process tq delete req %s, since no such handle", pReq->subKey);
      }
      taosWUnLockLatch(&pTq->lock);
      break;
    }
  }

  taosWLockLatch(&pTq->lock);
  code = tqOffsetDelete(pTq->pOffsetStore, pReq->subKey);
  if (code != 0) {
    tqError("cannot process tq delete req %s, since no such offset in cache", pReq->subKey);

@@ -366,6 +366,7 @@ int32_t extractMsgFromWal(SWalReader* pReader, void** pItem, int64_t maxVer, con
bool tqNextBlockInWal(STqReader* pReader, const char* id) {
  SWalReader* pWalReader = pReader->pWalReader;

  // uint64_t st = taosGetTimestampMs();
  while (1) {
    SArray* pBlockList = pReader->submit.aSubmitTbData;
    if (pBlockList == NULL || pReader->nextBlk >= taosArrayGetSize(pBlockList)) {

@@ -439,6 +440,10 @@ bool tqNextBlockInWal(STqReader* pReader, const char* id) {
    tDestroySubmitReq(&pReader->submit, TSDB_MSG_FLG_DECODE);

    pReader->msg.msgStr = NULL;

    // if(taosGetTimestampMs() - st > 5){
    //   return false;
    // }
  }
}

@@ -489,7 +494,7 @@ bool tqNextBlockImpl(STqReader* pReader, const char* idstr) {
      tqDebug("block found, ver:%" PRId64 ", uid:%" PRId64 ", %s", pReader->msg.ver, pSubmitTbData->uid, idstr);
      return true;
    } else {
      tqDebug("discard submit block, uid:%" PRId64 ", total queried tables:%d continue %s", pSubmitTbData->uid,
      tqInfo("discard submit block, uid:%" PRId64 ", total queried tables:%d continue %s", pSubmitTbData->uid,
              taosHashGetSize(pReader->tbIdHash), idstr);
    }

@@ -850,7 +855,7 @@ int32_t tqRetrieveTaosxBlock(STqReader* pReader, SArray* blocks, SArray* schemas
      tDeleteSchemaWrapper(pSW);
      goto FAIL;
    }
    tqDebug("vgId:%d, build new block, col %d", pReader->pWalReader->pWal->cfg.vgId,
    tqTrace("vgId:%d, build new block, col %d", pReader->pWalReader->pWal->cfg.vgId,
            (int32_t)taosArrayGetSize(block.pDataBlock));

    block.info.id.uid = uid;

@@ -867,7 +872,7 @@ int32_t tqRetrieveTaosxBlock(STqReader* pReader, SArray* blocks, SArray* schemas

  SSDataBlock* pBlock = taosArrayGetLast(blocks);

  tqDebug("vgId:%d, taosx scan, block num: %d", pReader->pWalReader->pWal->cfg.vgId,
  tqTrace("vgId:%d, taosx scan, block num: %d", pReader->pWalReader->pWal->cfg.vgId,
          (int32_t)taosArrayGetSize(blocks));

  int32_t targetIdx = 0;

@@ -949,7 +954,7 @@ int32_t tqRetrieveTaosxBlock(STqReader* pReader, SArray* blocks, SArray* schemas
      tDeleteSchemaWrapper(pSW);
      goto FAIL;
    }
    tqDebug("vgId:%d, build new block, col %d", pReader->pWalReader->pWal->cfg.vgId,
    tqTrace("vgId:%d, build new block, col %d", pReader->pWalReader->pWal->cfg.vgId,
            (int32_t)taosArrayGetSize(block.pDataBlock));

    block.info.id.uid = uid;

@@ -966,7 +971,7 @@ int32_t tqRetrieveTaosxBlock(STqReader* pReader, SArray* blocks, SArray* schemas

  SSDataBlock* pBlock = taosArrayGetLast(blocks);

  tqDebug("vgId:%d, taosx scan, block num: %d", pReader->pWalReader->pWal->cfg.vgId,
  tqTrace("vgId:%d, taosx scan, block num: %d", pReader->pWalReader->pWal->cfg.vgId,
          (int32_t)taosArrayGetSize(blocks));

  int32_t targetIdx = 0;
@@ -235,27 +235,23 @@ int32_t streamTaskSnapWrite(SStreamTaskWriter* pWriter, uint8_t* pData, uint32_t
  STqHandle     handle;
  SSnapDataHdr* pHdr = (SSnapDataHdr*)pData;
  if (pHdr->type == SNAP_DATA_STREAM_TASK) {
    SStreamTask* pTask = taosMemoryCalloc(1, sizeof(SStreamTask));
    if (pTask == NULL) {
      return -1;
    }
    SStreamTaskId task = {0};

    SDecoder decoder;
    tDecoderInit(&decoder, (uint8_t*)pData + sizeof(SSnapDataHdr), nData - sizeof(SSnapDataHdr));
    code = tDecodeStreamTask(&decoder, pTask);

    code = tDecodeStreamTaskId(&decoder, &task);
    if (code < 0) {
      tDecoderClear(&decoder);
      taosMemoryFree(pTask);
      goto _err;
    }
    tDecoderClear(&decoder);
    // tdbTbInsert(TTB *pTb, const void *pKey, int keyLen, const void *pVal, int valLen, TXN *pTxn)
    if (tdbTbUpsert(pTq->pStreamMeta->pTaskDb, &pTask->id.taskId, sizeof(int32_t),
                    (uint8_t*)pData + sizeof(SSnapDataHdr), nData - sizeof(SSnapDataHdr), pWriter->txn) < 0) {
      taosMemoryFree(pTask);
    int64_t key[2] = {task.streamId, task.taskId};
    if (tdbTbUpsert(pTq->pStreamMeta->pTaskDb, key, sizeof(int64_t) << 1, (uint8_t*)pData + sizeof(SSnapDataHdr),
                    nData - sizeof(SSnapDataHdr), pWriter->txn) < 0) {
      return -1;
    }
    taosMemoryFree(pTask);
  } else if (pHdr->type == SNAP_DATA_STREAM_TASK_CHECKPOINT) {
    // do nothing
  }
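The hunk above rekeys the stream-task table from a single `int32_t taskId` to a composite `{streamId, taskId}` pair of `int64_t`s, so the key length becomes `sizeof(int64_t) << 1` = 16 bytes. A sketch of that key layout, using Python's `struct` as a stand-in for the C array (the helper name is illustrative):

```python
import struct

def make_task_key(stream_id: int, task_id: int) -> bytes:
    # Equivalent of `int64_t key[2] = {task.streamId, task.taskId};`,
    # packed as two native-order signed 64-bit integers (16 bytes total).
    return struct.pack("=qq", stream_id, task_id)

key = make_task_key(7, 42)
assert len(key) == 16  # sizeof(int64_t) << 1
```

Including the stream id in the key keeps tasks from different streams distinct even when their task ids collide.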
@@ -132,12 +132,6 @@ static int32_t extractResetOffsetVal(STqOffsetVal* pOffsetVal, STQ* pTq, STqHand
  return 0;
}

// static void setRequestVersion(STqOffsetVal* offset, int64_t ver){
//   if(offset->type == TMQ_OFFSET__LOG){
//     offset->version = ver;
//   }
// }

static int32_t extractDataAndRspForNormalSubscribe(STQ* pTq, STqHandle* pHandle, const SMqPollReq* pRequest,
                                                   SRpcMsg* pMsg, STqOffsetVal* pOffset) {
  uint64_t consumerId = pRequest->consumerId;

@@ -146,7 +140,6 @@ static int32_t extractDataAndRspForNormalSubscribe(STQ* pTq, STqHandle* pHandle,

  SMqDataRsp dataRsp = {0};
  tqInitDataRsp(&dataRsp, *pOffset);
  // dataRsp.reqOffset.type = pOffset->type;  // store origin type for getting offset in tmq_get_vgroup_offset

  qSetTaskId(pHandle->execHandle.task, consumerId, pRequest->reqId);
  int code = tqScanData(pTq, pHandle, &dataRsp, pOffset);

@@ -167,7 +160,6 @@ static int32_t extractDataAndRspForNormalSubscribe(STQ* pTq, STqHandle* pHandle,
    taosWUnLockLatch(&pTq->lock);
  }

  // setRequestVersion(&dataRsp.reqOffset, pOffset->version);
  code = tqSendDataRsp(pHandle, pMsg, pRequest, (SMqDataRsp*)&dataRsp, TMQ_MSG_TYPE__POLL_DATA_RSP, vgId);

end : {

@@ -188,7 +180,6 @@ static int32_t extractDataAndRspForDbStbSubscribe(STQ* pTq, STqHandle* pHandle,
  SMqMetaRsp metaRsp = {0};
  STaosxRsp  taosxRsp = {0};
  tqInitTaosxRsp(&taosxRsp, *offset);
  // taosxRsp.reqOffset.type = offset->type;  // store origin type for getting offset in tmq_get_vgroup_offset

  if (offset->type != TMQ_OFFSET__LOG) {
    if (tqScanTaosx(pTq, pHandle, &taosxRsp, &metaRsp, offset) < 0) {

@@ -222,19 +213,14 @@ static int32_t extractDataAndRspForDbStbSubscribe(STQ* pTq, STqHandle* pHandle,
  walReaderVerifyOffset(pHandle->pWalReader, offset);
  int64_t fetchVer = offset->version;

  // uint64_t st = taosGetTimestampMs();
  int totalRows = 0;
  while (1) {
    int32_t savedEpoch = atomic_load_32(&pHandle->epoch);
    if (savedEpoch > pRequest->epoch) {
      tqWarn("tmq poll: consumer:0x%" PRIx64 " (epoch %d), subkey:%s vgId:%d offset %" PRId64
             ", found new consumer epoch %d, discard req epoch %d",
             pRequest->consumerId, pRequest->epoch, pHandle->subKey, vgId, fetchVer, savedEpoch, pRequest->epoch);
      break;
    }
    // int32_t savedEpoch = atomic_load_32(&pHandle->epoch);
    // ASSERT (savedEpoch <= pRequest->epoch);

    if (tqFetchLog(pTq, pHandle, &fetchVer, pRequest->reqId) < 0) {
      tqOffsetResetToLog(&taosxRsp.rspOffset, fetchVer);
      // setRequestVersion(&taosxRsp.reqOffset, offset->version);
      code = tqSendDataRsp(pHandle, pMsg, pRequest, (SMqDataRsp*)&taosxRsp, TMQ_MSG_TYPE__POLL_DATA_RSP, vgId);
      goto end;
    }

@@ -247,7 +233,6 @@ static int32_t extractDataAndRspForDbStbSubscribe(STQ* pTq, STqHandle* pHandle,
    if (pHead->msgType != TDMT_VND_SUBMIT) {
      if (totalRows > 0) {
        tqOffsetResetToLog(&taosxRsp.rspOffset, fetchVer);
        // setRequestVersion(&taosxRsp.reqOffset, offset->version);
        code = tqSendDataRsp(pHandle, pMsg, pRequest, (SMqDataRsp*)&taosxRsp, TMQ_MSG_TYPE__POLL_DATA_RSP, vgId);
        goto end;
      }

@@ -275,9 +260,9 @@ static int32_t extractDataAndRspForDbStbSubscribe(STQ* pTq, STqHandle* pHandle,
      goto end;
    }

    // if (totalRows >= 4096 || taosxRsp.createTableNum > 0 || (taosGetTimestampMs() - st > 5)) {
    if (totalRows >= 4096 || taosxRsp.createTableNum > 0) {
      tqOffsetResetToLog(&taosxRsp.rspOffset, fetchVer + 1);
      // setRequestVersion(&taosxRsp.reqOffset, offset->version);
      code = tqSendDataRsp(pHandle, pMsg, pRequest, (SMqDataRsp*)&taosxRsp, taosxRsp.createTableNum > 0 ? TMQ_MSG_TYPE__POLL_DATA_META_RSP : TMQ_MSG_TYPE__POLL_DATA_RSP, vgId);
      goto end;
    } else {

@@ -316,7 +301,7 @@ int32_t tqExtractDataForMq(STQ* pTq, STqHandle* pHandle, const SMqPollReq* pRequ
  // this is a normal subscribe requirement
  if (pHandle->execHandle.subType == TOPIC_SUB_TYPE__COLUMN) {
    return extractDataAndRspForNormalSubscribe(pTq, pHandle, pRequest, pMsg, &reqOffset);
  } else {  // todo handle the case where re-balance occurs.
  } else {
    // for taosx
    return extractDataAndRspForDbStbSubscribe(pTq, pHandle, pRequest, pMsg, &reqOffset);
  }

@@ -120,23 +120,7 @@ static int32_t tsdbUpgradeHead(STsdb *tsdb, SDFileSet *pDFileSet, SDataFReader *
      };

      if (dataBlk->hasDup) {
        tBlockDataReset(ctx->blockData);

        int16_t  aCid = 0;
        STSchema tSchema = {0};
        TABLEID  tbid = {.suid = pBlockIdx->suid, .uid = pBlockIdx->uid};
        code = tBlockDataInit(ctx->blockData, &tbid, &tSchema, &aCid, 0);
        TSDB_CHECK_CODE(code, lino, _exit);

        code = tsdbReadDataBlock(reader, dataBlk, ctx->blockData);
        TSDB_CHECK_CODE(code, lino, _exit);

        record.count = 1;
        for (int32_t i = 1; i < ctx->blockData->nRow; ++i) {
          if (ctx->blockData->aTSKEY[i] != ctx->blockData->aTSKEY[i - 1]) {
            record.count++;
          }
        }
        record.count = 0;
      }

      code = tBrinBlockPut(ctx->brinBlock, &record);

@@ -1834,9 +1834,6 @@ static SSDataBlock* doQueueScan(SOperatorInfo* pOperator) {
  if (pTaskInfo->streamInfo.currentOffset.type == TMQ_OFFSET__SNAPSHOT_DATA) {
    SSDataBlock* pResult = doTableScan(pInfo->pTableScanOp);
    if (pResult && pResult->info.rows > 0) {
      // qDebug("queue scan tsdb return %" PRId64 " rows min:%" PRId64 " max:%" PRId64 " wal curVersion:%" PRId64,
      //        pResult->info.rows, pResult->info.window.skey, pResult->info.window.ekey,
      //        pInfo->tqReader->pWalReader->curVersion);
      tqOffsetResetToData(&pTaskInfo->streamInfo.currentOffset, pResult->info.id.uid, pResult->info.window.ekey);
      return pResult;
    }

@@ -3312,7 +3312,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
  {
    .name = "now",
    .type = FUNCTION_TYPE_NOW,
    .classification = FUNC_MGT_SCALAR_FUNC | FUNC_MGT_DATETIME_FUNC,
    .classification = FUNC_MGT_SCALAR_FUNC | FUNC_MGT_DATETIME_FUNC | FUNC_MGT_KEEP_ORDER_FUNC,
    .translateFunc = translateNowToday,
    .getEnvFunc = NULL,
    .initFunc = NULL,

@@ -3322,7 +3322,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
  {
    .name = "today",
    .type = FUNCTION_TYPE_TODAY,
    .classification = FUNC_MGT_SCALAR_FUNC | FUNC_MGT_DATETIME_FUNC,
    .classification = FUNC_MGT_SCALAR_FUNC | FUNC_MGT_DATETIME_FUNC | FUNC_MGT_KEEP_ORDER_FUNC,
    .translateFunc = translateNowToday,
    .getEnvFunc = NULL,
    .initFunc = NULL,

@@ -401,8 +401,10 @@ static int32_t createPartialFunction(const SFunctionNode* pSrcFunc, SFunctionNod
    nodesDestroyList(pParameterList);
    return TSDB_CODE_OUT_OF_MEMORY;
  }
  snprintf((*pPartialFunc)->node.aliasName, sizeof((*pPartialFunc)->node.aliasName), "%s.%p",
           (*pPartialFunc)->functionName, pSrcFunc);
  char    name[TSDB_FUNC_NAME_LEN + TSDB_NAME_DELIMITER_LEN + TSDB_POINTER_PRINT_BYTES + 1] = {0};
  int32_t len = snprintf(name, sizeof(name) - 1, "%s.%p", (*pPartialFunc)->functionName, pSrcFunc);
  taosCreateMD5Hash(name, len);
  strncpy((*pPartialFunc)->node.aliasName, name, TSDB_COL_NAME_LEN - 1);
  return TSDB_CODE_SUCCESS;
}

@@ -202,9 +202,25 @@ static int32_t calcConstProject(SNode* pProject, bool dual, SNode** pNew) {
  return code;
}

typedef struct SIsUselessColCtx {
  bool isUseless;
} SIsUselessColCtx;

EDealRes checkUselessCol(SNode *pNode, void *pContext) {
  SIsUselessColCtx *ctx = (SIsUselessColCtx *)pContext;
  if (QUERY_NODE_FUNCTION == nodeType(pNode) && !fmIsScalarFunc(((SFunctionNode*)pNode)->funcId) &&
      !fmIsPseudoColumnFunc(((SFunctionNode*)pNode)->funcId)) {
    ctx->isUseless = false;
    return DEAL_RES_END;
  }

  return DEAL_RES_CONTINUE;
}

static bool isUselessCol(SExprNode* pProj) {
  if (QUERY_NODE_FUNCTION == nodeType(pProj) && !fmIsScalarFunc(((SFunctionNode*)pProj)->funcId) &&
      !fmIsPseudoColumnFunc(((SFunctionNode*)pProj)->funcId)) {
  SIsUselessColCtx ctx = {.isUseless = true};
  nodesWalkExpr((SNode*)pProj, checkUselessCol, (void *)&ctx);
  if (!ctx.isUseless) {
    return false;
  }
  return NULL == ((SExprNode*)pProj)->pAssociation;

@@ -2890,7 +2890,7 @@ static SNode* createMultiResFunc(SFunctionNode* pSrcFunc, SExprNode* pExpr) {
  pFunc->funcId = pSrcFunc->funcId;
  pFunc->funcType = pSrcFunc->funcType;
  strcpy(pFunc->functionName, pSrcFunc->functionName);
  char buf[TSDB_FUNC_NAME_LEN + TSDB_TABLE_NAME_LEN + TSDB_COL_NAME_LEN];
  char buf[TSDB_FUNC_NAME_LEN + TSDB_TABLE_NAME_LEN + TSDB_COL_NAME_LEN + TSDB_NAME_DELIMITER_LEN + 3] = {0};
  int32_t len = 0;
  if (QUERY_NODE_COLUMN == nodeType(pExpr)) {
    SColumnNode* pCol = (SColumnNode*)pExpr;

@@ -2898,16 +2898,20 @@ static SNode* createMultiResFunc(SFunctionNode* pSrcFunc, SExprNode* pExpr) {
      strcpy(pFunc->node.userAlias, pCol->colName);
      strcpy(pFunc->node.aliasName, pCol->colName);
    } else {
      len = snprintf(buf, sizeof(buf), "%s(%s.%s)", pSrcFunc->functionName, pCol->tableAlias, pCol->colName);
      strncpy(pFunc->node.aliasName, buf, TMIN(len, sizeof(pFunc->node.aliasName) - 1));
      len = snprintf(buf, sizeof(buf), "%s(%s)", pSrcFunc->functionName, pCol->colName);
      strncpy(pFunc->node.userAlias, buf, TMIN(len, sizeof(pFunc->node.userAlias) - 1));
      len = snprintf(buf, sizeof(buf) - 1, "%s(%s.%s)", pSrcFunc->functionName, pCol->tableAlias, pCol->colName);
      taosCreateMD5Hash(buf, len);
      strncpy(pFunc->node.aliasName, buf, TSDB_COL_NAME_LEN - 1);
      len = snprintf(buf, sizeof(buf) - 1, "%s(%s)", pSrcFunc->functionName, pCol->colName);
      taosCreateMD5Hash(buf, len);
      strncpy(pFunc->node.userAlias, buf, TSDB_COL_NAME_LEN - 1);
    }
  } else {
    len = snprintf(buf, sizeof(buf), "%s(%s)", pSrcFunc->functionName, pExpr->aliasName);
    strncpy(pFunc->node.aliasName, buf, TMIN(len, sizeof(pFunc->node.aliasName) - 1));
    len = snprintf(buf, sizeof(buf), "%s(%s)", pSrcFunc->functionName, pExpr->userAlias);
    strncpy(pFunc->node.userAlias, buf, TMIN(len, sizeof(pFunc->node.userAlias) - 1));
    len = snprintf(buf, sizeof(buf) - 1, "%s(%s)", pSrcFunc->functionName, pExpr->aliasName);
    taosCreateMD5Hash(buf, len);
    strncpy(pFunc->node.aliasName, buf, TSDB_COL_NAME_LEN - 1);
    len = snprintf(buf, sizeof(buf) - 1, "%s(%s)", pSrcFunc->functionName, pExpr->userAlias);
    taosCreateMD5Hash(buf, len);
    strncpy(pFunc->node.userAlias, buf, TSDB_COL_NAME_LEN - 1);
  }

  return (SNode*)pFunc;
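The alias-related hunks above all follow one pattern: build the human-readable alias (`"func.%p"`, `"func(tbl.col)"`) with a bounded `snprintf`, then run it through `taosCreateMD5Hash` so the stored alias always fits `TSDB_COL_NAME_LEN` (65 bytes) no matter how long the function, table, and column names are. A hypothetical Python sketch of that pattern (the helper name is illustrative; the constant mirrors the `tdef.h` hunk at the top of this diff):

```python
import hashlib

TSDB_COL_NAME_LEN = 65  # from the tdef.h hunk above

def make_bounded_alias(func_name: str, table_alias: str, col_name: str) -> str:
    """Build "func(tbl.col)" and shorten it to a fixed 32-char MD5 hex
    string, mirroring the snprintf + taosCreateMD5Hash pattern."""
    raw = f"{func_name}({table_alias}.{col_name})"
    return hashlib.md5(raw.encode()).hexdigest()

alias = make_bounded_alias("last_row", "t1", "c" * 300)
assert len(alias) < TSDB_COL_NAME_LEN  # always fits the alias field
```

The trade-off is that shortened aliases are no longer human-readable, which is acceptable for internally generated names that users never type.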
@@ -3750,10 +3754,6 @@ static int32_t translatePartitionBy(STranslateContext* pCxt, SSelectStmt* pSelec
  pCxt->currClause = SQL_CLAUSE_PARTITION_BY;
  int32_t code = TSDB_CODE_SUCCESS;

  if (pSelect->pPartitionByList) {
    code = removeConstantValueFromList(&pSelect->pPartitionByList);
  }

  if (TSDB_CODE_SUCCESS == code && pSelect->pPartitionByList) {
    int8_t typeType = getTableTypeFromTableNode(pSelect->pFromTable);
    SNode* pPar = nodesListGetNode(pSelect->pPartitionByList, 0);

@@ -3964,6 +3964,11 @@ static int32_t translateSelectFrom(STranslateContext* pCxt, SSelectStmt* pSelect
  if (TSDB_CODE_SUCCESS == code) {
    code = replaceTbName(pCxt, pSelect);
  }
  if (TSDB_CODE_SUCCESS == code) {
    if (pSelect->pPartitionByList) {
      code = removeConstantValueFromList(&pSelect->pPartitionByList);
    }
  }

  return code;
}

@@ -1765,8 +1765,12 @@ static int32_t partTagsOptRebuildTbanme(SNodeList* pPartKeys) {
}

// todo refact: just to mask compilation warnings
static void partTagsSetAlias(char* pAlias, int32_t len, const char* pTableAlias, const char* pColName) {
  snprintf(pAlias, len, "%s.%s", pTableAlias, pColName);
static void partTagsSetAlias(char* pAlias, const char* pTableAlias, const char* pColName) {
  char    name[TSDB_COL_FNAME_LEN + 1] = {0};
  int32_t len = snprintf(name, TSDB_COL_FNAME_LEN, "%s.%s", pTableAlias, pColName);

  taosCreateMD5Hash(name, len);
  strncpy(pAlias, name, TSDB_COL_NAME_LEN - 1);
}

static SNode* partTagsCreateWrapperFunc(const char* pFuncName, SNode* pNode) {

@@ -1778,7 +1782,7 @@ static SNode* partTagsCreateWrapperFunc(const char* pFuncName, SNode* pNode) {
  snprintf(pFunc->functionName, sizeof(pFunc->functionName), "%s", pFuncName);
  if (QUERY_NODE_COLUMN == nodeType(pNode) && COLUMN_TYPE_TBNAME != ((SColumnNode*)pNode)->colType) {
    SColumnNode* pCol = (SColumnNode*)pNode;
    partTagsSetAlias(pFunc->node.aliasName, sizeof(pFunc->node.aliasName), pCol->tableAlias, pCol->colName);
    partTagsSetAlias(pFunc->node.aliasName, pCol->tableAlias, pCol->colName);
  } else {
    strcpy(pFunc->node.aliasName, ((SExprNode*)pNode)->aliasName);
  }

@@ -2292,7 +2296,10 @@ static SNode* rewriteUniqueOptCreateFirstFunc(SFunctionNode* pSelectValue, SNode
    strcpy(pFunc->node.aliasName, pSelectValue->node.aliasName);
  } else {
    int64_t pointer = (int64_t)pFunc;
    snprintf(pFunc->node.aliasName, sizeof(pFunc->node.aliasName), "%s.%" PRId64 "", pFunc->functionName, pointer);
    char    name[TSDB_FUNC_NAME_LEN + TSDB_POINTER_PRINT_BYTES + TSDB_NAME_DELIMITER_LEN + 1] = {0};
    int32_t len = snprintf(name, sizeof(name) - 1, "%s.%" PRId64 "", pFunc->functionName, pointer);
    taosCreateMD5Hash(name, len);
    strncpy(pFunc->node.aliasName, name, TSDB_COL_NAME_LEN - 1);
  }
  int32_t code = nodesListMakeStrictAppend(&pFunc->pParameterList, nodesCloneNode(pCol));
  if (TSDB_CODE_SUCCESS == code) {

@@ -39,37 +39,45 @@ typedef struct SPhysiPlanContext {
  bool hasSysScan;
} SPhysiPlanContext;

static int32_t getSlotKey(SNode* pNode, const char* pStmtName, char* pKey) {
static int32_t getSlotKey(SNode* pNode, const char* pStmtName, char* pKey, int32_t keyBufSize) {
  int32_t len = 0;
  if (QUERY_NODE_COLUMN == nodeType(pNode)) {
    SColumnNode* pCol = (SColumnNode*)pNode;
    if (NULL != pStmtName) {
      if ('\0' != pStmtName[0]) {
        return sprintf(pKey, "%s.%s", pStmtName, pCol->node.aliasName);
        len = snprintf(pKey, keyBufSize, "%s.%s", pStmtName, pCol->node.aliasName);
        return taosCreateMD5Hash(pKey, len);
      } else {
        return sprintf(pKey, "%s", pCol->node.aliasName);
        return snprintf(pKey, keyBufSize, "%s", pCol->node.aliasName);
      }
    }
    if ('\0' == pCol->tableAlias[0]) {
      return sprintf(pKey, "%s", pCol->colName);
      return snprintf(pKey, keyBufSize, "%s", pCol->colName);
    }
    return sprintf(pKey, "%s.%s", pCol->tableAlias, pCol->colName);

    len = snprintf(pKey, keyBufSize, "%s.%s", pCol->tableAlias, pCol->colName);
    return taosCreateMD5Hash(pKey, len);
  } else if (QUERY_NODE_FUNCTION == nodeType(pNode)) {
    SFunctionNode* pFunc = (SFunctionNode*)pNode;
    if (FUNCTION_TYPE_TBNAME == pFunc->funcType) {
      SValueNode* pVal = (SValueNode*)nodesListGetNode(pFunc->pParameterList, 0);
      if (pVal) {
        if (NULL != pStmtName && '\0' != pStmtName[0]) {
          return sprintf(pKey, "%s.%s", pStmtName, ((SExprNode*)pNode)->aliasName);
          len = snprintf(pKey, keyBufSize, "%s.%s", pStmtName, ((SExprNode*)pNode)->aliasName);
          return taosCreateMD5Hash(pKey, len);
        }
        return sprintf(pKey, "%s.%s", pVal->literal, ((SExprNode*)pNode)->aliasName);
        len = snprintf(pKey, keyBufSize, "%s.%s", pVal->literal, ((SExprNode*)pNode)->aliasName);
        return taosCreateMD5Hash(pKey, len);
      }
    }
  }

  if (NULL != pStmtName && '\0' != pStmtName[0]) {
    return sprintf(pKey, "%s.%s", pStmtName, ((SExprNode*)pNode)->aliasName);
    len = snprintf(pKey, keyBufSize, "%s.%s", pStmtName, ((SExprNode*)pNode)->aliasName);
    return taosCreateMD5Hash(pKey, len);
  }
  return sprintf(pKey, "%s", ((SExprNode*)pNode)->aliasName);

  return snprintf(pKey, keyBufSize, "%s", ((SExprNode*)pNode)->aliasName);
}

static SNode* createSlotDesc(SPhysiPlanContext* pCxt, const char* pName, const SNode* pNode, int16_t slotId,

@@ -147,8 +155,8 @@ static int32_t buildDataBlockSlots(SPhysiPlanContext* pCxt, SNodeList* pList, SD
  int16_t slotId = 0;
  SNode*  pNode = NULL;
  FOREACH(pNode, pList) {
    char name[TSDB_TABLE_NAME_LEN + TSDB_COL_NAME_LEN];
    getSlotKey(pNode, NULL, name);
    char name[TSDB_COL_FNAME_LEN + 1] = {0};
    getSlotKey(pNode, NULL, name, TSDB_COL_FNAME_LEN);
    code = nodesListStrictAppend(pDataBlockDesc->pSlots, createSlotDesc(pCxt, name, pNode, slotId, true, false));
    if (TSDB_CODE_SUCCESS == code) {
      code = putSlotToHash(name, pDataBlockDesc->dataBlockId, slotId, pNode, pHash);

@@ -210,8 +218,8 @@ static int32_t addDataBlockSlotsImpl(SPhysiPlanContext* pCxt, SNodeList* pList,
  SNode* pNode = NULL;
  FOREACH(pNode, pList) {
    SNode* pExpr = QUERY_NODE_ORDER_BY_EXPR == nodeType(pNode) ? ((SOrderByExprNode*)pNode)->pExpr : pNode;
    char name[TSDB_TABLE_NAME_LEN + TSDB_COL_NAME_LEN] = {0};
    int32_t len = getSlotKey(pExpr, pStmtName, name);
    char name[TSDB_COL_FNAME_LEN + 1] = {0};
    int32_t len = getSlotKey(pExpr, pStmtName, name, TSDB_COL_FNAME_LEN);
    SSlotIndex* pIndex = taosHashGet(pHash, name, len);
    if (NULL == pIndex) {
      code =

@@ -299,8 +307,8 @@ static void dumpSlots(const char* pName, SHashObj* pHash) {
static EDealRes doSetSlotId(SNode* pNode, void* pContext) {
  if (QUERY_NODE_COLUMN == nodeType(pNode) && 0 != strcmp(((SColumnNode*)pNode)->colName, "*")) {
    SSetSlotIdCxt* pCxt = (SSetSlotIdCxt*)pContext;
    char name[TSDB_TABLE_NAME_LEN + TSDB_COL_NAME_LEN];
    int32_t len = getSlotKey(pNode, NULL, name);
    char name[TSDB_COL_FNAME_LEN + 1] = {0};
    int32_t len = getSlotKey(pNode, NULL, name, TSDB_COL_FNAME_LEN);
    SSlotIndex* pIndex = taosHashGet(pCxt->pLeftHash, name, len);
    if (NULL == pIndex) {
      pIndex = taosHashGet(pCxt->pRightHash, name, len);

@@ -879,7 +887,7 @@ static int32_t createHashJoinColList(int16_t lBlkId, int16_t rBlkId, SNode* pEq1

static int32_t sortHashJoinTargets(int16_t lBlkId, int16_t rBlkId, SHashJoinPhysiNode* pJoin) {
  SNode* pNode = NULL;
  char name[TSDB_TABLE_NAME_LEN + TSDB_COL_NAME_LEN];
  char name[TSDB_COL_FNAME_LEN + 1] = {0};
  SSHashObj* pHash = tSimpleHashInit(pJoin->pTargets->length, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY));
  if (NULL == pHash) {
    return TSDB_CODE_OUT_OF_MEMORY;

@@ -888,7 +896,7 @@ static int32_t sortHashJoinTargets(int16_t lBlkId, int16_t rBlkId, SHashJoinPhys

  FOREACH(pNode, pJoin->pTargets) {
    SColumnNode* pCol = (SColumnNode*)pNode;
    int32_t len = getSlotKey(pNode, NULL, name);
    int32_t len = getSlotKey(pNode, NULL, name, TSDB_COL_FNAME_LEN);
    tSimpleHashPut(pHash, name, len, &pCol, POINTER_BYTES);
  }

@@ -897,7 +905,7 @@ static int32_t sortHashJoinTargets(int16_t lBlkId, int16_t rBlkId, SHashJoinPhys

  FOREACH(pNode, pJoin->pOnLeft) {
    SColumnNode* pCol = (SColumnNode*)pNode;
|
||||
int32_t len = getSlotKey(pNode, NULL, name);
|
||||
int32_t len = getSlotKey(pNode, NULL, name, TSDB_COL_FNAME_LEN);
|
||||
SNode** p = tSimpleHashGet(pHash, name, len);
|
||||
if (p) {
|
||||
nodesListStrictAppend(pJoin->pTargets, *p);
|
||||
|
@ -906,7 +914,7 @@ static int32_t sortHashJoinTargets(int16_t lBlkId, int16_t rBlkId, SHashJoinPhys
|
|||
}
|
||||
FOREACH(pNode, pJoin->pOnRight) {
|
||||
SColumnNode* pCol = (SColumnNode*)pNode;
|
||||
int32_t len = getSlotKey(pNode, NULL, name);
|
||||
int32_t len = getSlotKey(pNode, NULL, name, TSDB_COL_FNAME_LEN);
|
||||
SNode** p = tSimpleHashGet(pHash, name, len);
|
||||
if (p) {
|
||||
nodesListStrictAppend(pJoin->pTargets, *p);
|
||||
|
|
|
@@ -389,7 +389,11 @@ static int32_t stbSplAppendWStart(SNodeList* pFuncs, int32_t* pIndex) {
   }
   strcpy(pWStart->functionName, "_wstart");
   int64_t pointer = (int64_t)pWStart;
-  snprintf(pWStart->node.aliasName, sizeof(pWStart->node.aliasName), "%s.%" PRId64 "", pWStart->functionName, pointer);
+  char name[TSDB_COL_NAME_LEN + TSDB_POINTER_PRINT_BYTES + TSDB_NAME_DELIMITER_LEN + 1] = {0};
+  int32_t len = snprintf(name, sizeof(name) - 1, "%s.%" PRId64 "", pWStart->functionName, pointer);
+  taosCreateMD5Hash(name, len);
+  strncpy(pWStart->node.aliasName, name, TSDB_COL_NAME_LEN - 1);
 
   int32_t code = fmGetFuncInfo(pWStart, NULL, 0);
   if (TSDB_CODE_SUCCESS == code) {
     code = nodesListStrictAppend(pFuncs, (SNode*)pWStart);

@@ -415,7 +419,11 @@ static int32_t stbSplAppendWEnd(SWindowLogicNode* pWin, int32_t* pIndex) {
   }
   strcpy(pWEnd->functionName, "_wend");
   int64_t pointer = (int64_t)pWEnd;
-  snprintf(pWEnd->node.aliasName, sizeof(pWEnd->node.aliasName), "%s.%" PRId64 "", pWEnd->functionName, pointer);
+  char name[TSDB_COL_NAME_LEN + TSDB_POINTER_PRINT_BYTES + TSDB_NAME_DELIMITER_LEN + 1] = {0};
+  int32_t len = snprintf(name, sizeof(name) - 1, "%s.%" PRId64 "", pWEnd->functionName, pointer);
+  taosCreateMD5Hash(name, len);
+  strncpy(pWEnd->node.aliasName, name, TSDB_COL_NAME_LEN - 1);
 
   int32_t code = fmGetFuncInfo(pWEnd, NULL, 0);
   if (TSDB_CODE_SUCCESS == code) {
     code = nodesListStrictAppend(pWin->pFuncs, (SNode*)pWEnd);
@@ -251,6 +251,18 @@ int32_t tDecodeStreamTaskChkInfo(SDecoder* pDecoder, SCheckpointInfo* pChkpInfo)
   tEndDecode(pDecoder);
   return 0;
 }
+
+int32_t tDecodeStreamTaskId(SDecoder* pDecoder, SStreamTaskId* pTaskId) {
+  int64_t ver;
+  if (tStartDecode(pDecoder) < 0) return -1;
+  if (tDecodeI64(pDecoder, &ver) < 0) return -1;
+  if (ver != SSTREAM_TASK_VER) return -1;
+
+  if (tDecodeI64(pDecoder, &pTaskId->streamId) < 0) return -1;
+  if (tDecodeI32(pDecoder, &pTaskId->taskId) < 0) return -1;
+
+  tEndDecode(pDecoder);
+  return 0;
+}
 
 static void freeItem(void* p) {
   SStreamContinueExecInfo* pInfo = p;
@@ -116,7 +116,11 @@ int32_t tfsAllocDiskOnTier(STfsTier *pTier) {
 
     if (pDisk == NULL) continue;
 
-    if (pDisk->size.avail < tsMinDiskFreeSize) continue;
+    if (pDisk->size.avail < tsMinDiskFreeSize) {
+      uInfo("disk %s is full and skip it, level:%d id:%d free size:%" PRId64 " min free size:%" PRId64, pDisk->path,
+            pDisk->level, pDisk->id, pDisk->size.avail, tsMinDiskFreeSize);
+      continue;
+    }
 
     retId = diskId;
     terrno = 0;
@@ -2156,10 +2156,12 @@ static void cliSchedMsgToNextNode(SCliMsg* pMsg, SCliThrd* pThrd) {
 
   if (rpcDebugFlag & DEBUG_DEBUG) {
     STraceId* trace = &pMsg->msg.info.traceId;
-    char      tbuf[256] = {0};
+    char*     tbuf = taosMemoryCalloc(1, TSDB_FQDN_LEN * 5);
 
     EPSET_TO_STR(&pCtx->epSet, tbuf);
     tGDebug("%s retry on next node,use:%s, step: %d,timeout:%" PRId64 "", transLabel(pThrd->pTransInst), tbuf,
             pCtx->retryStep, pCtx->retryNextInterval);
+    taosMemoryFree(tbuf);
   }
 
   STaskArg* arg = taosMemoryMalloc(sizeof(STaskArg));
@@ -147,6 +147,7 @@
 ,,y,system-test,./pytest.sh python3 ./test.py -f 99-TDcase/TS-3404.py
 ,,y,system-test,./pytest.sh python3 ./test.py -f 99-TDcase/TS-3581.py
 ,,y,system-test,./pytest.sh python3 ./test.py -f 99-TDcase/TS-3311.py
+,,y,system-test,./pytest.sh python3 ./test.py -f 99-TDcase/TS-3821.py
 
 ,,y,system-test,./pytest.sh python3 ./test.py -f 0-others/balance_vgroups_r1.py -N 6
 ,,y,system-test,./pytest.sh python3 ./test.py -f 0-others/taosShell.py
@@ -30,8 +30,26 @@ fi
 TAOS_DIR=`pwd`
 LOG_DIR=$TAOS_DIR/sim/asan
 
 error_num=`cat ${LOG_DIR}/*.asan | grep "ERROR" | wc -l`
-memory_leak=`cat ${LOG_DIR}/*.asan | grep "Direct leak" | wc -l`
+
+archOs=`arch`
+if [[ $archOs =~ "aarch64" ]]; then
+  echo "arm64 check mem leak"
+  memory_leak=`cat ${LOG_DIR}/*.asan | grep "Direct leak" | grep -v "Direct leak of 32 byte"| wc -l`
+  memory_count=`cat ${LOG_DIR}/*.asan | grep "Direct leak of 32 byte"| wc -l`
+
+  if [ $memory_count -eq $error_num ] && [ $memory_leak -eq 0 ]; then
+    echo "reset error_num to 0, ignore: __cxa_thread_atexit_impl leak"
+    error_num=0
+  fi
+else
+  echo "os check mem leak"
+  memory_leak=`cat ${LOG_DIR}/*.asan | grep "Direct leak" | wc -l`
+fi
 
 indirect_leak=`cat ${LOG_DIR}/*.asan | grep "Indirect leak" | wc -l`
 python_error=`cat ${LOG_DIR}/*.info | grep -w "stack" | wc -l`
@@ -101,6 +101,18 @@ class TDTestCase:
         self.now_check_ntb()
         self.now_check_stb()
 
+        ## TD-25540
+        tdSql.execute(f'create database db1')
+        tdSql.execute(f'use db1')
+        tdSql.execute(f"create table db1.tb (ts timestamp, c0 int)")
+        tdSql.execute(f'insert into db1.tb values(now + 1h, 1)')
+
+        for func in {"NOW", "NOW()", "TODAY()", "1", "'1970-01-01 00:00:00'"}:
+            tdSql.query(f"SELECT _wstart, count(*) FROM (SELECT ts, LAST(c0) FROM db1.tb WHERE ts > {func}) interval(1d);")
+            tdSql.checkRows(1)
+
     def stop(self):
         tdSql.close()
         tdLog.success(f"{__file__} successfully executed")
@@ -6082,8 +6082,18 @@ class TDTestCase:
             tdLog.info(len(sql))
             tdSql.error(sql)
 
+        #special sql
+        #6
+        tdSql.query("select 6-1 from stable_1;")
+        for i in range(self.fornum):
+            sql = "select count(*) from (select avg(q_int)/1000 from stable_1); "
+            tdLog.info(sql)
+            tdLog.info(len(sql))
+            tdSql.query(sql)
+            self.cur1.execute(sql)
+            self.explain_sql(sql)
+
         #special sql
         tdSql.query("select 7-1 from stable_1;")
         for i in range(self.fornum):
             sql = "select * from ( select _block_dist() from stable_1);"
             tdSql.error(sql)
@@ -29,7 +29,7 @@ class TDTestCase:
                     'rowsPerTbl': 10000,
                     'batchNum': 2000,
                     'startTs': 1640966400000,  # 2022-01-01 00:00:00.000
-                    'pollDelay': 20,
+                    'pollDelay': 50,
                     'showMsg': 1,
                     'showRow': 1}

@@ -45,7 +45,7 @@ class TDTestCase:
         autoCommitInterval = 'auto.commit.interval.ms:1000'
         autoOffset = 'auto.offset.reset:earliest'
 
-        pollDelay = 20
+        pollDelay = 50
         showMsg = 1
         showRow = 1
@@ -203,7 +203,7 @@ class TDTestCase:
             tdLog.exit("show consumers %d not equal expect num: %d"%(topicNum, expectConsumerNUm))
 
         flag = 0
-        for i in range(10):
+        while (1):
             tdSql.query('show subscriptions;')
             subscribeNum = tdSql.queryRows
             tdLog.info(" get subscriptions count: %d"%(subscribeNum))
@@ -0,0 +1,84 @@
+import taos
+import sys
+import time
+import socket
+import os
+import threading
+
+from util.log import *
+from util.sql import *
+from util.cases import *
+from util.dnodes import *
+
+class TDTestCase:
+    hostname = socket.gethostname()
+
+    def init(self, conn, logSql, replicaVar=1):
+        self.replicaVar = int(replicaVar)
+        tdLog.debug(f"start to excute {__file__}")
+        #tdSql.init(conn.cursor())
+        tdSql.init(conn.cursor(), logSql)  # output sql.txt file
+
+    def getBuildPath(self):
+        selfPath = os.path.dirname(os.path.realpath(__file__))
+
+        if ("community" in selfPath):
+            projPath = selfPath[:selfPath.find("community")]
+        else:
+            projPath = selfPath[:selfPath.find("tests")]
+
+        for root, dirs, files in os.walk(projPath):
+            if ("taosd" in files or "taosd.exe" in files):
+                rootRealPath = os.path.dirname(os.path.realpath(root))
+                if ("packaging" not in rootRealPath):
+                    buildPath = root[:len(root) - len("/build/bin")]
+                    break
+        return buildPath
+
+    def create_tables(self):
+        tdSql.execute(f'''CREATE STABLE `s_e8d66f7af53e4c88866efbc44252a8cd_device_technical_indicators`
+                          (`ts` TIMESTAMP, `event_time` TIMESTAMP, `wbli` DOUBLE, `vrc` DOUBLE, `csd` DOUBLE,
+                          `oiv` DOUBLE, `tiv` DOUBLE, `flol` DOUBLE, `capacity` DOUBLE, `ispc` NCHAR(50)) TAGS
+                          (`device_identification` NCHAR(64))''')
+        tdSql.execute(f'''CREATE TABLE `t_1000000000001001_e8d66f7af53e4c88866efbc44252a8cd_device_technical_indicators`
+                          USING `s_e8d66f7af53e4c88866efbc44252a8cd_device_technical_indicators` (`device_identification`)
+                          TAGS ("1000000000001001")''')
+
+    def insert_data(self):
+        tdLog.debug("start to insert data ............")
+        tdSql.execute(f"INSERT INTO `t_1000000000001001_e8d66f7af53e4c88866efbc44252a8cd_device_technical_indicators` VALUES ('2023-08-06 17:47:35.685','2023-07-24 11:18:20.000', 17.199999999999999, 100.000000000000000, 100.000000000000000, NULL, 112.328999999999994, 132.182899999999989, 12.300000000000001, '符合条件')")
+        tdSql.execute(f"INSERT INTO `t_1000000000001001_e8d66f7af53e4c88866efbc44252a8cd_device_technical_indicators` VALUES ('2023-08-06 17:47:36.239','2023-07-24 11:18:20.000', 17.199999999999999, 100.000000000000000, 100.000000000000000, NULL, 112.328999999999994, 132.182899999999989, 12.300000000000001, '符合条件')")
+        tdSql.execute(f"INSERT INTO `t_1000000000001001_e8d66f7af53e4c88866efbc44252a8cd_device_technical_indicators` VALUES ('2023-08-06 17:47:37.290','2023-07-24 11:18:20.000', 17.199999999999999, 100.000000000000000, 100.000000000000000, NULL, 112.328999999999994, 132.182899999999989, 12.300000000000001, '符合条件')")
+        tdSql.execute(f"INSERT INTO `t_1000000000001001_e8d66f7af53e4c88866efbc44252a8cd_device_technical_indicators` VALUES ('2023-08-06 17:47:38.414','2023-07-24 11:18:20.000', 17.199999999999999, 100.000000000000000, 100.000000000000000, NULL, 112.328999999999994, 132.182899999999989, 12.300000000000001, '符合条件')")
+        tdSql.execute(f"INSERT INTO `t_1000000000001001_e8d66f7af53e4c88866efbc44252a8cd_device_technical_indicators` VALUES ('2023-08-06 17:47:39.471','2023-07-24 11:18:20.000', 17.199999999999999, 100.000000000000000, 100.000000000000000, NULL, 112.328999999999994, 132.182899999999989, 12.300000000000001, '符合条件')")
+
+        tdLog.debug("insert data ............ [OK]")
+
+    def run(self):
+        tdSql.prepare()
+        self.create_tables()
+        self.insert_data()
+        tdLog.printNoPrefix("======== test TS-3821")
+
+        tdSql.query(f'''select count(*),device_identification from s_e8d66f7af53e4c88866efbc44252a8cd_device_technical_indicators
+                        where 1=1 and device_identification ='1000000000001001' group by device_identification;''')
+        tdSql.checkRows(1)
+        tdSql.checkCols(2)
+        tdSql.checkData(0, 0, 5)
+        tdSql.checkData(0, 1, "1000000000001001")
+
+        tdSql.query(f'''select count(*),device_identification from t_1000000000001001_e8d66f7af53e4c88866efbc44252a8cd_device_technical_indicators
+                        group by device_identification;''')
+        tdSql.checkRows(1)
+        tdSql.checkCols(2)
+        tdSql.checkData(0, 0, 5)
+        tdSql.checkData(0, 1, "1000000000001001")
+
+    def stop(self):
+        tdSql.close()
+        tdLog.success(f"{__file__} successfully executed")
+
+tdCases.addLinux(__file__, TDTestCase())
+tdCases.addWindows(__file__, TDTestCase())