Merge branch 'feature/TD-14761' of https://github.com/taosdata/TDengine into feature/TD-14761

This commit is contained in:
wangmm0220 2022-07-22 10:29:12 +08:00
commit 6ca882510b
207 changed files with 7539 additions and 5098 deletions

View File

@ -1,4 +1,5 @@
---
sidebar_label: Docker
title: Quick Experience with TDengine via Docker
---

View File

@ -0,0 +1,240 @@
---
sidebar_label: Installation Package
title: Install and Uninstall via Installation Packages
---
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
:::info
If you would like to contribute code to TDengine or are interested in its internals, please visit the [TDengine GitHub repository](https://github.com/taosdata/TDengine) to download the source code, then build and install from source.
:::
The open-source edition of TDengine provides installation packages in deb and rpm formats; choose the one that suits your environment. The deb package supports Debian/Ubuntu and derivatives, and the rpm package supports CentOS/RHEL/SUSE and derivatives. A tar.gz package is also provided for enterprise users.
## Installation
<Tabs>
<TabItem value="apt-get" label="apt-get">
TDengine can be installed from the official repository using the apt-get tool.
**Configure the package repository**
```
wget -qO - http://repos.taosdata.com/tdengine.key | sudo apt-key add -
echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-stable stable main" | sudo tee /etc/apt/sources.list.d/tdengine-stable.list
```
To install a beta version, configure the beta repository instead:
```
echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-beta beta main" | sudo tee /etc/apt/sources.list.d/tdengine-beta.list
```
**Install with apt-get**
```
sudo apt-get update
apt-cache policy tdengine
sudo apt-get install tdengine
```
:::tip
The apt-get method applies only to Debian and Ubuntu systems.
:::
</TabItem>
<TabItem label="Deb 安装" value="debinst">
1、从官网下载获得 deb 安装包,例如 TDengine-server-2.4.0.7-Linux-x64.deb
2、进入到 TDengine-server-2.4.0.7-Linux-x64.deb 安装包所在目录,执行如下的安装命令:
```
$ sudo dpkg -i TDengine-server-2.4.0.7-Linux-x64.deb
(Reading database ... 137504 files and directories currently installed.)
Preparing to unpack TDengine-server-2.4.0.7-Linux-x64.deb ...
TDengine is removed successfully!
Unpacking tdengine (2.4.0.7) over (2.4.0.7) ...
Setting up tdengine (2.4.0.7) ...
Start to install TDengine...
System hostname is: ubuntu-1804
Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join
OR leave it blank to build one:
Enter your email address for priority support or enter empty to skip:
Created symlink /etc/systemd/system/multi-user.target.wants/taosd.service → /etc/systemd/system/taosd.service.
To configure TDengine : edit /etc/taos/taos.cfg
To start TDengine : sudo systemctl start taosd
To access TDengine : taos -h ubuntu-1804 to login into TDengine server
TDengine is installed successfully!
```
</TabItem>
<TabItem label="RPM 安装" value="rpminst">
1、从官网下载获得 rpm 安装包,例如 TDengine-server-2.4.0.7-Linux-x64.rpm
2、进入到 TDengine-server-2.4.0.7-Linux-x64.rpm 安装包所在目录,执行如下的安装命令:
```
$ sudo rpm -ivh TDengine-server-2.4.0.7-Linux-x64.rpm
Preparing... ################################# [100%]
Updating / installing...
1:tdengine-2.4.0.7-3 ################################# [100%]
Start to install TDengine...
System hostname is: centos7
Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join
OR leave it blank to build one:
Enter your email address for priority support or enter empty to skip:
Created symlink from /etc/systemd/system/multi-user.target.wants/taosd.service to /etc/systemd/system/taosd.service.
To configure TDengine : edit /etc/taos/taos.cfg
To start TDengine : sudo systemctl start taosd
To access TDengine : taos -h centos7 to login into TDengine server
TDengine is installed successfully!
```
</TabItem>
<TabItem label="tar.gz 安装" value="tarinst">
1、从官网下载获得 tar.gz 安装包,例如 TDengine-server-2.4.0.7-Linux-x64.tar.gz
2、进入到 TDengine-server-2.4.0.7-Linux-x64.tar.gz 安装包所在目录,先解压文件后,进入子目录,执行其中的 install.sh 安装脚本:
```
$ tar xvzf TDengine-enterprise-server-2.4.0.7-Linux-x64.tar.gz
TDengine-enterprise-server-2.4.0.7/
TDengine-enterprise-server-2.4.0.7/driver/
TDengine-enterprise-server-2.4.0.7/driver/vercomp.txt
TDengine-enterprise-server-2.4.0.7/driver/libtaos.so.2.4.0.7
TDengine-enterprise-server-2.4.0.7/install.sh
TDengine-enterprise-server-2.4.0.7/examples/
...
$ ll
total 43816
drwxrwxr-x 3 ubuntu ubuntu 4096 Feb 22 09:31 ./
drwxr-xr-x 20 ubuntu ubuntu 4096 Feb 22 09:30 ../
drwxrwxr-x 4 ubuntu ubuntu 4096 Feb 22 09:30 TDengine-enterprise-server-2.4.0.7/
-rw-rw-r-- 1 ubuntu ubuntu 44852544 Feb 22 09:31 TDengine-enterprise-server-2.4.0.7-Linux-x64.tar.gz
$ cd TDengine-enterprise-server-2.4.0.7/
$ ll
total 40784
drwxrwxr-x 4 ubuntu ubuntu 4096 Feb 22 09:30 ./
drwxrwxr-x 3 ubuntu ubuntu 4096 Feb 22 09:31 ../
drwxrwxr-x 2 ubuntu ubuntu 4096 Feb 22 09:30 driver/
drwxrwxr-x 10 ubuntu ubuntu 4096 Feb 22 09:30 examples/
-rwxrwxr-x 1 ubuntu ubuntu 33294 Feb 22 09:30 install.sh*
-rw-rw-r-- 1 ubuntu ubuntu 41704288 Feb 22 09:30 taos.tar.gz
$ sudo ./install.sh
Start to update TDengine...
Created symlink /etc/systemd/system/multi-user.target.wants/taosd.service → /etc/systemd/system/taosd.service.
Nginx for TDengine is updated successfully!
To configure TDengine : edit /etc/taos/taos.cfg
To configure Taos Adapter (if has) : edit /etc/taos/taosadapter.toml
To start TDengine : sudo systemctl start taosd
To access TDengine : use taos -h ubuntu-1804 in shell OR from http://127.0.0.1:6060
TDengine is updated successfully!
Install taoskeeper as a standalone service
taoskeeper is installed, enable it by `systemctl enable taoskeeper`
```
:::info
During execution, the install.sh script prompts for some configuration information through an interactive command-line interface. If you prefer a non-interactive installation, run install.sh with the -e no parameter. Run `./install.sh -h` for a detailed description of all parameters.
:::
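For example, a fully non-interactive installation using the -e flag described above looks like this:
```bash
# Non-interactive installation: accept defaults and skip all prompts
sudo ./install.sh -e no
```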
</TabItem>
</Tabs>
:::note
When installing the first node, do not enter anything at the "Enter FQDN:" prompt. Only when installing the second or a later node do you need to enter the FQDN of any live node in the existing cluster, so that the new node can join it. You can also leave it blank and instead configure it in the new node's configuration file before the node starts.
:::
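If you take the configuration-file route, the cluster entry point is set through the firstEp parameter in taos.cfg before the new node's first start; a minimal sketch, assuming an existing cluster node h1.taosdata.com:
```bash
# Point the new node at an existing cluster node before starting taosd
echo "firstEp h1.taosdata.com:6030" | sudo tee -a /etc/taos/taos.cfg
```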
## Uninstallation
<Tabs>
<TabItem label="apt-get 卸载" value="aptremove">
内容 TBD
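Until the official steps land here, the standard apt-get removal for a deb-installed package should apply; this is an assumption, not yet confirmed by this document:
```bash
# Assumed standard removal of the tdengine package installed via apt-get
sudo apt-get remove tdengine
```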
</TabItem>
<TabItem label="Deb 卸载" value="debuninst">
卸载命令如下:
```
$ sudo dpkg -r tdengine
(Reading database ... 137504 files and directories currently installed.)
Removing tdengine (2.4.0.7) ...
TDengine is removed successfully!
```
</TabItem>
<TabItem label="RPM 卸载" value="rpmuninst">
卸载命令如下:
```
$ sudo rpm -e tdengine
TDengine is removed successfully!
```
</TabItem>
<TabItem label="tar.gz 卸载" value="taruninst">
卸载命令如下:
```
$ rmtaos
Nginx for TDengine is running, stopping it...
TDengine is removed successfully!
taosKeeper is removed successfully!
```
</TabItem>
</Tabs>
:::info
- TDengine provides several kinds of installation packages, but it is best not to mix the tar.gz package with the deb or rpm package on the same system; otherwise they interfere with each other and cause problems at runtime.
- If, after installing the deb package, part of the installation directory is deleted by hand, uninstalling or reinstalling may fail. In that case, clear the TDengine package's installation records by running:
```
$ sudo rm -f /var/lib/dpkg/info/tdengine*
```
Then reinstall, and it will succeed.
- If, after installing the rpm package, part of the installation directory is deleted by hand, uninstalling or reinstalling may fail. In that case, clear the TDengine package's installation records by running:
```
$ sudo rpm -e --noscripts tdengine
```
Then reinstall, and it will succeed.
:::

View File

@ -0,0 +1,135 @@
---
sidebar_label: Get Started
title: Quick Experience with TDengine
---
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
import PkgInstall from "./\_pkg_install.mdx";
import AptGetInstall from "./\_apt_get_install.mdx";
## Start
After installation, start the TDengine service process with the `systemctl` command:
```bash
systemctl start taosd
```
Check whether the service is working properly:
```bash
systemctl status taosd
```
If the service process is active, the status command shows the following:
```
Active: active (running)
```
If the background service process has stopped, the status command shows the following:
```
Active: inactive (dead)
```
If the TDengine service is working properly, you can access and experience TDengine through its command-line program `taos`.
Summary of systemctl commands:
- Start the service process: `systemctl start taosd`
- Stop the service process: `systemctl stop taosd`
- Restart the service process: `systemctl restart taosd`
- Check the service status: `systemctl status taosd`
:::info
- The systemctl command requires _root_ privileges; if you are not the _root_ user, prefix the command with sudo.
- `systemctl stop taosd` does not stop the TDengine service immediately; it waits until the necessary flush-to-disk work has completed. With large volumes of data this can take a long time.
- If the system does not support `systemd`, you can also start the TDengine service by running `/usr/local/taos/bin/taosd` manually.
:::
## TDengine Command Line (CLI)
To make it easy to check TDengine's status and run ad hoc queries against databases, TDengine provides a command-line application, `taos` (hereafter the TDengine CLI). To enter the TDengine command line, simply execute `taos` in a terminal on a Linux machine with TDengine installed:
```bash
taos
```
If the connection to the service succeeds, a welcome message and version information are printed; if it fails, an error message is printed (see the [FAQ](/train-faq/faq) for help with connection failures). The TDengine CLI prompt is as follows:
```cmd
taos>
```
In the TDengine CLI, you can use SQL commands to create and drop databases and tables, and to insert data into and query databases. SQL statements run in the terminal must end with a semicolon. For example:
```sql
create database demo;
use demo;
create table t (ts timestamp, speed int);
insert into t values ('2019-07-15 00:00:00', 10);
insert into t values ('2019-07-15 01:00:00', 20);
select * from t;
ts | speed |
========================================
2019-07-15 00:00:00.000 | 10 |
2019-07-15 01:00:00.000 | 20 |
Query OK, 2 row(s) in set (0.003128s)
```
Besides executing SQL statements, system administrators can use the TDengine CLI to check the system's running status and to add or remove user accounts. The TDengine CLI, together with the client driver, can also be installed and run standalone on Linux or Windows machines; see [here](../reference/taos-shell/) for more details.
## Experience Insertion Speed with taosBenchmark
With the TDengine service started, execute `taosBenchmark` (formerly named `taosdemo`) in a Linux terminal:
```bash
taosBenchmark
```
This command automatically creates a supertable `meters` in database `test`, with 10,000 tables under it named "d0" through "d9999". Each table holds 10,000 records, and every record carries four fields (ts, current, voltage, phase) with timestamps ranging from "2017-07-14 10:40:00 000" to "2017-07-14 10:40:09 999". Each table has the tags `location` and `groupId`: groupId is set to 1 through 10, and location to "California.SanFrancisco" or "California.LosAngeles".
This command finishes inserting 100 million records quickly. The exact time depends on the hardware, but even an ordinary PC server often needs only a dozen or so seconds.
The taosBenchmark command itself offers many options for configuring the number of tables, the number of records, and more; try different settings, and run `taosBenchmark --help` for the full list. For detailed usage, see [How to use taosBenchmark to performance-test TDengine](https://www.taosdata.com/2021/10/09/3111.html).
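For instance, the scale of the generated data set can be reduced from the command line; the sketch below assumes taosBenchmark's `-t` (number of tables) and `-n` (records per table) options:
```bash
# Insert a smaller data set: 2,000 tables with 2,000 records each
taosBenchmark -t 2000 -n 2000
```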
## Experience Query Speed with the TDengine CLI
After inserting data with taosBenchmark as above, you can enter queries in the TDengine CLI to experience the query speed.
Query the total number of records under the supertable:
```sql
taos> select count(*) from test.meters;
```
Query the average, maximum, and minimum over the 100 million records:
```sql
taos> select avg(current), max(voltage), min(phase) from test.meters;
```
Query the total number of records where location="California.SanFrancisco":
```sql
taos> select count(*) from test.meters where location="California.SanFrancisco";
```
Query the average, maximum, and minimum over all records with groupId=10:
```sql
taos> select avg(current), max(voltage), min(phase) from test.meters where groupId=10;
```
Compute the average, maximum, and minimum for table d10 aggregated in 10-second windows:
```sql
taos> select avg(current), max(voltage), min(phase) from test.d10 interval(10s);
```

View File

@ -1,105 +0,0 @@
---
title: Data Node Management
---
The preceding sections describe how to build a cluster from scratch. Once the cluster is up, you can check the current status of its data nodes at any time, add new data nodes to scale out, remove data nodes, and even manually rebalance load across data nodes.
:::note
All of the commands below must be run after logging in to the TDengine system; use root privileges where necessary.
:::
## View Data Nodes
Launch the TDengine CLI program taos, then execute:
```sql
SHOW DNODES;
```
This lists every dnode in the cluster together with each dnode's ID, end_point (fqdn:port), status (ready, offline, etc.), number of vnodes, number of still-unused vnodes, and other information. Run it after adding or removing a data node to verify the change.
The output looks like the following (for reference only; the details depend on the actual cluster configuration):
```
taos> show dnodes;
id | endpoint | vnodes | support_vnodes | status | create_time | note |
============================================================================================================================================
1 | trd01:6030 | 100 | 1024 | ready | 2022-07-15 16:47:47.726 | |
Query OK, 1 rows affected (0.006684s)
```
## View Virtual Node Groups
To make full use of multi-core hardware and provide horizontal scalability, data must be sharded. TDengine therefore splits the data of a DB into multiple parts stored in multiple vnodes. These vnodes may be distributed across multiple data nodes (dnodes), which is how horizontal scale-out is achieved. A vnode belongs to exactly one DB, but one DB can have multiple vnodes. The dnode a vnode resides on is assigned automatically by the mnode according to current system resources, with no manual intervention required.
Launch the CLI program taos, then execute:
```sql
USE SOME_DATABASE;
SHOW VGROUPS;
```
The output looks like the following (for reference only; the details depend on the actual cluster configuration):
```
taos> use db;
Database changed.
taos> show vgroups;
vgroup_id | db_name | tables | v1_dnode | v1_status | v2_dnode | v2_status | v3_dnode | v3_status | status | nfiles | file_size | tsma |
================================================================================================================================================================================================
2 | db | 0 | 1 | leader | NULL | NULL | NULL | NULL | NULL | NULL | NULL | 0 |
3 | db | 0 | 1 | leader | NULL | NULL | NULL | NULL | NULL | NULL | NULL | 0 |
4 | db | 0 | 1 | leader | NULL | NULL | NULL | NULL | NULL | NULL | NULL | 0 |
Query OK, 8 row(s) in set (0.001154s)
```
## Add a Data Node
Launch the CLI program taos, then execute:
```sql
CREATE DNODE "fqdn:port";
```
This adds the new data node's End Point to the cluster's EP list. "fqdn:port" must be enclosed in double quotes, or the command fails. The fqdn and port on which a data node serves can be configured in taos.cfg; by default they are obtained automatically. (Obtaining the FQDN automatically is strongly discouraged, because the resulting End Point of the data node may not be what you expect.)
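For example, to register a hypothetical new node h2.taosdata.com on the default port from the shell, the statement can be passed to the CLI directly:
```bash
# Register the new dnode with the cluster; the FQDN:port must be double-quoted
taos -s 'CREATE DNODE "h2.taosdata.com:6030"'
```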
Then start the taosd process on the newly added data node, and check its status through taos:
```
taos> show dnodes;
id | endpoint | vnodes | support_vnodes | status | create_time | note |
============================================================================================================================================
1 | localhost:6030 | 100 | 1024 | ready | 2022-07-15 16:47:47.726 | |
2 | localhost:7030 | 0 | 1024 | ready | 2022-07-15 16:56:13.670 | |
Query OK, 2 rows affected (0.007031s)
```
Here you can see that both dnodes are in the ready state.
## Remove a Data Node
Launch the CLI program taos and execute the following (note that, as the warning below explains, the taosd process on the node being removed must keep running until the drop completes):
```sql
DROP DNODE "fqdn:port";
```
or
```sql
DROP DNODE dnodeId;
```
A specific node can be identified either by "fqdn:port" or by its dnodeID. Here fqdn is the FQDN of the node being removed and port is its external service port; the dnodeID can be obtained via SHOW DNODES.
:::warning
Once a data node is dropped, it cannot rejoin the cluster; the node must be redeployed (with its data folder cleared). Before the `drop dnode` operation completes, the cluster migrates the data of that dnode elsewhere.
Note that `drop dnode` and stopping the taosd process are two different things; do not confuse them. Because data must be migrated off the dnode before it is removed, the dnode being removed must stay online. Only after the drop operation finishes may its taosd process be stopped.
After a data node is dropped, the other nodes all become aware of the removal of this dnodeID, and no node in the cluster will accept requests from that dnodeID again.
dnodeIDs are assigned automatically by the cluster and must not be specified manually. They are generated in increasing order and are never reused.
:::

View File

@ -1 +0,0 @@
label: Cluster Management

View File

@ -1,5 +1,6 @@
---
title: Cluster Deployment
sidebar_label: Manual Deployment
title: Cluster Deployment and Management
---
## Prerequisites
@ -72,15 +73,16 @@ serverPort 6030
Following the steps in "Get Started", start the first data node, for example h1.taosdata.com. Then run taos to start the taos shell, and execute the command "SHOW DNODES" from the shell, as shown below:
```
Welcome to the TDengine shell from Linux, Client Version:3.0.0.0
Copyright (c) 2022 by TAOS Data, Inc. All rights reserved.
Server is Enterprise trial Edition, ver:3.0.0.0 and will never expire.
taos> show dnodes;
id | endpoint | vnodes | support_vnodes | status | create_time | note |
id | endpoint | vnodes | support_vnodes | status | create_time | note |
============================================================================================================================================
1 | h1.taosdata.com:6030 | 0 | 1024 | ready | 2022-07-16 10:50:42.673 | |
1 | h1.taosdata.com:6030 | 0 | 1024 | ready | 2022-07-16 10:50:42.673 | |
Query OK, 1 rows affected (0.007984s)
taos>
@ -91,7 +93,7 @@ taos>
In the output above, you can see that the End Point of the newly started data node is h1.taosdata.com:6030, which is the firstEp of this new cluster.
### Start Subsequent Data Nodes
### Add Data Nodes
To add subsequent data nodes to the existing cluster, follow these steps:
@ -125,3 +127,74 @@ The firstEp parameter takes effect only when the data node first joins the cluster; after joining
If two data nodes (dnodes) are started without the firstEp parameter configured, they run independently. At that point it is impossible to join one of them to the other to form a cluster, and it is impossible to merge two independent clusters into a new one.
:::
## View Data Nodes
Launch the TDengine CLI program taos, then execute:
```sql
SHOW DNODES;
```
This lists every dnode in the cluster together with each dnode's ID, end_point (fqdn:port), status (ready, offline, etc.), number of vnodes, number of still-unused vnodes, and other information. Run it after adding or removing a data node to verify the change.
The output looks like the following (for reference only; the details depend on the actual cluster configuration):
```
taos> show dnodes;
id | endpoint | vnodes | support_vnodes | status | create_time | note |
============================================================================================================================================
1 | trd01:6030 | 100 | 1024 | ready | 2022-07-15 16:47:47.726 | |
Query OK, 1 rows affected (0.006684s)
```
## View Virtual Node Groups
To make full use of multi-core hardware and provide horizontal scalability, data must be sharded. TDengine therefore splits the data of a DB into multiple parts stored in multiple vnodes. These vnodes may be distributed across multiple data nodes (dnodes), which is how horizontal scale-out is achieved. A vnode belongs to exactly one DB, but one DB can have multiple vnodes. The dnode a vnode resides on is assigned automatically by the mnode according to current system resources, with no manual intervention required.
Launch the CLI program taos, then execute:
```sql
USE SOME_DATABASE;
SHOW VGROUPS;
```
The output looks like the following (for reference only; the details depend on the actual cluster configuration):
```
taos> use db;
Database changed.
taos> show vgroups;
vgroup_id | db_name | tables | v1_dnode | v1_status | v2_dnode | v2_status | v3_dnode | v3_status | status | nfiles | file_size | tsma |
================================================================================================================================================================================================
2 | db | 0 | 1 | leader | NULL | NULL | NULL | NULL | NULL | NULL | NULL | 0 |
3 | db | 0 | 1 | leader | NULL | NULL | NULL | NULL | NULL | NULL | NULL | 0 |
4 | db | 0 | 1 | leader | NULL | NULL | NULL | NULL | NULL | NULL | NULL | 0 |
Query OK, 8 row(s) in set (0.001154s)
```
## Remove a Data Node
Launch the CLI program taos and execute the following (note that, as the warning below explains, the taosd process on the node being removed must keep running until the drop completes):
```sql
DROP DNODE "fqdn:port";
```
or
```sql
DROP DNODE dnodeId;
```
A specific node can be identified either by "fqdn:port" or by its dnodeID. Here fqdn is the FQDN of the node being removed and port is its external service port; the dnodeID can be obtained via SHOW DNODES.
:::warning
Once a data node is dropped, it cannot rejoin the cluster; the node must be redeployed (with its data folder cleared). Before the `drop dnode` operation completes, the cluster migrates the data of that dnode elsewhere.
Note that `drop dnode` and stopping the taosd process are two different things; do not confuse them. Because data must be migrated off the dnode before it is removed, the dnode being removed must stay online. Only after the drop operation finishes may its taosd process be stopped.
After a data node is dropped, the other nodes all become aware of the removal of this dnodeID, and no node in the cluster will accept requests from that dnodeID again.
dnodeIDs are assigned automatically by the cluster and must not be specified manually. They are generated in increasing order and are never reused.
:::

View File

@ -0,0 +1,452 @@
---
sidebar_label: Kubernetes
title: Deploy a TDengine Cluster on Kubernetes
---
## Configure the ConfigMap
Create `taoscfg.yaml` for TDengine. The settings in this file are passed into the TDengine image as environment variables, and updating it causes all TDengine pods to restart.
```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
name: taoscfg
labels:
app: tdengine
data:
CLUSTER: "1"
TAOS_KEEP: "3650"
TAOS_DEBUG_FLAG: "135"
```
## Configure the Service
Create a service definition file, `taosd-service.yaml`. The service name `metadata.name` ("taosd" here) is used in the next step. Add all the ports TDengine uses:
```yaml
---
apiVersion: v1
kind: Service
metadata:
name: "taosd"
labels:
app: "tdengine"
spec:
ports:
- name: tcp6030
protocol: "TCP"
port: 6030
- name: tcp6035
protocol: "TCP"
port: 6035
- name: tcp6041
protocol: "TCP"
port: 6041
- name: udp6030
protocol: "UDP"
port: 6030
- name: udp6031
protocol: "UDP"
port: 6031
- name: udp6032
protocol: "UDP"
port: 6032
- name: udp6033
protocol: "UDP"
port: 6033
- name: udp6034
protocol: "UDP"
port: 6034
- name: udp6035
protocol: "UDP"
port: 6035
- name: udp6036
protocol: "UDP"
port: 6036
- name: udp6037
protocol: "UDP"
port: 6037
- name: udp6038
protocol: "UDP"
port: 6038
- name: udp6039
protocol: "UDP"
port: 6039
- name: udp6040
protocol: "UDP"
port: 6040
selector:
app: "tdengine"
```
## Stateful Service: StatefulSet
Following Kubernetes' guidance on the various workload types, we use a StatefulSet as the service type for TDengine. Create the file `tdengine.yaml`:
```yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: "tdengine"
labels:
app: "tdengine"
spec:
serviceName: "taosd"
replicas: 2
updateStrategy:
type: RollingUpdate
selector:
matchLabels:
app: "tdengine"
template:
metadata:
name: "tdengine"
labels:
app: "tdengine"
spec:
containers:
- name: "tdengine"
image: "zitsen/taosd:develop"
imagePullPolicy: "Always"
envFrom:
- configMapRef:
name: taoscfg
ports:
- name: tcp6030
protocol: "TCP"
containerPort: 6030
- name: tcp6035
protocol: "TCP"
containerPort: 6035
- name: tcp6041
protocol: "TCP"
containerPort: 6041
- name: udp6030
protocol: "UDP"
containerPort: 6030
- name: udp6031
protocol: "UDP"
containerPort: 6031
- name: udp6032
protocol: "UDP"
containerPort: 6032
- name: udp6033
protocol: "UDP"
containerPort: 6033
- name: udp6034
protocol: "UDP"
containerPort: 6034
- name: udp6035
protocol: "UDP"
containerPort: 6035
- name: udp6036
protocol: "UDP"
containerPort: 6036
- name: udp6037
protocol: "UDP"
containerPort: 6037
- name: udp6038
protocol: "UDP"
containerPort: 6038
- name: udp6039
protocol: "UDP"
containerPort: 6039
- name: udp6040
protocol: "UDP"
containerPort: 6040
env:
# POD_NAME for FQDN config
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
# SERVICE_NAME and NAMESPACE for fqdn resolve
- name: SERVICE_NAME
value: "taosd"
- name: STS_NAME
value: "tdengine"
- name: STS_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
# TZ for timezone settings; we recommend always setting it.
- name: TZ
value: "Asia/Shanghai"
# TAOS_-prefixed variables will be configured into taos.cfg, with the prefix stripped and the rest camelCased.
- name: TAOS_SERVER_PORT
value: "6030"
# Must be set if you want a cluster.
- name: TAOS_FIRST_EP
value: "$(STS_NAME)-0.$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local:$(TAOS_SERVER_PORT)"
# TAOS_FQDN should always be set in a k8s environment.
- name: TAOS_FQDN
value: "$(POD_NAME).$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local"
volumeMounts:
- name: taosdata
mountPath: /var/lib/taos
readinessProbe:
exec:
command:
- taos
- -s
- "show mnodes"
initialDelaySeconds: 5
timeoutSeconds: 5000
livenessProbe:
tcpSocket:
port: 6030
initialDelaySeconds: 15
periodSeconds: 20
volumeClaimTemplates:
- metadata:
name: taosdata
spec:
accessModes:
- "ReadWriteOnce"
storageClassName: "csi-rbd-sc"
resources:
requests:
storage: "10Gi"
```
## Start the Cluster
Apply the three files above to the Kubernetes cluster:
```bash
kubectl apply -f taoscfg.yaml
kubectl apply -f taosd-service.yaml
kubectl apply -f tdengine.yaml
```
The configuration above produces a two-node TDengine cluster with the dnodes configured automatically. Use the `show dnodes` command to view the nodes of the current cluster:
```bash
kubectl exec -i -t tdengine-0 -- taos -s "show dnodes"
kubectl exec -i -t tdengine-1 -- taos -s "show dnodes"
```
The output is as follows:
```
Welcome to the TDengine shell from Linux, Client Version:2.1.1.0
Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
taos> show dnodes
id | end_point | vnodes | cores | status | role | create_time | offline reason |
======================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 1 | 40 | ready | any | 2021-06-01 17:13:24.181 | |
2 | tdengine-1.taosd.default.sv... | 0 | 40 | ready | any | 2021-06-01 17:14:09.257 | |
Query OK, 2 row(s) in set (0.000997s)
```
## Scale Out the Cluster
A TDengine cluster supports automatic scale-out:
```bash
kubectl scale statefulsets tdengine --replicas=4
```
The `--replicas=4` parameter in the command above scales the TDengine cluster out to four nodes. After running it, first check the pod status:
```bash
kubectl get pods -l app=tdengine
```
The output is as follows:
```
NAME READY STATUS RESTARTS AGE
tdengine-0 1/1 Running 0 161m
tdengine-1 1/1 Running 0 161m
tdengine-2 1/1 Running 0 32m
tdengine-3 1/1 Running 0 32m
```
At this point the pods are still in the Running state; the dnode states in the TDengine cluster become visible only after the pods reach `ready`:
```bash
kubectl exec -i -t tdengine-0 -- taos -s "show dnodes"
```
The dnode list of the scaled-out four-node TDengine cluster:
```
Welcome to the TDengine shell from Linux, Client Version:2.1.1.0
Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
taos> show dnodes
id | end_point | vnodes | cores | status | role | create_time | offline reason |
======================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 0 | 40 | ready | any | 2021-06-01 11:58:12.915 | |
2 | tdengine-1.taosd.default.sv... | 0 | 40 | ready | any | 2021-06-01 11:58:33.127 | |
3 | tdengine-2.taosd.default.sv... | 0 | 40 | ready | any | 2021-06-01 14:07:27.078 | |
4 | tdengine-3.taosd.default.sv... | 1 | 40 | ready | any | 2021-06-01 14:07:48.362 | |
Query OK, 4 row(s) in set (0.001293s)
```
## Scale In the Cluster
Scaling TDengine in is not automated; here we scale a three-node cluster down to two nodes.
First, confirm that the three-node TDengine cluster is working properly by checking the dnode status in the TDengine CLI:
```bash
taos> show dnodes
id | end_point | vnodes | cores | status | role | create_time | offline reason |
======================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 1 | 40 | ready | any | 2021-06-01 16:27:24.852 | |
2 | tdengine-1.taosd.default.sv... | 0 | 40 | ready | any | 2021-06-01 16:27:53.339 | |
3 | tdengine-2.taosd.default.sv... | 0 | 40 | ready | any | 2021-06-01 16:28:49.787 | |
Query OK, 3 row(s) in set (0.001101s)
```
To scale in safely, the node must first be removed from the dnode list, i.e., removed from the cluster:
```bash
kubectl exec -i -t tdengine-0 -- taos -s "drop dnode 'tdengine-2.taosd.default.svc.cluster.local:6030'"
```
After confirming with the `show dnodes` command that the removal succeeded, remove the corresponding pod:
```bash
kubectl scale statefulsets tdengine --replicas=2
```
The last pod will be deleted. Use `kubectl get pods -l app=tdengine` to check the cluster state:
```
NAME READY STATUS RESTARTS AGE
tdengine-0 1/1 Running 0 3h40m
tdengine-1 1/1 Running 0 3h40m
```
After the pod is deleted, the PVC must be deleted manually; otherwise the next scale-out will reuse the old data, and the node will fail to join the cluster properly.
```bash
kubectl delete pvc taosdata-tdengine-2
```
The cluster is now in a safe state, and you can scale out again when needed:
```bash
kubectl scale statefulsets tdengine --replicas=3
```
The output of `show dnodes` is as follows:
```
taos> show dnodes
id | end_point | vnodes | cores | status | role | create_time | offline reason |
======================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 1 | 40 | ready | any | 2021-06-01 16:27:24.852 | |
2 | tdengine-1.taosd.default.sv... | 0 | 40 | ready | any | 2021-06-01 16:27:53.339 | |
4 | tdengine-2.taosd.default.sv... | 0 | 40 | ready | any | 2021-06-01 16:40:49.177 | |
```
## Delete the Cluster
To remove the TDengine cluster completely, clean up the statefulset, svc, configmap, and pvc respectively:
```bash
kubectl delete statefulset -l app=tdengine
kubectl delete svc -l app=tdengine
kubectl delete pvc -l app=tdengine
kubectl delete configmap taoscfg
```
## Common Errors
### Error 1
If you scale out to four nodes and then scale in to two, the deleted pods enter the offline state:
```
Welcome to the TDengine shell from Linux, Client Version:2.1.1.0
Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
taos> show dnodes
id | end_point | vnodes | cores | status | role | create_time | offline reason |
======================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 0 | 40 | ready | any | 2021-06-01 11:58:12.915 | |
2 | tdengine-1.taosd.default.sv... | 0 | 40 | ready | any | 2021-06-01 11:58:33.127 | |
3 | tdengine-2.taosd.default.sv... | 0 | 40 | offline | any | 2021-06-01 14:07:27.078 | status msg timeout |
4 | tdengine-3.taosd.default.sv... | 1 | 40 | offline | any | 2021-06-01 14:07:48.362 | status msg timeout |
Query OK, 4 row(s) in set (0.001236s)
```
In this case `drop dnode` will not behave as expected, and after the next cluster restart all dnodes will fail to start: the dropping state cannot be exited.
### Error 2
A TDengine cluster holds a replica parameter; if the number of nodes after scale-in is smaller than this value, the cluster becomes unusable.
Create a database with replica set to 2 and insert some data:
```bash
kubectl exec -i -t tdengine-0 -- \
taos -s \
"create database if not exists test replica 2;
use test;
create table if not exists t1(ts timestamp, n int);
insert into t1 values(now, 1)(now+1s, 2);"
```
Scale in to a single node:
```bash
kubectl scale statefulsets tdengine --replicas=1
```
All database operations in the taos shell will now fail:
```
taos> show dnodes;
id | end_point | vnodes | cores | status | role | create_time | offline reason |
======================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 2 | 40 | ready | any | 2021-06-01 15:55:52.562 | |
2 | tdengine-1.taosd.default.sv... | 1 | 40 | offline | any | 2021-06-01 15:56:07.212 | status msg timeout |
Query OK, 2 row(s) in set (0.000845s)
taos> show dnodes;
id | end_point | vnodes | cores | status | role | create_time | offline reason |
======================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 2 | 40 | ready | any | 2021-06-01 15:55:52.562 | |
2 | tdengine-1.taosd.default.sv... | 1 | 40 | offline | any | 2021-06-01 15:56:07.212 | status msg timeout |
Query OK, 2 row(s) in set (0.000837s)
taos> use test;
Database changed.
taos> insert into t1 values(now, 3);
DB error: Unable to resolve FQDN (0.013874s)
```

View File

@ -0,0 +1,434 @@
---
sidebar_label: Helm
title: Deploy a TDengine Cluster with Helm
---
Helm is the package manager for Kubernetes. Deploying a TDengine cluster with plain Kubernetes, as in the previous section, is already simple enough, but Helm offers even more powerful capabilities.
## Install Helm
```bash
curl -fsSL -o get_helm.sh \
https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod +x get_helm.sh
./get_helm.sh
```
Helm operates Kubernetes through kubectl and the kubeconfig configuration; you can refer to the configuration of a Rancher-installed Kubernetes to set this up.
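Before proceeding, a quick sanity check (assuming kubectl is already configured for your cluster) confirms that both tools can reach it:
```bash
# Verify the cluster is reachable and Helm is installed correctly
kubectl cluster-info
helm version
```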
## Install the TDengine Chart
The TDengine Chart has not yet been published to a Helm repository; for now, download it directly from GitHub:
```bash
wget https://github.com/taosdata/TDengine-Operator/raw/main/helm/tdengine-0.3.0.tgz
```
Get the storage classes of the current Kubernetes cluster:
```bash
kubectl get storageclass
```
On minikube the default is standard.
Then install with the helm command:
```bash
helm install tdengine tdengine-0.3.0.tgz \
--set storage.className=<your storage class name>
```
In a minikube environment, you can set smaller capacities to avoid exceeding the available disk space:
```bash
helm install tdengine tdengine-0.3.0.tgz \
--set storage.className=standard \
--set storage.dataSize=2Gi \
--set storage.logSize=10Mi
```
After a successful deployment, the TDengine Chart prints instructions for operating TDengine:
```bash
export POD_NAME=$(kubectl get pods --namespace default \
-l "app.kubernetes.io/name=tdengine,app.kubernetes.io/instance=tdengine" \
-o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default exec $POD_NAME -- taos -s "show dnodes; show mnodes"
kubectl --namespace default exec -it $POD_NAME -- taos
```
You can create a table as a test:
```bash
kubectl --namespace default exec $POD_NAME -- \
taos -s "create database test;
use test;
create table t1 (ts timestamp, n int);
insert into t1 values(now, 1)(now + 1s, 2);
select * from t1;"
```
## Configure Values
TDengine supports customization through `values.yaml`.
The full list of values supported by the TDengine Chart can be obtained with helm show values:
```bash
helm show values tdengine-0.3.0.tgz
```
You can save the output as `values.yaml` and then modify its parameters, such as the replica count, storage class name, capacity sizes, and TDengine configuration.
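A minimal sketch of that workflow (the redirect and the -f flag are standard helm usage):
```bash
# Save the chart's default values to a local file for editing
helm show values tdengine-0.3.0.tgz > values.yaml
```
After editing the file, install the TDengine cluster with it: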
```bash
helm install tdengine tdengine-0.3.0.tgz -f values.yaml
```
The full set of parameters is as follows:
```yaml
# Default values for tdengine.
# This is a YAML-formatted file.
# Declare variables to be passed into helm templates.
replicaCount: 1
image:
prefix: tdengine/tdengine
#pullPolicy: Always
# Overrides the image tag whose default is the chart appVersion.
#tag: "2.4.0.5"
service:
# ClusterIP is the default service type, use NodeIP only if you know what you are doing.
type: ClusterIP
ports:
# TCP range required
tcp:
[
6030,
6031,
6032,
6033,
6034,
6035,
6036,
6037,
6038,
6039,
6040,
6041,
6042,
6043,
6044,
6045,
6060,
]
# UDP range 6030-6039
udp: [6030, 6031, 6032, 6033, 6034, 6035, 6036, 6037, 6038, 6039]
arbitrator: true
# Set timezone here, not in taoscfg
timezone: "Asia/Shanghai"
resources:
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
storage:
# Set storageClassName for pvc. K8s use default storage class if not set.
#
className: ""
dataSize: "100Gi"
logSize: "10Gi"
nodeSelectors:
taosd:
# node selectors
clusterDomainSuffix: ""
# Config settings in taos.cfg file.
#
# The helm/k8s support will use environment variables for taos.cfg,
# converting an upper-snake-cased variable like `TAOS_DEBUG_FLAG`,
# to a camelCase taos config variable `debugFlag`.
#
# See the variable list at https://www.taosdata.com/cn/documentation/administrator .
#
# Note:
# 1. firstEp/secondEp: should not be set here; they are auto-generated at scale-up.
# 2. serverPort: should not be set; the default 6030 is assumed in many places.
# 3. fqdn: will be auto-generated in Kubernetes; the user need not care about it.
# 4. role: currently role is not supported - every node is able to be mnode and vnode.
#
# Note: keep the quotes "" around each value as below, whether or not the value is a number.
taoscfg:
# number of replications, for cluster only
TAOS_REPLICA: "1"
# number of management nodes in the system
TAOS_NUM_OF_MNODES: "1"
# number of days per DB file
# TAOS_DAYS: "10"
# number of days to keep DB file, default is 10 years.
#TAOS_KEEP: "3650"
# cache block size (Mbyte)
#TAOS_CACHE: "16"
# number of cache blocks per vnode
#TAOS_BLOCKS: "6"
# minimum rows of records in file block
#TAOS_MIN_ROWS: "100"
# maximum rows of records in file block
#TAOS_MAX_ROWS: "4096"
#
# TAOS_NUM_OF_THREADS_PER_CORE: number of threads per CPU core
#TAOS_NUM_OF_THREADS_PER_CORE: "1.0"
#
# TAOS_NUM_OF_COMMIT_THREADS: number of threads to commit cache data
#TAOS_NUM_OF_COMMIT_THREADS: "4"
#
# TAOS_RATIO_OF_QUERY_CORES:
# the proportion of total CPU cores available for query processing
# 2.0: the query threads will be set to double of the CPU cores.
# 1.0: all CPU cores are available for query processing [default].
# 0.5: only half of the CPU cores are available for query.
# 0.0: only one core available.
#TAOS_RATIO_OF_QUERY_CORES: "1.0"
#
# TAOS_KEEP_COLUMN_NAME:
# the last_row/first/last aggregator will not change the original column name in the result fields
#TAOS_KEEP_COLUMN_NAME: "0"
# enable/disable backuping vnode directory when removing vnode
#TAOS_VNODE_BAK: "1"
# enable/disable installation / usage report
#TAOS_TELEMETRY_REPORTING: "1"
# enable/disable load balancing
#TAOS_BALANCE: "1"
# max timer control blocks
#TAOS_MAX_TMR_CTRL: "512"
# time interval of system monitor, seconds
#TAOS_MONITOR_INTERVAL: "30"
# number of seconds allowed for a dnode to be offline, for cluster only
#TAOS_OFFLINE_THRESHOLD: "8640000"
# RPC re-try timer, millisecond
#TAOS_RPC_TIMER: "1000"
# RPC maximum time for ack, seconds.
#TAOS_RPC_MAX_TIME: "600"
# time interval of dnode status reporting to mnode, seconds, for cluster only
#TAOS_STATUS_INTERVAL: "1"
# time interval of heart beat from shell to dnode, seconds
#TAOS_SHELL_ACTIVITY_TIMER: "3"
# minimum sliding window time, milli-second
#TAOS_MIN_SLIDING_TIME: "10"
# minimum time window, milli-second
#TAOS_MIN_INTERVAL_TIME: "10"
# maximum delay before launching a stream computation, milli-second
#TAOS_MAX_STREAM_COMP_DELAY: "20000"
# maximum delay before launching a stream computation for the first time, milli-second
#TAOS_MAX_FIRST_STREAM_COMP_DELAY: "10000"
# retry delay when a stream computation fails, milli-second
#TAOS_RETRY_STREAM_COMP_DELAY: "10"
# the delayed time for launching a stream computation, from 0.1(default, 10% of whole computing time window) to 0.9
#TAOS_STREAM_COMP_DELAY_RATIO: "0.1"
# max number of vgroups per db, 0 means configured automatically
#TAOS_MAX_VGROUPS_PER_DB: "0"
# max number of tables per vnode
#TAOS_MAX_TABLES_PER_VNODE: "1000000"
# the number of acknowledgments required for successful data writing
#TAOS_QUORUM: "1"
# enable/disable compression
#TAOS_COMP: "2"
# write ahead log (WAL) level, 0: no wal; 1: write wal, but no fsync; 2: write wal, and call fsync
#TAOS_WAL_LEVEL: "1"
# if walLevel is set to 2, the cycle of fsync being executed, if set to 0, fsync is called right away
#TAOS_FSYNC: "3000"
# the compressed rpc message, option:
# -1 (no compression)
# 0 (all message compressed),
# > 0 (rpc message body which larger than this value will be compressed)
#TAOS_COMPRESS_MSG_SIZE: "-1"
# max length of an SQL
#TAOS_MAX_SQL_LENGTH: "1048576"
# the maximum number of records allowed for super table time sorting
#TAOS_MAX_NUM_OF_ORDERED_RES: "100000"
# max number of connections allowed in dnode
#TAOS_MAX_SHELL_CONNS: "5000"
# max number of connections allowed in client
#TAOS_MAX_CONNECTIONS: "5000"
# stop writing logs when the disk size of the log folder is less than this value
#TAOS_MINIMAL_LOG_DIR_G_B: "0.1"
# stop writing temporary files when the disk size of the tmp folder is less than this value
#TAOS_MINIMAL_TMP_DIR_G_B: "0.1"
# if disk free space is less than this value, taosd service exit directly within startup process
#TAOS_MINIMAL_DATA_DIR_G_B: "0.1"
# One mnode is equal to the number of vnode consumed
#TAOS_MNODE_EQUAL_VNODE_NUM: "4"
# enable/disable http service
#TAOS_HTTP: "1"
# enable/disable system monitor
#TAOS_MONITOR: "1"
# enable/disable recording the SQL statements via restful interface
#TAOS_HTTP_ENABLE_RECORD_SQL: "0"
# number of threads used to process http requests
#TAOS_HTTP_MAX_THREADS: "2"
# maximum number of rows returned by the restful interface
#TAOS_RESTFUL_ROW_LIMIT: "10240"
# The following parameter is used to limit the maximum number of lines in log files.
# max number of lines per log file
# numOfLogLines 10000000
# enable/disable async log
#TAOS_ASYNC_LOG: "0"
#
# time of keeping log files, days
#TAOS_LOG_KEEP_DAYS: "0"
# The following parameters are used for debug purpose only.
# debugFlag 8 bits mask: FILE-SCREEN-UNUSED-HeartBeat-DUMP-TRACE_WARN-ERROR
# 131: output warning and error
# 135: output debug, warning and error
# 143: output trace, debug, warning and error to log
# 199: output debug, warning and error to both screen and file
# 207: output trace, debug, warning and error to both screen and file
#
# debug flag for all log types; takes effect when set to a non-zero value
#TAOS_DEBUG_FLAG: "143"
# enable/disable recording the SQL in taos client
#TAOS_ENABLE_RECORD_SQL: "0"
# generate core file when service crash
#TAOS_ENABLE_CORE_FILE: "1"
# maximum display width of binary and nchar fields in the shell. The parts exceeding this limit will be hidden
#TAOS_MAX_BINARY_DISPLAY_WIDTH: "30"
# enable/disable stream (continuous query)
#TAOS_STREAM: "1"
# in retrieve blocking model, only in 50% query threads will be used in query processing in dnode
#TAOS_RETRIEVE_BLOCKING_MODEL: "0"
# the maximum allowed query buffer size in MB during query processing for each data node
# -1 no limit (default)
# 0 no query allowed, queries are disabled
#TAOS_QUERY_BUFFER_SIZE: "-1"
```
## Scale Out
For scale-out, see the explanation in the previous section; a few extra steps are needed to obtain information from the helm deployment.
First, get the StatefulSet name from the deployment:
```bash
export STS_NAME=$(kubectl get statefulset \
-l "app.kubernetes.io/name=tdengine" \
-o jsonpath="{.items[0].metadata.name}")
```
Scaling out is extremely simple: just increase the replica count. The following command scales TDengine out to three nodes:
```bash
kubectl scale --replicas 3 statefulset/$STS_NAME
```
Use the `show dnodes` and `show mnodes` commands to check whether the scale-out succeeded.
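For example, reusing the $POD_NAME variable obtained earlier, the check can be run from inside any pod of the cluster:
```bash
# Inspect cluster membership from inside one of the pods
kubectl --namespace default exec $POD_NAME -- taos -s "show dnodes; show mnodes"
```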
## Scale In
:::warning
Scale-in has not been fully tested and may put data at risk; use it with caution.
:::
Get the list of dnodes to remove, then drop them manually:
```bash
kubectl --namespace default exec $POD_NAME -- \
cat /var/lib/taos/dnode/dnodeEps.json \
| jq '.dnodeInfos[1:] |map(.dnodeFqdn + ":" + (.dnodePort|tostring)) | .[]' -r
kubectl --namespace default exec $POD_NAME -- taos -s "show dnodes"
kubectl --namespace default exec $POD_NAME -- taos -s 'drop dnode "<your dnode in list>"'
```
## Delete the Cluster
Under Helm management, cleanup is simple too:
```bash
helm uninstall tdengine
```
However, Helm does not remove PVCs automatically either; you need to list the PVCs yourself and then delete them.
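A sketch of that cleanup, assuming the chart labels its PVCs with the same instance label used for the pods above:
```bash
# List the PVCs left behind by the release, then delete them
kubectl get pvc -l app.kubernetes.io/instance=tdengine
kubectl delete pvc -l app.kubernetes.io/instance=tdengine
```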

View File

@ -0,0 +1 @@
label: Deploy a Cluster

View File

@ -1,10 +1,10 @@
---
title: Cluster Management
title: Deploy a Cluster
---
TDengine supports clustering, which provides horizontal scalability: to obtain more processing power, simply add more nodes. TDengine uses virtual-node technology to virtualize one node into multiple virtual nodes for load balancing, and it can group virtual nodes on multiple nodes into virtual node groups whose multi-replica mechanism guarantees high availability of the system. TDengine's clustering capability is fully open source.
This chapter mainly covers cluster deployment and maintenance, and how to achieve high availability and load balancing.
This chapter mainly covers how to deploy a cluster manually on hosts, and how to deploy a cluster using Kubernetes and Helm.
```mdx-code-block
import DocCardList from '@theme/DocCardList';

View File

@ -6,199 +6,11 @@ description: Installation, uninstallation, start, stop, and upgrade
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
The open-source edition of TDengine provides installation packages in deb and rpm formats; choose the one that suits your environment. The deb package supports Debian/Ubuntu and derivatives, and the rpm package supports CentOS/RHEL/SUSE and derivatives. A tar.gz package is also provided for enterprise users.
This section covers some deeper aspects of installation and uninstallation, along with notes on upgrading.
## Installation
## Installation and Uninstallation
<Tabs>
<TabItem label="Deb 安装" value="debinst">
1、从官网下载获得 deb 安装包,例如 TDengine-server-2.4.0.7-Linux-x64.deb
2、进入到 TDengine-server-2.4.0.7-Linux-x64.deb 安装包所在目录,执行如下的安装命令:
```
$ sudo dpkg -i TDengine-server-2.4.0.7-Linux-x64.deb
(Reading database ... 137504 files and directories currently installed.)
Preparing to unpack TDengine-server-2.4.0.7-Linux-x64.deb ...
TDengine is removed successfully!
Unpacking tdengine (2.4.0.7) over (2.4.0.7) ...
Setting up tdengine (2.4.0.7) ...
Start to install TDengine...
System hostname is: ubuntu-1804
Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join
OR leave it blank to build one:
Enter your email address for priority support or enter empty to skip:
Created symlink /etc/systemd/system/multi-user.target.wants/taosd.service → /etc/systemd/system/taosd.service.
To configure TDengine : edit /etc/taos/taos.cfg
To start TDengine : sudo systemctl start taosd
To access TDengine : taos -h ubuntu-1804 to login into TDengine server
TDengine is installed successfully!
```
</TabItem>
<TabItem label="RPM 安装" value="rpminst">
1、从官网下载获得 rpm 安装包,例如 TDengine-server-2.4.0.7-Linux-x64.rpm
2、进入到 TDengine-server-2.4.0.7-Linux-x64.rpm 安装包所在目录,执行如下的安装命令:
```
$ sudo rpm -ivh TDengine-server-2.4.0.7-Linux-x64.rpm
Preparing... ################################# [100%]
Updating / installing...
1:tdengine-2.4.0.7-3 ################################# [100%]
Start to install TDengine...
System hostname is: centos7
Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join
OR leave it blank to build one:
Enter your email address for priority support or enter empty to skip:
Created symlink from /etc/systemd/system/multi-user.target.wants/taosd.service to /etc/systemd/system/taosd.service.
To configure TDengine : edit /etc/taos/taos.cfg
To start TDengine : sudo systemctl start taosd
To access TDengine : taos -h centos7 to login into TDengine server
TDengine is installed successfully!
```
</TabItem>
<TabItem label="tar.gz 安装" value="tarinst">
1、从官网下载获得 tar.gz 安装包,例如 TDengine-server-2.4.0.7-Linux-x64.tar.gz
2、进入到 TDengine-server-2.4.0.7-Linux-x64.tar.gz 安装包所在目录,先解压文件后,进入子目录,执行其中的 install.sh 安装脚本:
```
$ tar xvzf TDengine-enterprise-server-2.4.0.7-Linux-x64.tar.gz
TDengine-enterprise-server-2.4.0.7/
TDengine-enterprise-server-2.4.0.7/driver/
TDengine-enterprise-server-2.4.0.7/driver/vercomp.txt
TDengine-enterprise-server-2.4.0.7/driver/libtaos.so.2.4.0.7
TDengine-enterprise-server-2.4.0.7/install.sh
TDengine-enterprise-server-2.4.0.7/examples/
...
$ ll
total 43816
drwxrwxr-x 3 ubuntu ubuntu 4096 Feb 22 09:31 ./
drwxr-xr-x 20 ubuntu ubuntu 4096 Feb 22 09:30 ../
drwxrwxr-x 4 ubuntu ubuntu 4096 Feb 22 09:30 TDengine-enterprise-server-2.4.0.7/
-rw-rw-r-- 1 ubuntu ubuntu 44852544 Feb 22 09:31 TDengine-enterprise-server-2.4.0.7-Linux-x64.tar.gz
$ cd TDengine-enterprise-server-2.4.0.7/
$ ll
total 40784
drwxrwxr-x 4 ubuntu ubuntu 4096 Feb 22 09:30 ./
drwxrwxr-x 3 ubuntu ubuntu 4096 Feb 22 09:31 ../
drwxrwxr-x 2 ubuntu ubuntu 4096 Feb 22 09:30 driver/
drwxrwxr-x 10 ubuntu ubuntu 4096 Feb 22 09:30 examples/
-rwxrwxr-x 1 ubuntu ubuntu 33294 Feb 22 09:30 install.sh*
-rw-rw-r-- 1 ubuntu ubuntu 41704288 Feb 22 09:30 taos.tar.gz
$ sudo ./install.sh
Start to update TDengine...
Created symlink /etc/systemd/system/multi-user.target.wants/taosd.service → /etc/systemd/system/taosd.service.
Nginx for TDengine is updated successfully!
To configure TDengine : edit /etc/taos/taos.cfg
To configure Taos Adapter (if has) : edit /etc/taos/taosadapter.toml
To start TDengine : sudo systemctl start taosd
To access TDengine : use taos -h ubuntu-1804 in shell OR from http://127.0.0.1:6060
TDengine is updated successfully!
Install taoskeeper as a standalone service
taoskeeper is installed, enable it by `systemctl enable taoskeeper`
```
:::info
During execution, the install.sh script prompts for some configuration information through an interactive command-line interface. If you prefer a non-interactive installation, run install.sh with the -e no parameter. Run `./install.sh -h` for a detailed description of all parameters.
:::
</TabItem>
</Tabs>
:::note
When installing the first node, do not enter anything at the "Enter FQDN:" prompt. Only when installing the second or a later node do you need to enter the FQDN of any live node in the existing cluster, so that the new node can join it. You can also leave it blank and instead configure it in the new node's configuration file before the node starts.
:::
## Uninstallation
<Tabs>
<TabItem label="Deb 卸载" value="debuninst">
卸载命令如下:
```
$ sudo dpkg -r tdengine
(Reading database ... 137504 files and directories currently installed.)
Removing tdengine (2.4.0.7) ...
TDengine is removed successfully!
```
</TabItem>
<TabItem label="RPM 卸载" value="rpmuninst">
卸载命令如下:
```
$ sudo rpm -e tdengine
TDengine is removed successfully!
```
</TabItem>
<TabItem label="tar.gz 卸载" value="taruninst">
卸载命令如下:
```
$ rmtaos
Nginx for TDengine is running, stopping it...
TDengine is removed successfully!
taosKeeper is removed successfully!
```
</TabItem>
</Tabs>
:::info
- TDengine provides several kinds of installation packages, but it is best not to mix the tar.gz package with the deb or rpm package on the same system; otherwise they interfere with each other and cause problems at runtime.
- If, after installing the deb package, part of the installation directory is deleted by hand, uninstalling or reinstalling may fail. In that case, clear the TDengine package's installation records by running:
```
$ sudo rm -f /var/lib/dpkg/info/tdengine*
```
Then reinstall, and it will succeed.
- If, after installing the rpm package, part of the installation directory is deleted by hand, uninstalling or reinstalling may fail. In that case, clear the TDengine package's installation records by running:
```
$ sudo rpm -e --noscripts tdengine
```
Then reinstall, and it will succeed.
:::
For installation and uninstallation, see [Install and Uninstall](/get-started/package).
## Installation Directory Layout
@ -234,34 +46,6 @@ lrwxrwxrwx 1 root root 13 Feb 22 09:34 log -> /var/log/taos/
For an upgrade installation, if the default configuration file (/etc/taos/taos.cfg) exists, the existing file continues to be used; the configuration file shipped in the package is renamed to taos.cfg.orig and saved under /usr/local/taos/cfg/ as a reference sample for setting parameters. If no configuration file exists, the one shipped with the package is used.
## Start and Stop
TDengine uses the Linux systemd/systemctl/service facilities to manage starting, stopping, and restarting. TDengine's service process is taosd, and by default TDengine starts automatically after system boot. DBAs can stop, start, and restart the service manually through systemd/systemctl/service.
Taking systemctl as an example, the commands are:
- Start the service process: `systemctl start taosd`
- Stop the service process: `systemctl stop taosd`
- Restart the service process: `systemctl restart taosd`
- Check the service status: `systemctl status taosd`
注意TDengine 在 2.4 版本之后包含一个独立组件 taosAdapter 需要使用 systemctl 命令管理 taosAdapter 服务的启动和停止。
If the taosd service process is active, the status command shows the following:
```
Active: active (running)
```
If the background service process has stopped, the status command shows the following:
```
Active: inactive (dead)
```
## Upgrade
Upgrading happens at two levels: upgrading the installation package, and upgrading a running instance.

View File

@ -2803,6 +2803,7 @@ typedef struct {
int32_t tSerializeSTableIndexRsp(void* buf, int32_t bufLen, const STableIndexRsp* pRsp);
int32_t tDeserializeSTableIndexRsp(void* buf, int32_t bufLen, STableIndexRsp* pRsp);
void tFreeSerializeSTableIndexRsp(STableIndexRsp* pRsp);
void tFreeSTableIndexInfo(void* pInfo);

View File

@ -50,6 +50,7 @@ bool tNameIsValid(const SName* name);
const char* tNameGetTableName(const SName* name);
int32_t tNameGetDbName(const SName* name, char* dst);
const char* tNameGetDbNameP(const SName* name);
int32_t tNameGetFullDbName(const SName* name, char* dst);

View File

@ -73,7 +73,6 @@ static FORCE_INLINE int64_t taosGetTimestampToday(int32_t precision) {
}
int64_t taosTimeAdd(int64_t t, int64_t duration, char unit, int32_t precision);
int64_t taosTimeSub(int64_t t, int64_t duration, char unit, int32_t precision);
int64_t taosTimeTruncate(int64_t t, const SInterval* pInterval, int32_t precision);
int32_t taosTimeCountInterval(int64_t skey, int64_t ekey, int64_t interval, char unit, int32_t precision);

View File

@ -193,7 +193,7 @@ int32_t taosAsyncExec(__async_exec_fn_t execFn, void* execParam, int32_t* code);
void destroySendMsgInfo(SMsgSendInfo* pMsgBody);
int32_t asyncSendMsgToServerExt(void* pTransporter, SEpSet* epSet, int64_t* pTransporterId, const SMsgSendInfo* pInfo,
int32_t asyncSendMsgToServerExt(void* pTransporter, SEpSet* epSet, int64_t* pTransporterId, SMsgSendInfo* pInfo,
bool persistHandle, void* ctx);
/**
@ -205,7 +205,7 @@ int32_t asyncSendMsgToServerExt(void* pTransporter, SEpSet* epSet, int64_t* pTra
* @param pInfo
* @return
*/
int32_t asyncSendMsgToServer(void* pTransporter, SEpSet* epSet, int64_t* pTransporterId, const SMsgSendInfo* pInfo);
int32_t asyncSendMsgToServer(void* pTransporter, SEpSet* epSet, int64_t* pTransporterId, SMsgSendInfo* pInfo);
int32_t queryBuildUseDbOutput(SUseDbOutput* pOut, SUseDbRsp* usedbRsp);
@ -260,6 +260,8 @@ extern int32_t (*queryProcessMsgRsp[TDMT_MAX])(void* output, char* msg, int32_t
#define REQUEST_TOTAL_EXEC_TIMES 2
#define IS_SYS_DBNAME(_dbname) (((*(_dbname) == 'i') && (0 == strcmp(_dbname, TSDB_INFORMATION_SCHEMA_DB))) || ((*(_dbname) == 'p') && (0 == strcmp(_dbname, TSDB_PERFORMANCE_SCHEMA_DB))))
#define qFatal(...) \
do { \
if (qDebugFlag & DEBUG_FATAL) { \

View File

@ -33,11 +33,12 @@ typedef struct SUpdateInfo {
int64_t watermark;
TSKEY minTS;
SScalableBf* pCloseWinSBF;
SHashObj* pMap;
} SUpdateInfo;
SUpdateInfo *updateInfoInitP(SInterval* pInterval, int64_t watermark);
SUpdateInfo *updateInfoInit(int64_t interval, int32_t precision, int64_t watermark);
bool updateInfoIsUpdated(SUpdateInfo *pInfo, tb_uid_t tableId, TSKEY ts);
bool updateInfoIsUpdated(SUpdateInfo *pInfo, uint64_t tableId, TSKEY ts);
void updateInfoDestroy(SUpdateInfo *pInfo);
void updateInfoAddCloseWindowSBF(SUpdateInfo *pInfo);
void updateInfoDestoryColseWinSBF(SUpdateInfo *pInfo);

View File

@ -124,18 +124,16 @@ void *rpcReallocCont(void *ptr, int32_t contLen);
// Because taosd supports multi-process mode
// These functions should not be used on the server side
// Please use tmsg<xx> functions, which are defined in tmsgcb.h
void rpcSendRequest(void *thandle, const SEpSet *pEpSet, SRpcMsg *pMsg, int64_t *rid);
void rpcSendResponse(const SRpcMsg *pMsg);
void rpcRegisterBrokenLinkArg(SRpcMsg *msg);
void rpcReleaseHandle(void *handle, int8_t type); // just release conn to rpc instance, no close sock
int rpcSendRequest(void *thandle, const SEpSet *pEpSet, SRpcMsg *pMsg, int64_t *rid);
int rpcSendResponse(const SRpcMsg *pMsg);
int rpcRegisterBrokenLinkArg(SRpcMsg *msg);
int rpcReleaseHandle(void *handle, int8_t type); // just release conn to rpc instance, no close sock
// These functions will not be called in the child process
void rpcSendRedirectRsp(void *pConn, const SEpSet *pEpSet);
void rpcSendRequestWithCtx(void *thandle, const SEpSet *pEpSet, SRpcMsg *pMsg, int64_t *rid, SRpcCtx *ctx);
int32_t rpcGetConnInfo(void *thandle, SRpcConnInfo *pInfo);
void rpcSendRecv(void *shandle, SEpSet *pEpSet, SRpcMsg *pReq, SRpcMsg *pRsp);
void rpcSetDefaultAddr(void *thandle, const char *ip, const char *fqdn);
void* rpcAllocHandle();
int rpcSendRequestWithCtx(void *thandle, const SEpSet *pEpSet, SRpcMsg *pMsg, int64_t *rid, SRpcCtx *ctx);
int rpcSendRecv(void *shandle, SEpSet *pEpSet, SRpcMsg *pReq, SRpcMsg *pRsp);
int rpcSetDefaultAddr(void *thandle, const char *ip, const char *fqdn);
void *rpcAllocHandle();
#ifdef __cplusplus
}

View File

@ -33,16 +33,16 @@ extern "C" {
#define wTrace(...) { if (wDebugFlag & DEBUG_TRACE) { taosPrintLog("WAL ", DEBUG_TRACE, wDebugFlag, __VA_ARGS__); }}
// clang-format on
#define WAL_PROTO_VER 0
#define WAL_NOSUFFIX_LEN 20
#define WAL_SUFFIX_AT (WAL_NOSUFFIX_LEN + 1)
#define WAL_LOG_SUFFIX "log"
#define WAL_INDEX_SUFFIX "idx"
#define WAL_REFRESH_MS 1000
#define WAL_MAX_SIZE (TSDB_MAX_WAL_SIZE + sizeof(SWalCkHead))
#define WAL_PATH_LEN (TSDB_FILENAME_LEN + 12)
#define WAL_FILE_LEN (WAL_PATH_LEN + 32)
#define WAL_MAGIC 0xFAFBFCFDULL
#define WAL_PROTO_VER 0
#define WAL_NOSUFFIX_LEN 20
#define WAL_SUFFIX_AT (WAL_NOSUFFIX_LEN + 1)
#define WAL_LOG_SUFFIX "log"
#define WAL_INDEX_SUFFIX "idx"
#define WAL_REFRESH_MS 1000
#define WAL_PATH_LEN (TSDB_FILENAME_LEN + 12)
#define WAL_FILE_LEN (WAL_PATH_LEN + 32)
#define WAL_MAGIC 0xFAFBFCFDULL
#define WAL_SCAN_BUF_SIZE (1024 * 1024 * 3)
typedef enum {
TAOS_WAL_WRITE = 1,
@ -64,6 +64,7 @@ typedef struct {
int64_t verInSnapshotting;
int64_t snapshotVer;
int64_t commitVer;
int64_t appliedVer;
int64_t lastVer;
} SWalVer;
@ -172,6 +173,9 @@ int32_t walRollback(SWal *, int64_t ver);
int32_t walBeginSnapshot(SWal *, int64_t ver);
int32_t walEndSnapshot(SWal *);
int32_t walRestoreFromSnapshot(SWal *, int64_t ver);
// for tq
int32_t walApplyVer(SWal *, int64_t ver);
// int32_t walDataCorrupted(SWal*);
// read
@ -186,7 +190,6 @@ void walSetReaderCapacity(SWalReader *pRead, int32_t capacity);
int32_t walFetchHead(SWalReader *pRead, int64_t ver, SWalCkHead *pHead);
int32_t walFetchBody(SWalReader *pRead, SWalCkHead **ppHead);
int32_t walSkipFetchBody(SWalReader *pRead, const SWalCkHead *pHead);
typedef struct {
int64_t refId;
int64_t ver;
@ -206,6 +209,7 @@ int64_t walGetFirstVer(SWal *);
int64_t walGetSnapshotVer(SWal *);
int64_t walGetLastVer(SWal *);
int64_t walGetCommittedVer(SWal *);
int64_t walGetAppliedVer(SWal *);
#ifdef __cplusplus
}

View File

@ -33,7 +33,7 @@ typedef struct {
SDiskSize size;
} SDiskSpace;
bool taosCheckSystemIsSmallEnd();
bool taosCheckSystemIsLittleEnd();
void taosGetSystemInfo();
int32_t taosGetEmail(char *email, int32_t maxLen);
int32_t taosGetOsReleaseName(char *releaseName, int32_t maxLen);

View File

@ -421,7 +421,7 @@ typedef enum ELogicConditionType {
#define TSDB_DEFAULT_STABLES_HASH_SIZE 100
#define TSDB_DEFAULT_CTABLES_HASH_SIZE 20000
#define TSDB_MAX_WAL_SIZE (1024 * 1024 * 3)
#define TSDB_MAX_MSG_SIZE (1024 * 1024 * 10)
#define TSDB_ARB_DUMMY_TIME 4765104000000 // 2121-01-01 00:00:00.000, :P

View File

@ -45,7 +45,6 @@ void taosIp2String(uint32_t ip, char *str);
void taosIpPort2String(uint32_t ip, uint16_t port, char *str);
void *tmemmem(const char *haystack, int hlen, const char *needle, int nlen);
char *strDupUnquo(const char *src);
static FORCE_INLINE void taosEncryptPass(uint8_t *inBuf, size_t inLen, char *target) {
T_MD5_CTX context;

View File

@ -72,7 +72,6 @@ typedef struct SStmtBindInfo {
typedef struct SStmtExecInfo {
int32_t affectedRows;
SRequestObj* pRequest;
SHashObj* pVgHash;
SHashObj* pBlockHash;
bool autoCreateTbl;
} SStmtExecInfo;
@ -88,6 +87,7 @@ typedef struct SStmtSQLInfo {
SArray* nodeList;
SStmtQueryResInfo queryRes;
bool autoCreateTbl;
SHashObj* pVgHash;
} SStmtSQLInfo;
typedef struct STscStmt {

View File

@ -88,7 +88,7 @@ void closeTransporter(SAppInstInfo *pAppInfo) {
static bool clientRpcRfp(int32_t code, tmsg_t msgType) {
if (NEED_REDIRECT_ERROR(code)) {
if (msgType == TDMT_SCH_QUERY || msgType == TDMT_SCH_MERGE_QUERY || msgType == TDMT_SCH_FETCH ||
msgType == TDMT_SCH_MERGE_FETCH) {
msgType == TDMT_SCH_MERGE_FETCH || msgType == TDMT_SCH_QUERY_HEARTBEAT || msgType == TDMT_SCH_DROP_TASK) {
return false;
}
return true;

View File

@ -590,6 +590,11 @@ int32_t buildAsyncExecNodeList(SRequestObj* pRequest, SArray** pNodeList, SArray
return code;
}
void freeVgList(void *list) {
SArray* pList = *(SArray**)list;
taosArrayDestroy(pList);
}
int32_t buildSyncExecNodeList(SRequestObj* pRequest, SArray** pNodeList, SArray* pMnodeList) {
SArray* pDbVgList = NULL;
SArray* pQnodeList = NULL;
@ -641,7 +646,7 @@ int32_t buildSyncExecNodeList(SRequestObj* pRequest, SArray** pNodeList, SArray*
_return:
taosArrayDestroy(pDbVgList);
taosArrayDestroyEx(pDbVgList, freeVgList);
taosArrayDestroy(pQnodeList);
return code;

View File

@ -6,11 +6,16 @@
#include "clientStmt.h"
static int32_t stmtCreateRequest(STscStmt* pStmt) {
int32_t code = 0;
if (pStmt->exec.pRequest == NULL) {
return buildRequest(pStmt->taos->id, pStmt->sql.sqlStr, pStmt->sql.sqlLen, NULL, false, &pStmt->exec.pRequest);
} else {
return TSDB_CODE_SUCCESS;
code = buildRequest(pStmt->taos->id, pStmt->sql.sqlStr, pStmt->sql.sqlLen, NULL, false, &pStmt->exec.pRequest);
if (TSDB_CODE_SUCCESS == code) {
pStmt->exec.pRequest->syncQuery = true;
}
}
return code;
}
int32_t stmtSwitchStatus(STscStmt* pStmt, STMT_STATUS newStatus) {
@ -155,7 +160,7 @@ int32_t stmtUpdateBindInfo(TAOS_STMT* stmt, STableMeta* pTableMeta, void* tags,
int32_t stmtUpdateExecInfo(TAOS_STMT* stmt, SHashObj* pVgHash, SHashObj* pBlockHash, bool autoCreateTbl) {
STscStmt* pStmt = (STscStmt*)stmt;
pStmt->exec.pVgHash = pVgHash;
pStmt->sql.pVgHash = pVgHash;
pStmt->exec.pBlockHash = pBlockHash;
pStmt->exec.autoCreateTbl = autoCreateTbl;
@ -177,7 +182,7 @@ int32_t stmtUpdateInfo(TAOS_STMT* stmt, STableMeta* pTableMeta, void* tags, char
int32_t stmtGetExecInfo(TAOS_STMT* stmt, SHashObj** pVgHash, SHashObj** pBlockHash) {
STscStmt* pStmt = (STscStmt*)stmt;
*pVgHash = pStmt->exec.pVgHash;
*pVgHash = pStmt->sql.pVgHash;
*pBlockHash = pStmt->exec.pBlockHash;
return TSDB_CODE_SUCCESS;
@ -227,7 +232,7 @@ int32_t stmtParseSql(STscStmt* pStmt) {
};
STMT_ERR_RET(stmtCreateRequest(pStmt));
STMT_ERR_RET(parseSql(pStmt->exec.pRequest, false, &pStmt->sql.pQuery, &stmtCb));
pStmt->bInfo.needParse = false;
@ -308,6 +313,8 @@ int32_t stmtCleanSQLInfo(STscStmt* pStmt) {
taosMemoryFree(pStmt->sql.sqlStr);
qDestroyQuery(pStmt->sql.pQuery);
taosArrayDestroy(pStmt->sql.nodeList);
taosHashCleanup(pStmt->sql.pVgHash);
pStmt->sql.pVgHash = NULL;
void* pIter = taosHashIterate(pStmt->sql.pTableCache, NULL);
while (pIter) {
@ -340,7 +347,7 @@ int32_t stmtRebuildDataBlock(STscStmt* pStmt, STableDataBlocks* pDataBlock, STab
STMT_ERR_RET(catalogGetTableHashVgroup(pStmt->pCatalog, &conn, &pStmt->bInfo.sname, &vgInfo));
STMT_ERR_RET(
taosHashPut(pStmt->exec.pVgHash, (const char*)&vgInfo.vgId, sizeof(vgInfo.vgId), (char*)&vgInfo, sizeof(vgInfo)));
taosHashPut(pStmt->sql.pVgHash, (const char*)&vgInfo.vgId, sizeof(vgInfo.vgId), (char*)&vgInfo, sizeof(vgInfo)));
STMT_ERR_RET(qRebuildStmtDataBlock(newBlock, pDataBlock, uid, vgInfo.vgId));
@ -680,6 +687,7 @@ int stmtBindBatch(TAOS_STMT* stmt, TAOS_MULTI_BIND* bind, int32_t colIdx) {
if (pStmt->sql.pQuery->haveResultSet) {
setResSchemaInfo(&pStmt->exec.pRequest->body.resInfo, pStmt->sql.pQuery->pResSchema,
pStmt->sql.pQuery->numOfResCols);
taosMemoryFreeClear(pStmt->sql.pQuery->pResSchema);
setResPrecision(&pStmt->exec.pRequest->body.resInfo, pStmt->sql.pQuery->precision);
}
@ -804,7 +812,7 @@ int stmtExec(TAOS_STMT* stmt) {
if (STMT_TYPE_QUERY == pStmt->sql.type) {
launchQueryImpl(pStmt->exec.pRequest, pStmt->sql.pQuery, true, NULL);
} else {
STMT_ERR_RET(qBuildStmtOutput(pStmt->sql.pQuery, pStmt->exec.pVgHash, pStmt->exec.pBlockHash));
STMT_ERR_RET(qBuildStmtOutput(pStmt->sql.pQuery, pStmt->sql.pVgHash, pStmt->exec.pBlockHash));
launchQueryImpl(pStmt->exec.pRequest, pStmt->sql.pQuery, true, (autoCreateTbl ? (void**)&pRsp : NULL));
}
@ -847,9 +855,10 @@ _return:
int stmtClose(TAOS_STMT* stmt) {
STscStmt* pStmt = (STscStmt*)stmt;
STMT_RET(stmtCleanSQLInfo(pStmt));
stmtCleanSQLInfo(pStmt);
taosMemoryFree(stmt);
return TSDB_CODE_SUCCESS;
}
const char* stmtErrstr(TAOS_STMT* stmt) {

View File

@ -40,26 +40,26 @@ bool tsPrintAuth = false;
// multi process
int32_t tsMultiProcess = 0;
int32_t tsMnodeShmSize = TSDB_MAX_WAL_SIZE * 2 + 1024;
int32_t tsVnodeShmSize = TSDB_MAX_WAL_SIZE * 10 + 1024;
int32_t tsQnodeShmSize = TSDB_MAX_WAL_SIZE * 4 + 1024;
int32_t tsSnodeShmSize = TSDB_MAX_WAL_SIZE * 4 + 1024;
int32_t tsBnodeShmSize = TSDB_MAX_WAL_SIZE * 4 + 1024;
int32_t tsMnodeShmSize = TSDB_MAX_MSG_SIZE * 2 + 1024;
int32_t tsVnodeShmSize = TSDB_MAX_MSG_SIZE * 10 + 1024;
int32_t tsQnodeShmSize = TSDB_MAX_MSG_SIZE * 4 + 1024;
int32_t tsSnodeShmSize = TSDB_MAX_MSG_SIZE * 4 + 1024;
int32_t tsBnodeShmSize = TSDB_MAX_MSG_SIZE * 4 + 1024;
int32_t tsNumOfShmThreads = 1;
// queue & threads
int32_t tsNumOfRpcThreads = 1;
int32_t tsNumOfCommitThreads = 2;
int32_t tsNumOfTaskQueueThreads = 1;
int32_t tsNumOfMnodeQueryThreads = 2;
int32_t tsNumOfMnodeQueryThreads = 4;
int32_t tsNumOfMnodeFetchThreads = 1;
int32_t tsNumOfMnodeReadThreads = 1;
int32_t tsNumOfVnodeQueryThreads = 2;
int32_t tsNumOfVnodeQueryThreads = 4;
int32_t tsNumOfVnodeStreamThreads = 2;
int32_t tsNumOfVnodeFetchThreads = 4;
int32_t tsNumOfVnodeWriteThreads = 2;
int32_t tsNumOfVnodeSyncThreads = 2;
int32_t tsNumOfQnodeQueryThreads = 2;
int32_t tsNumOfQnodeQueryThreads = 4;
int32_t tsNumOfQnodeFetchThreads = 4;
int32_t tsNumOfSnodeSharedThreads = 2;
int32_t tsNumOfSnodeUniqueThreads = 2;
@ -387,11 +387,11 @@ static int32_t taosAddServerCfg(SConfig *pCfg) {
if (cfgAddBool(pCfg, "deadLockKillQuery", tsDeadLockKillQuery, 0) != 0) return -1;
if (cfgAddInt32(pCfg, "multiProcess", tsMultiProcess, 0, 2, 0) != 0) return -1;
if (cfgAddInt32(pCfg, "mnodeShmSize", tsMnodeShmSize, TSDB_MAX_WAL_SIZE * 2 + 1024, INT32_MAX, 0) != 0) return -1;
if (cfgAddInt32(pCfg, "vnodeShmSize", tsVnodeShmSize, TSDB_MAX_WAL_SIZE * 2 + 1024, INT32_MAX, 0) != 0) return -1;
if (cfgAddInt32(pCfg, "qnodeShmSize", tsQnodeShmSize, TSDB_MAX_WAL_SIZE * 2 + 1024, INT32_MAX, 0) != 0) return -1;
if (cfgAddInt32(pCfg, "snodeShmSize", tsSnodeShmSize, TSDB_MAX_WAL_SIZE * 2 + 1024, INT32_MAX, 0) != 0) return -1;
if (cfgAddInt32(pCfg, "bnodeShmSize", tsBnodeShmSize, TSDB_MAX_WAL_SIZE * 2 + 1024, INT32_MAX, 0) != 0) return -1;
if (cfgAddInt32(pCfg, "mnodeShmSize", tsMnodeShmSize, TSDB_MAX_MSG_SIZE * 2 + 1024, INT32_MAX, 0) != 0) return -1;
if (cfgAddInt32(pCfg, "vnodeShmSize", tsVnodeShmSize, TSDB_MAX_MSG_SIZE * 2 + 1024, INT32_MAX, 0) != 0) return -1;
if (cfgAddInt32(pCfg, "qnodeShmSize", tsQnodeShmSize, TSDB_MAX_MSG_SIZE * 2 + 1024, INT32_MAX, 0) != 0) return -1;
if (cfgAddInt32(pCfg, "snodeShmSize", tsSnodeShmSize, TSDB_MAX_MSG_SIZE * 2 + 1024, INT32_MAX, 0) != 0) return -1;
if (cfgAddInt32(pCfg, "bnodeShmSize", tsBnodeShmSize, TSDB_MAX_MSG_SIZE * 2 + 1024, INT32_MAX, 0) != 0) return -1;
if (cfgAddInt32(pCfg, "numOfShmThreads", tsNumOfShmThreads, 1, 1024, 0) != 0) return -1;
tsNumOfRpcThreads = tsNumOfCores / 2;
@ -402,16 +402,16 @@ static int32_t taosAddServerCfg(SConfig *pCfg) {
tsNumOfCommitThreads = TRANGE(tsNumOfCommitThreads, 2, 4);
if (cfgAddInt32(pCfg, "numOfCommitThreads", tsNumOfCommitThreads, 1, 1024, 0) != 0) return -1;
tsNumOfMnodeQueryThreads = tsNumOfCores / 8;
tsNumOfMnodeQueryThreads = TRANGE(tsNumOfMnodeQueryThreads, 1, 4);
tsNumOfMnodeQueryThreads = tsNumOfCores * 2;
tsNumOfMnodeQueryThreads = TRANGE(tsNumOfMnodeQueryThreads, 4, 8);
if (cfgAddInt32(pCfg, "numOfMnodeQueryThreads", tsNumOfMnodeQueryThreads, 1, 1024, 0) != 0) return -1;
tsNumOfMnodeReadThreads = tsNumOfCores / 8;
tsNumOfMnodeReadThreads = TRANGE(tsNumOfMnodeReadThreads, 1, 4);
if (cfgAddInt32(pCfg, "numOfMnodeReadThreads", tsNumOfMnodeReadThreads, 1, 1024, 0) != 0) return -1;
tsNumOfVnodeQueryThreads = tsNumOfCores / 4;
tsNumOfVnodeQueryThreads = TMAX(tsNumOfVnodeQueryThreads, 2);
tsNumOfVnodeQueryThreads = tsNumOfCores * 2;
tsNumOfVnodeQueryThreads = TMAX(tsNumOfVnodeQueryThreads, 4);
if (cfgAddInt32(pCfg, "numOfVnodeQueryThreads", tsNumOfVnodeQueryThreads, 1, 1024, 0) != 0) return -1;
tsNumOfVnodeStreamThreads = tsNumOfCores / 4;
@ -430,8 +430,8 @@ static int32_t taosAddServerCfg(SConfig *pCfg) {
tsNumOfVnodeSyncThreads = TMAX(tsNumOfVnodeSyncThreads, 1);
if (cfgAddInt32(pCfg, "numOfVnodeSyncThreads", tsNumOfVnodeSyncThreads, 1, 1024, 0) != 0) return -1;
tsNumOfQnodeQueryThreads = tsNumOfCores / 2;
tsNumOfQnodeQueryThreads = TMAX(tsNumOfQnodeQueryThreads, 1);
tsNumOfQnodeQueryThreads = tsNumOfCores * 2;
tsNumOfQnodeQueryThreads = TMAX(tsNumOfQnodeQueryThreads, 4);
if (cfgAddInt32(pCfg, "numOfQnodeQueryThreads", tsNumOfQnodeQueryThreads, 1, 1024, 0) != 0) return -1;
tsNumOfQnodeFetchThreads = tsNumOfCores / 2;
@ -447,8 +447,8 @@ static int32_t taosAddServerCfg(SConfig *pCfg) {
if (cfgAddInt32(pCfg, "numOfSnodeUniqueThreads", tsNumOfSnodeUniqueThreads, 1, 1024, 0) != 0) return -1;
tsRpcQueueMemoryAllowed = tsTotalMemoryKB * 1024 * 0.1;
tsRpcQueueMemoryAllowed = TRANGE(tsRpcQueueMemoryAllowed, TSDB_MAX_WAL_SIZE * 10L, TSDB_MAX_WAL_SIZE * 10000L);
if (cfgAddInt64(pCfg, "rpcQueueMemoryAllowed", tsRpcQueueMemoryAllowed, TSDB_MAX_WAL_SIZE * 10L, INT64_MAX, 0) != 0)
tsRpcQueueMemoryAllowed = TRANGE(tsRpcQueueMemoryAllowed, TSDB_MAX_MSG_SIZE * 10L, TSDB_MAX_MSG_SIZE * 10000L);
if (cfgAddInt64(pCfg, "rpcQueueMemoryAllowed", tsRpcQueueMemoryAllowed, TSDB_MAX_MSG_SIZE * 10L, INT64_MAX, 0) != 0)
return -1;
if (cfgAddBool(pCfg, "monitor", tsEnableMonitor, 0) != 0) return -1;
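Two themes run through this configuration hunk set: the shared-memory defaults are rebased from `TSDB_MAX_WAL_SIZE` to `TSDB_MAX_MSG_SIZE`, and the query-thread defaults grow from a fraction of the core count to a multiple of it, clamped into a fixed band. A small, self-contained illustration of the clamp pattern (the macro definitions are simplified stand-ins for the ones in the TDengine headers):

```c
#include <stdio.h>

/* Simplified stand-ins for the real TMAX/TRANGE macros. */
#define TMAX(a, b)        ((a) > (b) ? (a) : (b))
#define TRANGE(v, lo, hi) TMAX((lo), ((v) > (hi) ? (hi) : (v)))

int main(void) {
  int tsNumOfCores = 3;
  /* mnode query threads: 2x cores, but never below 4 or above 8 */
  int threads = TRANGE(tsNumOfCores * 2, 4, 8);
  printf("numOfMnodeQueryThreads = %d\n", threads); /* prints 6 */
  return 0;
}
```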


@ -2933,6 +2933,13 @@ int32_t tSerializeSTableIndexRsp(void *buf, int32_t bufLen, const STableIndexRsp
return tlen;
}
void tFreeSerializeSTableIndexRsp(STableIndexRsp *pRsp) {
if (pRsp->pIndex != NULL) {
taosArrayDestroy(pRsp->pIndex);
pRsp->pIndex = NULL;
}
}
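The new `tFreeSerializeSTableIndexRsp` gives the response a single, null-safe cleanup entry point; the mnode handler hunk further down calls it on every exit path under `_OVER:`. The idiom, sketched with a simplified response type:

```c
#include <stdlib.h>

typedef struct { void *pIndex; } Rsp; /* stand-in for STableIndexRsp */

/* Null-safe and idempotent: calling it twice, or on an empty
 * response, is harmless, which is what every-exit-path cleanup needs. */
static void freeRsp(Rsp *pRsp) {
  if (pRsp->pIndex != NULL) {
    free(pRsp->pIndex);
    pRsp->pIndex = NULL; /* guard against double free */
  }
}
```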
int32_t tDeserializeSTableIndexInfo(SDecoder *pDecoder, STableIndexInfo *pInfo) {
if (tDecodeI8(pDecoder, &pInfo->intervalUnit) < 0) return -1;
if (tDecodeI8(pDecoder, &pInfo->slidingUnit) < 0) return -1;
@ -5342,6 +5349,7 @@ int32_t tEncodeSVAlterTbReq(SEncoder *pEncoder, const SVAlterTbReq *pReq) {
if (tEncodeCStr(pEncoder, pReq->tbName) < 0) return -1;
if (tEncodeI8(pEncoder, pReq->action) < 0) return -1;
if (tEncodeI32(pEncoder, pReq->colId) < 0) return -1;
switch (pReq->action) {
case TSDB_ALTER_TABLE_ADD_COLUMN:
if (tEncodeCStr(pEncoder, pReq->colName) < 0) return -1;
@ -5392,6 +5400,7 @@ int32_t tDecodeSVAlterTbReq(SDecoder *pDecoder, SVAlterTbReq *pReq) {
if (tDecodeCStr(pDecoder, &pReq->tbName) < 0) return -1;
if (tDecodeI8(pDecoder, &pReq->action) < 0) return -1;
if (tDecodeI32(pDecoder, &pReq->colId) < 0) return -1;
switch (pReq->action) {
case TSDB_ALTER_TABLE_ADD_COLUMN:
if (tDecodeCStr(pDecoder, &pReq->colName) < 0) return -1;


@ -190,6 +190,11 @@ int32_t tNameGetDbName(const SName* name, char* dst) {
return 0;
}
const char* tNameGetDbNameP(const SName* name) {
return &name->dbname[0];
}
int32_t tNameGetFullDbName(const SName* name, char* dst) {
assert(name != NULL && dst != NULL);
snprintf(dst, TSDB_DB_FNAME_LEN, "%d.%s", name->acctId, name->dbname);


@ -585,7 +585,7 @@ int32_t tdSTSRowNew(SArray *pArray, STSchema *pTSchema, STSRow **ppRow) {
ASSERT(pTColumn->colId == PRIMARYKEY_TIMESTAMP_COL_ID);
} else {
if (IS_VAR_DATA_TYPE(pTColumn->type)) {
if (pColVal) {
if (pColVal && !pColVal->isNone && !pColVal->isNull) {
varDataLen += (pColVal->value.nData + sizeof(VarDataLenT));
if (maxVarDataLen < (pColVal->value.nData + sizeof(VarDataLenT))) {
maxVarDataLen = pColVal->value.nData + sizeof(VarDataLenT);


@ -700,6 +700,8 @@ int64_t taosTimeAdd(int64_t t, int64_t duration, char unit, int32_t precision) {
numOfMonth *= 12;
}
int64_t fraction = t % TSDB_TICK_PER_SECOND(precision);
struct tm tm;
time_t tt = (time_t)(t / TSDB_TICK_PER_SECOND(precision));
taosLocalTime(&tt, &tm);
@ -707,35 +709,9 @@ int64_t taosTimeAdd(int64_t t, int64_t duration, char unit, int32_t precision) {
tm.tm_year = mon / 12;
tm.tm_mon = mon % 12;
return (int64_t)(taosMktime(&tm) * TSDB_TICK_PER_SECOND(precision));
return (int64_t)(taosMktime(&tm) * TSDB_TICK_PER_SECOND(precision) + fraction);
}
int64_t taosTimeSub(int64_t t, int64_t duration, char unit, int32_t precision) {
if (duration == 0) {
return t;
}
if (unit != 'n' && unit != 'y') {
return t - duration;
}
// The following code handles the y/n time duration
int64_t numOfMonth = duration;
if (unit == 'y') {
numOfMonth *= 12;
}
struct tm tm;
time_t tt = (time_t)(t / TSDB_TICK_PER_SECOND(precision));
taosLocalTime(&tt, &tm);
int32_t mon = tm.tm_year * 12 + tm.tm_mon - (int32_t)numOfMonth;
tm.tm_year = mon / 12;
tm.tm_mon = mon % 12;
return (int64_t)(taosMktime(&tm) * TSDB_TICK_PER_SECOND(precision));
}
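`taosTimeSub` is deleted because it duplicated `taosTimeAdd` almost line for line; subtracting a duration is just adding its negation, for the calendar units 'n'/'y' as well as for fixed ones. Note also that `taosTimeAdd` now carries the sub-second fraction (`t % TSDB_TICK_PER_SECOND(precision)`) through the calendar math instead of silently dropping it. A migration shim for old call sites might look like this (assuming only the declared signature):

```c
#include <stdint.h>

int64_t taosTimeAdd(int64_t t, int64_t duration, char unit, int32_t precision);

/* Subtraction as negated addition: for 'n'/'y' the month count is
 * negated before the calendar adjustment, so both forms agree. */
static inline int64_t taosTimeSubCompat(int64_t t, int64_t duration,
                                        char unit, int32_t precision) {
  return taosTimeAdd(t, -duration, unit, precision);
}
```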
int32_t taosTimeCountInterval(int64_t skey, int64_t ekey, int64_t interval, char unit, int32_t precision) {
if (ekey < skey) {
int64_t tmp = ekey;
@ -844,11 +820,14 @@ int64_t taosTimeTruncate(int64_t t, const SInterval* pInterval, int32_t precisio
} else {
// try to move current window to the left-hande-side, due to the offset effect.
int64_t end = taosTimeAdd(start, pInterval->interval, pInterval->intervalUnit, precision) - 1;
ASSERT(end >= t);
end = taosTimeAdd(end, -pInterval->sliding, pInterval->slidingUnit, precision);
if (end >= t) {
start = taosTimeAdd(start, -pInterval->sliding, pInterval->slidingUnit, precision);
int64_t newEnd = end;
while(newEnd >= t) {
end = newEnd;
newEnd = taosTimeAdd(newEnd, -pInterval->sliding, pInterval->slidingUnit, precision);
}
start = taosTimeAdd(end, -pInterval->interval, pInterval->intervalUnit, precision) + 1;
}
}
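The `taosTimeTruncate` fix replaces a single backward step with a loop: when `sliding` is smaller than `interval`, several overlapping windows can cover `t`, so the code must keep sliding left while the window end still reaches `t`, then rebuild the start from the last end that did. The same search, sketched over plain integers (no calendar units; the helper name is hypothetical):

```c
#include <stdio.h>
#include <stdint.h>

/* Find the start of the left-most window [start, start + interval - 1]
 * still covering t, stepping left by `sliding` from an aligned end;
 * this mirrors the while loop added to taosTimeTruncate. */
static int64_t leftmostWindowStart(int64_t t, int64_t alignedStart,
                                   int64_t interval, int64_t sliding) {
  int64_t end = alignedStart + interval - 1;
  int64_t newEnd = end;
  while (newEnd >= t) { /* keep the last end that still covers t */
    end = newEnd;
    newEnd -= sliding;
  }
  return end - interval + 1;
}

int main(void) {
  /* interval 10, sliding 3: windows ending at 19, 16, 13, 10 all cover
   * t = 10, so the left-most window is [1, 10]. */
  printf("%lld\n", (long long)leftmostWindowStart(10, 10, 10, 3)); /* 1 */
  return 0;
}
```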


@ -158,8 +158,8 @@ static void taosCleanupArgs() {
}
int main(int argc, char const *argv[]) {
if (!taosCheckSystemIsSmallEnd()) {
printf("failed to start since on non-small-end machines\n");
if (!taosCheckSystemIsLittleEnd()) {
printf("failed to start since on non-little-end machines\n");
return -1;
}
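The rename from `taosCheckSystemIsSmallEnd` to `taosCheckSystemIsLittleEnd` settles on the standard term: the server refuses to start on big-endian machines. A runtime probe equivalent in spirit (the real check's implementation is not shown in this diff):

```c
#include <stdint.h>
#include <stdio.h>

/* Write a known 16-bit value and look at its first byte. */
static int isLittleEndian(void) {
  uint16_t probe = 1;
  return *(uint8_t *)&probe == 1;
}

int main(void) {
  if (!isLittleEndian()) {
    printf("failed to start: this machine is not little-endian\n");
    return -1;
  }
  return 0;
}
```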


@ -265,6 +265,10 @@ static int32_t mndProcessMqHbReq(SRpcMsg *pMsg) {
int64_t consumerId = be64toh(pReq->consumerId);
SMqConsumerObj *pConsumer = mndAcquireConsumer(pMnode, consumerId);
if (pConsumer == NULL) {
terrno = TSDB_CODE_MND_CONSUMER_NOT_EXIST;
return -1;
}
atomic_store_32(&pConsumer->hbStatus, 0);


@ -1153,6 +1153,7 @@ _OVER:
mError("failed to get table index %s since %s", indexReq.tbFName, terrstr());
}
tFreeSerializeSTableIndexRsp(&rsp);
return code;
}


@ -44,6 +44,10 @@ void mndSyncCommitMsg(struct SSyncFSM *pFsm, const SRpcMsg *pMsg, SFsmCbMeta cbM
SSyncMgmt *pMgmt = &pMnode->syncMgmt;
SSdbRaw *pRaw = pMsg->pCont;
// delete msg handle
SRpcMsg rpcMsg = {0};
syncGetAndDelRespRpc(pMnode->syncMgmt.sync, cbMeta.seqNum, &rpcMsg.info);
int32_t transId = sdbGetIdFromRaw(pMnode->pSdb, pRaw);
pMgmt->errCode = cbMeta.code;
mDebug("trans:%d, is proposed, saved:%d code:0x%x, apply index:%" PRId64 " term:%" PRIu64 " config:%" PRId64


@ -231,7 +231,7 @@ static int32_t sdbReadFileImp(SSdb *pSdb) {
snprintf(file, sizeof(file), "%s%ssdb.data", pSdb->currDir, TD_DIRSEP);
mDebug("start to read sdb file:%s", file);
SSdbRaw *pRaw = taosMemoryMalloc(WAL_MAX_SIZE + 100);
SSdbRaw *pRaw = taosMemoryMalloc(TSDB_MAX_MSG_SIZE + 100);
if (pRaw == NULL) {
terrno = TSDB_CODE_OUT_OF_MEMORY;
mError("failed read sdb file since %s", terrstr());
@ -556,8 +556,9 @@ int32_t sdbStartRead(SSdb *pSdb, SSdbIter **ppIter, int64_t *index, int64_t *ter
if (term != NULL) *term = commitTerm;
if (config != NULL) *config = commitConfig;
mDebug("sdbiter:%p, is created to read snapshot, commit index:%" PRId64 " term:%" PRId64 " config:%" PRId64 " file:%s",
pIter, commitIndex, commitTerm, commitConfig, pIter->name);
mDebug("sdbiter:%p, is created to read snapshot, commit index:%" PRId64 " term:%" PRId64 " config:%" PRId64
" file:%s",
pIter, commitIndex, commitTerm, commitConfig, pIter->name);
return 0;
}
@ -669,4 +670,4 @@ int32_t sdbDoWrite(SSdb *pSdb, SSdbIter *pIter, void *pBuf, int32_t len) {
pIter->total += writelen;
mDebug("sdbiter:%p, write:%d bytes to snapshot, total:%" PRId64, pIter, writelen, pIter->total);
return 0;
}
}


@ -233,7 +233,6 @@ struct SVnodeCfg {
};
typedef struct {
TSKEY lastKey;
uint64_t uid;
uint64_t groupId;
} STableKeyInfo;


@ -108,6 +108,10 @@ typedef struct {
// exec
STqExecHandle execHandle;
// prevent drop
int64_t ntbUid;
SArray* colIdList; // SArray<int32_t>
} STqHandle;
struct STQ {


@ -142,6 +142,7 @@ void tqClose(STQ*);
int tqPushMsg(STQ*, void* msg, int32_t msgLen, tmsg_t msgType, int64_t ver);
int tqCommit(STQ*);
int32_t tqUpdateTbUidList(STQ* pTq, const SArray* tbUidList, bool isAdd);
int32_t tqCheckColModifiable(STQ* pTq, int32_t colId);
int32_t tqProcessVgChangeReq(STQ* pTq, char* msg, int32_t msgLen);
int32_t tqProcessVgDeleteReq(STQ* pTq, char* msg, int32_t msgLen);
int32_t tqProcessOffsetCommitReq(STQ* pTq, char* msg, int32_t msgLen);


@ -208,6 +208,26 @@ int32_t tqProcessOffsetCommitReq(STQ* pTq, char* msg, int32_t msgLen) {
return 0;
}
int32_t tqCheckColModifiable(STQ* pTq, int32_t colId) {
void* pIter = NULL;
while (1) {
pIter = taosHashIterate(pTq->handles, pIter);
if (pIter == NULL) break;
STqHandle* pExec = (STqHandle*)pIter;
if (pExec->execHandle.subType == TOPIC_SUB_TYPE__COLUMN) {
int32_t sz = taosArrayGetSize(pExec->colIdList);
for (int32_t i = 0; i < sz; i++) {
int32_t forbidColId = *(int32_t*)taosArrayGet(pExec->colIdList, i);
if (forbidColId == colId) {
taosHashCancelIterate(pTq->handles, pIter);
return -1;
}
}
}
}
return 0;
}
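`tqCheckColModifiable` lets the vnode veto schema changes that would break an active column subscription: it scans every topic handle and fails as soon as one subscribes to the column. The early return goes through `taosHashCancelIterate`, since hash iteration in this codebase holds internal iterator state that must be released when bailing out mid-scan. The iterate-and-abort pattern, reduced to a sketch (hash API shapes assumed from the hunk above):

```c
#include <stdint.h>

/* Assumed shapes of the hash iteration API used in tq.c. */
void *taosHashIterate(void *pHashObj, void *pIter);
void  taosHashCancelIterate(void *pHashObj, void *pIter);

/* Scan all handles; veto as soon as one forbids the column. */
static int32_t checkColNotSubscribed(void *pHandles, int32_t colId,
                                     int32_t (*forbids)(void *, int32_t)) {
  void *pIter = NULL;
  while ((pIter = taosHashIterate(pHandles, pIter)) != NULL) {
    if (forbids(pIter, colId)) {
      taosHashCancelIterate(pHandles, pIter); /* do not leak the iterator */
      return -1;
    }
  }
  return 0;
}
```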
static int32_t tqInitDataRsp(SMqDataRsp* pRsp, const SMqPollReq* pReq, int8_t subType) {
pRsp->reqOffset = pReq->reqOffset;
@ -506,7 +526,8 @@ int32_t tqProcessVgChangeReq(STQ* pTq, char* msg, int32_t msgLen) {
.initTqReader = true,
.version = ver,
};
pHandle->execHandle.execCol.task[i] = qCreateQueueExecTaskInfo(pHandle->execHandle.execCol.qmsg, &handle, &pHandle->execHandle.numOfCols);
pHandle->execHandle.execCol.task[i] =
qCreateQueueExecTaskInfo(pHandle->execHandle.execCol.qmsg, &handle, &pHandle->execHandle.numOfCols);
ASSERT(pHandle->execHandle.execCol.task[i]);
void* scanner = NULL;
qExtractStreamScanner(pHandle->execHandle.execCol.task[i], &scanner);
@ -679,9 +700,9 @@ int32_t tqProcessTaskRunReq(STQ* pTq, SRpcMsg* pMsg) {
//
SStreamTaskRunReq* pReq = pMsg->pCont;
int32_t taskId = pReq->taskId;
SStreamTask* pTask = *(SStreamTask**)taosHashGet(pTq->pStreamTasks, &taskId, sizeof(int32_t));
if (pTask) {
streamProcessRunReq(pTask);
SStreamTask** ppTask = (SStreamTask**)taosHashGet(pTq->pStreamTasks, &taskId, sizeof(int32_t));
if (ppTask) {
streamProcessRunReq(*ppTask);
return 0;
} else {
return -1;
@ -696,14 +717,14 @@ int32_t tqProcessTaskDispatchReq(STQ* pTq, SRpcMsg* pMsg) {
SDecoder decoder;
tDecoderInit(&decoder, msgBody, msgLen);
tDecodeStreamDispatchReq(&decoder, &req);
int32_t taskId = req.taskId;
SStreamTask* pTask = *(SStreamTask**)taosHashGet(pTq->pStreamTasks, &taskId, sizeof(int32_t));
if (pTask) {
int32_t taskId = req.taskId;
SStreamTask** ppTask = (SStreamTask**)taosHashGet(pTq->pStreamTasks, &taskId, sizeof(int32_t));
if (ppTask) {
SRpcMsg rsp = {
.info = pMsg->info,
.code = 0,
};
streamProcessDispatchReq(pTask, &req, &rsp);
streamProcessDispatchReq(*ppTask, &req, &rsp);
return 0;
} else {
return -1;
@ -713,9 +734,9 @@ int32_t tqProcessTaskDispatchReq(STQ* pTq, SRpcMsg* pMsg) {
int32_t tqProcessTaskRecoverReq(STQ* pTq, SRpcMsg* pMsg) {
SStreamTaskRecoverReq* pReq = pMsg->pCont;
int32_t taskId = pReq->taskId;
SStreamTask* pTask = *(SStreamTask**)taosHashGet(pTq->pStreamTasks, &taskId, sizeof(int32_t));
if (pTask) {
streamProcessRecoverReq(pTask, pReq, pMsg);
SStreamTask** ppTask = (SStreamTask**)taosHashGet(pTq->pStreamTasks, &taskId, sizeof(int32_t));
if (ppTask) {
streamProcessRecoverReq(*ppTask, pReq, pMsg);
return 0;
} else {
return -1;
@ -725,9 +746,9 @@ int32_t tqProcessTaskRecoverReq(STQ* pTq, SRpcMsg* pMsg) {
int32_t tqProcessTaskDispatchRsp(STQ* pTq, SRpcMsg* pMsg) {
SStreamDispatchRsp* pRsp = POINTER_SHIFT(pMsg->pCont, sizeof(SMsgHead));
int32_t taskId = pRsp->taskId;
SStreamTask* pTask = *(SStreamTask**)taosHashGet(pTq->pStreamTasks, &taskId, sizeof(int32_t));
if (pTask) {
streamProcessDispatchRsp(pTask, pRsp);
SStreamTask** ppTask = (SStreamTask**)taosHashGet(pTq->pStreamTasks, &taskId, sizeof(int32_t));
if (ppTask) {
streamProcessDispatchRsp(*ppTask, pRsp);
return 0;
} else {
return -1;
@ -737,9 +758,9 @@ int32_t tqProcessTaskDispatchRsp(STQ* pTq, SRpcMsg* pMsg) {
int32_t tqProcessTaskRecoverRsp(STQ* pTq, SRpcMsg* pMsg) {
SStreamTaskRecoverRsp* pRsp = pMsg->pCont;
int32_t taskId = pRsp->taskId;
SStreamTask* pTask = *(SStreamTask**)taosHashGet(pTq->pStreamTasks, &taskId, sizeof(int32_t));
if (pTask) {
streamProcessRecoverRsp(pTask, pRsp);
SStreamTask** ppTask = (SStreamTask**)taosHashGet(pTq->pStreamTasks, &taskId, sizeof(int32_t));
if (ppTask) {
streamProcessRecoverRsp(*ppTask, pRsp);
return 0;
} else {
return -1;
@ -749,8 +770,9 @@ int32_t tqProcessTaskRecoverRsp(STQ* pTq, SRpcMsg* pMsg) {
int32_t tqProcessTaskDropReq(STQ* pTq, char* msg, int32_t msgLen) {
SVDropStreamTaskReq* pReq = (SVDropStreamTaskReq*)msg;
SStreamTask* pTask = *(SStreamTask**)taosHashGet(pTq->pStreamTasks, &pReq->taskId, sizeof(int32_t));
if (pTask) {
SStreamTask** ppTask = (SStreamTask**)taosHashGet(pTq->pStreamTasks, &pReq->taskId, sizeof(int32_t));
if (ppTask) {
SStreamTask* pTask = *ppTask;
taosHashRemove(pTq->pStreamTasks, &pReq->taskId, sizeof(int32_t));
atomic_store_8(&pTask->taskStatus, TASK_STATUS__DROPPING);
}
@ -780,16 +802,17 @@ int32_t tqProcessTaskRetrieveReq(STQ* pTq, SRpcMsg* pMsg) {
SDecoder decoder;
tDecoderInit(&decoder, msgBody, msgLen);
tDecodeStreamRetrieveReq(&decoder, &req);
int32_t taskId = req.dstTaskId;
SStreamTask* pTask = *(SStreamTask**)taosHashGet(pTq->pStreamTasks, &taskId, sizeof(int32_t));
if (atomic_load_8(&pTask->taskStatus) != TASK_STATUS__NORMAL) {
return 0;
int32_t taskId = req.dstTaskId;
SStreamTask** ppTask = (SStreamTask**)taosHashGet(pTq->pStreamTasks, &taskId, sizeof(int32_t));
if (ppTask) {
SRpcMsg rsp = {
.info = pMsg->info,
.code = 0,
};
streamProcessRetrieveReq(*ppTask, &req, &rsp);
} else {
return -1;
}
SRpcMsg rsp = {
.info = pMsg->info,
.code = 0,
};
streamProcessRetrieveReq(pTask, &req, &rsp);
return 0;
}
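Every stream-task handler in this file shared the same latent crash: `taosHashGet` returns NULL for an unknown task id, and the old code dereferenced the result (`*(SStreamTask**)taosHashGet(...)`) before any check, as the retrieve handler above shows most clearly. The fix keeps the double pointer, tests it, and only then dereferences. The safe-lookup pattern as a sketch (signatures assumed):

```c
#include <stddef.h>
#include <stdint.h>

typedef struct SStreamTask SStreamTask;

/* Assumed shape: returns a pointer to the stored slot, or NULL. */
void *taosHashGet(void *pHashObj, const void *key, size_t keyLen);

static int32_t dispatchToTask(void *pStreamTasks, int32_t taskId,
                              void (*process)(SStreamTask *)) {
  /* Unsafe: *(SStreamTask **)taosHashGet(...) crashes when the id is
   * missing. Safe: check the slot pointer before dereferencing. */
  SStreamTask **ppTask =
      (SStreamTask **)taosHashGet(pStreamTasks, &taskId, sizeof(int32_t));
  if (ppTask == NULL) return -1;
  process(*ppTask);
  return 0;
}
```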


@ -237,6 +237,8 @@ int32_t tqPushMsgNew(STQ* pTq, void* msg, int32_t msgLen, tmsg_t msgType, int64_
#endif
int tqPushMsg(STQ* pTq, void* msg, int32_t msgLen, tmsg_t msgType, int64_t ver) {
walApplyVer(pTq->pVnode->pWal, ver);
if (msgType == TDMT_VND_SUBMIT) {
if (taosHashGetSize(pTq->pStreamTasks) == 0) return 0;
@ -253,4 +255,3 @@ int tqPushMsg(STQ* pTq, void* msg, int32_t msgLen, tmsg_t msgType, int64_t ver)
return 0;
}


@ -2576,10 +2576,10 @@ void updateSchema(TSDBROW* pRow, uint64_t uid, STsdbReader* pReader) {
int32_t sversion = TSDBROW_SVERSION(pRow);
if (pReader->pSchema == NULL) {
pReader->pSchema = metaGetTbTSchema(pReader->pTsdb->pVnode->pMeta, uid, sversion);
metaGetTbTSchemaEx(pReader->pTsdb->pVnode->pMeta, pReader->suid, uid, sversion, &pReader->pSchema);
} else if (pReader->pSchema->version != sversion) {
taosMemoryFreeClear(pReader->pSchema);
pReader->pSchema = metaGetTbTSchema(pReader->pTsdb->pVnode->pMeta, uid, sversion);
metaGetTbTSchemaEx(pReader->pTsdb->pVnode->pMeta, pReader->suid, uid, sversion, &pReader->pSchema);
}
}


@ -270,7 +270,7 @@ int32_t vnodeGetAllTableList(SVnode *pVnode, uint64_t uid, SArray *list) {
break;
}
STableKeyInfo info = {.lastKey = TSKEY_INITIAL_VAL, uid = id};
STableKeyInfo info = {uid = id};
taosArrayPush(list, &info);
}


@ -878,6 +878,8 @@ _exit:
tdProcessRSmaSubmit(pVnode->pSma, pReq, STREAM_INPUT__DATA_SUBMIT);
}
vDebug("successful submit in vg %d version %ld", pVnode->config.vgId, version);
return 0;
}


@ -460,8 +460,6 @@ typedef struct SCtgOperation {
#define CTG_FLAG_MAKE_STB(_isStb) (((_isStb) == 1) ? CTG_FLAG_STB : ((_isStb) == 0 ? CTG_FLAG_NOT_STB : CTG_FLAG_UNKNOWN_STB))
#define CTG_FLAG_MATCH_STB(_flag, tbType) (CTG_FLAG_IS_UNKNOWN_STB(_flag) || (CTG_FLAG_IS_STB(_flag) && (tbType) == TSDB_SUPER_TABLE) || (CTG_FLAG_IS_NOT_STB(_flag) && (tbType) != TSDB_SUPER_TABLE))
#define CTG_IS_SYS_DBNAME(_dbname) (((*(_dbname) == 'i') && (0 == strcmp(_dbname, TSDB_INFORMATION_SCHEMA_DB))) || ((*(_dbname) == 'p') && (0 == strcmp(_dbname, TSDB_PERFORMANCE_SCHEMA_DB))))
#define CTG_META_SIZE(pMeta) (sizeof(STableMeta) + ((pMeta)->tableInfo.numOfTags + (pMeta)->tableInfo.numOfColumns) * sizeof(SSchema))
#define CTG_TABLE_NOT_EXIST(code) (code == CTG_ERR_CODE_TABLE_NOT_EXIST)


@ -865,7 +865,7 @@ int32_t catalogChkTbMetaVersion(SCatalog* pCtg, SRequestConnInfo *pConn, SArray*
tNameFromString(&name, pTb->tbFName, T_NAME_ACCT | T_NAME_DB | T_NAME_TABLE);
if (CTG_IS_SYS_DBNAME(name.dbname)) {
if (IS_SYS_DBNAME(name.dbname)) {
continue;
}
@ -936,7 +936,7 @@ int32_t catalogGetTableDistVgInfo(SCatalog* pCtg, SRequestConnInfo *pConn, const
CTG_API_LEAVE(TSDB_CODE_CTG_INVALID_INPUT);
}
if (CTG_IS_SYS_DBNAME(pTableName->dbname)) {
if (IS_SYS_DBNAME(pTableName->dbname)) {
ctgError("no valid vgInfo for db, dbname:%s", pTableName->dbname);
CTG_API_LEAVE(TSDB_CODE_CTG_INVALID_INPUT);
}
@ -947,7 +947,7 @@ int32_t catalogGetTableDistVgInfo(SCatalog* pCtg, SRequestConnInfo *pConn, const
int32_t catalogGetTableHashVgroup(SCatalog *pCtg, SRequestConnInfo *pConn, const SName *pTableName, SVgroupInfo *pVgroup) {
CTG_API_ENTER();
if (CTG_IS_SYS_DBNAME(pTableName->dbname)) {
if (IS_SYS_DBNAME(pTableName->dbname)) {
ctgError("no valid vgInfo for db, dbname:%s", pTableName->dbname);
CTG_API_LEAVE(TSDB_CODE_CTG_INVALID_INPUT);
}


@ -132,7 +132,7 @@ void ctgReleaseDBCache(SCatalog *pCtg, SCtgDBCache *dbCache) {
int32_t ctgAcquireDBCacheImpl(SCatalog* pCtg, const char *dbFName, SCtgDBCache **pCache, bool acquire) {
char *p = strchr(dbFName, '.');
if (p && CTG_IS_SYS_DBNAME(p + 1)) {
if (p && IS_SYS_DBNAME(p + 1)) {
dbFName = p + 1;
}
@ -694,7 +694,7 @@ int32_t ctgDropDbCacheEnqueue(SCatalog* pCtg, const char *dbFName, int64_t dbId)
}
char *p = strchr(dbFName, '.');
if (p && CTG_IS_SYS_DBNAME(p + 1)) {
if (p && IS_SYS_DBNAME(p + 1)) {
dbFName = p + 1;
}
@ -727,7 +727,7 @@ int32_t ctgDropDbVgroupEnqueue(SCatalog* pCtg, const char *dbFName, bool syncOp)
}
char *p = strchr(dbFName, '.');
if (p && CTG_IS_SYS_DBNAME(p + 1)) {
if (p && IS_SYS_DBNAME(p + 1)) {
dbFName = p + 1;
}
@ -823,7 +823,7 @@ int32_t ctgUpdateVgroupEnqueue(SCatalog* pCtg, const char *dbFName, int64_t dbId
}
char *p = strchr(dbFName, '.');
if (p && CTG_IS_SYS_DBNAME(p + 1)) {
if (p && IS_SYS_DBNAME(p + 1)) {
dbFName = p + 1;
}
@ -859,7 +859,7 @@ int32_t ctgUpdateTbMetaEnqueue(SCatalog* pCtg, STableMetaOutput *output, bool sy
}
char *p = strchr(output->dbFName, '.');
if (p && CTG_IS_SYS_DBNAME(p + 1)) {
if (p && IS_SYS_DBNAME(p + 1)) {
memmove(output->dbFName, p + 1, strlen(p + 1));
}
@ -2123,7 +2123,7 @@ int32_t ctgStartUpdateThread() {
int32_t ctgGetTbMetaFromCache(SCatalog* pCtg, SRequestConnInfo *pConn, SCtgTbMetaCtx* ctx, STableMeta** pTableMeta) {
if (CTG_IS_SYS_DBNAME(ctx->pName->dbname)) {
if (IS_SYS_DBNAME(ctx->pName->dbname)) {
CTG_FLAG_SET_SYS_DB(ctx->flag);
}
@ -2177,7 +2177,7 @@ _return:
}
int32_t ctgGetTbHashVgroupFromCache(SCatalog *pCtg, const SName *pTableName, SVgroupInfo **pVgroup) {
if (CTG_IS_SYS_DBNAME(pTableName->dbname)) {
if (IS_SYS_DBNAME(pTableName->dbname)) {
ctgError("no valid vgInfo for db, dbname:%s", pTableName->dbname);
CTG_ERR_RET(TSDB_CODE_CTG_INVALID_INPUT);
}


@ -375,6 +375,8 @@ int32_t ctgGetQnodeListFromMnode(SCatalog* pCtg, SRequestConnInfo *pConn, SArray
CTG_ERR_RET(ctgProcessRspMsg(out, reqType, rpcRsp.pCont, rpcRsp.contLen, rpcRsp.code, NULL));
rpcFreeCont(rpcRsp.pCont);
return TSDB_CODE_SUCCESS;
}
@ -408,6 +410,8 @@ int32_t ctgGetDnodeListFromMnode(SCatalog* pCtg, SRequestConnInfo *pConn, SArray
CTG_ERR_RET(ctgProcessRspMsg(out, reqType, rpcRsp.pCont, rpcRsp.contLen, rpcRsp.code, NULL));
rpcFreeCont(rpcRsp.pCont);
return TSDB_CODE_SUCCESS;
}
@ -447,6 +451,8 @@ int32_t ctgGetDBVgInfoFromMnode(SCatalog* pCtg, SRequestConnInfo *pConn, SBuildU
CTG_ERR_RET(ctgProcessRspMsg(out, reqType, rpcRsp.pCont, rpcRsp.contLen, rpcRsp.code, input->db));
rpcFreeCont(rpcRsp.pCont);
return TSDB_CODE_SUCCESS;
}
@ -485,6 +491,8 @@ int32_t ctgGetDBCfgFromMnode(SCatalog* pCtg, SRequestConnInfo *pConn, const char
CTG_ERR_RET(ctgProcessRspMsg(out, reqType, rpcRsp.pCont, rpcRsp.contLen, rpcRsp.code, (char*)dbFName));
rpcFreeCont(rpcRsp.pCont);
return TSDB_CODE_SUCCESS;
}
@ -522,6 +530,8 @@ int32_t ctgGetIndexInfoFromMnode(SCatalog* pCtg, SRequestConnInfo *pConn, const
rpcSendRecv(pConn->pTrans, &pConn->mgmtEps, &rpcMsg, &rpcRsp);
CTG_ERR_RET(ctgProcessRspMsg(out, reqType, rpcRsp.pCont, rpcRsp.contLen, rpcRsp.code, (char*)indexName));
rpcFreeCont(rpcRsp.pCont);
return TSDB_CODE_SUCCESS;
}
@ -563,6 +573,8 @@ int32_t ctgGetTbIndexFromMnode(SCatalog* pCtg, SRequestConnInfo *pConn, SName *n
rpcSendRecv(pConn->pTrans, &pConn->mgmtEps, &rpcMsg, &rpcRsp);
CTG_ERR_RET(ctgProcessRspMsg(out, reqType, rpcRsp.pCont, rpcRsp.contLen, rpcRsp.code, (char*)tbFName));
rpcFreeCont(rpcRsp.pCont);
return TSDB_CODE_SUCCESS;
}
@ -602,6 +614,8 @@ int32_t ctgGetUdfInfoFromMnode(SCatalog* pCtg, SRequestConnInfo *pConn, const ch
CTG_ERR_RET(ctgProcessRspMsg(out, reqType, rpcRsp.pCont, rpcRsp.contLen, rpcRsp.code, (char*)funcName));
rpcFreeCont(rpcRsp.pCont);
return TSDB_CODE_SUCCESS;
}
@ -639,6 +653,8 @@ int32_t ctgGetUserDbAuthFromMnode(SCatalog* pCtg, SRequestConnInfo *pConn, const
rpcSendRecv(pConn->pTrans, &pConn->mgmtEps, &rpcMsg, &rpcRsp);
CTG_ERR_RET(ctgProcessRspMsg(out, reqType, rpcRsp.pCont, rpcRsp.contLen, rpcRsp.code, (char*)user));
rpcFreeCont(rpcRsp.pCont);
return TSDB_CODE_SUCCESS;
}
@ -683,6 +699,8 @@ int32_t ctgGetTbMetaFromMnodeImpl(SCatalog* pCtg, SRequestConnInfo *pConn, char
CTG_ERR_RET(ctgProcessRspMsg(out, reqType, rpcRsp.pCont, rpcRsp.contLen, rpcRsp.code, tbFName));
rpcFreeCont(rpcRsp.pCont);
return TSDB_CODE_SUCCESS;
}
@ -740,6 +758,8 @@ int32_t ctgGetTbMetaFromVnode(SCatalog* pCtg, SRequestConnInfo *pConn, const SNa
CTG_ERR_RET(ctgProcessRspMsg(out, reqType, rpcRsp.pCont, rpcRsp.contLen, rpcRsp.code, tbFName));
rpcFreeCont(rpcRsp.pCont);
return TSDB_CODE_SUCCESS;
}
@ -784,6 +804,8 @@ int32_t ctgGetTableCfgFromVnode(SCatalog* pCtg, SRequestConnInfo *pConn, const S
rpcSendRecv(pConn->pTrans, &vgroupInfo->epSet, &rpcMsg, &rpcRsp);
CTG_ERR_RET(ctgProcessRspMsg(out, reqType, rpcRsp.pCont, rpcRsp.contLen, rpcRsp.code, (char*)tbFName));
rpcFreeCont(rpcRsp.pCont);
return TSDB_CODE_SUCCESS;
}
@ -824,6 +846,8 @@ int32_t ctgGetTableCfgFromMnode(SCatalog* pCtg, SRequestConnInfo *pConn, const S
rpcSendRecv(pConn->pTrans, &pConn->mgmtEps, &rpcMsg, &rpcRsp);
CTG_ERR_RET(ctgProcessRspMsg(out, reqType, rpcRsp.pCont, rpcRsp.contLen, rpcRsp.code, (char*)tbFName));
rpcFreeCont(rpcRsp.pCont);
return TSDB_CODE_SUCCESS;
}
@ -858,6 +882,8 @@ int32_t ctgGetSvrVerFromMnode(SCatalog* pCtg, SRequestConnInfo *pConn, char **ou
rpcSendRecv(pConn->pTrans, &pConn->mgmtEps, &rpcMsg, &rpcRsp);
CTG_ERR_RET(ctgProcessRspMsg(out, reqType, rpcRsp.pCont, rpcRsp.contLen, rpcRsp.code, NULL));
rpcFreeCont(rpcRsp.pCont);
return TSDB_CODE_SUCCESS;
}


@ -401,8 +401,6 @@ int32_t qExplainResNodeToRowsImpl(SExplainResNode *pResNode, SExplainCtx *ctx, i
QRY_ERR_RET(qExplainBufAppendExecInfo(pResNode->pExecInfo, tbuf, &tlen));
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
}
EXPLAIN_ROW_APPEND(EXPLAIN_COLUMNS_FORMAT, pTagScanNode->pScanCols->length);
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
if (pTagScanNode->pScanPseudoCols) {
EXPLAIN_ROW_APPEND(EXPLAIN_PSEUDO_COLUMNS_FORMAT, pTagScanNode->pScanPseudoCols->length);
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);


@ -744,8 +744,8 @@ typedef struct SSortOperatorInfo {
int64_t startTs; // sort start time
uint64_t sortElapsed; // sort elapsed time, time to flush to disk not included.
SNode* pCondition;
SLimitInfo limitInfo;
SNode* pCondition;
} SSortOperatorInfo;
typedef struct STagFilterOperatorInfo {
@ -785,7 +785,7 @@ int32_t initExprSupp(SExprSupp* pSup, SExprInfo* pExprInfo, int32_t numOfExpr);
void cleanupExprSupp(SExprSupp* pSup);
int32_t initAggInfo(SExprSupp *pSup, SAggSupporter* pAggSup, SExprInfo* pExprInfo, int32_t numOfCols, size_t keyBufSize,
const char* pkey);
void initResultSizeInfo(SOperatorInfo* pOperator, int32_t numOfRows);
void initResultSizeInfo(SResultInfo * pResultInfo, int32_t numOfRows);
void doBuildResultDatablock(SOperatorInfo* pOperator, SOptrBasicInfo* pbInfo, SGroupResInfo* pGroupResInfo, SDiskbasedBuf* pBuf);
int32_t handleLimitOffset(SOperatorInfo *pOperator, SLimitInfo* pLimitInfo, SSDataBlock* pBlock, bool holdDataInBuf);
bool hasLimitOffsetInfo(SLimitInfo* pLimitInfo);
@ -797,7 +797,7 @@ void doApplyFunctions(SExecTaskInfo* taskInfo, SqlFunctionCtx* pCtx, STimeWin
int32_t extractDataBlockFromFetchRsp(SSDataBlock* pRes, SLoadRemoteDataInfo* pLoadInfo, int32_t numOfRows, char* pData,
int32_t compLen, int32_t numOfOutput, int64_t startTs, uint64_t* total,
SArray* pColList);
void getAlignQueryTimeWindow(SInterval* pInterval, int32_t precision, int64_t key, STimeWindow* win);
STimeWindow getAlignQueryTimeWindow(SInterval* pInterval, int32_t precision, int64_t key);
STimeWindow getFirstQualifiedTimeWindow(int64_t ts, STimeWindow* pWindow, SInterval* pInterval, int32_t order);
int32_t getTableScanInfo(SOperatorInfo* pOperator, int32_t *order, int32_t* scanFlag);


@ -50,7 +50,7 @@ SOperatorInfo* createLastrowScanOperator(SLastRowScanPhysiNode* pScanNode, SRead
STableListInfo* pTableList = &pTaskInfo->tableqinfoList;
initResultSizeInfo(pOperator, 1024);
initResultSizeInfo(&pOperator->resultInfo, 1024);
blockDataEnsureCapacity(pInfo->pRes, pOperator->resultInfo.capacity);
pInfo->pUidList = taosArrayInit(4, sizeof(int64_t));


@ -13,7 +13,6 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include "ttime.h"
#include "function.h"
#include "functionMgt.h"
#include "index.h"
@ -21,6 +20,7 @@
#include "tdatablock.h"
#include "thash.h"
#include "tmsg.h"
#include "ttime.h"
#include "executil.h"
#include "executorimpl.h"
@ -72,7 +72,7 @@ size_t getResultRowSize(SqlFunctionCtx* pCtx, int32_t numOfOutput) {
void cleanupGroupResInfo(SGroupResInfo* pGroupResInfo) {
assert(pGroupResInfo != NULL);
for(int32_t i = 0; i < taosArrayGetSize(pGroupResInfo->pRows); ++i) {
for (int32_t i = 0; i < taosArrayGetSize(pGroupResInfo->pRows); ++i) {
SResKeyPos* pRes = taosArrayGetP(pGroupResInfo->pRows, i);
taosMemoryFree(pRes);
}
@ -266,17 +266,24 @@ EDealRes doTranslateTagExpr(SNode** pNode, void* pContext) {
}
int32_t isTableOk(STableKeyInfo* info, SNode* pTagCond, void* metaHandle, bool* pQualified) {
int32_t code = TSDB_CODE_SUCCESS;
SMetaReader mr = {0};
metaReaderInit(&mr, metaHandle, 0);
metaGetTableEntryByUid(&mr, info->uid);
code = metaGetTableEntryByUid(&mr, info->uid);
if (TSDB_CODE_SUCCESS != code) {
metaReaderClear(&mr);
return terrno;
}
SNode* pTagCondTmp = nodesCloneNode(pTagCond);
nodesRewriteExprPostOrder(&pTagCondTmp, doTranslateTagExpr, &mr);
metaReaderClear(&mr);
SNode* pNew = NULL;
int32_t code = scalarCalculateConstants(pTagCondTmp, &pNew);
SNode* pNew = NULL;
code = scalarCalculateConstants(pTagCondTmp, &pNew);
if (TSDB_CODE_SUCCESS != code) {
terrno = code;
nodesDestroyNode(pTagCondTmp);
@ -295,7 +302,8 @@ int32_t isTableOk(STableKeyInfo* info, SNode* pTagCond, void* metaHandle, bool*
return TSDB_CODE_SUCCESS;
}
int32_t getTableList(void* metaHandle, void* pVnode, SScanPhysiNode* pScanNode, SNode* pTagCond, SNode* pTagIndexCond, STableListInfo* pListInfo) {
int32_t getTableList(void* metaHandle, void* pVnode, SScanPhysiNode* pScanNode, SNode* pTagCond, SNode* pTagIndexCond,
STableListInfo* pListInfo) {
int32_t code = TSDB_CODE_SUCCESS;
pListInfo->pTableList = taosArrayInit(8, sizeof(STableKeyInfo));
@ -317,14 +325,14 @@ int32_t getTableList(void* metaHandle, void* pVnode, SScanPhysiNode* pScanNode,
code = doFilterTag(pTagIndexCond, &metaArg, res, &status);
if (code != 0 || status == SFLT_NOT_INDEX) {
qError("failed to get tableIds from index, reason:%s, suid:%" PRIu64, tstrerror(code), tableUid);
// code = TSDB_CODE_INDEX_REBUILDING;
// code = TSDB_CODE_INDEX_REBUILDING;
code = vnodeGetAllTableList(pVnode, tableUid, pListInfo->pTableList);
} else {
qDebug("success to get tableIds, size:%d, suid:%" PRIu64, (int)taosArrayGetSize(res), tableUid);
}
for (int i = 0; i < taosArrayGetSize(res); i++) {
STableKeyInfo info = {.lastKey = TSKEY_INITIAL_VAL, .uid = *(uint64_t*)taosArrayGet(res, i), .groupId = 0};
STableKeyInfo info = {.uid = *(uint64_t*)taosArrayGet(res, i), .groupId = 0};
taosArrayPush(pListInfo->pTableList, &info);
}
taosArrayDestroy(res);
@ -338,7 +346,7 @@ int32_t getTableList(void* metaHandle, void* pVnode, SScanPhysiNode* pScanNode,
return code;
}
} else { // Create one table group.
STableKeyInfo info = {.lastKey = 0, .uid = tableUid, .groupId = 0};
STableKeyInfo info = {.uid = tableUid, .groupId = 0};
taosArrayPush(pListInfo->pTableList, &info);
}
@ -610,8 +618,7 @@ static int32_t setSelectValueColumnInfo(SqlFunctionCtx* pCtx, int32_t numOfOutpu
for (int32_t i = 0; i < numOfOutput; ++i) {
const char* pName = pCtx[i].pExpr->pExpr->_function.functionName;
if ((strcmp(pName, "_select_value") == 0) ||
(strcmp(pName, "_group_key") == 0)) {
if ((strcmp(pName, "_select_value") == 0) || (strcmp(pName, "_group_key") == 0)) {
pValCtx[num++] = &pCtx[i];
} else if (fmIsSelectFunc(pCtx[i].functionId)) {
p = &pCtx[i];
@ -747,11 +754,11 @@ SInterval extractIntervalInfo(const STableScanPhysiNode* pTableScanNode) {
SColumn extractColumnFromColumnNode(SColumnNode* pColNode) {
SColumn c = {0};
c.slotId = pColNode->slotId;
c.colId = pColNode->colId;
c.type = pColNode->node.resType.type;
c.bytes = pColNode->node.resType.bytes;
c.scale = pColNode->node.resType.scale;
c.slotId = pColNode->slotId;
c.colId = pColNode->colId;
c.type = pColNode->node.resType.type;
c.bytes = pColNode->node.resType.bytes;
c.scale = pColNode->node.resType.scale;
c.precision = pColNode->node.resType.precision;
return c;
}
@ -768,10 +775,10 @@ int32_t initQueryTableDataCond(SQueryTableDataCond* pCond, const STableScanPhysi
// pCond->twindow = pTableScanNode->scanRange;
// TODO: get it from stable scan node
pCond->twindows = pTableScanNode->scanRange;
pCond->suid = pTableScanNode->scan.suid;
pCond->type = BLOCK_LOAD_OFFSET_ORDER;
pCond->suid = pTableScanNode->scan.suid;
pCond->type = BLOCK_LOAD_OFFSET_ORDER;
pCond->startVersion = -1;
pCond->endVersion = -1;
pCond->endVersion = -1;
// pCond->type = pTableScanNode->scanFlag;
int32_t j = 0;
@ -824,10 +831,10 @@ int32_t convertFillType(int32_t mode) {
static void getInitialStartTimeWindow(SInterval* pInterval, TSKEY ts, STimeWindow* w, bool ascQuery) {
if (ascQuery) {
getAlignQueryTimeWindow(pInterval, pInterval->precision, ts, w);
*w = getAlignQueryTimeWindow(pInterval, pInterval->precision, ts);
} else {
// the start position of the first time window in the endpoint that spreads beyond the queried last timestamp
getAlignQueryTimeWindow(pInterval, pInterval->precision, ts, w);
*w = getAlignQueryTimeWindow(pInterval, pInterval->precision, ts);
int64_t key = w->skey;
while (key < ts) { // moving towards end
@ -850,11 +857,11 @@ static STimeWindow doCalculateTimeWindow(int64_t ts, SInterval* pInterval) {
}
STimeWindow getFirstQualifiedTimeWindow(int64_t ts, STimeWindow* pWindow, SInterval* pInterval, int32_t order) {
int32_t factor = (order == TSDB_ORDER_ASC)? -1:1;
int32_t factor = (order == TSDB_ORDER_ASC) ? -1 : 1;
STimeWindow win = *pWindow;
STimeWindow save = win;
while(win.skey <= ts && win.ekey >= ts) {
while (win.skey <= ts && win.ekey >= ts) {
save = win;
win.skey = taosTimeAdd(win.skey, factor * pInterval->sliding, pInterval->slidingUnit, pInterval->precision);
win.ekey = taosTimeAdd(win.ekey, factor * pInterval->sliding, pInterval->slidingUnit, pInterval->precision);
@ -894,7 +901,6 @@ bool hasLimitOffsetInfo(SLimitInfo* pLimitInfo) {
pLimitInfo->slimit.offset != -1);
}
static int64_t getLimit(const SNode* pLimit) { return NULL == pLimit ? -1 : ((SLimitNode*)pLimit)->limit; }
static int64_t getOffset(const SNode* pLimit) { return NULL == pLimit ? -1 : ((SLimitNode*)pLimit)->offset; }
@ -903,7 +909,7 @@ void initLimitInfo(const SNode* pLimit, const SNode* pSLimit, SLimitInfo* pLimit
SLimit slimit = {.limit = getLimit(pSLimit), .offset = getOffset(pSLimit)};
pLimitInfo->limit = limit;
pLimitInfo->slimit= slimit;
pLimitInfo->slimit = slimit;
pLimitInfo->remainOffset = limit.offset;
pLimitInfo->remainGroupOffset = slimit.offset;
}
}


@ -191,7 +191,7 @@ static SArray* filterQualifiedChildTables(const SStreamScanInfo* pScanInfo, cons
SMetaReader mr = {0};
metaReaderInit(&mr, pScanInfo->readHandle.meta, 0);
for (int32_t i = 0; i < taosArrayGetSize(tableIdList); ++i) {
int64_t* id = (int64_t*)taosArrayGet(tableIdList, i);
uint64_t* id = (uint64_t*)taosArrayGet(tableIdList, i);
int32_t code = metaGetTableEntryByUid(&mr, *id);
if (code != TSDB_CODE_SUCCESS) {
@ -206,7 +206,7 @@ static SArray* filterQualifiedChildTables(const SStreamScanInfo* pScanInfo, cons
if (pScanInfo->pTagCond != NULL) {
bool qualified = false;
STableKeyInfo info = {.groupId = 0, .uid = mr.me.uid, .lastKey = 0};
STableKeyInfo info = {.groupId = 0, .uid = mr.me.uid};
code = isTableOk(&info, pScanInfo->pTagCond, pScanInfo->readHandle.meta, &qualified);
if (code != TSDB_CODE_SUCCESS) {
qError("failed to filter new table, uid:0x%" PRIx64 ", %s", info.uid, idstr);
@ -218,9 +218,7 @@ static SArray* filterQualifiedChildTables(const SStreamScanInfo* pScanInfo, cons
}
}
/*pScanInfo->pStreamScanOp->pTaskInfo->tableqinfoList.*/
// handle multiple partition
taosArrayPush(qa, id);
}
@ -244,6 +242,19 @@ int32_t qUpdateQualifiedTableId(qTaskInfo_t tinfo, const SArray* tableIdList, bo
qDebug(" %d qualified child tables added into stream scanner", (int32_t)taosArrayGetSize(qa));
code = tqReaderAddTbUidList(pScanInfo->tqReader, qa);
if (code != TSDB_CODE_SUCCESS) {
return code;
}
// add to qTaskInfo
// todo refactor STableList
for(int32_t i = 0; i < taosArrayGetSize(qa); ++i) {
uint64_t* uid = taosArrayGet(qa, i);
STableKeyInfo keyInfo = {.uid = *uid, .groupId = 0};
taosArrayPush(pTaskInfo->tableqinfoList.pTableList, &keyInfo);
}
taosArrayDestroy(qa);
} else { // remove the table id in current list
qDebug(" %d remove child tables from the stream scanner", (int32_t)taosArrayGetSize(tableIdList));


@ -834,18 +834,20 @@ bool isTaskKilled(SExecTaskInfo* pTaskInfo) {
void setTaskKilled(SExecTaskInfo* pTaskInfo) { pTaskInfo->code = TSDB_CODE_TSC_QUERY_CANCELLED; }
/////////////////////////////////////////////////////////////////////////////////////////////
// todo refactor : return window
void getAlignQueryTimeWindow(SInterval* pInterval, int32_t precision, int64_t key, STimeWindow* win) {
win->skey = taosTimeTruncate(key, pInterval, precision);
STimeWindow getAlignQueryTimeWindow(SInterval* pInterval, int32_t precision, int64_t key) {
STimeWindow win = {0};
win.skey = taosTimeTruncate(key, pInterval, precision);
/*
* if the realSkey > INT64_MAX - pInterval->interval, the query duration between
* realSkey and realEkey must be less than one interval. Therefore, no need to adjust the query ranges.
*/
win->ekey = taosTimeAdd(win->skey, pInterval->interval, pInterval->intervalUnit, precision) - 1;
if (win->ekey < win->skey) {
win->ekey = INT64_MAX;
win.ekey = taosTimeAdd(win.skey, pInterval->interval, pInterval->intervalUnit, precision) - 1;
if (win.ekey < win.skey) {
win.ekey = INT64_MAX;
}
return win;
}
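Returning the window by value removes the half-initialized out-parameter the old `getAlignQueryTimeWindow` invited and lets call sites initialize in one expression, as the updated callers in the scan and fill operators below show. A self-contained sketch of the shape, with the calendar-aware truncation simplified to integer arithmetic:

```c
#include <stdint.h>

typedef struct { int64_t skey, ekey; } TimeWindow; /* stand-in */

/* Return-by-value keeps the window fully initialized at the call site. */
static TimeWindow alignWindow(int64_t key, int64_t interval) {
  TimeWindow win = {0};
  win.skey = key - (key % interval); /* simplified taosTimeTruncate */
  win.ekey = win.skey + interval - 1;
  if (win.ekey < win.skey) win.ekey = INT64_MAX; /* overflow guard */
  return win;
}
```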
#if 0
@ -3349,7 +3351,11 @@ static SSDataBlock* doProjectOperation(SOperatorInfo* pOperator) {
// filter shall be applied after apply functions and limit/offset on the result
doFilter(pProjectInfo->pFilterNode, pInfo->pRes);
if (status == PROJECT_RETRIEVE_CONTINUE) {
if (pTaskInfo->execModel == OPTR_EXEC_MODEL_STREAM) {
break;
}
if (status == PROJECT_RETRIEVE_CONTINUE || pInfo->pRes->info.rows == 0) {
continue;
} else if (status == PROJECT_RETRIEVE_DONE) {
break;
@ -3603,13 +3609,13 @@ int32_t initAggInfo(SExprSupp* pSup, SAggSupporter* pAggSup, SExprInfo* pExprInf
return TSDB_CODE_SUCCESS;
}
void initResultSizeInfo(SOperatorInfo* pOperator, int32_t numOfRows) {
void initResultSizeInfo(SResultInfo * pResultInfo, int32_t numOfRows) {
ASSERT(numOfRows != 0);
pOperator->resultInfo.capacity = numOfRows;
pOperator->resultInfo.threshold = numOfRows * 0.75;
pResultInfo->capacity = numOfRows;
pResultInfo->threshold = numOfRows * 0.75;
if (pOperator->resultInfo.threshold == 0) {
pOperator->resultInfo.threshold = numOfRows;
if (pResultInfo->threshold == 0) {
pResultInfo->threshold = numOfRows;
}
}
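Narrowing `initResultSizeInfo` to take `SResultInfo*` instead of the whole `SOperatorInfo*` makes it usable before an operator is fully wired up and keeps its dependency surface honest; the many call-site hunks that follow are mechanical (`pOperator` becomes `&pOperator->resultInfo`). The narrowed helper with its 75% flush watermark, sketched with a stand-in type:

```c
#include <assert.h>
#include <stdint.h>

typedef struct { int32_t capacity, threshold; } ResultInfo; /* stand-in */

static void initResultSize(ResultInfo *pResultInfo, int32_t numOfRows) {
  assert(numOfRows != 0);
  pResultInfo->capacity  = numOfRows;
  pResultInfo->threshold = (int32_t)(numOfRows * 0.75); /* flush watermark */
  if (pResultInfo->threshold == 0) pResultInfo->threshold = numOfRows;
}
```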
@ -3672,7 +3678,7 @@ SOperatorInfo* createAggregateOperatorInfo(SOperatorInfo* downstream, SExprInfo*
int32_t numOfRows = 1024;
size_t keyBufSize = sizeof(int64_t) + sizeof(int64_t) + POINTER_BYTES;
initResultSizeInfo(pOperator, numOfRows);
initResultSizeInfo(&pOperator->resultInfo, numOfRows);
int32_t code = initAggInfo(&pOperator->exprSupp, &pInfo->aggSup, pExprInfo, numOfCols, keyBufSize, pTaskInfo->id.str);
if (code != TSDB_CODE_SUCCESS) {
goto _error;
@ -3827,7 +3833,7 @@ SOperatorInfo* createProjectOperatorInfo(SOperatorInfo* downstream, SProjectPhys
if (numOfRows * pResBlock->info.rowSize > TWOMB) {
numOfRows = TWOMB / pResBlock->info.rowSize;
}
initResultSizeInfo(pOperator, numOfRows);
initResultSizeInfo(&pOperator->resultInfo, numOfRows);
initAggInfo(&pOperator->exprSupp, &pInfo->aggSup, pExprInfo, numOfCols, keyBufSize, pTaskInfo->id.str);
initBasicInfo(&pInfo->binfo, pResBlock);
@ -3955,7 +3961,7 @@ static SSDataBlock* doApplyIndefinitFunction(SOperatorInfo* pOperator) {
doFilter(pIndefInfo->pCondition, pInfo->pRes);
size_t rows = pInfo->pRes->info.rows;
if (rows >= 0) {
if (rows > 0 || pOperator->status == OP_EXEC_DONE) {
break;
}
}
@ -4005,7 +4011,7 @@ SOperatorInfo* createIndefinitOutputOperatorInfo(SOperatorInfo* downstream, SPhy
numOfRows = TWOMB / pResBlock->info.rowSize;
}
initResultSizeInfo(pOperator, numOfRows);
initResultSizeInfo(&pOperator->resultInfo, numOfRows);
initAggInfo(pSup, &pInfo->aggSup, pExprInfo, numOfExpr, keyBufSize, pTaskInfo->id.str);
initBasicInfo(&pInfo->binfo, pResBlock);
@ -4044,8 +4050,7 @@ static int32_t initFillInfo(SFillOperatorInfo* pInfo, SExprInfo* pExpr, int32_t
STimeWindow win, int32_t capacity, const char* id, SInterval* pInterval, int32_t fillType) {
SFillColInfo* pColInfo = createFillColInfo(pExpr, numOfCols, pValNode);
STimeWindow w = TSWINDOW_INITIALIZER;
getAlignQueryTimeWindow(pInterval, pInterval->precision, win.skey, &w);
STimeWindow w = getAlignQueryTimeWindow(pInterval, pInterval->precision, win.skey);
w = getFirstQualifiedTimeWindow(win.skey, &w, pInterval, TSDB_ORDER_ASC);
int32_t order = TSDB_ORDER_ASC;
@ -4082,7 +4087,7 @@ SOperatorInfo* createFillOperatorInfo(SOperatorInfo* downstream, SFillPhysiNode*
int32_t type = convertFillType(pPhyFillNode->mode);
SResultInfo* pResultInfo = &pOperator->resultInfo;
initResultSizeInfo(pOperator, 4096);
initResultSizeInfo(&pOperator->resultInfo, 4096);
pInfo->primaryTsCol = ((SColumnNode*)pPhyFillNode->pWStartTs)->slotId;
int32_t numOfOutputCols = 0;
@ -4286,7 +4291,7 @@ int32_t generateGroupIdMap(STableListInfo* pTableListInfo, SReadHandle* pHandle,
REPLACE_NODE(pNew);
} else {
taosMemoryFree(keyBuf);
nodesClearList(groupNew);
nodesDestroyList(groupNew);
metaReaderClear(&mr);
return code;
}
@ -4304,7 +4309,7 @@ int32_t generateGroupIdMap(STableListInfo* pTableListInfo, SReadHandle* pHandle,
if (tTagIsJson(data)) {
terrno = TSDB_CODE_QRY_JSON_IN_GROUP_ERROR;
taosMemoryFree(keyBuf);
nodesClearList(groupNew);
nodesDestroyList(groupNew);
metaReaderClear(&mr);
return terrno;
}
@ -4327,7 +4332,7 @@ int32_t generateGroupIdMap(STableListInfo* pTableListInfo, SReadHandle* pHandle,
info->groupId = groupId;
groupNum++;
nodesClearList(groupNew);
nodesDestroyList(groupNew);
metaReaderClear(&mr);
}
taosMemoryFree(keyBuf);
@ -4456,7 +4461,7 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo
return NULL;
}
} else { // Create one table group.
STableKeyInfo info = {.lastKey = 0, .uid = pBlockNode->uid, .groupId = 0};
STableKeyInfo info = {.uid = pBlockNode->uid, .groupId = 0};
taosArrayPush(pTableListInfo->pTableList, &info);
}


@ -406,7 +406,7 @@ SOperatorInfo* createGroupOperatorInfo(SOperatorInfo* downstream, SExprInfo* pEx
goto _error;
}
initResultSizeInfo(pOperator, 4096);
initResultSizeInfo(&pOperator->resultInfo, 4096);
initAggInfo(&pOperator->exprSupp, &pInfo->aggSup, pExprInfo, numOfCols, pInfo->groupKeyLen, pTaskInfo->id.str);
initBasicInfo(&pInfo->binfo, pResultBlock);
initResultRowInfo(&pInfo->binfo.resultRowInfo);


@ -41,7 +41,7 @@ SOperatorInfo* createMergeJoinOperatorInfo(SOperatorInfo** pDownstream, int32_t
int32_t numOfCols = 0;
SExprInfo* pExprInfo = createExprInfo(pJoinNode->pTargets, NULL, &numOfCols);
initResultSizeInfo(pOperator, 4096);
initResultSizeInfo(&pOperator->resultInfo, 4096);
pInfo->pRes = pResBlock;
pOperator->name = "MergeJoinOperator";


@ -125,7 +125,7 @@ static bool overlapWithTimeWindow(SInterval* pInterval, SDataBlockInfo* pBlockIn
}
if (order == TSDB_ORDER_ASC) {
getAlignQueryTimeWindow(pInterval, pInterval->precision, pBlockInfo->window.skey, &w);
w = getAlignQueryTimeWindow(pInterval, pInterval->precision, pBlockInfo->window.skey);
assert(w.ekey >= pBlockInfo->window.skey);
if (w.ekey < pBlockInfo->window.ekey) {
@ -144,7 +144,7 @@ static bool overlapWithTimeWindow(SInterval* pInterval, SDataBlockInfo* pBlockIn
}
}
} else {
getAlignQueryTimeWindow(pInterval, pInterval->precision, pBlockInfo->window.ekey, &w);
w = getAlignQueryTimeWindow(pInterval, pInterval->precision, pBlockInfo->window.ekey);
assert(w.skey <= pBlockInfo->window.ekey);
if (w.skey > pBlockInfo->window.skey) {
@ -359,6 +359,7 @@ void setTbNameColData(void* pMeta, const SSDataBlock* pBlock, SColumnInfoData* p
SScalarParam param = {.columnData = pColInfoData};
fpSet.process(&srcParam, 1, &param);
colDataDestroy(&infoData);
}
static SSDataBlock* doTableScanImpl(SOperatorInfo* pOperator) {
@ -1519,6 +1520,7 @@ static void destroyStreamScanOperatorInfo(void* param, int32_t numOfOutput) {
blockDataDestroy(pStreamScan->pPullDataRes);
blockDataDestroy(pStreamScan->pDeleteDataRes);
taosArrayDestroy(pStreamScan->pBlockLists);
taosArrayDestroy(pStreamScan->tsArray);
taosMemoryFree(pStreamScan);
}
@ -2044,8 +2046,8 @@ static SSDataBlock* sysTableScanUserTables(SOperatorInfo* pOperator) {
uint64_t suid = pInfo->pCur->mr.me.ctbEntry.suid;
int32_t code = metaGetTableEntryByUid(&mr, suid);
if (code != TSDB_CODE_SUCCESS) {
qError("failed to get super table meta, uid:0x%" PRIx64 ", code:%s, %s", suid, tstrerror(terrno),
GET_TASKID(pTaskInfo));
qError("failed to get super table meta, cname:%s, suid:0x%" PRIx64 ", code:%s, %s",
pInfo->pCur->mr.me.name, suid, tstrerror(terrno), GET_TASKID(pTaskInfo));
metaReaderClear(&mr);
metaCloseTbCursor(pInfo->pCur);
pInfo->pCur = NULL;
@ -2151,16 +2153,39 @@ static SSDataBlock* sysTableScanUserTables(SOperatorInfo* pOperator) {
}
}
static SSDataBlock* sysTableScanUserSTables(SOperatorInfo* pOperator) {
SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
SSysTableScanInfo* pInfo = pOperator->info;
if (pOperator->status == OP_EXEC_DONE) {
return NULL;
}
pInfo->pRes->info.rows = 0;
pOperator->status = OP_EXEC_DONE;
pInfo->loadInfo.totalRows += pInfo->pRes->info.rows;
return (pInfo->pRes->info.rows == 0) ? NULL : pInfo->pRes;
}
static SSDataBlock* doSysTableScan(SOperatorInfo* pOperator) {
// build message and send to mnode to fetch the content of system tables.
SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
SSysTableScanInfo* pInfo = pOperator->info;
const char* name = tNameGetTableName(&pInfo->name);
if (pInfo->showRewrite) {
char dbName[TSDB_DB_NAME_LEN] = {0};
getDBNameFromCondition(pInfo->pCondition, dbName);
sprintf(pInfo->req.db, "%d.%s", pInfo->accountId, dbName);
}
if (strncasecmp(name, TSDB_INS_TABLE_USER_TABLES, TSDB_TABLE_FNAME_LEN) == 0) {
return sysTableScanUserTables(pOperator);
} else if (strncasecmp(name, TSDB_INS_TABLE_USER_TAGS, TSDB_TABLE_FNAME_LEN) == 0) {
return sysTableScanUserTags(pOperator);
} else if (strncasecmp(name, TSDB_INS_TABLE_USER_STABLES, TSDB_TABLE_FNAME_LEN) == 0 && IS_SYS_DBNAME(pInfo->req.db)) {
return sysTableScanUserSTables(pOperator);
} else { // load the meta from mnode of the given epset
if (pOperator->status == OP_EXEC_DONE) {
return NULL;
@ -2171,12 +2196,6 @@ static SSDataBlock* doSysTableScan(SOperatorInfo* pOperator) {
strncpy(pInfo->req.tb, tNameGetTableName(&pInfo->name), tListLen(pInfo->req.tb));
strcpy(pInfo->req.user, pInfo->pUser);
if (pInfo->showRewrite) {
char dbName[TSDB_DB_NAME_LEN] = {0};
getDBNameFromCondition(pInfo->pCondition, dbName);
sprintf(pInfo->req.db, "%d.%s", pInfo->accountId, dbName);
}
int32_t contLen = tSerializeSRetrieveTableReq(NULL, 0, &pInfo->req);
char* buf1 = taosMemoryCalloc(1, contLen);
tSerializeSRetrieveTableReq(buf1, contLen, &pInfo->req);
@ -2324,7 +2343,7 @@ SOperatorInfo* createSysTableScanOperatorInfo(void* readHandle, SSystemTableScan
pInfo->pCondition = pScanNode->node.pConditions;
pInfo->scanCols = colList;
initResultSizeInfo(pOperator, 4096);
initResultSizeInfo(&pOperator->resultInfo, 4096);
tNameAssign(&pInfo->name, &pScanNode->tableName);
const char* name = tNameGetTableName(&pInfo->name);
@ -2554,7 +2573,7 @@ SOperatorInfo* createTagScanOperatorInfo(SReadHandle* pReadHandle, STagScanPhysi
pOperator->info = pInfo;
pOperator->pTaskInfo = pTaskInfo;
initResultSizeInfo(pOperator, 4096);
initResultSizeInfo(&pOperator->resultInfo, 4096);
blockDataEnsureCapacity(pInfo->pRes, pOperator->resultInfo.capacity);
pOperator->fpSet =
@ -3099,7 +3118,7 @@ SOperatorInfo* createTableMergeScanOperatorInfo(STableScanPhysiNode* pTableScanN
pOperator->info = pInfo;
pOperator->exprSupp.numOfExprs = numOfCols;
pOperator->pTaskInfo = pTaskInfo;
initResultSizeInfo(pOperator, 1024);
initResultSizeInfo(&pOperator->resultInfo, 1024);
pOperator->fpSet =
createOperatorFpSet(operatorDummyOpenFn, doTableMergeScan, NULL, NULL, destroyTableMergeScanOperatorInfo, NULL,


@ -26,7 +26,7 @@ static void destroyOrderOperatorInfo(void* param, int32_t numOfOutput);
SOperatorInfo* createSortOperatorInfo(SOperatorInfo* downstream, SSortPhysiNode* pSortNode, SExecTaskInfo* pTaskInfo) {
SSortOperatorInfo* pInfo = taosMemoryCalloc(1, sizeof(SSortOperatorInfo));
SOperatorInfo* pOperator = taosMemoryCalloc(1, sizeof(SOperatorInfo));
if (pInfo == NULL || pOperator == NULL /* || rowSize > 100 * 1024 * 1024*/) {
if (pInfo == NULL || pOperator == NULL) {
goto _error;
}
@ -41,13 +41,15 @@ SOperatorInfo* createSortOperatorInfo(SOperatorInfo* downstream, SSortPhysiNode*
extractColMatchInfo(pSortNode->pTargets, pDescNode, &numOfOutputCols, COL_MATCH_FROM_SLOT_ID);
pOperator->exprSupp.pCtx = createSqlFunctionCtx(pExprInfo, numOfCols, &pOperator->exprSupp.rowEntryInfoOffset);
initResultSizeInfo(&pOperator->resultInfo, 1024);
pInfo->binfo.pRes = pResBlock;
initResultSizeInfo(pOperator, 1024);
pInfo->pSortInfo = createSortInfo(pSortNode->pSortKeys);
pInfo->pSortInfo = createSortInfo(pSortNode->pSortKeys);
pInfo->pCondition = pSortNode->node.pConditions;
pInfo->pColMatchInfo = pColMatchColInfo;
initLimitInfo(pSortNode->node.pLimit, pSortNode->node.pSlimit, &pInfo->limitInfo);
pOperator->name = "SortOperator";
pOperator->operatorType = QUERY_NODE_PHYSICAL_PLAN_SORT;
pOperator->blocking = true;
@ -208,26 +210,44 @@ SSDataBlock* doSort(SOperatorInfo* pOperator) {
SSDataBlock* pBlock = NULL;
while (1) {
pBlock = getSortedBlockData(pInfo->pSortHandle, pInfo->binfo.pRes, pOperator->resultInfo.capacity,
pInfo->pColMatchInfo, pInfo);
if (pBlock != NULL) {
doFilter(pInfo->pCondition, pBlock);
}
pInfo->pColMatchInfo, pInfo);
if (pBlock == NULL) {
doSetOperatorCompleted(pOperator);
break;
return NULL;
}
if (blockDataGetNumOfRows(pBlock) > 0) {
doFilter(pInfo->pCondition, pBlock);
if (blockDataGetNumOfRows(pBlock) == 0) {
continue;
}
// todo add the limit/offset info
if (pInfo->limitInfo.remainOffset > 0) {
if (pInfo->limitInfo.remainOffset >= blockDataGetNumOfRows(pBlock)) {
pInfo->limitInfo.remainOffset -= pBlock->info.rows;
continue;
}
blockDataTrimFirstNRows(pBlock, pInfo->limitInfo.remainOffset);
pInfo->limitInfo.remainOffset = 0;
}
if (pInfo->limitInfo.limit.limit > 0 &&
pInfo->limitInfo.limit.limit <= pInfo->limitInfo.numOfOutputRows + blockDataGetNumOfRows(pBlock)) {
int32_t remain = pInfo->limitInfo.limit.limit - pInfo->limitInfo.numOfOutputRows;
blockDataKeepFirstNRows(pBlock, remain);
}
size_t numOfRows = blockDataGetNumOfRows(pBlock);
pInfo->limitInfo.numOfOutputRows += numOfRows;
pOperator->resultInfo.totalRows += numOfRows;
if (numOfRows > 0) {
break;
}
}
if (pBlock != NULL) {
pOperator->resultInfo.totalRows += pBlock->info.rows;
}
return pBlock;
return blockDataGetNumOfRows(pBlock) > 0? pBlock:NULL;
}
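The rewritten `doSort` loop now applies filter, offset, and limit per sorted block: blocks emptied by the filter are skipped, `remainOffset` swallows whole blocks before trimming a partial one, and the block that crosses the limit is cut down to the remaining row budget. The per-block arithmetic, sketched over row counters only (hypothetical helper, no SSDataBlock):

```c
#include <stdint.h>

typedef struct {
  int64_t remainOffset;    /* rows still to skip       */
  int64_t limit;           /* total row cap, -1 = none */
  int64_t numOfOutputRows; /* rows emitted so far      */
} LimitState;

/* Returns how many of `rows` in this block survive; 0 means skip the
 * block and fetch the next sorted one. */
static int64_t applyLimitOffset(LimitState *pState, int64_t rows) {
  if (pState->remainOffset >= rows) { /* offset eats the whole block */
    pState->remainOffset -= rows;
    return 0;
  }
  rows -= pState->remainOffset; /* trim the partial offset */
  pState->remainOffset = 0;
  if (pState->limit > 0 && pState->numOfOutputRows + rows > pState->limit) {
    rows = pState->limit - pState->numOfOutputRows; /* keep first N */
  }
  pState->numOfOutputRows += rows;
  return rows;
}
```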
void destroyOrderOperatorInfo(void* param, int32_t numOfOutput) {
@ -479,7 +499,7 @@ SOperatorInfo* createGroupSortOperatorInfo(SOperatorInfo* downstream, SGroupSort
pOperator->exprSupp.pCtx = createSqlFunctionCtx(pExprInfo, numOfCols, &pOperator->exprSupp.rowEntryInfoOffset);
pInfo->binfo.pRes = pResBlock;
initResultSizeInfo(pOperator, 1024);
initResultSizeInfo(&pOperator->resultInfo, 1024);
pInfo->pSortInfo = createSortInfo(pSortPhyNode->pSortKeys);
;
@ -711,7 +731,7 @@ SOperatorInfo* createMultiwayMergeOperatorInfo(SOperatorInfo** downStreams, size
extractColMatchInfo(pMergePhyNode->pTargets, pDescNode, &numOfOutputCols, COL_MATCH_FROM_SLOT_ID);
SPhysiNode* pChildNode = (SPhysiNode*)nodesListGetNode(pPhyNode->pChildren, 0);
SSDataBlock* pInputBlock = createResDataBlock(pChildNode->pOutputDataBlockDesc);
initResultSizeInfo(pOperator, 1024);
initResultSizeInfo(&pOperator->resultInfo, 1024);
pInfo->groupSort = pMergePhyNode->groupSort;
pInfo->binfo.pRes = pResBlock;


@ -652,7 +652,7 @@ static void doInterpUnclosedTimeWindow(SOperatorInfo* pOperatorInfo, int32_t num
void printDataBlock(SSDataBlock* pBlock, const char* flag) {
if (!pBlock || pBlock->info.rows == 0) {
qDebug("======printDataBlock: Block is Null or Empty");
qDebug("===stream===printDataBlock: Block is Null or Empty");
return;
}
char* pBuf = NULL;
@ -660,6 +660,62 @@ void printDataBlock(SSDataBlock* pBlock, const char* flag) {
taosMemoryFree(pBuf);
}
typedef int32_t (*__compare_fn_t)(void* pKey, void* data, int32_t index);
int32_t binarySearchCom(void* keyList, int num, void* pKey, int order, __compare_fn_t comparefn) {
int firstPos = 0, lastPos = num - 1, midPos = -1;
int numOfRows = 0;
if (num <= 0) return -1;
if (order == TSDB_ORDER_DESC) {
// find the first position that is smaller than or equal to the key
while (1) {
if (comparefn(pKey, keyList, lastPos) >= 0) return lastPos;
if (comparefn(pKey, keyList, firstPos) == 0) return firstPos;
if (comparefn(pKey, keyList, firstPos) < 0) return firstPos - 1;
numOfRows = lastPos - firstPos + 1;
midPos = (numOfRows >> 1) + firstPos;
if (comparefn(pKey, keyList, midPos) < 0) {
lastPos = midPos - 1;
} else if (comparefn(pKey, keyList, midPos) > 0) {
firstPos = midPos + 1;
} else {
break;
}
}
} else {
// find the first position that is greater than or equal to the key
while (1) {
if (comparefn(pKey, keyList, firstPos) <= 0) return firstPos;
if (comparefn(pKey, keyList, lastPos) == 0) return lastPos;
if (comparefn(pKey, keyList, lastPos) > 0) {
lastPos = lastPos + 1;
if (lastPos >= num)
return -1;
else
return lastPos;
}
numOfRows = lastPos - firstPos + 1;
midPos = (numOfRows >> 1) + firstPos;
if (comparefn(pKey, keyList, midPos) < 0) {
lastPos = midPos - 1;
} else if (comparefn(pKey, keyList, midPos) > 0) {
firstPos = midPos + 1;
} else {
break;
}
}
}
return midPos;
}
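`binarySearchCom` generalizes the old TSKEY-only `binarySearch` to an opaque comparator, so stream results can be ordered by the composite key (ts, groupId) rather than by timestamp alone; `compareResKey` and `compareWinRes` below both encode that rule. The composite ordering itself, as a standalone self-test (local types, not the executor's):

```c
#include <stdio.h>
#include <stdint.h>

typedef struct { int64_t ts; uint64_t groupId; } Key;

/* Composite order: timestamp first, then group id -- the same rule
 * compareResKey applies to (SWinRes, SResKeyPos) pairs. */
static int cmpKey(const Key *a, const Key *b) {
  if (a->ts != b->ts) return a->ts > b->ts ? 1 : -1;
  if (a->groupId != b->groupId) return a->groupId > b->groupId ? 1 : -1;
  return 0;
}

int main(void) {
  Key a = {100, 2}, b = {100, 3}, c = {101, 0};
  printf("%d %d %d\n", cmpKey(&a, &b), cmpKey(&b, &c), cmpKey(&a, &a));
  /* prints: -1 -1 0 */
  return 0;
}
```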
typedef int64_t (*__get_value_fn_t)(void* data, int32_t index);
int32_t binarySearch(void* keyList, int num, TSKEY key, int order, __get_value_fn_t getValuefn) {
@ -716,20 +772,31 @@ int32_t binarySearch(void* keyList, int num, TSKEY key, int order, __get_value_f
return midPos;
}
int64_t getReskey(void* data, int32_t index) {
int32_t compareResKey(void* pKey, void* data, int32_t index) {
SArray* res = (SArray*)data;
SResKeyPos* pos = taosArrayGetP(res, index);
return *(int64_t*)pos->key;
SWinRes* pData = (SWinRes*) pKey;
if (pData->ts == *(int64_t*)pos->key) {
if (pData->groupId > pos->groupId) {
return 1;
} else if (pData->groupId < pos->groupId) {
return -1;
}
return 0;
} else if (pData->ts > *(int64_t*)pos->key) {
return 1;
}
return -1;
}
static int32_t saveResult(int64_t ts, int32_t pageId, int32_t offset, uint64_t groupId, SArray* pUpdated) {
int32_t size = taosArrayGetSize(pUpdated);
int32_t index = binarySearch(pUpdated, size, ts, TSDB_ORDER_DESC, getReskey);
SWinRes data = {.ts = ts, .groupId = groupId};
int32_t index = binarySearchCom(pUpdated, size, &data, TSDB_ORDER_DESC, compareResKey);
if (index == -1) {
index = 0;
} else {
TSKEY resTs = getReskey(pUpdated, index);
if (resTs < ts) {
if (compareResKey(&data, pUpdated, index) > 0) {
index++;
} else {
return TSDB_CODE_SUCCESS;
@ -753,10 +820,10 @@ static int32_t saveResultRow(SResultRow* result, uint64_t groupId, SArray* pUpda
return saveResult(result->win.skey, result->pageId, result->offset, groupId, pUpdated);
}
static void removeResult(SArray* pUpdated, TSKEY key) {
static void removeResult(SArray* pUpdated, SWinRes* pKey) {
int32_t size = taosArrayGetSize(pUpdated);
int32_t index = binarySearch(pUpdated, size, key, TSDB_ORDER_DESC, getReskey);
if (index >= 0 && key == getReskey(pUpdated, index)) {
int32_t index = binarySearchCom(pUpdated, size, pKey, TSDB_ORDER_DESC, compareResKey);
if (index >= 0 && 0 == compareResKey(pKey, pUpdated, index)) {
taosArrayRemove(pUpdated, index);
}
}
@ -765,7 +832,7 @@ static void removeResults(SArray* pWins, SArray* pUpdated) {
int32_t size = taosArrayGetSize(pWins);
for (int32_t i = 0; i < size; i++) {
SWinRes* pW = taosArrayGet(pWins, i);
removeResult(pUpdated, pW->ts);
removeResult(pUpdated, pW);
}
}
@ -775,14 +842,30 @@ int64_t getWinReskey(void* data, int32_t index) {
return pos->ts;
}
int32_t compareWinRes(void* pKey, void* data, int32_t index) {
SArray* res = (SArray*)data;
SWinRes* pos = taosArrayGetP(res, index);
SResKeyPos* pData = (SResKeyPos*) pKey;
if (*(int64_t*)pData->key == pos->ts) {
if (pData->groupId > pos->groupId) {
return 1;
} else if (pData->groupId < pos->groupId) {
return -1;
}
return 0;
} else if (*(int64_t*)pData->key > pos->ts) {
return 1;
}
return -1;
}
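// note (editor): compareResKey and compareWinRes are mirror comparators over the same
// composite key (ts first, then groupId as the tie-breaker); only the operand types are
// swapped, so an SArray of SResKeyPos* and an SArray of SWinRes* can be searched with
// each other's keys via binarySearchCom.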
static void removeDeleteResults(SArray* pUpdated, SArray* pDelWins) {
int32_t upSize = taosArrayGetSize(pUpdated);
int32_t delSize = taosArrayGetSize(pDelWins);
for (int32_t i = 0; i < upSize; i++) {
SResKeyPos* pResKey = taosArrayGetP(pUpdated, i);
int32_t index = binarySearchCom(pDelWins, delSize, pResKey, TSDB_ORDER_DESC, compareWinRes);
if (index >= 0 && 0 == compareWinRes(pResKey, pDelWins, index)) {
taosArrayRemove(pDelWins, index);
}
}
@ -924,11 +1007,17 @@ SResultRowPosition addToOpenWindowList(SResultRowInfo* pResultRowInfo, const SRe
int64_t* extractTsCol(SSDataBlock* pBlock, const SIntervalAggOperatorInfo* pInfo) {
TSKEY* tsCols = NULL;
if (pBlock->pDataBlock != NULL) {
SColumnInfoData* pColDataInfo = taosArrayGet(pBlock->pDataBlock, pInfo->primaryTsIndex);
tsCols = (int64_t*)pColDataInfo->pData;
if (tsCols != NULL) {
// no data in primary ts
if (tsCols[0] == 0 && tsCols[pBlock->info.rows - 1] == 0) {
return NULL;
}
if (tsCols[0] != 0 && (pBlock->info.window.skey == 0 && pBlock->info.window.ekey == 0)) {
blockDataUpdateTsWindow(pBlock, pInfo->primaryTsIndex);
}
}
@ -1442,8 +1531,10 @@ static SSDataBlock* doStreamIntervalAgg(SOperatorInfo* pOperator) {
doBuildResultDatablock(pOperator, &pInfo->binfo, &pInfo->groupResInfo, pInfo->aggSup.pResultBuf);
if (pInfo->binfo.pRes->info.rows == 0 || !hasDataInGroupInfo(&pInfo->groupResInfo)) {
pOperator->status = OP_EXEC_DONE;
qDebug("===stream===single interval is done");
freeAllPages(pInfo->pRecycledPages, pInfo->aggSup.pResultBuf);
}
printDataBlock(pInfo->binfo.pRes, "single interval");
return pInfo->binfo.pRes->info.rows == 0 ? NULL : pInfo->binfo.pRes;
}
@ -1677,7 +1768,7 @@ SOperatorInfo* createIntervalOperatorInfo(SOperatorInfo* downstream, SExprInfo*
SExprSupp* pSup = &pOperator->exprSupp;
size_t keyBufSize = sizeof(int64_t) + sizeof(int64_t) + POINTER_BYTES;
initResultSizeInfo(&pOperator->resultInfo, 4096);
int32_t code = initAggInfo(pSup, &pInfo->aggSup, pExprInfo, numOfCols, keyBufSize, pTaskInfo->id.str);
initBasicInfo(&pInfo->binfo, pResBlock);
@ -1758,7 +1849,7 @@ SOperatorInfo* createStreamIntervalOperatorInfo(SOperatorInfo* downstream, SExpr
int32_t numOfRows = 4096;
size_t keyBufSize = sizeof(int64_t) + sizeof(int64_t) + POINTER_BYTES;
initResultSizeInfo(&pOperator->resultInfo, numOfRows);
int32_t code = initAggInfo(&pOperator->exprSupp, &pInfo->aggSup, pExprInfo, numOfCols, keyBufSize, pTaskInfo->id.str);
initBasicInfo(&pInfo->binfo, pResBlock);
initExecTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &pInfo->win);
@ -2218,7 +2309,7 @@ SOperatorInfo* createTimeSliceOperatorInfo(SOperatorInfo* downstream, SPhysiNode
pInfo->tsCol = extractColumnFromColumnNode((SColumnNode*)pInterpPhyNode->pTimeSeries);
pInfo->fillType = convertFillType(pInterpPhyNode->fillMode);
initResultSizeInfo(&pOperator->resultInfo, 4096);
pInfo->pFillColInfo = createFillColInfo(pExprInfo, numOfExprs, (SNodeListNode*)pInterpPhyNode->pFillValues);
pInfo->pRes = createResDataBlock(pPhyNode->pOutputDataBlockDesc);
@ -2266,7 +2357,7 @@ SOperatorInfo* createStatewindowOperatorInfo(SOperatorInfo* downstream, SExprInf
size_t keyBufSize = sizeof(int64_t) + sizeof(int64_t) + POINTER_BYTES;
initResultSizeInfo(&pOperator->resultInfo, 4096);
initAggInfo(&pOperator->exprSupp, &pInfo->aggSup, pExpr, numOfCols, keyBufSize, pTaskInfo->id.str);
initBasicInfo(&pInfo->binfo, pResBlock);
@ -2314,7 +2405,7 @@ SOperatorInfo* createSessionAggOperatorInfo(SOperatorInfo* downstream, SExprInfo
}
size_t keyBufSize = sizeof(int64_t) + sizeof(int64_t) + POINTER_BYTES;
initResultSizeInfo(&pOperator->resultInfo, 4096);
int32_t code = initAggInfo(&pOperator->exprSupp, &pInfo->aggSup, pExprInfo, numOfCols, keyBufSize, pTaskInfo->id.str);
if (code != TSDB_CODE_SUCCESS) {
@ -2890,7 +2981,7 @@ SOperatorInfo* createStreamFinalIntervalOperatorInfo(SOperatorInfo* downstream,
ASSERT(pInfo->twAggSup.calTrigger != STREAM_TRIGGER_MAX_DELAY);
pInfo->primaryTsIndex = ((SColumnNode*)pIntervalPhyNode->window.pTspk)->slotId;
size_t keyBufSize = sizeof(int64_t) + sizeof(int64_t) + POINTER_BYTES;
initResultSizeInfo(&pOperator->resultInfo, 4096);
if (pIntervalPhyNode->window.pExprs != NULL) {
int32_t numOfScalar = 0;
SExprInfo* pScalarExprInfo = createExprInfo(pIntervalPhyNode->window.pExprs, NULL, &numOfScalar);
@ -3066,7 +3157,7 @@ SOperatorInfo* createStreamSessionAggOperatorInfo(SOperatorInfo* downstream, SPh
goto _error;
}
initResultSizeInfo(&pOperator->resultInfo, 4096);
if (pSessionNode->window.pExprs != NULL) {
int32_t numOfScalar = 0;
SExprInfo* pScalarExprInfo = createExprInfo(pSessionNode->window.pExprs, NULL, &numOfScalar);
@ -4330,7 +4421,7 @@ SOperatorInfo* createStreamStateAggOperatorInfo(SOperatorInfo* downstream, SPhys
SExprInfo* pExprInfo = createExprInfo(pStateNode->window.pFuncs, NULL, &numOfCols);
pInfo->stateCol = extractColumnFromColumnNode(pColNode);
initResultSizeInfo(&pOperator->resultInfo, 4096);
if (pStateNode->window.pExprs != NULL) {
int32_t numOfScalar = 0;
SExprInfo* pScalarExprInfo = createExprInfo(pStateNode->window.pExprs, NULL, &numOfScalar);
@ -4580,7 +4671,7 @@ SOperatorInfo* createMergeAlignedIntervalOperatorInfo(SOperatorInfo* downstream,
iaInfo->primaryTsIndex = primaryTsSlotId;
size_t keyBufSize = sizeof(int64_t) + sizeof(int64_t) + POINTER_BYTES;
initResultSizeInfo(&pOperator->resultInfo, 4096);
int32_t code =
initAggInfo(&pOperator->exprSupp, &iaInfo->aggSup, pExprInfo, numOfCols, keyBufSize, pTaskInfo->id.str);
@ -4886,7 +4977,7 @@ SOperatorInfo* createMergeIntervalOperatorInfo(SOperatorInfo* downstream, SExprI
SExprSupp* pExprSupp = &pOperator->exprSupp;
size_t keyBufSize = sizeof(int64_t) + sizeof(int64_t) + POINTER_BYTES;
initResultSizeInfo(&pOperator->resultInfo, 4096);
int32_t code = initAggInfo(pExprSupp, &iaInfo->aggSup, pExprInfo, numOfCols, keyBufSize, pTaskInfo->id.str);
initBasicInfo(&iaInfo->binfo, pResBlock);

View File

@ -958,6 +958,7 @@ static bool validateHistogramBinDesc(char* binDescStr, int8_t binType, char* err
return false;
}
cJSON_Delete(binDesc);
taosMemoryFree(intervals);
return true;
}
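// note (editor): `intervals` is presumably the buffer allocated earlier in
// validateHistogramBinDesc while parsing the user-defined bins; it was only read while
// building the bins and never released on the success path, so the added
// taosMemoryFree(intervals) plugs that leak.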

View File

@ -165,6 +165,7 @@ typedef struct SElapsedInfo {
typedef struct STwaInfo {
double dOutput;
bool isNull;
SPoint1 p;
STimeWindow win;
} STwaInfo;
@ -2466,9 +2467,7 @@ bool apercentileFunctionSetup(SqlFunctionCtx* pCtx, SResultRowEntryInfo* pResult
int32_t apercentileFunction(SqlFunctionCtx* pCtx) {
int32_t numOfElems = 0;
SResultRowEntryInfo* pResInfo = GET_RES_INFO(pCtx);
SInputColumnInfoData* pInput = &pCtx->input;
SColumnInfoData* pCol = pInput->pData[0];
int32_t type = pCol->info.type;
@ -2501,6 +2500,9 @@ int32_t apercentileFunction(SqlFunctionCtx* pCtx) {
GET_TYPED_DATA(v, double, type, data);
tHistogramAdd(&pInfo->pHisto, v);
}
qDebug("add %d elements into histogram, total:%d, numOfEntry:%d, %p", numOfElems, pInfo->pHisto->numOfElems,
pInfo->pHisto->numOfEntries, pInfo->pHisto);
}
SET_VAL(pResInfo, numOfElems, 1);
@ -2539,11 +2541,19 @@ static void apercentileTransferInfo(SAPercentileInfo* pInput, SAPercentileInfo*
if (pHisto->numOfElems <= 0) {
memcpy(pHisto, pInput->pHisto, sizeof(SHistogramInfo) + sizeof(SHistBin) * (MAX_HISTOGRAM_BIN + 1));
pHisto->elems = (SHistBin*)((char*)pHisto + sizeof(SHistogramInfo));
qDebug("merge histo, total:%"PRId64", entry:%d, %p", pHisto->numOfElems, pHisto->numOfEntries, pHisto);
} else {
pHisto->elems = (SHistBin*)((char*)pHisto + sizeof(SHistogramInfo));
qDebug("input histogram, elem:%"PRId64", entry:%d, %p", pHisto->numOfElems, pHisto->numOfEntries,
pInput->pHisto);
SHistogramInfo* pRes = tHistogramMerge(pHisto, pInput->pHisto, MAX_HISTOGRAM_BIN);
memcpy(pHisto, pRes, sizeof(SHistogramInfo) + sizeof(SHistBin) * MAX_HISTOGRAM_BIN);
pHisto->elems = (SHistBin*)((char*)pHisto + sizeof(SHistogramInfo));
qDebug("merge histo, total:%"PRId64", entry:%d, %p", pHisto->numOfElems, pHisto->numOfEntries,
pHisto);
tHistogramDestroy(&pRes);
}
}
@ -2559,14 +2569,20 @@ int32_t apercentileFunctionMerge(SqlFunctionCtx* pCtx) {
SAPercentileInfo* pInfo = GET_ROWCELL_INTERBUF(pResInfo);
qDebug("total %d rows will merge, %p", pInput->numOfRows, pInfo->pHisto);
int32_t start = pInput->startRowIndex;
for (int32_t i = start; i < start + pInput->numOfRows; ++i) {
char* data = colDataGetData(pCol, i);
SAPercentileInfo* pInputInfo = (SAPercentileInfo*)varDataVal(data);
apercentileTransferInfo(pInputInfo, pInfo);
}
if (pInfo->algo != APERCT_ALGO_TDIGEST) {
qDebug("after merge, total:%d, numOfEntry:%d, %p", pInfo->pHisto->numOfElems, pInfo->pHisto->numOfEntries, pInfo->pHisto);
}
SET_VAL(pResInfo, 1, 1);
return TSDB_CODE_SUCCESS;
}
@ -2584,6 +2600,8 @@ int32_t apercentileFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
}
} else {
if (pInfo->pHisto->numOfElems > 0) {
qDebug("get the final res:%d, elements:%"PRId64", entry:%d", pInfo->pHisto->numOfElems, pInfo->pHisto->numOfEntries);
double ratio[] = {pInfo->percent};
double* res = tHistogramUniform(pInfo->pHisto, ratio, 1);
pInfo->result = *res;
@ -2637,6 +2655,9 @@ int32_t apercentileCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx)
SResultRowEntryInfo* pSResInfo = GET_RES_INFO(pSourceCtx);
SAPercentileInfo* pSBuf = GET_ROWCELL_INTERBUF(pSResInfo);
ASSERT(pDBuf->algo == pSBuf->algo);
qDebug("start to combine apercentile, %p", pDBuf->pHisto);
apercentileTransferInfo(pSBuf, pDBuf);
pDResInfo->numOfRes = TMAX(pDResInfo->numOfRes, pSResInfo->numOfRes);
return TSDB_CODE_SUCCESS;
@ -5181,8 +5202,9 @@ bool twaFunctionSetup(SqlFunctionCtx* pCtx, SResultRowEntryInfo* pResultInfo) {
}
STwaInfo* pInfo = GET_ROWCELL_INTERBUF(GET_RES_INFO(pCtx));
pInfo->isNull = false;
pInfo->p.key = INT64_MIN;
pInfo->win = TSWINDOW_INITIALIZER;
return true;
}
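For context (editor's addition): the loops below accumulate a time-weighted average, and twa_get_area is assumed to contribute one linearly-interpolated segment per pair of points; a simplified sketch of such a segment:
// hedged sketch only: trapezoid area between points (s.key, s.val) and (e.key, e.val)
static double twaSegmentAreaSketch(SPoint1 s, SPoint1 e) {
  return (s.val + e.val) * (double)(e.key - s.key) / 2;
}
// the final TWA would then be dOutput / (win.ekey - win.skey) for a non-degenerate window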
@ -5208,27 +5230,47 @@ int32_t twaFunction(SqlFunctionCtx* pCtx) {
SPoint1* last = &pInfo->p;
int32_t numOfElems = 0;
if (IS_NULL_TYPE(pInputCol->info.type)) {
pInfo->isNull = true;
goto _twa_over;
}
int32_t i = pInput->startRowIndex;
if (pCtx->start.key != INT64_MIN) {
ASSERT((pCtx->start.key < tsList[i] && pCtx->order == TSDB_ORDER_ASC) ||
(pCtx->start.key > tsList[i] && pCtx->order == TSDB_ORDER_DESC));
ASSERT(last->key == INT64_MIN);
for (; i < pInput->numOfRows + pInput->startRowIndex; ++i) {
if (colDataIsNull_f(pInputCol->nullbitmap, i)) {
continue;
}
last->key = tsList[i];
GET_TYPED_DATA(last->val, double, pInputCol->info.type, colDataGetData(pInputCol, i));
pInfo->dOutput += twa_get_area(pCtx->start, *last);
pInfo->win.skey = pCtx->start.key;
numOfElems++;
i += 1;
break;
}
} else if (pInfo->p.key == INT64_MIN) {
for (; i < pInput->numOfRows + pInput->startRowIndex; ++i) {
if (colDataIsNull_f(pInputCol->nullbitmap, i)) {
continue;
}
last->key = tsList[i];
GET_TYPED_DATA(last->val, double, pInputCol->info.type, colDataGetData(pInputCol, i));
pInfo->win.skey = last->key;
numOfElems++;
i += 1;
break;
}
}
SPoint1 st = {0};
@ -5241,6 +5283,7 @@ int32_t twaFunction(SqlFunctionCtx* pCtx) {
if (colDataIsNull_f(pInputCol->nullbitmap, i)) {
continue;
}
numOfElems++;
INIT_INTP_POINT(st, tsList[i], val[i]);
pInfo->dOutput += twa_get_area(pInfo->p, st);
@ -5255,6 +5298,7 @@ int32_t twaFunction(SqlFunctionCtx* pCtx) {
if (colDataIsNull_f(pInputCol->nullbitmap, i)) {
continue;
}
numOfElems++;
INIT_INTP_POINT(st, tsList[i], val[i]);
pInfo->dOutput += twa_get_area(pInfo->p, st);
@ -5268,6 +5312,7 @@ int32_t twaFunction(SqlFunctionCtx* pCtx) {
if (colDataIsNull_f(pInputCol->nullbitmap, i)) {
continue;
}
numOfElems++;
INIT_INTP_POINT(st, tsList[i], val[i]);
pInfo->dOutput += twa_get_area(pInfo->p, st);
@ -5281,6 +5326,7 @@ int32_t twaFunction(SqlFunctionCtx* pCtx) {
if (colDataIsNull_f(pInputCol->nullbitmap, i)) {
continue;
}
numOfElems++;
INIT_INTP_POINT(st, tsList[i], val[i]);
pInfo->dOutput += twa_get_area(pInfo->p, st);
@ -5294,6 +5340,7 @@ int32_t twaFunction(SqlFunctionCtx* pCtx) {
if (colDataIsNull_f(pInputCol->nullbitmap, i)) {
continue;
}
numOfElems++;
INIT_INTP_POINT(st, tsList[i], val[i]);
pInfo->dOutput += twa_get_area(pInfo->p, st);
@ -5307,6 +5354,7 @@ int32_t twaFunction(SqlFunctionCtx* pCtx) {
if (colDataIsNull_f(pInputCol->nullbitmap, i)) {
continue;
}
numOfElems++;
INIT_INTP_POINT(st, tsList[i], val[i]);
pInfo->dOutput += twa_get_area(pInfo->p, st);
@ -5320,6 +5368,7 @@ int32_t twaFunction(SqlFunctionCtx* pCtx) {
if (colDataIsNull_f(pInputCol->nullbitmap, i)) {
continue;
}
numOfElems++;
INIT_INTP_POINT(st, tsList[i], val[i]);
pInfo->dOutput += twa_get_area(pInfo->p, st);
@ -5333,6 +5382,7 @@ int32_t twaFunction(SqlFunctionCtx* pCtx) {
if (colDataIsNull_f(pInputCol->nullbitmap, i)) {
continue;
}
numOfElems++;
INIT_INTP_POINT(st, tsList[i], val[i]);
pInfo->dOutput += twa_get_area(pInfo->p, st);
@ -5346,6 +5396,7 @@ int32_t twaFunction(SqlFunctionCtx* pCtx) {
if (colDataIsNull_f(pInputCol->nullbitmap, i)) {
continue;
}
numOfElems++;
INIT_INTP_POINT(st, tsList[i], val[i]);
pInfo->dOutput += twa_get_area(pInfo->p, st);
@ -5359,6 +5410,7 @@ int32_t twaFunction(SqlFunctionCtx* pCtx) {
if (colDataIsNull_f(pInputCol->nullbitmap, i)) {
continue;
}
numOfElems++;
INIT_INTP_POINT(st, tsList[i], val[i]);
pInfo->dOutput += twa_get_area(pInfo->p, st);
@ -5379,7 +5431,12 @@ int32_t twaFunction(SqlFunctionCtx* pCtx) {
pInfo->win.ekey = pInfo->p.key;
_twa_over:
if (numOfElems == 0) {
pInfo->isNull = true;
}
SET_VAL(pResInfo, 1, 1);
return TSDB_CODE_SUCCESS;
}
@ -5400,8 +5457,8 @@ int32_t twaFinalize(struct SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
SResultRowEntryInfo* pResInfo = GET_RES_INFO(pCtx);
STwaInfo* pInfo = (STwaInfo*)GET_ROWCELL_INTERBUF(pResInfo);
if (pInfo->isNull == true) {
pResInfo->numOfRes = 0;
} else {
if (pInfo->win.ekey == pInfo->win.skey) {
pInfo->dOutput = pInfo->p.val;

View File

@ -913,8 +913,8 @@ void udfdConnectMnodeThreadFunc(void *args) {
}
int main(int argc, char *argv[]) {
if (!taosCheckSystemIsLittleEnd()) {
printf("failed to start since on non-little-end machines\n");
return -1;
}

View File

@ -707,6 +707,8 @@ static int32_t sifCalculate(SNode *pNode, SIFParam *pDst) {
sifFreeParam(res);
taosHashRemove(ctx.pRes, (void *)&pNode, POINTER_BYTES);
}
sifFreeRes(ctx.pRes);
SIF_RET(code);
}

View File

@ -369,6 +369,8 @@ static void destroyPhysiNode(SPhysiNode* pNode) {
nodesDestroyList(pNode->pChildren);
nodesDestroyNode(pNode->pConditions);
nodesDestroyNode((SNode*)pNode->pOutputDataBlockDesc);
nodesDestroyNode(pNode->pLimit);
nodesDestroyNode(pNode->pSlimit);
}
static void destroyWinodwPhysiNode(SWinodwPhysiNode* pNode) {
@ -389,11 +391,16 @@ static void destroyDataSinkNode(SDataSinkNode* pNode) { nodesDestroyNode((SNode*
static void destroyExprNode(SExprNode* pExpr) { taosArrayDestroy(pExpr->pAssociation); }
static void destroyTableCfg(STableCfg* pCfg) {
taosArrayDestroy(pCfg->pFuncs);
taosMemoryFree(pCfg->pComment);
taosMemoryFree(pCfg->pSchemas);
taosMemoryFree(pCfg->pTags);
taosMemoryFree(pCfg);
}
static void destroySmaIndex(void* pIndex) { taosMemoryFree(((STableIndexInfo*)pIndex)->expr); }
void nodesDestroyNode(SNode* pNode) {
if (NULL == pNode) {
return;
@ -431,6 +438,7 @@ void nodesDestroyNode(SNode* pNode) {
SRealTableNode* pReal = (SRealTableNode*)pNode;
taosMemoryFreeClear(pReal->pMeta);
taosMemoryFreeClear(pReal->pVgroupList);
taosArrayDestroyEx(pReal->pSmaIndexes, destroySmaIndex);
break;
}
case QUERY_NODE_TEMP_TABLE:
@ -451,9 +459,12 @@ void nodesDestroyNode(SNode* pNode) {
break;
case QUERY_NODE_LIMIT: // no pointer field
break;
case QUERY_NODE_STATE_WINDOW: {
SStateWindowNode* pState = (SStateWindowNode*)pNode;
nodesDestroyNode(pState->pCol);
nodesDestroyNode(pState->pExpr);
break;
}
case QUERY_NODE_SESSION_WINDOW: {
SSessionWindowNode* pSession = (SSessionWindowNode*)pNode;
nodesDestroyNode((SNode*)pSession->pCol);
@ -500,8 +511,10 @@ void nodesDestroyNode(SNode* pNode) {
}
case QUERY_NODE_TABLE_OPTIONS: {
STableOptions* pOptions = (STableOptions*)pNode;
nodesDestroyList(pOptions->pMaxDelay);
nodesDestroyList(pOptions->pWatermark);
nodesDestroyList(pOptions->pRollupFuncs);
nodesDestroyList(pOptions->pSma);
break;
}
case QUERY_NODE_INDEX_OPTIONS: {
@ -510,17 +523,22 @@ void nodesDestroyNode(SNode* pNode) {
nodesDestroyNode(pOptions->pInterval);
nodesDestroyNode(pOptions->pOffset);
nodesDestroyNode(pOptions->pSliding);
nodesDestroyNode(pOptions->pStreamOptions);
break;
}
case QUERY_NODE_EXPLAIN_OPTIONS: // no pointer field
break;
case QUERY_NODE_STREAM_OPTIONS: {
SStreamOptions* pOptions = (SStreamOptions*)pNode;
nodesDestroyNode(pOptions->pDelay);
nodesDestroyNode(pOptions->pWatermark);
break;
}
case QUERY_NODE_LEFT_VALUE: // no pointer field
break;
case QUERY_NODE_SET_OPERATOR: {
SSetOperator* pStmt = (SSetOperator*)pNode;
nodesDestroyList(pStmt->pProjectionList);
nodesDestroyNode(pStmt->pLeft);
nodesDestroyNode(pStmt->pRight);
nodesDestroyList(pStmt->pOrderByList);
@ -582,7 +600,8 @@ void nodesDestroyNode(SNode* pNode) {
break;
case QUERY_NODE_DROP_SUPER_TABLE_STMT: // no pointer field
break;
case QUERY_NODE_ALTER_TABLE_STMT:
case QUERY_NODE_ALTER_SUPER_TABLE_STMT: {
SAlterTableStmt* pStmt = (SAlterTableStmt*)pNode;
nodesDestroyNode((SNode*)pStmt->pOptions);
nodesDestroyNode((SNode*)pStmt->pVal);
@ -686,14 +705,15 @@ void nodesDestroyNode(SNode* pNode) {
nodesDestroyNode(pStmt->pTbName);
break;
}
case QUERY_NODE_SHOW_DNODE_VARIABLES_STMT:
nodesDestroyNode(((SShowDnodeVariablesStmt*)pNode)->pDnodeId);
break;
case QUERY_NODE_SHOW_CREATE_DATABASE_STMT:
taosMemoryFreeClear(((SShowCreateDatabaseStmt*)pNode)->pCfg);
break;
case QUERY_NODE_SHOW_CREATE_TABLE_STMT:
case QUERY_NODE_SHOW_CREATE_STABLE_STMT:
destroyTableCfg((STableCfg*)(((SShowCreateTableStmt*)pNode)->pCfg));
break;
case QUERY_NODE_SHOW_TABLE_DISTRIBUTED_STMT: // no pointer field
case QUERY_NODE_KILL_CONNECTION_STMT: // no pointer field
@ -725,7 +745,8 @@ void nodesDestroyNode(SNode* pNode) {
}
taosArrayDestroy(pQuery->pDbList);
taosArrayDestroy(pQuery->pTableList);
taosArrayDestroy(pQuery->pPlaceholderValues);
nodesDestroyNode(pQuery->pPrepareRoot);
break;
}
case QUERY_NODE_LOGIC_PLAN_SCAN: {
@ -737,7 +758,7 @@ void nodesDestroyNode(SNode* pNode) {
nodesDestroyList(pLogicNode->pDynamicScanFuncs);
nodesDestroyNode(pLogicNode->pTagCond);
nodesDestroyNode(pLogicNode->pTagIndexCond);
taosArrayDestroyEx(pLogicNode->pSmaIndexes, destroySmaIndex);
nodesDestroyList(pLogicNode->pGroupTags);
break;
}
@ -766,6 +787,9 @@ void nodesDestroyNode(SNode* pNode) {
destroyLogicNode((SLogicNode*)pLogicNode);
destroyVgDataBlockArray(pLogicNode->pDataBlocks);
// pVgDataBlocks is weak reference
nodesDestroyNode(pLogicNode->pAffectedRows);
taosMemoryFreeClear(pLogicNode->pVgroupList);
nodesDestroyList(pLogicNode->pInsertCols);
break;
}
case QUERY_NODE_LOGIC_PLAN_EXCHANGE:
@ -784,6 +808,7 @@ void nodesDestroyNode(SNode* pNode) {
nodesDestroyList(pLogicNode->pFuncs);
nodesDestroyNode(pLogicNode->pTspk);
nodesDestroyNode(pLogicNode->pTsEnd);
nodesDestroyNode(pLogicNode->pStateExpr);
break;
}
case QUERY_NODE_LOGIC_PLAN_FILL: {
@ -833,9 +858,14 @@ void nodesDestroyNode(SNode* pNode) {
case QUERY_NODE_PHYSICAL_PLAN_TAG_SCAN:
case QUERY_NODE_PHYSICAL_PLAN_SYSTABLE_SCAN:
case QUERY_NODE_PHYSICAL_PLAN_BLOCK_DIST_SCAN:
destroyScanPhysiNode((SScanPhysiNode*)pNode);
break;
case QUERY_NODE_PHYSICAL_PLAN_LAST_ROW_SCAN: {
SLastRowScanPhysiNode* pPhyNode = (SLastRowScanPhysiNode*)pNode;
destroyScanPhysiNode((SScanPhysiNode*)pNode);
nodesDestroyList(pPhyNode->pGroupTags);
break;
}
case QUERY_NODE_PHYSICAL_PLAN_TABLE_SCAN:
case QUERY_NODE_PHYSICAL_PLAN_TABLE_SEQ_SCAN:
case QUERY_NODE_PHYSICAL_PLAN_TABLE_MERGE_SCAN:

View File

@ -462,7 +462,7 @@ explain_options(A) ::= explain_options(B) VERBOSE NK_BOOL(C).
explain_options(A) ::= explain_options(B) RATIO NK_FLOAT(C). { A = setExplainRatio(pCxt, B, &C); }
/************************************************ compact *************************************************************/
cmd ::= COMPACT VNODES IN NK_LP integer_list NK_RP. { pCxt->errCode = generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_EXPRIE_STATEMENT); }
/************************************************ create/drop function ************************************************/
cmd ::= CREATE agg_func_opt(A) FUNCTION not_exists_opt(F) function_name(B)

View File

@ -387,6 +387,19 @@ SNode* createLogicConditionNode(SAstCreateContext* pCxt, ELogicConditionType typ
return (SNode*)cond;
}
static uint8_t getMinusDataType(uint8_t orgType) {
switch (orgType) {
case TSDB_DATA_TYPE_UTINYINT:
case TSDB_DATA_TYPE_USMALLINT:
case TSDB_DATA_TYPE_UINT:
case TSDB_DATA_TYPE_UBIGINT:
return TSDB_DATA_TYPE_BIGINT;
default:
break;
}
return orgType;
}
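An illustrative example (assumed values, editor's addition) of why getMinusDataType widens unsigned types to BIGINT before applying unary minus:
uint32_t u = 3000000000u;  // a TSDB_DATA_TYPE_UINT value above INT32_MAX
int64_t  v = -(int64_t)u;  // -3000000000 is only representable once widened to BIGINT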
SNode* createOperatorNode(SAstCreateContext* pCxt, EOperatorType type, SNode* pLeft, SNode* pRight) {
CHECK_PARSER_STATUS(pCxt);
if (OP_TYPE_MINUS == type && QUERY_NODE_VALUE == nodeType(pLeft)) {
@ -402,7 +415,7 @@ SNode* createOperatorNode(SAstCreateContext* pCxt, EOperatorType type, SNode* pL
}
taosMemoryFree(pVal->literal);
pVal->literal = pNewLiteral;
pVal->node.resType.type = getMinusDataType(pVal->node.resType.type);
return pLeft;
}
SOperatorNode* op = (SOperatorNode*)nodesMakeNode(QUERY_NODE_OPERATOR);

View File

@ -1497,7 +1497,6 @@ static int32_t parseInsertBody(SInsertParseContext* pCxt) {
memset(&pCxt->tags, 0, sizeof(pCxt->tags));
pCxt->pVgroupsHashObj = NULL;
pCxt->pTableBlockHashObj = NULL;
return TSDB_CODE_SUCCESS;
}
@ -1554,7 +1553,10 @@ int32_t parseInsertSql(SParseContext* pContext, SQuery** pQuery, SParseMetaCache
if (NULL == *pQuery) {
return TSDB_CODE_OUT_OF_MEMORY;
}
} else {
nodesDestroyNode((*pQuery)->pRoot);
}
(*pQuery)->execMode = QUERY_EXEC_MODE_SCHEDULE;
(*pQuery)->haveResultSet = false;
(*pQuery)->msgType = TDMT_VND_SUBMIT;

View File

@ -678,6 +678,7 @@ void qFreeStmtDataBlock(void* pDataBlock) {
return;
}
taosMemoryFreeClear(((STableDataBlocks*)pDataBlock)->pTableMeta);
taosMemoryFreeClear(((STableDataBlocks*)pDataBlock)->pData);
taosMemoryFreeClear(pDataBlock);
}

View File

@ -1257,6 +1257,7 @@ static int32_t rewriteFuncToValue(STranslateContext* pCxt, char* pLiteral, SNode
}
}
if (DEAL_RES_ERROR != translateValue(pCxt, pVal)) {
nodesDestroyNode(*pNode);
*pNode = (SNode*)pVal;
} else {
nodesDestroyNode((SNode*)pVal);
@ -4009,30 +4010,7 @@ static SSchema* getTagSchema(STableMeta* pTableMeta, const char* pTagName) {
return NULL;
}
static int32_t checkAlterSuperTableImpl(STranslateContext* pCxt, SAlterTableStmt* pStmt, STableMeta* pTableMeta) {
SSchema* pTagsSchema = getTableTagSchema(pTableMeta);
if (getNumOfTags(pTableMeta) == 1 && pTagsSchema->type == TSDB_DATA_TYPE_JSON &&
(pStmt->alterType == TSDB_ALTER_TABLE_ADD_TAG || pStmt->alterType == TSDB_ALTER_TABLE_DROP_TAG ||
@ -4057,6 +4035,33 @@ static int32_t checkAlterSuperTable(STranslateContext* pCxt, SAlterTableStmt* pS
return TSDB_CODE_SUCCESS;
}
static int32_t checkAlterSuperTable(STranslateContext* pCxt, SAlterTableStmt* pStmt) {
if (TSDB_ALTER_TABLE_UPDATE_TAG_VAL == pStmt->alterType || TSDB_ALTER_TABLE_UPDATE_COLUMN_NAME == pStmt->alterType) {
return generateSyntaxErrMsgExt(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_ALTER_TABLE,
"Set tag value only available for child table");
}
if (pStmt->alterType == TSDB_ALTER_TABLE_UPDATE_OPTIONS && -1 != pStmt->pOptions->ttl) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_ALTER_TABLE);
}
if (pStmt->dataType.type == TSDB_DATA_TYPE_JSON && pStmt->alterType == TSDB_ALTER_TABLE_ADD_TAG) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_ONLY_ONE_JSON_TAG);
}
if (pStmt->dataType.type == TSDB_DATA_TYPE_JSON && pStmt->alterType == TSDB_ALTER_TABLE_ADD_COLUMN) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_COL_JSON);
}
STableMeta* pTableMeta = NULL;
int32_t code = getTableMeta(pCxt, pStmt->dbName, pStmt->tableName, &pTableMeta);
if (TSDB_CODE_SUCCESS == code) {
code = checkAlterSuperTableImpl(pCxt, pStmt, pTableMeta);
}
taosMemoryFree(pTableMeta);
return code;
}
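// note (editor): splitting the tail of the checks into checkAlterSuperTableImpl routes
// every outcome through the single taosMemoryFree(pTableMeta) above, instead of the old
// early returns that leaked the meta fetched by getTableMeta.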
static int32_t translateAlterSuperTable(STranslateContext* pCxt, SAlterTableStmt* pStmt) {
SMAlterStbReq alterReq = {0};
int32_t code = checkAlterSuperTable(pCxt, pStmt);
@ -6438,6 +6443,7 @@ static int32_t toMsgType(ENodeType type) {
static int32_t setRefreshMate(STranslateContext* pCxt, SQuery* pQuery) {
if (NULL != pCxt->pDbs) {
taosArrayDestroy(pQuery->pDbList);
pQuery->pDbList = taosArrayInit(taosHashGetSize(pCxt->pDbs), TSDB_DB_FNAME_LEN);
if (NULL == pQuery->pDbList) {
return TSDB_CODE_OUT_OF_MEMORY;
@ -6450,6 +6456,7 @@ static int32_t setRefreshMate(STranslateContext* pCxt, SQuery* pQuery) {
}
if (NULL != pCxt->pTables) {
taosArrayDestroy(pQuery->pTableList);
pQuery->pTableList = taosArrayInit(taosHashGetSize(pCxt->pTables), sizeof(SName));
if (NULL == pQuery->pTableList) {
return TSDB_CODE_OUT_OF_MEMORY;
@ -6521,6 +6528,7 @@ static int32_t setQuery(STranslateContext* pCxt, SQuery* pQuery) {
pQuery->stableQuery = pCxt->stableQuery;
if (pQuery->haveResultSet) {
taosMemoryFreeClear(pQuery->pResSchema);
if (TSDB_CODE_SUCCESS != extractResultSchema(pQuery->pRoot, &pQuery->numOfResCols, &pQuery->pResSchema)) {
return TSDB_CODE_OUT_OF_MEMORY;
}

View File

@ -865,12 +865,17 @@ STableCfg* tableCfgDup(STableCfg* pCfg) {
STableCfg* pNew = taosMemoryMalloc(sizeof(*pNew));
memcpy(pNew, pCfg, sizeof(*pNew));
if (NULL != pNew->pComment) {
pNew->pComment = taosMemoryCalloc(pNew->commentLen + 1, 1);
memcpy(pNew->pComment, pCfg->pComment, pNew->commentLen);
}
if (NULL != pNew->pFuncs) {
pNew->pFuncs = taosArrayDup(pNew->pFuncs);
}
if (NULL != pNew->pTags) {
pNew->pTags = taosMemoryCalloc(pNew->tagsLen + 1, 1);
memcpy(pNew->pTags, pCfg->pTags, pNew->tagsLen);
}
int32_t schemaSize = (pCfg->numOfColumns + pCfg->numOfTags) * sizeof(SSchema);
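// note (editor): pComment and pTags are duplicated with explicit lengths (commentLen and
// tagsLen plus a terminating byte) rather than strdup, presumably because they are
// length-delimited buffers that need not be NUL-terminated.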

View File

@ -82,11 +82,16 @@ static int32_t parseSqlSyntax(SParseContext* pCxt, SQuery** pQuery, SParseMetaCa
}
static int32_t setValueByBindParam(SValueNode* pVal, TAOS_MULTI_BIND* pParam) {
if (IS_VAR_DATA_TYPE(pVal->node.resType.type)) {
taosMemoryFreeClear(pVal->datum.p);
}
if (pParam->is_null && 1 == *(pParam->is_null)) {
pVal->node.resType.type = TSDB_DATA_TYPE_NULL;
pVal->node.resType.bytes = tDataTypes[TSDB_DATA_TYPE_NULL].bytes;
return TSDB_CODE_SUCCESS;
}
int32_t inputSize = (NULL != pParam->length ? *(pParam->length) : tDataTypes[pParam->buffer_type].bytes);
pVal->node.resType.type = pParam->buffer_type;
pVal->node.resType.bytes = inputSize;
@ -239,6 +244,7 @@ int32_t qStmtBindParams(SQuery* pQuery, TAOS_MULTI_BIND* pParams, int32_t colIdx
}
if (TSDB_CODE_SUCCESS == code && (colIdx < 0 || colIdx + 1 == pQuery->placeholderNum)) {
nodesDestroyNode(pQuery->pRoot);
pQuery->pRoot = nodesCloneNode(pQuery->pPrepareRoot);
if (NULL == pQuery->pRoot) {
code = TSDB_CODE_OUT_OF_MEMORY;

View File

@ -4117,7 +4117,8 @@ static YYACTIONTYPE yy_reduce(
yymsp[-2].minor.yy616 = yylhsminor.yy616;
break;
case 254: /* cmd ::= COMPACT VNODES IN NK_LP integer_list NK_RP */
{ pCxt->errCode = generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_EXPRIE_STATEMENT); }
yy_destructor(yypParser,273,&yymsp[-1].minor);
break;
case 255: /* cmd ::= CREATE agg_func_opt FUNCTION not_exists_opt function_name AS NK_STRING OUTPUTTYPE type_name bufsize_opt */
{ pCxt->pRootNode = createCreateFunctionStmt(pCxt, yymsp[-6].minor.yy151, yymsp[-8].minor.yy151, &yymsp[-5].minor.yy361, &yymsp[-3].minor.yy0, yymsp[-1].minor.yy600, yymsp[0].minor.yy734); }

View File

@ -93,6 +93,17 @@ class MockCatalogServiceImpl {
MockCatalogServiceImpl() : id_(1) {}
~MockCatalogServiceImpl() {
for (auto& cfg : dbCfg_) {
taosArrayDestroy(cfg.second.pRetensions);
}
for (auto& indexes : index_) {
for (auto& index : indexes.second) {
taosMemoryFree(index.expr);
}
}
}
int32_t catalogGetHandle() const { return 0; }
int32_t catalogGetTableMeta(const SName* pTableName, STableMeta** pTableMeta) const {
@ -676,6 +687,7 @@ void MockCatalogService::destoryCatalogReq(SCatalogReq* pReq) {
taosArrayDestroy(pReq->pIndex);
taosArrayDestroy(pReq->pUser);
taosArrayDestroy(pReq->pTableIndex);
taosArrayDestroy(pReq->pTableCfg);
delete pReq;
}
@ -684,6 +696,11 @@ void MockCatalogService::destoryMetaRes(void* p) {
taosMemoryFree(pRes->pRes);
}
void MockCatalogService::destoryMetaArrayRes(void* p) {
SMetaRes* pRes = (SMetaRes*)p;
taosArrayDestroy((SArray*)pRes->pRes);
}
void MockCatalogService::destoryMetaData(SMetaData* pData) {
taosArrayDestroyEx(pData->pDbVgroup, destoryMetaRes);
taosArrayDestroyEx(pData->pDbCfg, destoryMetaRes);
@ -695,5 +712,8 @@ void MockCatalogService::destoryMetaData(SMetaData* pData) {
taosArrayDestroyEx(pData->pIndex, destoryMetaRes);
taosArrayDestroyEx(pData->pUser, destoryMetaRes);
taosArrayDestroyEx(pData->pQnodeList, destoryMetaRes);
taosArrayDestroyEx(pData->pTableCfg, destoryMetaRes);
taosArrayDestroyEx(pData->pDnodeList, destoryMetaArrayRes);
taosMemoryFree(pData->pSvrVer);
delete pData;
}

View File

@ -52,6 +52,7 @@ class MockCatalogService {
public:
static void destoryCatalogReq(SCatalogReq* pReq);
static void destoryMetaRes(void* p);
static void destoryMetaArrayRes(void* p);
static void destoryMetaData(SMetaData* pData);
MockCatalogService();

View File

@ -21,7 +21,11 @@ namespace ParserTest {
class ParserInitialCTest : public ParserDdlTest {};
TEST_F(ParserInitialCTest, compact) {
useDb("root", "test");
run("COMPACT VNODES IN (1, 2)", TSDB_CODE_PAR_EXPRIE_STATEMENT, PARSER_STAGE_PARSE);
}
TEST_F(ParserInitialCTest, createAccount) {
useDb("root", "test");
@ -32,6 +36,19 @@ TEST_F(ParserInitialCTest, createAccount) {
TEST_F(ParserInitialCTest, createBnode) {
useDb("root", "test");
SMCreateQnodeReq expect = {0};
auto setCreateQnodeReq = [&](int32_t dnodeId) { expect.dnodeId = dnodeId; };
setCheckDdlFunc([&](const SQuery* pQuery, ParserStage stage) {
ASSERT_EQ(nodeType(pQuery->pRoot), QUERY_NODE_CREATE_BNODE_STMT);
SMCreateQnodeReq req = {0};
ASSERT_TRUE(TSDB_CODE_SUCCESS ==
tDeserializeSCreateDropMQSBNodeReq(pQuery->pCmdMsg->pMsg, pQuery->pCmdMsg->msgLen, &req));
ASSERT_EQ(req.dnodeId, expect.dnodeId);
});
setCreateQnodeReq(1);
run("CREATE BNODE ON DNODE 1");
}

View File

@ -123,6 +123,14 @@ class ParserTestBaseImpl {
delete pMetaCache;
}
static void _destroyQuery(SQuery** pQuery) {
if (nullptr == pQuery) {
return;
}
qDestroyQuery(*pQuery);
taosMemoryFree(pQuery);
}
bool checkResultCode(const string& pFunc, int32_t resultCode) {
return !(stmtEnv_.checkFunc_.empty())
? ((stmtEnv_.checkFunc_ == pFunc) ? stmtEnv_.expect_ == resultCode : TSDB_CODE_SUCCESS == resultCode)
@ -278,9 +286,9 @@ class ParserTestBaseImpl {
SParseContext cxt = {0};
setParseContext(sql, &cxt);
unique_ptr<SQuery*, void (*)(SQuery**)> query((SQuery**)taosMemoryCalloc(1, sizeof(SQuery*)), _destroyQuery);
doParse(&cxt, query.get());
SQuery* pQuery = *(query.get());
doAuthenticate(&cxt, pQuery, nullptr);
@ -306,9 +314,9 @@ class ParserTestBaseImpl {
SParseContext cxt = {0};
setParseContext(sql, &cxt);
unique_ptr<SQuery*, void (*)(SQuery**)> query((SQuery**)taosMemoryCalloc(1, sizeof(SQuery*)), _destroyQuery);
doParseSql(&cxt, query.get());
SQuery* pQuery = *(query.get());
if (g_dump) {
dump();
@ -328,9 +336,9 @@ class ParserTestBaseImpl {
SParseContext cxt = {0};
setParseContext(sql, &cxt, true);
unique_ptr<SQuery*, void (*)(SQuery**)> query((SQuery**)taosMemoryCalloc(1, sizeof(SQuery*)), _destroyQuery);
doParse(&cxt, query.get());
SQuery* pQuery = *(query.get());
unique_ptr<SParseMetaCache, void (*)(SParseMetaCache*)> metaCache(new SParseMetaCache(), _destoryParseMetaCache);
doCollectMetaKey(&cxt, pQuery, metaCache.get());
@ -386,9 +394,9 @@ class ParserTestBaseImpl {
unique_ptr<SCatalogReq, void (*)(SCatalogReq*)> catalogReq(new SCatalogReq(),
MockCatalogService::destoryCatalogReq);
unique_ptr<SQuery*, void (*)(SQuery**)> query((SQuery**)taosMemoryCalloc(1, sizeof(SQuery*)), _destroyQuery);
doParseSqlSyntax(&cxt, query.get(), catalogReq.get());
SQuery* pQuery = *(query.get());
string err;
thread t1([&]() {

View File

@ -1068,7 +1068,11 @@ static int32_t createExchangePhysiNode(SPhysiPlanContext* pCxt, SExchangeLogicNo
}
static int32_t createWindowPhysiNodeFinalize(SPhysiPlanContext* pCxt, SNodeList* pChildren, SWinodwPhysiNode* pWindow,
SWindowLogicNode* pWindowLogicNode) {
pWindow->triggerType = pWindowLogicNode->triggerType;
pWindow->watermark = pWindowLogicNode->watermark;
pWindow->igExpired = pWindowLogicNode->igExpired;
SNodeList* pPrecalcExprs = NULL;
SNodeList* pFuncs = NULL;
int32_t code = rewritePrecalcExprs(pCxt, pWindowLogicNode->pFuncs, &pPrecalcExprs, &pFuncs);
@ -1100,16 +1104,6 @@ static int32_t createWindowPhysiNodeFinalize(SPhysiPlanContext* pCxt, SNodeList*
code = setConditionsSlotId(pCxt, (const SLogicNode*)pWindowLogicNode, (SPhysiNode*)pWindow);
}
nodesDestroyList(pPrecalcExprs);
nodesDestroyList(pFuncs);
@ -1156,7 +1150,14 @@ static int32_t createIntervalPhysiNode(SPhysiPlanContext* pCxt, SNodeList* pChil
pInterval->intervalUnit = pWindowLogicNode->intervalUnit;
pInterval->slidingUnit = pWindowLogicNode->slidingUnit;
int32_t code = createWindowPhysiNodeFinalize(pCxt, pChildren, &pInterval->window, pWindowLogicNode);
if (TSDB_CODE_SUCCESS == code) {
*pPhyNode = (SPhysiNode*)pInterval;
} else {
nodesDestroyNode((SNode*)pInterval);
}
return code;
}
static int32_t createSessionWindowPhysiNode(SPhysiPlanContext* pCxt, SNodeList* pChildren,
@ -1169,7 +1170,14 @@ static int32_t createSessionWindowPhysiNode(SPhysiPlanContext* pCxt, SNodeList*
pSession->gap = pWindowLogicNode->sessionGap;
int32_t code = createWindowPhysiNodeFinalize(pCxt, pChildren, &pSession->window, pWindowLogicNode);
if (TSDB_CODE_SUCCESS == code) {
*pPhyNode = (SPhysiNode*)pSession;
} else {
nodesDestroyNode((SNode*)pSession);
}
return code;
}
static int32_t createStateWindowPhysiNode(SPhysiPlanContext* pCxt, SNodeList* pChildren,
@ -1201,12 +1209,20 @@ static int32_t createStateWindowPhysiNode(SPhysiPlanContext* pCxt, SNodeList* pC
}
}
if (TSDB_CODE_SUCCESS == code) {
code = createWindowPhysiNodeFinalize(pCxt, pChildren, &pState->window, pWindowLogicNode);
}
if (TSDB_CODE_SUCCESS == code) {
*pPhyNode = (SPhysiNode*)pState;
} else {
nodesDestroyNode((SNode*)pState);
}
nodesDestroyList(pPrecalcExprs);
nodesDestroyNode(pStateKey);
return code;
}
static int32_t createWindowPhysiNode(SPhysiPlanContext* pCxt, SNodeList* pChildren, SWindowLogicNode* pWindowLogicNode,

View File

@ -867,10 +867,11 @@ static int32_t stbSplSplitSortNode(SSplitContext* pCxt, SStableSplitInfo* pInfo)
if (TSDB_CODE_SUCCESS == code) {
code = stbSplCreateMergeNode(pCxt, pInfo->pSubplan, pInfo->pSplitNode, pMergeKeys, pPartSort, groupSort);
}
if (TSDB_CODE_SUCCESS == code) {
nodesDestroyNode((SNode*)pInfo->pSplitNode);
if (groupSort) {
stbSplSetScanPartSort(pPartSort);
}
code = nodesListMakeStrictAppend(&pInfo->pSubplan->pChildren,
(SNode*)splCreateScanSubplan(pCxt, pPartSort, SPLIT_FLAG_STABLE_SPLIT));
}

View File

@ -24,6 +24,16 @@ class PlanStmtTest : public PlannerTestBase {
return (TAOS_MULTI_BIND*)taosMemoryCalloc(nParams, sizeof(TAOS_MULTI_BIND));
}
void destoryBindParams(TAOS_MULTI_BIND* pParams, int32_t nParams) {
for (int32_t i = 0; i < nParams; ++i) {
TAOS_MULTI_BIND* pParam = pParams + i;
taosMemoryFree(pParam->buffer);
taosMemoryFree(pParam->length);
taosMemoryFree(pParam->is_null);
}
taosMemoryFree(pParams);
}
TAOS_MULTI_BIND* buildIntegerParam(TAOS_MULTI_BIND* pBindParams, int32_t index, int64_t val, int32_t type) {
TAOS_MULTI_BIND* pBindParam = initParam(pBindParams, index, type, 0);
@ -127,8 +137,10 @@ TEST_F(PlanStmtTest, basic) {
useDb("root", "test");
prepare("SELECT * FROM t1 WHERE c1 = ?");
TAOS_MULTI_BIND* pBindParams = buildIntegerParam(createBindParams(1), 0, 10, TSDB_DATA_TYPE_INT);
bindParams(pBindParams, 0);
exec();
destoryBindParams(pBindParams, 1);
{
prepare("SELECT * FROM t1 WHERE c1 = ? AND c2 = ?");
@ -137,7 +149,7 @@ TEST_F(PlanStmtTest, basic) {
buildStringParam(pBindParams, 1, "abc", TSDB_DATA_TYPE_VARCHAR, strlen("abc"));
bindParams(pBindParams, -1);
exec();
taosMemoryFreeClear(pBindParams);
destoryBindParams(pBindParams, 2);
}
{
@ -147,7 +159,7 @@ TEST_F(PlanStmtTest, basic) {
buildIntegerParam(pBindParams, 1, 20, TSDB_DATA_TYPE_INT);
bindParams(pBindParams, -1);
exec();
taosMemoryFreeClear(pBindParams);
destoryBindParams(pBindParams, 2);
}
}
@ -155,12 +167,16 @@ TEST_F(PlanStmtTest, multiExec) {
useDb("root", "test");
prepare("SELECT * FROM t1 WHERE c1 = ?");
TAOS_MULTI_BIND* pBindParams = buildIntegerParam(createBindParams(1), 0, 10, TSDB_DATA_TYPE_INT);
bindParams(pBindParams, 0);
exec();
destoryBindParams(pBindParams, 1);
pBindParams = buildIntegerParam(createBindParams(1), 0, 20, TSDB_DATA_TYPE_INT);
bindParams(pBindParams, 0);
exec();
destoryBindParams(pBindParams, 1);
pBindParams = buildIntegerParam(createBindParams(1), 0, 30, TSDB_DATA_TYPE_INT);
bindParams(pBindParams, 0);
exec();
destoryBindParams(pBindParams, 1);
}
TEST_F(PlanStmtTest, allDataType) { useDb("root", "test"); }

View File

@ -126,9 +126,9 @@ class PlannerTestBaseImpl {
reset();
tsQueryPolicy = queryPolicy;
try {
unique_ptr<SQuery*, void (*)(SQuery**)> query((SQuery**)taosMemoryCalloc(1, sizeof(SQuery*)), _destroyQuery);
doParseSql(sql, query.get());
SQuery* pQuery = *(query.get());
SPlanContext cxt = {0};
setPlanContext(pQuery, &cxt);
@ -199,6 +199,8 @@ class PlannerTestBaseImpl {
SLogicSubplan* pLogicSubplan = nullptr;
doCreateLogicPlan(&cxt, &pLogicSubplan);
unique_ptr<SLogicSubplan, void (*)(SLogicSubplan*)> logicSubplan(pLogicSubplan,
(void (*)(SLogicSubplan*))nodesDestroyNode);
doOptimizeLogicPlan(&cxt, pLogicSubplan);
@ -206,9 +208,12 @@ class PlannerTestBaseImpl {
SQueryLogicPlan* pLogicPlan = nullptr;
doScaleOutLogicPlan(&cxt, pLogicSubplan, &pLogicPlan);
unique_ptr<SQueryLogicPlan, void (*)(SQueryLogicPlan*)> logicPlan(pLogicPlan,
(void (*)(SQueryLogicPlan*))nodesDestroyNode);
SQueryPlan* pPlan = nullptr;
doCreatePhysiPlan(&cxt, pLogicPlan, &pPlan);
unique_ptr<SQueryPlan, void (*)(SQueryPlan*)> plan(pPlan, (void (*)(SQueryPlan*))nodesDestroyNode);
dump(g_dumpModule);
} catch (...) {
@ -249,6 +254,14 @@ class PlannerTestBaseImpl {
vector<string> physiSubplans_;
};
static void _destroyQuery(SQuery** pQuery) {
if (nullptr == pQuery) {
return;
}
qDestroyQuery(*pQuery);
taosMemoryFree(pQuery);
}
void reset() {
stmtEnv_.sql_.clear();
stmtEnv_.msgBuf_.fill(0);
@ -400,20 +413,30 @@ class PlannerTestBaseImpl {
pCxt->queryId = 1;
pCxt->pUser = caseEnv_.user_.c_str();
if (QUERY_NODE_CREATE_TOPIC_STMT == nodeType(pQuery->pRoot)) {
SCreateTopicStmt* pStmt = (SCreateTopicStmt*)pQuery->pRoot;
pCxt->pAstRoot = pStmt->pQuery;
pStmt->pQuery = nullptr;
nodesDestroyNode(pQuery->pRoot);
pQuery->pRoot = pCxt->pAstRoot;
pCxt->topicQuery = true;
} else if (QUERY_NODE_CREATE_INDEX_STMT == nodeType(pQuery->pRoot)) {
SMCreateSmaReq req = {0};
tDeserializeSMCreateSmaReq(pQuery->pCmdMsg->pMsg, pQuery->pCmdMsg->msgLen, &req);
g_mockCatalogService->createSmaIndex(&req);
nodesStringToNode(req.ast, &pCxt->pAstRoot);
tFreeSMCreateSmaReq(&req);
nodesDestroyNode(pQuery->pRoot);
pQuery->pRoot = pCxt->pAstRoot;
pCxt->streamQuery = true;
} else if (QUERY_NODE_CREATE_STREAM_STMT == nodeType(pQuery->pRoot)) {
SCreateStreamStmt* pStmt = (SCreateStreamStmt*)pQuery->pRoot;
pCxt->pAstRoot = pStmt->pQuery;
pStmt->pQuery = nullptr;
pCxt->streamQuery = true;
pCxt->triggerType = pStmt->pOptions->triggerType;
pCxt->watermark = (NULL != pStmt->pOptions->pWatermark ? ((SValueNode*)pStmt->pOptions->pWatermark)->datum.i : 0);
nodesDestroyNode(pQuery->pRoot);
pQuery->pRoot = pCxt->pAstRoot;
} else {
pCxt->pAstRoot = pQuery->pRoot;
}

View File

@ -148,11 +148,12 @@ void destroySendMsgInfo(SMsgSendInfo* pMsgBody) {
taosMemoryFreeClear(pMsgBody);
}
int32_t asyncSendMsgToServerExt(void* pTransporter, SEpSet* epSet, int64_t* pTransporterId, SMsgSendInfo* pInfo,
bool persistHandle, void* rpcCtx) {
char* pMsg = rpcMallocCont(pInfo->msgInfo.len);
if (NULL == pMsg) {
qError("0x%" PRIx64 " msg:%s malloc failed", pInfo->requestId, TMSG_INFO(pInfo->msgType));
destroySendMsgInfo(pInfo);
terrno = TSDB_CODE_TSC_OUT_OF_MEMORY;
return terrno;
}
@ -167,13 +168,15 @@ int32_t asyncSendMsgToServerExt(void* pTransporter, SEpSet* epSet, int64_t* pTra
.info.persistHandle = persistHandle,
.code = 0
};
assert(pInfo->fp != NULL);
TRACE_SET_ROOTID(&rpcMsg.info.traceId, pInfo->requestId);
int code = rpcSendRequestWithCtx(pTransporter, epSet, &rpcMsg, pTransporterId, rpcCtx);
if (code) {
destroySendMsgInfo(pInfo);
}
return code;
}
int32_t asyncSendMsgToServer(void* pTransporter, SEpSet* epSet, int64_t* pTransporterId, SMsgSendInfo* pInfo) {
return asyncSendMsgToServerExt(pTransporter, epSet, pTransporterId, pInfo, false, NULL);
}
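A hedged caller sketch (editor's addition): with this change asyncSendMsgToServerExt consumes pInfo on every failure path, so a caller should treat ownership as transferred and must not free it again:
SMsgSendInfo* pMsgSendInfo = taosMemoryCalloc(1, sizeof(SMsgSendInfo));
// ... fill in msgInfo, msgType, fp, requestId ...
int32_t code = asyncSendMsgToServer(pTransporter, &epSet, &transporterId, pMsgSendInfo);
pMsgSendInfo = NULL;  // owned by the call now; no destroySendMsgInfo() here even on error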

View File

@ -2750,6 +2750,7 @@ static bool getHistogramBinDesc(SHistoFuncBin** bins, int32_t* binNum, char* bin
(*bins)[i].count = 0;
}
cJSON_Delete(binDesc);
taosMemoryFree(intervals);
return true;
}

View File

@ -1195,7 +1195,7 @@ static void vectorMathTsSubHelper(SColumnInfoData* pLeftCol, SColumnInfoData* pR
colDataAppendNULL(pOutputCol, i);
continue; // TODO set null or ignore
}
*output = taosTimeSub(getVectorBigintValueFnLeft(pLeftCol->pData, i), getVectorBigintValueFnRight(pRightCol->pData, 0),
*output = taosTimeAdd(getVectorBigintValueFnLeft(pLeftCol->pData, i), -getVectorBigintValueFnRight(pRightCol->pData, 0),
pRightCol->info.scale, pRightCol->info.precision);
}
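// note (editor): expressing subtraction as taosTimeAdd(a, -b, unit, precision) reuses the
// unit/precision-aware addition path; behaviour is assumed equivalent to the removed
// taosTimeSub call for plain timestamp deltas, with calendar handling kept in one place.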

View File

@ -509,7 +509,7 @@ int32_t schGenerateCallBackInfo(SSchJob *pJob, SSchTask *pTask, void *msg, uint3
SMsgSendInfo *msgSendInfo = taosMemoryCalloc(1, sizeof(SMsgSendInfo));
if (NULL == msgSendInfo) {
SCH_TASK_ELOG("calloc %d failed", (int32_t)sizeof(SMsgSendInfo));
SCH_ERR_JRET(TSDB_CODE_QRY_OUT_OF_MEMORY);
}
msgSendInfo->paramFreeFp = taosMemoryFree;
@ -535,8 +535,12 @@ int32_t schGenerateCallBackInfo(SSchJob *pJob, SSchTask *pTask, void *msg, uint3
_return:
if (msgSendInfo) {
destroySendMsgInfo(msgSendInfo);
}
taosMemoryFree(msg);
SCH_RET(code);
}
@ -843,6 +847,7 @@ int32_t schAsyncSendMsg(SSchJob *pJob, SSchTask *pTask, SSchTrans *trans, SQuery
int64_t transporterId = 0;
code = asyncSendMsgToServerExt(trans->pTrans, epSet, &transporterId, pMsgSendInfo, persistHandle, ctx);
pMsgSendInfo = NULL;
if (code) {
SCH_ERR_JRET(code);
}
@ -919,7 +924,9 @@ int32_t schBuildAndSendHbMsg(SQueryNodeEpId *nodeEpId, SArray *taskAction) {
addr.epSet.numOfEps = 1;
memcpy(&addr.epSet.eps[0], &nodeEpId->ep, sizeof(nodeEpId->ep));
code = schAsyncSendMsg(NULL, NULL, &trans, &addr, msgType, msg, msgSize, true, &rpcCtx);
msg = NULL;
SCH_ERR_JRET(code);
return TSDB_CODE_SUCCESS;
@ -1087,9 +1094,10 @@ int32_t schBuildAndSendMsg(SSchJob *pJob, SSchTask *pTask, SQueryNodeAddr *addr,
}
SSchTrans trans = {.pTrans = pJob->conn.pTrans, .pHandle = SCH_GET_TASK_HANDLE(pTask)};
code = schAsyncSendMsg(pJob, pTask, &trans, addr, msgType, msg, msgSize, persistHandle, (rpcCtx.args ? &rpcCtx : NULL));
msg = NULL;
SCH_ERR_JRET(code);
if (msgType == TDMT_SCH_QUERY || msgType == TDMT_SCH_MERGE_QUERY) {
SCH_ERR_RET(schAppendTaskExecNode(pJob, pTask, addr, pTask->execId));
}

View File

@ -102,14 +102,14 @@ int32_t schRecordTaskSucceedNode(SSchJob *pJob, SSchTask *pTask) {
}
int32_t schAppendTaskExecNode(SSchJob *pJob, SSchTask *pTask, SQueryNodeAddr *addr, int32_t execId) {
SSchNodeInfo nodeInfo = {.addr = *addr, .handle = SCH_GET_TASK_HANDLE(pTask)};
if (taosHashPut(pTask->execNodes, &execId, sizeof(execId), &nodeInfo, sizeof(nodeInfo))) {
SCH_TASK_ELOG("taosHashPut nodeInfo to execNodes failed, errno:%d", errno);
SCH_ERR_RET(TSDB_CODE_QRY_OUT_OF_MEMORY);
}
SCH_TASK_DLOG("task execNode added, execId:%d", execId);
SCH_TASK_DLOG("task execNode added, execId:%d, handle:%p", execId, nodeInfo.handle);
return TSDB_CODE_SUCCESS;
}
@ -752,12 +752,18 @@ void schDropTaskOnExecNode(SSchJob *pJob, SSchTask *pTask) {
return;
}
int32_t i = 0;
SSchNodeInfo *nodeInfo = taosHashIterate(pTask->execNodes, NULL);
while (nodeInfo) {
if (nodeInfo->handle) {
SCH_SET_TASK_HANDLE(pTask, nodeInfo->handle);
schBuildAndSendMsg(pJob, pTask, &nodeInfo->addr, TDMT_SCH_DROP_TASK);
SCH_TASK_DLOG("start to drop task's %dth execNode", i);
} else {
SCH_TASK_DLOG("no need to drop task %dth execNode", i);
}
++i;
nodeInfo = taosHashIterate(pTask->execNodes, nodeInfo);
}
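// note (editor): the handle stored by schAppendTaskExecNode above is what enables this
// check; execNodes entries whose handle is still NULL were never actually reached, so
// sending them TDMT_SCH_DROP_TASK would be wasted traffic.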

View File

@ -15,9 +15,12 @@
#include "tstreamUpdate.h"
#include "ttime.h"
#include "query.h"
#define DEFAULT_FALSE_POSITIVE 0.01
#define DEFAULT_BUCKET_SIZE 1310720
#define DEFAULT_MAP_CAPACITY 1310720
#define DEFAULT_MAP_SIZE (DEFAULT_MAP_CAPACITY * 10)
#define ROWS_PER_MILLISECOND 1
#define MAX_NUM_SCALABLE_BF 100000
#define MIN_NUM_SCALABLE_BF 10
@ -120,6 +123,8 @@ SUpdateInfo *updateInfoInit(int64_t interval, int32_t precision, int64_t waterma
}
pInfo->numBuckets = DEFAULT_BUCKET_SIZE;
pInfo->pCloseWinSBF = NULL;
_hash_fn_t hashFn = taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY);
pInfo->pMap = taosHashInit(DEFAULT_MAP_CAPACITY, hashFn, true, HASH_NO_LOCK);
return pInfo;
}
@ -149,8 +154,9 @@ static SScalableBf *getSBf(SUpdateInfo *pInfo, TSKEY ts) {
return res;
}
bool updateInfoIsUpdated(SUpdateInfo *pInfo, uint64_t tableId, TSKEY ts) {
int32_t res = TSDB_CODE_FAILED;
TSKEY* pMapMaxTs = taosHashGet(pInfo->pMap, &tableId, sizeof(uint64_t));
uint64_t index = ((uint64_t)tableId) % pInfo->numBuckets;
TSKEY maxTs = *(TSKEY *)taosArrayGet(pInfo->pTsBuckets, index);
if (ts < maxTs - pInfo->watermark) {
@ -167,7 +173,13 @@ bool updateInfoIsUpdated(SUpdateInfo *pInfo, tb_uid_t tableId, TSKEY ts) {
res = tScalableBfPut(pSBf, &ts, sizeof(TSKEY));
}
int32_t size = taosHashGetSize(pInfo->pMap);
if ( (!pMapMaxTs && size < DEFAULT_MAP_SIZE) || (pMapMaxTs && *pMapMaxTs < ts)) {
taosHashPut(pInfo->pMap, &tableId, sizeof(uint64_t), &ts, sizeof(TSKEY));
return false;
}
if ( !pMapMaxTs && maxTs < ts ) {
taosArraySet(pInfo->pTsBuckets, index, &ts);
return false;
}
@ -177,7 +189,7 @@ bool updateInfoIsUpdated(SUpdateInfo *pInfo, tb_uid_t tableId, TSKEY ts) {
} else if (res == TSDB_CODE_SUCCESS) {
return false;
}
qDebug("===stream===bucket:%d, tableId:%" PRIu64 ", maxTs:" PRIu64 ", maxMapTs:" PRIu64 ", ts:%" PRIu64, index, tableId, maxTs, *pMapMaxTs, ts);
// check from tsdb api
return true;
}
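A rough sketch of the update-check order after this change (editor's reading, not authoritative):
// 1. pMap[tableId] keeps the exact last ts per table while the map is below DEFAULT_MAP_SIZE
// 2. otherwise the coarse per-bucket max in pTsBuckets[tableId % numBuckets] is consulted
// 3. the scalable Bloom filter then separates "definitely new" from "maybe already seen"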

View File

@ -257,6 +257,22 @@ int32_t syncNodeLeaderTransfer(SSyncNode* pSyncNode);
int32_t syncNodeLeaderTransferTo(SSyncNode* pSyncNode, SNodeInfo newLeader);
int32_t syncDoLeaderTransfer(SSyncNode* ths, SRpcMsg* pRpcMsg, SSyncRaftEntry* pEntry);
// trace log
void syncLogSendRequestVote(SSyncNode* pSyncNode, const SyncRequestVote* pMsg, const char* s);
void syncLogRecvRequestVote(SSyncNode* pSyncNode, const SyncRequestVote* pMsg, const char* s);
void syncLogSendRequestVoteReply(SSyncNode* pSyncNode, const SyncRequestVoteReply* pMsg, const char* s);
void syncLogRecvRequestVoteReply(SSyncNode* pSyncNode, const SyncRequestVoteReply* pMsg, const char* s);
void syncLogSendAppendEntries(SSyncNode* pSyncNode, const SyncAppendEntries* pMsg, const char* s);
void syncLogRecvAppendEntries(SSyncNode* pSyncNode, const SyncAppendEntries* pMsg, const char* s);
void syncLogSendAppendEntriesBatch(SSyncNode* pSyncNode, const SyncAppendEntriesBatch* pMsg, const char* s);
void syncLogRecvAppendEntriesBatch(SSyncNode* pSyncNode, const SyncAppendEntriesBatch* pMsg, const char* s);
void syncLogSendAppendEntriesReply(SSyncNode* pSyncNode, const SyncAppendEntriesReply* pMsg, const char* s);
void syncLogRecvAppendEntriesReply(SSyncNode* pSyncNode, const SyncAppendEntriesReply* pMsg, const char* s);
// for debug --------------
void syncNodePrint(SSyncNode* pObj);
void syncNodePrint2(char* s, SSyncNode* pObj);

View File

@ -92,13 +92,10 @@
int32_t syncNodeOnAppendEntriesCb(SSyncNode* ths, SyncAppendEntries* pMsg) {
int32_t ret = 0;
// if already drop replica, do not process
if (!syncNodeInRaftGroup(ths, &(pMsg->srcId)) && !ths->pRaftCfg->isStandBy) {
syncNodeEventLog(ths, "recv sync-append-entries, maybe replica already dropped");
return ret;
syncLogRecvAppendEntries(ths, pMsg, "maybe replica already dropped");
return -1;
}
// maybe update term
@ -114,17 +111,12 @@ int32_t syncNodeOnAppendEntriesCb(SSyncNode* ths, SyncAppendEntries* pMsg) {
}
ASSERT(pMsg->dataLen >= 0);
// return to follower state
if (pMsg->term == ths->pRaftStore->currentTerm && ths->state == TAOS_SYNC_STATE_CANDIDATE) {
syncLogRecvAppendEntries(ths, pMsg, "candidate to follower");
syncNodeBecomeFollower(ths, "from candidate by append entries");
return -1; // ret or reply?
}
SyncTerm localPreLogTerm = 0;
if (pMsg->prevLogIndex >= SYNC_INDEX_BEGIN && pMsg->prevLogIndex <= ths->pLogStore->getLastIndex(ths->pLogStore)) {
@ -148,13 +140,7 @@ int32_t syncNodeOnAppendEntriesCb(SSyncNode* ths, SyncAppendEntries* pMsg) {
// reject request
if ((pMsg->term < ths->pRaftStore->currentTerm) ||
((pMsg->term == ths->pRaftStore->currentTerm) && (ths->state == TAOS_SYNC_STATE_FOLLOWER) && !logOK)) {
syncLogRecvAppendEntries(ths, pMsg, "reject");
SyncAppendEntriesReply* pReply = syncAppendEntriesReplyBuild(ths->vgId);
pReply->srcId = ths->myRaftId;
@ -164,14 +150,7 @@ int32_t syncNodeOnAppendEntriesCb(SSyncNode* ths, SyncAppendEntries* pMsg) {
pReply->matchIndex = SYNC_INDEX_INVALID;
// msg event log
syncLogSendAppendEntriesReply(ths, pReply, "");
SRpcMsg rpcMsg;
syncAppendEntriesReply2RpcMsg(pReply, &rpcMsg);
@ -192,13 +171,7 @@ int32_t syncNodeOnAppendEntriesCb(SSyncNode* ths, SyncAppendEntries* pMsg) {
// has entries in SyncAppendEntries msg
bool hasAppendEntries = pMsg->dataLen > 0;
syncLogRecvAppendEntries(ths, pMsg, "accept");
if (hasExtraEntries && hasAppendEntries) {
// not conflict by default
@ -348,14 +321,7 @@ int32_t syncNodeOnAppendEntriesCb(SSyncNode* ths, SyncAppendEntries* pMsg) {
}
// msg event log
syncLogSendAppendEntriesReply(ths, pReply, "");
SRpcMsg rpcMsg;
syncAppendEntriesReply2RpcMsg(pReply, &rpcMsg);
@ -507,13 +473,13 @@ static bool syncNodeOnAppendEntriesBatchLogOK(SSyncNode* pSyncNode, SyncAppendEn
SyncIndex myLastIndex = syncNodeGetLastIndex(pSyncNode);
if (pMsg->prevLogIndex > myLastIndex) {
sDebug("vgId:%d sync log not ok, preindex:%" PRId64, pSyncNode->vgId, pMsg->prevLogIndex);
sDebug("vgId:%d, sync log not ok, preindex:%" PRId64, pSyncNode->vgId, pMsg->prevLogIndex);
return false;
}
SyncTerm myPreLogTerm = syncNodeGetPreTerm(pSyncNode, pMsg->prevLogIndex + 1);
if (myPreLogTerm == SYNC_TERM_INVALID) {
sDebug("vgId:%d sync log not ok2, preindex:%" PRId64, pSyncNode->vgId, pMsg->prevLogIndex);
sDebug("vgId:%d, sync log not ok2, preindex:%" PRId64, pSyncNode->vgId, pMsg->prevLogIndex);
return false;
}
@ -521,7 +487,7 @@ static bool syncNodeOnAppendEntriesBatchLogOK(SSyncNode* pSyncNode, SyncAppendEn
return true;
}
sDebug("vgId:%d sync log not ok3, preindex:%" PRId64, pSyncNode->vgId, pMsg->prevLogIndex);
sDebug("vgId:%d, sync log not ok3, preindex:%" PRId64, pSyncNode->vgId, pMsg->prevLogIndex);
return false;
}
@ -534,13 +500,13 @@ static bool syncNodeOnAppendEntriesLogOK(SSyncNode* pSyncNode, SyncAppendEntries
SyncIndex myLastIndex = syncNodeGetLastIndex(pSyncNode);
if (pMsg->prevLogIndex > myLastIndex) {
sDebug("vgId:%d sync log not ok, preindex:%" PRId64, pSyncNode->vgId, pMsg->prevLogIndex);
sDebug("vgId:%d, sync log not ok, preindex:%" PRId64, pSyncNode->vgId, pMsg->prevLogIndex);
return false;
}
SyncTerm myPreLogTerm = syncNodeGetPreTerm(pSyncNode, pMsg->prevLogIndex + 1);
if (myPreLogTerm == SYNC_TERM_INVALID) {
sDebug("vgId:%d sync log not ok2, preindex:%" PRId64, pSyncNode->vgId, pMsg->prevLogIndex);
sDebug("vgId:%d, sync log not ok2, preindex:%" PRId64, pSyncNode->vgId, pMsg->prevLogIndex);
return false;
}
@ -548,7 +514,7 @@ static bool syncNodeOnAppendEntriesLogOK(SSyncNode* pSyncNode, SyncAppendEntries
return true;
}
sDebug("vgId:%d sync log not ok3, preindex:%" PRId64, pSyncNode->vgId, pMsg->prevLogIndex);
sDebug("vgId:%d, sync log not ok3, preindex:%" PRId64, pSyncNode->vgId, pMsg->prevLogIndex);
return false;
}
@ -558,8 +524,8 @@ int32_t syncNodeOnAppendEntriesSnapshot2Cb(SSyncNode* ths, SyncAppendEntriesBatc
// if already drop replica, do not process
if (!syncNodeInRaftGroup(ths, &(pMsg->srcId)) && !ths->pRaftCfg->isStandBy) {
syncNodeEventLog(ths, "recv sync-append-entries-batch, maybe replica already dropped");
return ret;
syncLogRecvAppendEntriesBatch(ths, pMsg, "maybe replica already dropped");
return -1;
}
// maybe update term
@ -582,15 +548,13 @@ int32_t syncNodeOnAppendEntriesSnapshot2Cb(SSyncNode* ths, SyncAppendEntriesBatc
do {
bool condition = pMsg->term == ths->pRaftStore->currentTerm && ths->state == TAOS_SYNC_STATE_CANDIDATE;
if (condition) {
syncNodeEventLog(ths, "recv sync-append-entries-batch, candidate to follower");
syncLogRecvAppendEntriesBatch(ths, pMsg, "candidate to follower");
syncNodeBecomeFollower(ths, "from candidate by append entries");
// do not reply?
return ret;
return 0; // do not reply?
}
} while (0);
// fake match2
// fake match
//
// condition1:
// preIndex <= my commit index
@ -602,14 +566,7 @@ int32_t syncNodeOnAppendEntriesSnapshot2Cb(SSyncNode* ths, SyncAppendEntriesBatc
bool condition = (pMsg->term == ths->pRaftStore->currentTerm) && (ths->state == TAOS_SYNC_STATE_FOLLOWER) &&
(pMsg->prevLogIndex <= ths->commitIndex);
if (condition) {
do {
char logBuf[128];
snprintf(logBuf, sizeof(logBuf),
"recv sync-append-entries-batch, fake match2, {pre-index:%" PRId64 ", pre-term:%" PRIu64
", datalen:%d, datacount:%d}",
pMsg->prevLogIndex, pMsg->prevLogTerm, pMsg->dataLen, pMsg->dataCount);
syncNodeEventLog(ths, logBuf);
} while (0);
syncLogRecvAppendEntriesBatch(ths, pMsg, "fake match");
SyncIndex matchIndex = ths->commitIndex;
bool hasAppendEntries = pMsg->dataLen > 0;
@ -662,14 +619,7 @@ int32_t syncNodeOnAppendEntriesSnapshot2Cb(SSyncNode* ths, SyncAppendEntriesBatc
pReply->matchIndex = matchIndex;
// msg event log
do {
char host[128];
uint16_t port;
syncUtilU642Addr(pReply->destId.addr, host, sizeof(host), &port);
sDebug("vgId:%d, send sync-append-entries-reply to %s:%d, {term:%" PRIu64 ", pterm:%" PRIu64
", success:%d, match-index:%" PRId64 "}",
ths->vgId, host, port, pReply->term, pReply->privateTerm, pReply->success, pReply->matchIndex);
} while (0);
syncLogSendAppendEntriesReply(ths, pReply, "");
// send response
SRpcMsg rpcMsg;
@ -702,14 +652,7 @@ int32_t syncNodeOnAppendEntriesSnapshot2Cb(SSyncNode* ths, SyncAppendEntriesBatc
bool condition = condition1 || condition2;
if (condition) {
do {
char logBuf[128];
snprintf(logBuf, sizeof(logBuf),
"recv sync-append-entries-batch, not match, {pre-index:%" PRId64 ", pre-term:%" PRIu64
", datalen:%d, datacount:%d}",
pMsg->prevLogIndex, pMsg->prevLogTerm, pMsg->dataLen, pMsg->dataCount);
syncNodeEventLog(ths, logBuf);
} while (0);
syncLogRecvAppendEntriesBatch(ths, pMsg, "not match");
// maybe update commit index by snapshot
syncNodeMaybeUpdateCommitBySnapshot(ths);
@ -724,14 +667,7 @@ int32_t syncNodeOnAppendEntriesSnapshot2Cb(SSyncNode* ths, SyncAppendEntriesBatc
pReply->matchIndex = ths->commitIndex;
// msg event log
do {
char host[128];
uint16_t port;
syncUtilU642Addr(pReply->destId.addr, host, sizeof(host), &port);
sDebug("vgId:%d, send sync-append-entries-reply to %s:%d, {term:%" PRIu64 ", pterm:%" PRIu64
", success:%d, match-index:%" PRId64 "}",
ths->vgId, host, port, pReply->term, pReply->privateTerm, pReply->success, pReply->matchIndex);
} while (0);
syncLogSendAppendEntriesReply(ths, pReply, "");
// send response
SRpcMsg rpcMsg;
@ -762,14 +698,7 @@ int32_t syncNodeOnAppendEntriesSnapshot2Cb(SSyncNode* ths, SyncAppendEntriesBatc
bool hasAppendEntries = pMsg->dataLen > 0;
SOffsetAndContLen* metaTableArr = syncAppendEntriesBatchMetaTableArray(pMsg);
do {
char logBuf[128];
snprintf(logBuf, sizeof(logBuf),
"recv sync-append-entries-batch, match, {pre-index:%" PRId64 ", pre-term:%" PRIu64
", datalen:%d, datacount:%d}",
pMsg->prevLogIndex, pMsg->prevLogTerm, pMsg->dataLen, pMsg->dataCount);
syncNodeEventLog(ths, logBuf);
} while (0);
syncLogRecvAppendEntriesBatch(ths, pMsg, "really match");
if (hasExtraEntries) {
// make log same, rollback deleted entries
@ -808,14 +737,7 @@ int32_t syncNodeOnAppendEntriesSnapshot2Cb(SSyncNode* ths, SyncAppendEntriesBatc
pReply->matchIndex = hasAppendEntries ? pMsg->prevLogIndex + pMsg->dataCount : pMsg->prevLogIndex;
// msg event log
do {
char host[128];
uint16_t port;
syncUtilU642Addr(pReply->destId.addr, host, sizeof(host), &port);
sDebug("vgId:%d, send sync-append-entries-reply to %s:%d, {term:%" PRIu64 ", pterm:%" PRIu64
", success:%d, match-index:%" PRId64 "}",
ths->vgId, host, port, pReply->term, pReply->privateTerm, pReply->success, pReply->matchIndex);
} while (0);
syncLogSendAppendEntriesReply(ths, pReply, "");
// send response
SRpcMsg rpcMsg;
@ -866,13 +788,10 @@ int32_t syncNodeOnAppendEntriesSnapshotCb(SSyncNode* ths, SyncAppendEntries* pMs
int32_t ret = 0;
int32_t code = 0;
// print log
syncAppendEntriesLog2("==syncNodeOnAppendEntriesSnapshotCb==", pMsg);
// if already drop replica, do not process
if (!syncNodeInRaftGroup(ths, &(pMsg->srcId)) && !ths->pRaftCfg->isStandBy) {
syncNodeEventLog(ths, "recv sync-append-entries, maybe replica already dropped");
return ret;
syncLogRecvAppendEntries(ths, pMsg, "maybe replica already dropped");
return -1;
}
// maybe update term
@ -895,11 +814,9 @@ int32_t syncNodeOnAppendEntriesSnapshotCb(SSyncNode* ths, SyncAppendEntries* pMs
do {
bool condition = pMsg->term == ths->pRaftStore->currentTerm && ths->state == TAOS_SYNC_STATE_CANDIDATE;
if (condition) {
syncNodeEventLog(ths, "recv sync-append-entries, candidate to follower");
syncLogRecvAppendEntries(ths, pMsg, "candidate to follower");
syncNodeBecomeFollower(ths, "from candidate by append entries");
// do not reply?
return ret;
return 0; // do not reply?
}
} while (0);
@ -962,7 +879,7 @@ int32_t syncNodeOnAppendEntriesSnapshotCb(SSyncNode* ths, SyncAppendEntries* pMs
} while (0);
#endif
// fake match2
// fake match
//
// condition1:
// preIndex <= my commit index
@ -975,13 +892,7 @@ int32_t syncNodeOnAppendEntriesSnapshotCb(SSyncNode* ths, SyncAppendEntries* pMs
bool condition = (pMsg->term == ths->pRaftStore->currentTerm) && (ths->state == TAOS_SYNC_STATE_FOLLOWER) &&
(pMsg->prevLogIndex <= ths->commitIndex);
if (condition) {
do {
char logBuf[128];
snprintf(logBuf, sizeof(logBuf),
"recv sync-append-entries, fake match2, pre-index:%" PRId64 ", pre-term:%" PRIu64 ", datalen:%d",
pMsg->prevLogIndex, pMsg->prevLogTerm, pMsg->dataLen);
syncNodeEventLog(ths, logBuf);
} while (0);
syncLogRecvAppendEntries(ths, pMsg, "fake match");
SyncIndex matchIndex = ths->commitIndex;
bool hasAppendEntries = pMsg->dataLen > 0;
@ -1027,14 +938,7 @@ int32_t syncNodeOnAppendEntriesSnapshotCb(SSyncNode* ths, SyncAppendEntries* pMs
pReply->matchIndex = matchIndex;
// msg event log
do {
char host[128];
uint16_t port;
syncUtilU642Addr(pReply->destId.addr, host, sizeof(host), &port);
sDebug("vgId:%d, send sync-append-entries-reply to %s:%d, {term:%" PRIu64 ", pterm:%" PRIu64
", success:%d, match-index:%" PRId64 "}",
ths->vgId, host, port, pReply->term, pReply->privateTerm, pReply->success, pReply->matchIndex);
} while (0);
syncLogSendAppendEntriesReply(ths, pReply, "");
// send response
SRpcMsg rpcMsg;
@ -1067,11 +971,7 @@ int32_t syncNodeOnAppendEntriesSnapshotCb(SSyncNode* ths, SyncAppendEntries* pMs
bool condition = condition1 || condition2;
if (condition) {
char logBuf[128];
snprintf(logBuf, sizeof(logBuf),
"recv sync-append-entries, not match, pre-index:%" PRId64 ", pre-term:%" PRIu64 ", datalen:%d",
pMsg->prevLogIndex, pMsg->prevLogTerm, pMsg->dataLen);
syncNodeEventLog(ths, logBuf);
syncLogRecvAppendEntries(ths, pMsg, "not match");
// prepare response msg
SyncAppendEntriesReply* pReply = syncAppendEntriesReplyBuild(ths->vgId);
@ -1083,14 +983,7 @@ int32_t syncNodeOnAppendEntriesSnapshotCb(SSyncNode* ths, SyncAppendEntries* pMs
pReply->matchIndex = SYNC_INDEX_INVALID;
// msg event log
do {
char host[128];
uint16_t port;
syncUtilU642Addr(pReply->destId.addr, host, sizeof(host), &port);
sDebug("vgId:%d, send sync-append-entries-reply to %s:%d, {term:%" PRIu64 ", pterm:%" PRIu64
", success:%d, match-index:%" PRId64 "}",
ths->vgId, host, port, pReply->term, pReply->privateTerm, pReply->success, pReply->matchIndex);
} while (0);
syncLogSendAppendEntriesReply(ths, pReply, "");
// send response
SRpcMsg rpcMsg;
@ -1120,11 +1013,7 @@ int32_t syncNodeOnAppendEntriesSnapshotCb(SSyncNode* ths, SyncAppendEntries* pMs
// has entries in SyncAppendEntries msg
bool hasAppendEntries = pMsg->dataLen > 0;
char logBuf[128];
snprintf(logBuf, sizeof(logBuf),
"recv sync-append-entries, match, pre-index:%" PRId64 ", pre-term:%" PRIu64 ", datalen:%d",
pMsg->prevLogIndex, pMsg->prevLogTerm, pMsg->dataLen);
syncNodeEventLog(ths, logBuf);
syncLogRecvAppendEntries(ths, pMsg, "really match");
if (hasExtraEntries) {
// make log same, rollback deleted entries
@ -1159,14 +1048,7 @@ int32_t syncNodeOnAppendEntriesSnapshotCb(SSyncNode* ths, SyncAppendEntries* pMs
pReply->matchIndex = hasAppendEntries ? pMsg->prevLogIndex + 1 : pMsg->prevLogIndex;
// msg event log
do {
char host[128];
uint16_t port;
syncUtilU642Addr(pReply->destId.addr, host, sizeof(host), &port);
sDebug("vgId:%d, send sync-append-entries-reply to %s:%d, {term:%" PRIu64 ", pterm:%" PRIu64
", success:%d, match-index:%" PRId64 "}",
ths->vgId, host, port, pReply->term, pReply->privateTerm, pReply->success, pReply->matchIndex);
} while (0);
syncLogSendAppendEntriesReply(ths, pReply, "");
// send response
SRpcMsg rpcMsg;

View File

@ -40,44 +40,33 @@
int32_t syncNodeOnAppendEntriesReplyCb(SSyncNode* ths, SyncAppendEntriesReply* pMsg) {
int32_t ret = 0;
// print log
syncAppendEntriesReplyLog2("==syncNodeOnAppendEntriesReplyCb==", pMsg);
// if already drop replica, do not process
if (!syncNodeInRaftGroup(ths, &(pMsg->srcId)) && !ths->pRaftCfg->isStandBy) {
syncNodeEventLog(ths, "recv sync-append-entries-reply, maybe replica already dropped");
return 0;
syncLogRecvAppendEntriesReply(ths, pMsg, "maybe replica already dropped");
return -1;
}
// drop stale response
if (pMsg->term < ths->pRaftStore->currentTerm) {
char logBuf[128];
snprintf(logBuf, sizeof(logBuf), "recv sync-append-entries-reply, recv-term:%" PRIu64 ", drop stale response",
pMsg->term);
syncNodeEventLog(ths, logBuf);
syncLogRecvAppendEntriesReply(ths, pMsg, "drop stale response");
return 0;
}
if (gRaftDetailLog) {
syncNodeEventLog(ths, "recv sync-append-entries-reply, before");
}
syncIndexMgrLog2("==syncNodeOnAppendEntriesReplyCb== before pNextIndex", ths->pNextIndex);
syncIndexMgrLog2("==syncNodeOnAppendEntriesReplyCb== before pMatchIndex", ths->pMatchIndex);
// no need this code, because if I receive reply.term, then I must have sent for that term.
// if (pMsg->term > ths->pRaftStore->currentTerm) {
// syncNodeUpdateTerm(ths, pMsg->term);
// }
if (pMsg->term > ths->pRaftStore->currentTerm) {
char logBuf[128];
snprintf(logBuf, sizeof(logBuf), "recv sync-append-entries-reply, error term, recv-term:%" PRIu64, pMsg->term);
syncNodeErrorLog(ths, logBuf);
syncLogRecvAppendEntriesReply(ths, pMsg, "error term");
return -1;
}
ASSERT(pMsg->term == ths->pRaftStore->currentTerm);
SyncIndex beforeNextIndex = syncIndexMgrGetIndex(ths->pNextIndex, &(pMsg->srcId));
SyncIndex beforeMatchIndex = syncIndexMgrGetIndex(ths->pMatchIndex, &(pMsg->srcId));
if (pMsg->success) {
// nextIndex' = [nextIndex EXCEPT ![i][j] = m.mmatchIndex + 1]
syncIndexMgrSetIndex(ths->pNextIndex, &(pMsg->srcId), pMsg->matchIndex + 1);
@ -100,13 +89,16 @@ int32_t syncNodeOnAppendEntriesReplyCb(SSyncNode* ths, SyncAppendEntriesReply* p
syncIndexMgrSetIndex(ths->pNextIndex, &(pMsg->srcId), nextIndex);
}
if (gRaftDetailLog) {
syncNodeEventLog(ths, "recv sync-append-entries-reply, after");
}
syncIndexMgrLog2("==syncNodeOnAppendEntriesReplyCb== after pNextIndex", ths->pNextIndex);
syncIndexMgrLog2("==syncNodeOnAppendEntriesReplyCb== after pMatchIndex", ths->pMatchIndex);
SyncIndex afterNextIndex = syncIndexMgrGetIndex(ths->pNextIndex, &(pMsg->srcId));
SyncIndex afterMatchIndex = syncIndexMgrGetIndex(ths->pMatchIndex, &(pMsg->srcId));
do {
char logBuf[256];
snprintf(logBuf, sizeof(logBuf), "before next:%ld, match:%ld, after next:%ld, match:%ld", beforeNextIndex,
beforeMatchIndex, afterNextIndex, afterMatchIndex);
syncLogRecvAppendEntriesReply(ths, pMsg, logBuf);
} while (0);
return ret;
return 0;
}
// only start once
@ -147,40 +139,29 @@ static void syncNodeStartSnapshotOnce(SSyncNode* ths, SyncIndex beginIndex, Sync
int32_t syncNodeOnAppendEntriesReplySnapshot2Cb(SSyncNode* ths, SyncAppendEntriesReply* pMsg) {
int32_t ret = 0;
// print log
do {
char logBuf[256];
snprintf(logBuf, sizeof(logBuf), "recv sync-append-entries-reply, term:%lu, match:%ld, success:%d", pMsg->term,
pMsg->matchIndex, pMsg->success);
syncNodeEventLog(ths, logBuf);
} while (0);
// if already drop replica, do not process
if (!syncNodeInRaftGroup(ths, &(pMsg->srcId)) && !ths->pRaftCfg->isStandBy) {
syncNodeEventLog(ths, "recv sync-append-entries-reply, maybe replica already dropped");
syncLogRecvAppendEntriesReply(ths, pMsg, "maybe replica already dropped");
return -1;
}
// drop stale response
if (pMsg->term < ths->pRaftStore->currentTerm) {
char logBuf[128];
snprintf(logBuf, sizeof(logBuf), "recv sync-append-entries-reply, recv-term:%" PRIu64 ", drop stale response",
pMsg->term);
syncNodeEventLog(ths, logBuf);
return -1;
syncLogRecvAppendEntriesReply(ths, pMsg, "drop stale response");
return 0;
}
// error term
if (pMsg->term > ths->pRaftStore->currentTerm) {
char logBuf[128];
snprintf(logBuf, sizeof(logBuf), "recv sync-append-entries-reply, error term, recv-term:%" PRIu64, pMsg->term);
syncNodeErrorLog(ths, logBuf);
syncLogRecvAppendEntriesReply(ths, pMsg, "error term");
return -1;
}
ASSERT(pMsg->term == ths->pRaftStore->currentTerm);
SyncIndex beforeNextIndex = syncIndexMgrGetIndex(ths->pNextIndex, &(pMsg->srcId));
SyncIndex beforeMatchIndex = syncIndexMgrGetIndex(ths->pMatchIndex, &(pMsg->srcId));
if (pMsg->success) {
SyncIndex newNextIndex = pMsg->matchIndex + 1;
SyncIndex newMatchIndex = pMsg->matchIndex;
@ -293,50 +274,48 @@ int32_t syncNodeOnAppendEntriesReplySnapshot2Cb(SSyncNode* ths, SyncAppendEntrie
} while (0);
}
SyncIndex afterNextIndex = syncIndexMgrGetIndex(ths->pNextIndex, &(pMsg->srcId));
SyncIndex afterMatchIndex = syncIndexMgrGetIndex(ths->pMatchIndex, &(pMsg->srcId));
do {
char logBuf[256];
snprintf(logBuf, sizeof(logBuf), "before next:%ld, match:%ld, after next:%ld, match:%ld", beforeNextIndex,
beforeMatchIndex, afterNextIndex, afterMatchIndex);
syncLogRecvAppendEntriesReply(ths, pMsg, logBuf);
} while (0);
return 0;
}
int32_t syncNodeOnAppendEntriesReplySnapshotCb(SSyncNode* ths, SyncAppendEntriesReply* pMsg) {
int32_t ret = 0;
// print log
syncAppendEntriesReplyLog2("==syncNodeOnAppendEntriesReplySnapshotCb==", pMsg);
// if already drop replica, do not process
if (!syncNodeInRaftGroup(ths, &(pMsg->srcId)) && !ths->pRaftCfg->isStandBy) {
syncNodeEventLog(ths, "recv sync-append-entries-reply, maybe replica already dropped");
return 0;
syncLogRecvAppendEntriesReply(ths, pMsg, "maybe replica already dropped");
return -1;
}
// drop stale response
if (pMsg->term < ths->pRaftStore->currentTerm) {
char logBuf[128];
snprintf(logBuf, sizeof(logBuf), "recv sync-append-entries-reply, recv-term:%" PRIu64 ", drop stale response",
pMsg->term);
syncNodeEventLog(ths, logBuf);
syncLogRecvAppendEntriesReply(ths, pMsg, "drop stale response");
return 0;
}
if (gRaftDetailLog) {
syncNodeEventLog(ths, "recv sync-append-entries-reply, before");
}
syncIndexMgrLog2("recv sync-append-entries-reply, before pNextIndex:", ths->pNextIndex);
syncIndexMgrLog2("recv sync-append-entries-reply, before pMatchIndex:", ths->pMatchIndex);
// no need this code, because if I receive reply.term, then I must have sent for that term.
// if (pMsg->term > ths->pRaftStore->currentTerm) {
// syncNodeUpdateTerm(ths, pMsg->term);
// }
if (pMsg->term > ths->pRaftStore->currentTerm) {
char logBuf[128];
snprintf(logBuf, sizeof(logBuf), "recv sync-append-entries-reply, error term, recv-term:%" PRIu64, pMsg->term);
syncNodeErrorLog(ths, logBuf);
syncLogRecvAppendEntriesReply(ths, pMsg, "error term");
return -1;
}
ASSERT(pMsg->term == ths->pRaftStore->currentTerm);
SyncIndex beforeNextIndex = syncIndexMgrGetIndex(ths->pNextIndex, &(pMsg->srcId));
SyncIndex beforeMatchIndex = syncIndexMgrGetIndex(ths->pMatchIndex, &(pMsg->srcId));
if (pMsg->success) {
// nextIndex' = [nextIndex EXCEPT ![i][j] = m.mmatchIndex + 1]
syncIndexMgrSetIndex(ths->pNextIndex, &(pMsg->srcId), pMsg->matchIndex + 1);
@ -404,11 +383,14 @@ int32_t syncNodeOnAppendEntriesReplySnapshotCb(SSyncNode* ths, SyncAppendEntries
}
}
if (gRaftDetailLog) {
syncNodeEventLog(ths, "recv sync-append-entries-reply, after");
}
syncIndexMgrLog2("recv sync-append-entries-reply, after pNextIndex:", ths->pNextIndex);
syncIndexMgrLog2("recv sync-append-entries-reply, after pMatchIndex:", ths->pMatchIndex);
SyncIndex afterNextIndex = syncIndexMgrGetIndex(ths->pNextIndex, &(pMsg->srcId));
SyncIndex afterMatchIndex = syncIndexMgrGetIndex(ths->pMatchIndex, &(pMsg->srcId));
do {
char logBuf[256];
snprintf(logBuf, sizeof(logBuf), "before next:%ld, match:%ld, after next:%ld, match:%ld", beforeNextIndex,
beforeMatchIndex, afterNextIndex, afterMatchIndex);
syncLogRecvAppendEntriesReply(ths, pMsg, logBuf);
} while (0);
return 0;
}
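
The reply callbacks above also switch from returning the stale `ret` variable to explicit status codes (-1 on dropped-replica/error-term paths, 0 otherwise), and they now log a single before/after delta of the peer's nextIndex/matchIndex instead of dumping the whole index manager. A compilable sketch of that delta line, with made-up values and `SyncIndex` as `int64_t` as in the real code:

```
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

typedef int64_t SyncIndex;

int main(void) {
  SyncIndex next = 5, match = 4;      /* per-peer state before the reply */
  SyncIndex beforeNext = next, beforeMatch = match;
  next  = 9;                          /* nextIndex  = pMsg->matchIndex + 1 */
  match = 8;                          /* matchIndex = pMsg->matchIndex     */
  printf("before next:%" PRId64 ", match:%" PRId64 ", after next:%" PRId64
         ", match:%" PRId64 "\n",
         beforeNext, beforeMatch, next, match);
  return 0;
}
```

Note that the new log lines in the diff format `SyncIndex` with `%ld`; `PRId64` as above would be the portable choice on 32-bit targets.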

View File

@ -71,6 +71,8 @@ int32_t syncNodeRequestVotePeersSnapshot(SSyncNode* pSyncNode) {
}
int32_t syncNodeElect(SSyncNode* pSyncNode) {
syncNodeEventLog(pSyncNode, "begin election");
int32_t ret = 0;
if (pSyncNode->state == TAOS_SYNC_STATE_FOLLOWER) {
syncNodeFollower2Candidate(pSyncNode);
@ -118,15 +120,7 @@ int32_t syncNodeElect(SSyncNode* pSyncNode) {
int32_t syncNodeRequestVote(SSyncNode* pSyncNode, const SRaftId* destRaftId, const SyncRequestVote* pMsg) {
int32_t ret = 0;
do {
char host[128];
uint16_t port;
syncUtilU642Addr(destRaftId->addr, host, sizeof(host), &port);
sDebug("vgId:%d, send sync-request-vote to %s:%d, {term:%" PRIu64 ", last-index:%" PRId64 ", last-term:%" PRIu64
"}",
pSyncNode->vgId, host, port, pMsg->term, pMsg->lastLogIndex, pMsg->lastLogTerm);
} while (0);
syncLogSendRequestVote(pSyncNode, pMsg, "");
SRpcMsg rpcMsg;
syncRequestVote2RpcMsg(pMsg, &rpcMsg);

View File

@ -79,8 +79,7 @@ SyncIndex syncIndexMgrGetIndex(SSyncIndexMgr *pSyncIndexMgr, const SRaftId *pRaf
}
}
syncNodeLog3("syncIndexMgrGetIndex", pSyncIndexMgr->pSyncNode);
ASSERT(0);
return SYNC_INDEX_INVALID;
}
cJSON *syncIndexMgr2Json(SSyncIndexMgr *pSyncIndexMgr) {
@ -126,7 +125,7 @@ cJSON *syncIndexMgr2Json(SSyncIndexMgr *pSyncIndexMgr) {
char *syncIndexMgr2Str(SSyncIndexMgr *pSyncIndexMgr) {
cJSON *pJson = syncIndexMgr2Json(pSyncIndexMgr);
char * serialized = cJSON_Print(pJson);
char *serialized = cJSON_Print(pJson);
cJSON_Delete(pJson);
return serialized;
}
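
A miss in `syncIndexMgrGetIndex` no longer aborts the process with `ASSERT(0)`; it returns the `SYNC_INDEX_INVALID` sentinel so callers can decide how to react. A small sketch of the sentinel-on-miss shape, with a simplified peer table standing in for the real index manager:

```
#include <stdint.h>
#include <stdio.h>

typedef int64_t SyncIndex;
#define SYNC_INDEX_INVALID ((SyncIndex)-1) /* assumed value; stand-in for the real macro */

typedef struct { uint64_t addr; SyncIndex index; } SPeer; /* simplified entry */

static SyncIndex getIndex(const SPeer* peers, int n, uint64_t addr) {
  for (int i = 0; i < n; ++i) {
    if (peers[i].addr == addr) return peers[i].index;
  }
  return SYNC_INDEX_INVALID; /* previously: ASSERT(0) */
}

int main(void) {
  SPeer peers[2] = {{1001, 7}, {1002, 9}};
  printf("%lld\n", (long long)getIndex(peers, 2, 9999)); /* prints -1 */
  return 0;
}
```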

View File

@ -559,10 +559,11 @@ void syncGetRetryEpSet(int64_t rid, SEpSet* pEpSet) {
snprintf(pEpSet->eps[i].fqdn, sizeof(pEpSet->eps[i].fqdn), "%s", (pSyncNode->pRaftCfg->cfg.nodeInfo)[i].nodeFqdn);
pEpSet->eps[i].port = (pSyncNode->pRaftCfg->cfg.nodeInfo)[i].nodePort;
(pEpSet->numOfEps)++;
sInfo("vgId:%d sync get retry epset: index:%d %s:%d", pSyncNode->vgId, i, pEpSet->eps[i].fqdn, pEpSet->eps[i].port);
sInfo("vgId:%d, sync get retry epset: index:%d %s:%d", pSyncNode->vgId, i, pEpSet->eps[i].fqdn,
pEpSet->eps[i].port);
}
pEpSet->inUse = (pSyncNode->pRaftCfg->cfg.myIndex + 1) % pEpSet->numOfEps;
sInfo("vgId:%d sync get retry epset in-use:%d", pSyncNode->vgId, pEpSet->inUse);
sInfo("vgId:%d, sync get retry epset in-use:%d", pSyncNode->vgId, pEpSet->inUse);
taosReleaseRef(tsNodeRefId, pSyncNode->rid);
}
@ -999,7 +1000,18 @@ SSyncNode* syncNodeOpen(const SSyncInfo* pOldSyncInfo) {
// init TLA+ log vars
pSyncNode->pLogStore = logStoreCreate(pSyncNode);
ASSERT(pSyncNode->pLogStore != NULL);
pSyncNode->commitIndex = SYNC_INDEX_INVALID;
SyncIndex commitIndex = SYNC_INDEX_INVALID;
if (pSyncNode->pFsm != NULL && pSyncNode->pFsm->FpGetSnapshotInfo != NULL) {
SSnapshot snapshot = {0};
int32_t code = pSyncNode->pFsm->FpGetSnapshotInfo(pSyncNode->pFsm, &snapshot);
ASSERT(code == 0);
if (snapshot.lastApplyIndex > commitIndex) {
commitIndex = snapshot.lastApplyIndex;
syncNodeEventLog(pSyncNode, "reset commit index by snapshot");
}
}
pSyncNode->commitIndex = commitIndex;
// timer ms init
pSyncNode->pingBaseLine = PING_TIMER_MS;
@ -1553,7 +1565,8 @@ void syncNodeEventLog(const SSyncNode* pSyncNode, char* str) {
snprintf(logBuf, sizeof(logBuf), "%s", str);
}
// sDebug("%s", logBuf);
sInfo("%s", logBuf);
// sInfo("%s", logBuf);
sTrace("%s", logBuf);
} else {
int len = 256 + userStrLen;
@ -1575,7 +1588,8 @@ void syncNodeEventLog(const SSyncNode* pSyncNode, char* str) {
snprintf(s, len, "%s", str);
}
// sDebug("%s", s);
sInfo("%s", s);
// sInfo("%s", s);
sTrace("%s", s);
taosMemoryFree(s);
}
@ -2061,21 +2075,21 @@ void syncNodeFollower2Candidate(SSyncNode* pSyncNode) {
ASSERT(pSyncNode->state == TAOS_SYNC_STATE_FOLLOWER);
pSyncNode->state = TAOS_SYNC_STATE_CANDIDATE;
syncNodeLog2("==state change syncNodeFollower2Candidate==", pSyncNode);
syncNodeEventLog(pSyncNode, "follower to candidate");
}
void syncNodeLeader2Follower(SSyncNode* pSyncNode) {
ASSERT(pSyncNode->state == TAOS_SYNC_STATE_LEADER);
syncNodeBecomeFollower(pSyncNode, "leader to follower");
syncNodeLog2("==state change syncNodeLeader2Follower==", pSyncNode);
syncNodeEventLog(pSyncNode, "leader to follower");
}
void syncNodeCandidate2Follower(SSyncNode* pSyncNode) {
ASSERT(pSyncNode->state == TAOS_SYNC_STATE_CANDIDATE);
syncNodeBecomeFollower(pSyncNode, "candidate to follower");
syncNodeLog2("==state change syncNodeCandidate2Follower==", pSyncNode);
syncNodeEventLog(pSyncNode, "candidate to follower");
}
// raft vote --------------
@ -2912,4 +2926,126 @@ bool syncNodeCanChange(SSyncNode* pSyncNode) {
}
return true;
}
}
void syncLogSendRequestVote(SSyncNode* pSyncNode, const SyncRequestVote* pMsg, const char* s) {
char host[64];
uint16_t port;
syncUtilU642Addr(pMsg->destId.addr, host, sizeof(host), &port);
char logBuf[256];
snprintf(logBuf, sizeof(logBuf),
"send sync-request-vote to %s:%d {term:%" PRIu64 ", lindex:%" PRId64 ", lterm:%" PRIu64 "}, %s", host, port,
pMsg->term, pMsg->lastLogIndex, pMsg->lastLogTerm, s);
syncNodeEventLog(pSyncNode, logBuf);
}
void syncLogRecvRequestVote(SSyncNode* pSyncNode, const SyncRequestVote* pMsg, const char* s) {
char logBuf[256];
char host[64];
uint16_t port;
syncUtilU642Addr(pMsg->srcId.addr, host, sizeof(host), &port);
snprintf(logBuf, sizeof(logBuf),
"recv sync-request-vote from %s:%d, {term:%" PRIu64 ", lindex:%" PRId64 ", lterm:%" PRIu64 "}, %s", host,
port, pMsg->term, pMsg->lastLogIndex, pMsg->lastLogTerm, s);
syncNodeEventLog(pSyncNode, logBuf);
}
void syncLogSendRequestVoteReply(SSyncNode* pSyncNode, const SyncRequestVoteReply* pMsg, const char* s) {
char host[64];
uint16_t port;
syncUtilU642Addr(pMsg->destId.addr, host, sizeof(host), &port);
char logBuf[256];
snprintf(logBuf, sizeof(logBuf), "send sync-request-vote-reply to %s:%d {term:%" PRIu64 ", grant:%d}, %s", host, port,
pMsg->term, pMsg->voteGranted, s);
syncNodeEventLog(pSyncNode, logBuf);
}
void syncLogRecvRequestVoteReply(SSyncNode* pSyncNode, const SyncRequestVoteReply* pMsg, const char* s) {
char host[64];
uint16_t port;
syncUtilU642Addr(pMsg->srcId.addr, host, sizeof(host), &port);
char logBuf[256];
snprintf(logBuf, sizeof(logBuf), "recv sync-request-vote-reply from %s:%d {term:%" PRIu64 ", grant:%d}, %s", host,
port, pMsg->term, pMsg->voteGranted, s);
syncNodeEventLog(pSyncNode, logBuf);
}
void syncLogSendAppendEntries(SSyncNode* pSyncNode, const SyncAppendEntries* pMsg, const char* s) {
char host[64];
uint16_t port;
syncUtilU642Addr(pMsg->destId.addr, host, sizeof(host), &port);
char logBuf[256];
snprintf(logBuf, sizeof(logBuf),
"send sync-append-entries to %s:%d, {term:%" PRIu64 ", pre-index:%" PRId64 ", pre-term:%" PRIu64
", pterm:%" PRIu64 ", commit:%" PRId64
", "
"datalen:%d}, %s",
host, port, pMsg->term, pMsg->prevLogIndex, pMsg->prevLogTerm, pMsg->privateTerm, pMsg->commitIndex,
pMsg->dataLen, s);
syncNodeEventLog(pSyncNode, logBuf);
}
void syncLogRecvAppendEntries(SSyncNode* pSyncNode, const SyncAppendEntries* pMsg, const char* s) {
char host[64];
uint16_t port;
syncUtilU642Addr(pMsg->srcId.addr, host, sizeof(host), &port);
char logBuf[256];
snprintf(logBuf, sizeof(logBuf),
"recv sync-append-entries from %s:%d {term:%" PRIu64 ", pre-index:%" PRIu64 ", pre-term:%" PRIu64
", commit:%" PRIu64 ", pterm:%" PRIu64
", "
"datalen:%d}, %s",
host, port, pMsg->term, pMsg->prevLogIndex, pMsg->prevLogTerm, pMsg->commitIndex, pMsg->privateTerm,
pMsg->dataLen, s);
syncNodeEventLog(pSyncNode, logBuf);
}
void syncLogSendAppendEntriesBatch(SSyncNode* pSyncNode, const SyncAppendEntriesBatch* pMsg, const char* s) {
char host[64];
uint16_t port;
syncUtilU642Addr(pMsg->destId.addr, host, sizeof(host), &port);
char logBuf[256];
snprintf(logBuf, sizeof(logBuf),
"send sync-append-entries-batch to %s:%d, {term:%" PRIu64 ", pre-index:%" PRId64 ", pre-term:%" PRIu64
", pterm:%" PRIu64 ", commit:%" PRId64 ", datalen:%d, count:%d}, %s",
host, port, pMsg->term, pMsg->prevLogIndex, pMsg->prevLogTerm, pMsg->privateTerm, pMsg->commitIndex,
pMsg->dataLen, pMsg->dataCount, s);
syncNodeEventLog(pSyncNode, logBuf);
}
void syncLogRecvAppendEntriesBatch(SSyncNode* pSyncNode, const SyncAppendEntriesBatch* pMsg, const char* s) {
char host[64];
uint16_t port;
syncUtilU642Addr(pMsg->srcId.addr, host, sizeof(host), &port);
char logBuf[256];
snprintf(logBuf, sizeof(logBuf),
"recv sync-append-entries-batch from %s:%d, {term:%" PRIu64 ", pre-index:%" PRId64 ", pre-term:%" PRIu64
", pterm:%" PRIu64 ", commit:%" PRId64 ", datalen:%d, count:%d}, %s",
host, port, pMsg->term, pMsg->prevLogIndex, pMsg->prevLogTerm, pMsg->privateTerm, pMsg->commitIndex,
pMsg->dataLen, pMsg->dataCount, s);
syncNodeEventLog(pSyncNode, logBuf);
}
void syncLogSendAppendEntriesReply(SSyncNode* pSyncNode, const SyncAppendEntriesReply* pMsg, const char* s) {
char host[64];
uint16_t port;
syncUtilU642Addr(pMsg->destId.addr, host, sizeof(host), &port);
char logBuf[256];
snprintf(logBuf, sizeof(logBuf),
"send sync-append-entries-reply to %s:%d, {term:%" PRIu64 ", pterm:%" PRIu64 ", success:%d, match:%" PRId64
"}, %s",
host, port, pMsg->term, pMsg->privateTerm, pMsg->success, pMsg->matchIndex, s);
syncNodeEventLog(pSyncNode, logBuf);
}
void syncLogRecvAppendEntriesReply(SSyncNode* pSyncNode, const SyncAppendEntriesReply* pMsg, const char* s) {
char host[64];
uint16_t port;
syncUtilU642Addr(pMsg->srcId.addr, host, sizeof(host), &port);
char logBuf[256];
snprintf(logBuf, sizeof(logBuf),
"recv sync-append-entries-reply from %s:%d {term:%" PRIu64 ", pterm:%" PRIu64 ", success:%d, match:%" PRId64
"}, %s",
host, port, pMsg->term, pMsg->privateTerm, pMsg->success, pMsg->matchIndex, s);
syncNodeEventLog(pSyncNode, logBuf);
}
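
Besides the new `syncLogSend*`/`syncLogRecv*` helper family, syncMain.c now seeds `commitIndex` at open time from the FSM snapshot instead of starting at `SYNC_INDEX_INVALID`, and downgrades `syncNodeEventLog` output from `sInfo` to `sTrace`. A sketch of the commit-index seeding under simplified assumptions (stand-in types; the `FpGetSnapshotInfo` callback is reduced to a parameter):

```
#include <stdint.h>
#include <stdio.h>

typedef int64_t SyncIndex;
#define SYNC_INDEX_INVALID ((SyncIndex)-1) /* assumed value of the real macro */

/* hasSnapshotFp mimics "pFsm != NULL && pFsm->FpGetSnapshotInfo != NULL". */
static SyncIndex initCommitIndex(int hasSnapshotFp, SyncIndex lastApplyIndex) {
  SyncIndex commitIndex = SYNC_INDEX_INVALID;
  if (hasSnapshotFp && lastApplyIndex > commitIndex) {
    commitIndex = lastApplyIndex; /* "reset commit index by snapshot" */
  }
  return commitIndex;
}

int main(void) {
  printf("%lld\n", (long long)initCommitIndex(1, 1234)); /* 1234 */
  printf("%lld\n", (long long)initCommitIndex(0, 1234)); /* -1   */
  return 0;
}
```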

View File

@ -108,10 +108,10 @@ int32_t raftStoreSerialize(SRaftStore *pRaftStore, char *buf, size_t len) {
cJSON *pRoot = cJSON_CreateObject();
char u64Buf[128] = {0};
snprintf(u64Buf, sizeof(u64Buf), "%lu", pRaftStore->currentTerm);
snprintf(u64Buf, sizeof(u64Buf), "%" PRIu64 "", pRaftStore->currentTerm);
cJSON_AddStringToObject(pRoot, "current_term", u64Buf);
snprintf(u64Buf, sizeof(u64Buf), "%lu", pRaftStore->voteFor.addr);
snprintf(u64Buf, sizeof(u64Buf), "%" PRIu64 "", pRaftStore->voteFor.addr);
cJSON_AddStringToObject(pRoot, "vote_for_addr", u64Buf);
cJSON_AddNumberToObject(pRoot, "vote_for_vgid", pRaftStore->voteFor.vgId);
@ -142,11 +142,11 @@ int32_t raftStoreDeserialize(SRaftStore *pRaftStore, char *buf, size_t len) {
cJSON *pCurrentTerm = cJSON_GetObjectItem(pRoot, "current_term");
ASSERT(cJSON_IsString(pCurrentTerm));
sscanf(pCurrentTerm->valuestring, "%lu", &(pRaftStore->currentTerm));
sscanf(pCurrentTerm->valuestring, "%" PRIu64 "", &(pRaftStore->currentTerm));
cJSON *pVoteForAddr = cJSON_GetObjectItem(pRoot, "vote_for_addr");
ASSERT(cJSON_IsString(pVoteForAddr));
sscanf(pVoteForAddr->valuestring, "%lu", &(pRaftStore->voteFor.addr));
sscanf(pVoteForAddr->valuestring, "%" PRIu64 "", &(pRaftStore->voteFor.addr));
cJSON *pVoteForVgid = cJSON_GetObjectItem(pRoot, "vote_for_vgid");
pRaftStore->voteFor.vgId = pVoteForVgid->valueint;
@ -188,11 +188,11 @@ cJSON *raftStore2Json(SRaftStore *pRaftStore) {
cJSON *pRoot = cJSON_CreateObject();
if (pRaftStore != NULL) {
snprintf(u64buf, sizeof(u64buf), "%lu", pRaftStore->currentTerm);
snprintf(u64buf, sizeof(u64buf), "%" PRIu64 "", pRaftStore->currentTerm);
cJSON_AddStringToObject(pRoot, "currentTerm", u64buf);
cJSON *pVoteFor = cJSON_CreateObject();
snprintf(u64buf, sizeof(u64buf), "%lu", pRaftStore->voteFor.addr);
snprintf(u64buf, sizeof(u64buf), "%" PRIu64 "", pRaftStore->voteFor.addr);
cJSON_AddStringToObject(pVoteFor, "addr", u64buf);
{
uint64_t u64 = pRaftStore->voteFor.addr;
@ -216,7 +216,7 @@ cJSON *raftStore2Json(SRaftStore *pRaftStore) {
char *raftStore2Str(SRaftStore *pRaftStore) {
cJSON *pJson = raftStore2Json(pRaftStore);
char * serialized = cJSON_Print(pJson);
char *serialized = cJSON_Print(pJson);
cJSON_Delete(pJson);
return serialized;
}
@ -224,25 +224,25 @@ char *raftStore2Str(SRaftStore *pRaftStore) {
// for debug -------------------
void raftStorePrint(SRaftStore *pObj) {
char *serialized = raftStore2Str(pObj);
printf("raftStorePrint | len:%lu | %s \n", strlen(serialized), serialized);
printf("raftStorePrint | len:%" PRIu64 " | %s \n", strlen(serialized), serialized);
fflush(NULL);
taosMemoryFree(serialized);
}
void raftStorePrint2(char *s, SRaftStore *pObj) {
char *serialized = raftStore2Str(pObj);
printf("raftStorePrint2 | len:%lu | %s | %s \n", strlen(serialized), s, serialized);
printf("raftStorePrint2 | len:%" PRIu64 " | %s | %s \n", strlen(serialized), s, serialized);
fflush(NULL);
taosMemoryFree(serialized);
}
void raftStoreLog(SRaftStore *pObj) {
char *serialized = raftStore2Str(pObj);
sTrace("raftStoreLog | len:%lu | %s", strlen(serialized), serialized);
sTrace("raftStoreLog | len:%" PRIu64 " | %s", strlen(serialized), serialized);
taosMemoryFree(serialized);
}
void raftStoreLog2(char *s, SRaftStore *pObj) {
char *serialized = raftStore2Str(pObj);
sTrace("raftStoreLog2 | len:%lu | %s | %s", strlen(serialized), s, serialized);
sTrace("raftStoreLog2 | len:%" PRIu64 " | %s | %s", strlen(serialized), s, serialized);
taosMemoryFree(serialized);
}
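
The `%lu` to `PRIu64` changes in this file matter for portability: `%lu` matches `uint64_t` only on LP64 platforms, while on 32-bit and LLP64 (Windows) targets `unsigned long` is 32-bit and the mismatch is undefined behavior. Two side notes: for `sscanf` the strictly matching macro is `SCNu64` (`PRIu64` works only where the two expand identically), and `strlen` returns `size_t`, for which `%zu` is the exact specifier. A short demonstration:

```
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
  uint64_t term = UINT64_MAX;                    /* 18446744073709551615 */
  char buf[32];
  snprintf(buf, sizeof(buf), "%" PRIu64, term);  /* portable print */
  uint64_t parsed = 0;
  sscanf(buf, "%" SCNu64, &parsed);              /* portable scan */
  printf("%s -> %" PRIu64 "\n", buf, parsed);
  return 0;
}
```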

View File

@ -313,18 +313,7 @@ int32_t syncNodeReplicate(SSyncNode* pSyncNode) {
int32_t syncNodeAppendEntries(SSyncNode* pSyncNode, const SRaftId* destRaftId, const SyncAppendEntries* pMsg) {
int32_t ret = 0;
do {
char host[128];
uint16_t port;
syncUtilU642Addr(destRaftId->addr, host, sizeof(host), &port);
sDebug("vgId:%d, send sync-append-entries to %s:%d, {term:%" PRIu64 ", pre-index:%" PRId64 ", pre-term:%" PRIu64
", pterm:%" PRIu64 ", commit:%" PRId64
", "
"datalen:%d}",
pSyncNode->vgId, host, port, pMsg->term, pMsg->prevLogIndex, pMsg->prevLogTerm, pMsg->privateTerm,
pMsg->commitIndex, pMsg->dataLen);
} while (0);
syncLogSendAppendEntries(pSyncNode, pMsg, "");
SRpcMsg rpcMsg;
syncAppendEntries2RpcMsg(pMsg, &rpcMsg);
@ -334,15 +323,7 @@ int32_t syncNodeAppendEntries(SSyncNode* pSyncNode, const SRaftId* destRaftId, c
int32_t syncNodeAppendEntriesBatch(SSyncNode* pSyncNode, const SRaftId* destRaftId,
const SyncAppendEntriesBatch* pMsg) {
do {
char host[128];
uint16_t port;
syncUtilU642Addr(destRaftId->addr, host, sizeof(host), &port);
sDebug("vgId:%d, send sync-append-entries-batch to %s:%d, {term:%" PRIu64 ", pre-index:%" PRId64
", pre-term:%" PRIu64 ", pterm:%" PRIu64 ", commit:%" PRId64 ", datalen:%d, datacount:%d}",
pSyncNode->vgId, host, port, pMsg->term, pMsg->prevLogIndex, pMsg->prevLogTerm, pMsg->privateTerm,
pMsg->commitIndex, pMsg->dataLen, pMsg->dataCount);
} while (0);
syncLogSendAppendEntriesBatch(pSyncNode, pMsg, "");
SRpcMsg rpcMsg;
syncAppendEntriesBatch2RpcMsg(pMsg, &rpcMsg);

View File

@ -45,22 +45,9 @@
int32_t syncNodeOnRequestVoteCb(SSyncNode* ths, SyncRequestVote* pMsg) {
int32_t ret = 0;
syncRequestVoteLog2("==syncNodeOnRequestVoteCb==", pMsg);
// if already drop replica, do not process
if (!syncNodeInRaftGroup(ths, &(pMsg->srcId)) && !ths->pRaftCfg->isStandBy) {
do {
char logBuf[256];
char host[64];
uint16_t port;
syncUtilU642Addr(pMsg->srcId.addr, host, sizeof(host), &port);
snprintf(logBuf, sizeof(logBuf),
"recv sync-request-vote from %s:%d, term:%" PRIu64 ", lindex:%" PRId64 ", lterm:%" PRIu64
", maybe replica already dropped",
host, port, pMsg->term, pMsg->lastLogIndex, pMsg->lastLogTerm);
syncNodeEventLog(ths, logBuf);
} while (0);
syncLogRecvRequestVote(ths, pMsg, "maybe replica already dropped");
return -1;
}
@ -93,15 +80,10 @@ int32_t syncNodeOnRequestVoteCb(SSyncNode* ths, SyncRequestVote* pMsg) {
// trace log
do {
char logBuf[256];
char host[64];
uint16_t port;
syncUtilU642Addr(pMsg->srcId.addr, host, sizeof(host), &port);
snprintf(logBuf, sizeof(logBuf),
"recv sync-request-vote from %s:%d, term:%" PRIu64 ", lindex:%" PRId64 ", lterm:%" PRIu64
", reply-grant:%d",
host, port, pMsg->term, pMsg->lastLogIndex, pMsg->lastLogTerm, pReply->voteGranted);
syncNodeEventLog(ths, logBuf);
char logBuf[32];
snprintf(logBuf, sizeof(logBuf), "grant:%d", pReply->voteGranted);
syncLogRecvRequestVote(ths, pMsg, logBuf);
syncLogSendRequestVoteReply(ths, pReply, "");
} while (0);
SRpcMsg rpcMsg;
@ -214,18 +196,7 @@ int32_t syncNodeOnRequestVoteSnapshotCb(SSyncNode* ths, SyncRequestVote* pMsg) {
// if already drop replica, do not process
if (!syncNodeInRaftGroup(ths, &(pMsg->srcId)) && !ths->pRaftCfg->isStandBy) {
do {
char logBuf[256];
char host[64];
uint16_t port;
syncUtilU642Addr(pMsg->srcId.addr, host, sizeof(host), &port);
snprintf(logBuf, sizeof(logBuf),
"recv sync-request-vote from %s:%d, term:%" PRIu64 ", lindex:%" PRId64 ", lterm:%" PRIu64
", maybe replica already dropped",
host, port, pMsg->term, pMsg->lastLogIndex, pMsg->lastLogTerm);
syncNodeEventLog(ths, logBuf);
} while (0);
syncLogRecvRequestVote(ths, pMsg, "maybe replica already dropped");
return -1;
}
@ -256,15 +227,10 @@ int32_t syncNodeOnRequestVoteSnapshotCb(SSyncNode* ths, SyncRequestVote* pMsg) {
// trace log
do {
char logBuf[256];
char host[64];
uint16_t port;
syncUtilU642Addr(pMsg->srcId.addr, host, sizeof(host), &port);
snprintf(logBuf, sizeof(logBuf),
"recv sync-request-vote from %s:%d, {term:%" PRIu64 ", lindex:%" PRId64 ", lterm:%" PRIu64
", reply-grant:%d}",
host, port, pMsg->term, pMsg->lastLogIndex, pMsg->lastLogTerm, pReply->voteGranted);
syncNodeEventLog(ths, logBuf);
char logBuf[32];
snprintf(logBuf, sizeof(logBuf), "grant:%d", pReply->voteGranted);
syncLogRecvRequestVote(ths, pMsg, logBuf);
syncLogSendRequestVoteReply(ths, pReply, "");
} while (0);
SRpcMsg rpcMsg;

View File

@ -40,22 +40,16 @@
int32_t syncNodeOnRequestVoteReplyCb(SSyncNode* ths, SyncRequestVoteReply* pMsg) {
int32_t ret = 0;
// print log
char logBuf[128] = {0};
snprintf(logBuf, sizeof(logBuf), "==syncNodeOnRequestVoteReplyCb== term:%" PRIu64, ths->pRaftStore->currentTerm);
syncRequestVoteReplyLog2(logBuf, pMsg);
// if already drop replica, do not process
if (!syncNodeInRaftGroup(ths, &(pMsg->srcId)) && !ths->pRaftCfg->isStandBy) {
sInfo("recv SyncRequestVoteReply, maybe replica already dropped");
return ret;
syncLogRecvRequestVoteReply(ths, pMsg, "maybe replica already dropped");
return -1;
}
// drop stale response
if (pMsg->term < ths->pRaftStore->currentTerm) {
sTrace("recv SyncRequestVoteReply, drop stale response, receive_term:%" PRIu64 " current_term:%" PRIu64, pMsg->term,
ths->pRaftStore->currentTerm);
return ret;
syncLogRecvRequestVoteReply(ths, pMsg, "drop stale response");
return -1;
}
// ASSERT(!(pMsg->term > ths->pRaftStore->currentTerm));
@ -65,14 +59,11 @@ int32_t syncNodeOnRequestVoteReplyCb(SSyncNode* ths, SyncRequestVoteReply* pMsg)
// }
if (pMsg->term > ths->pRaftStore->currentTerm) {
char logBuf[128] = {0};
snprintf(logBuf, sizeof(logBuf), "syncNodeOnRequestVoteReplyCb error term, receive:%" PRIu64 " current:%" PRIu64,
pMsg->term, ths->pRaftStore->currentTerm);
syncNodePrint2(logBuf, ths);
sError("%s", logBuf);
return ret;
syncLogRecvRequestVoteReply(ths, pMsg, "error term");
return -1;
}
syncLogRecvRequestVoteReply(ths, pMsg, "");
ASSERT(pMsg->term == ths->pRaftStore->currentTerm);
// This tallies votes even when the current state is not Candidate,
@ -99,7 +90,7 @@ int32_t syncNodeOnRequestVoteReplyCb(SSyncNode* ths, SyncRequestVoteReply* pMsg)
}
}
return ret;
return 0;
}
#if 0
@ -164,22 +155,16 @@ int32_t syncNodeOnRequestVoteReplyCb(SSyncNode* ths, SyncRequestVoteReply* pMsg)
int32_t syncNodeOnRequestVoteReplySnapshotCb(SSyncNode* ths, SyncRequestVoteReply* pMsg) {
int32_t ret = 0;
// print log
char logBuf[128] = {0};
snprintf(logBuf, sizeof(logBuf), "recv SyncRequestVoteReply, term:%" PRIu64, ths->pRaftStore->currentTerm);
syncRequestVoteReplyLog2(logBuf, pMsg);
// if already drop replica, do not process
if (!syncNodeInRaftGroup(ths, &(pMsg->srcId)) && !ths->pRaftCfg->isStandBy) {
sInfo("recv SyncRequestVoteReply, maybe replica already dropped");
return ret;
syncLogRecvRequestVoteReply(ths, pMsg, "maybe replica already dropped");
return -1;
}
// drop stale response
if (pMsg->term < ths->pRaftStore->currentTerm) {
sTrace("recv SyncRequestVoteReply, drop stale response, receive_term:%" PRIu64 " current_term:%" PRIu64, pMsg->term,
ths->pRaftStore->currentTerm);
return ret;
syncLogRecvRequestVoteReply(ths, pMsg, "drop stale response");
return -1;
}
// ASSERT(!(pMsg->term > ths->pRaftStore->currentTerm));
@ -189,15 +174,11 @@ int32_t syncNodeOnRequestVoteReplySnapshotCb(SSyncNode* ths, SyncRequestVoteRepl
// }
if (pMsg->term > ths->pRaftStore->currentTerm) {
char logBuf[128] = {0};
snprintf(logBuf, sizeof(logBuf),
"recv SyncRequestVoteReply, error term, receive_term:%" PRIu64 " current_term:%" PRIu64, pMsg->term,
ths->pRaftStore->currentTerm);
syncNodePrint2(logBuf, ths);
sError("%s", logBuf);
return ret;
syncLogRecvRequestVoteReply(ths, pMsg, "error term");
return -1;
}
syncLogRecvRequestVoteReply(ths, pMsg, "");
ASSERT(pMsg->term == ths->pRaftStore->currentTerm);
// This tallies votes even when the current state is not Candidate,
@ -224,5 +205,5 @@ int32_t syncNodeOnRequestVoteReplySnapshotCb(SSyncNode* ths, SyncRequestVoteRepl
}
}
return ret;
return 0;
}

View File

@ -573,6 +573,12 @@ static int32_t snapshotReceiverFinish(SSyncSnapshotReceiver *pReceiver, SyncSnap
pReceiver->pSyncNode->commitIndex = pReceiver->snapshot.lastApplyIndex;
}
// maybe update term
if (pReceiver->snapshot.lastApplyTerm > pReceiver->pSyncNode->pRaftStore->currentTerm) {
pReceiver->pSyncNode->pRaftStore->currentTerm = pReceiver->snapshot.lastApplyTerm;
raftStorePersist(pReceiver->pSyncNode->pRaftStore);
}
// stop writer, apply data
code = pReceiver->pSyncNode->pFsm->FpSnapshotStopWrite(pReceiver->pSyncNode->pFsm, pReceiver->pWriter, true,
&(pReceiver->snapshot));
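
On finishing a snapshot, the receiver now also raises `currentTerm` to the snapshot's `lastApplyTerm` and persists it, so the local term never lags behind the data just installed. A sketch of that guard, with a stand-in struct and persistence reduced to a print:

```
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

typedef struct { uint64_t currentTerm; } SRaftStore; /* stand-in */

static void maybeUpdateTerm(SRaftStore* store, uint64_t lastApplyTerm) {
  if (lastApplyTerm > store->currentTerm) {
    store->currentTerm = lastApplyTerm;
    printf("persist term:%" PRIu64 "\n", store->currentTerm); /* raftStorePersist */
  }
}

int main(void) {
  SRaftStore store = {5};
  maybeUpdateTerm(&store, 9); /* bumps and persists */
  maybeUpdateTerm(&store, 3); /* no-op */
  return 0;
}
```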

View File

@ -0,0 +1,84 @@
#!/bin/bash
if [ $# != 1 ] ; then
echo "Uasge: $0 log-path"
echo ""
exit 1
fi
logpath=$1
echo "logpath: ${logpath}"
echo ""
echo "clean old log ..."
rm -f ${logpath}/log.*
echo ""
echo "generate log.dnode ..."
for dnode in `ls ${logpath} | grep dnode`;do
echo "generate log.${dnode}"
cat ${logpath}/${dnode}/log/taosdlog.* | grep SYN > ${logpath}/log.${dnode}
done
echo ""
echo "generate vgId ..."
cat ${logpath}/log.dnode* | grep "vgId:" | grep -v ERROR | awk '{print $5}' | awk -F, '{print $1}' | sort | uniq > ${logpath}/log.vgIds.tmp
echo "all vgIds:" > ${logpath}/log.vgIds
cat ${logpath}/log.dnode* | grep "vgId:" | grep -v ERROR | awk '{print $5}' | awk -F, '{print $1}' | sort | uniq >> ${logpath}/log.vgIds
for dnode in `ls ${logpath} | grep dnode | grep -v log`;do
echo "" >> ${logpath}/log.vgIds
echo "" >> ${logpath}/log.vgIds
echo "${dnode}:" >> ${logpath}/log.vgIds
cat ${logpath}/${dnode}/log/taosdlog.* | grep SYN | grep "vgId:" | grep -v ERROR | awk '{print $5}' | awk -F, '{print $1}' | sort | uniq >> ${logpath}/log.vgIds
done
echo ""
echo "generate log.dnode.vgId ..."
for logdnode in `ls ${logpath}/log.dnode*`;do
for vgId in `cat ${logpath}/log.vgIds.tmp`;do
rowNum=`cat ${logdnode} | grep "${vgId}" | awk 'BEGIN{rowNum=0}{rowNum++}END{print rowNum}'`
#echo "-----${rowNum}"
if [ $rowNum -gt 0 ] ; then
echo "generate ${logdnode}.${vgId}"
cat ${logdnode} | grep "${vgId}" > ${logdnode}.${vgId}
fi
done
done
echo ""
echo "generate log.dnode.main ..."
for file in `ls ${logpath}/log.dnode* | grep -v vgId`;do
echo "generate ${file}.main"
cat ${file} | awk '{ if(index($0, "sync open") > 0 || index($0, "sync close") > 0 || index($0, "become leader") > 0) {print $0} }' > ${file}.main
done
echo ""
echo "generate log.leader.term ..."
cat ${logpath}/*.main | grep "become leader" | grep -v "config change" | awk '{print $5,$0}' | awk -F, '{print $4"_"$0}' | sort -k1 > ${logpath}/log.leader.term
echo ""
echo "generate log.index, log.snapshot, log.records, log.actions ..."
for file in `ls ${logpath}/log.dnode*vgId*`;do
destfile1=${file}.index
echo "generate ${destfile1}"
cat ${file} | awk '{ if(index($0, "write index:") > 0 || index($0, "wal truncate, from-index") > 0) {print $0} }' > ${destfile1}
destfile2=${file}.snapshot
echo "generate ${destfile2}"
cat ${file} | awk '{ if(index($0, "snapshot sender") > 0 || index($0, "snapshot receiver") > 0) {print $0} }' | grep -v "save old" | grep -v "create new" | grep -v "udpate replicaIndex" | grep -v "delete old" | grep -v "reset for" > ${destfile2}
destfile3=${file}.records
echo "generate ${destfile3}"
cat ${file} | awk '{ if(index($0, "write index:") > 0 || index($0, "wal truncate, from-index") > 0 || index($0, "snapshot sender") > 0 || index($0, "snapshot receiver") > 0) {print $0} }' | grep -v "save old" | grep -v "create new" | grep -v "udpate replicaIndex" | grep -v "delete old" | grep -v "reset for" > ${destfile3}
destfile4=${file}.commit
echo "generate ${destfile4}"
cat ${file} | awk '{ if(index($0, "commit by") > 0) {print $0} }' > ${destfile4}
destfile5=${file}.actions
echo "generate ${destfile5}"
cat ${file} | awk '{ if(index($0, "commit by") > 0 || index($0, "sync open") > 0 || index($0, "sync close") > 0 || index($0, "become leader") > 0 || index($0, "write index:") > 0 || index($0, "wal truncate, from-index") > 0 || index($0, "snapshot sender") > 0 || index($0, "snapshot receiver") > 0) {print $0} }' | grep -v "save old" | grep -v "create new" | grep -v "udpate replicaIndex" | grep -v "delete old" | grep -v "reset for" > ${destfile5}
done
exit 0
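
This new script splits each dnode's SYN-tagged log lines by vgId and then extracts per-topic views (index writes, snapshot transfer, commits, and leader changes). Assuming it is saved as, say, `split_sync_log.sh` (the file path is not shown in this diff), a run looks like `bash split_sync_log.sh /path/to/logdir`, where `logdir` contains the `dnode*/log/taosdlog.*` files; outputs land beside them as `log.dnode*`, `log.dnode*.vgId*.{index,snapshot,records,commit,actions}`, `log.vgIds`, and `log.leader.term`.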

Some files were not shown because too many files have changed in this diff.