Merge branch '3.0' into cpwu/3.0

This commit is contained in:
cpwu 2022-07-22 14:36:00 +08:00
commit d55d195e8a
205 changed files with 7036 additions and 5811 deletions

View File

@ -51,6 +51,13 @@ IF(${TD_WINDOWS})
"If build unit tests using googletest"
ON
)
option(
TDENGINE_3
"TDengine 3.x"
ON
)
ELSEIF (TD_DARWIN_64)
add_definitions(-DCOMPILER_SUPPORTS_CXX13)
option(

View File

@ -52,7 +52,7 @@ TDengine的主要功能如下
Adopting TDengine can greatly reduce the total cost of ownership of a typical IoT, connected-vehicle, or industrial-internet big-data platform. This shows in several ways:
1. Thanks to its outstanding performance, it sharply reduces the computing and storage resources the system needs.
2. Because it uses a SQL interface, it integrates seamlessly with many third-party tools, greatly lowering learning and migration costs.
3. Because of its all-in-one design, system complexity drops, which reduces R&D costs.
4. Because operation and maintenance are simple, operating costs fall substantially.

View File

@ -1,4 +1,5 @@
---
sidebar_label: Docker
title: Quick Experience with TDengine via Docker
---

View File

@ -0,0 +1,296 @@
---
sidebar_label: Installation Package
title: Get Started with an Installation Package
---
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
:::info
If you would like to contribute code to TDengine or are interested in its internal implementation, please visit the [TDengine GitHub repository](https://github.com/taosdata/TDengine) to download the source code and build and install from source.
:::
The TDengine open-source edition provides installation packages in deb and rpm formats; choose the one that fits your environment. The deb package supports Debian/Ubuntu and derivatives, and the rpm package supports CentOS/RHEL/SUSE and derivatives. A tar.gz package is also provided for enterprise users.
## Installation
<Tabs>
<TabItem value="apt-get" label="apt-get">
You can install from the official repository with the apt-get tool.
**Configure the package repository**
```
wget -qO - http://repos.taosdata.com/tdengine.key | sudo apt-key add -
echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-stable stable main" | sudo tee /etc/apt/sources.list.d/tdengine-stable.list
```
To install a beta version, configure the beta package repository as well:
```
echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-beta beta main" | sudo tee /etc/apt/sources.list.d/tdengine-beta.list
```
**Install with apt-get**
```
sudo apt-get update
apt-cache policy tdengine
sudo apt-get install tdengine
```
:::tip
The apt-get method applies only to Debian and Ubuntu systems.
:::
</TabItem>
<TabItem label="Deb 安装" value="debinst">
1. Download the deb package from the official website, for example TDengine-server-2.4.0.7-Linux-x64.deb;
2. Change to the directory that contains TDengine-server-2.4.0.7-Linux-x64.deb and run the following install command:
```
$ sudo dpkg -i TDengine-server-2.4.0.7-Linux-x64.deb
(Reading database ... 137504 files and directories currently installed.)
Preparing to unpack TDengine-server-2.4.0.7-Linux-x64.deb ...
TDengine is removed successfully!
Unpacking tdengine (2.4.0.7) over (2.4.0.7) ...
Setting up tdengine (2.4.0.7) ...
Start to install TDengine...
System hostname is: ubuntu-1804
Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join
OR leave it blank to build one:
Enter your email address for priority support or enter empty to skip:
Created symlink /etc/systemd/system/multi-user.target.wants/taosd.service → /etc/systemd/system/taosd.service.
To configure TDengine : edit /etc/taos/taos.cfg
To start TDengine : sudo systemctl start taosd
To access TDengine : taos -h ubuntu-1804 to login into TDengine server
TDengine is installed successfully!
```
</TabItem>
<TabItem label="RPM 安装" value="rpminst">
1、从官网下载获得 rpm 安装包,例如 TDengine-server-2.4.0.7-Linux-x64.rpm
2、进入到 TDengine-server-2.4.0.7-Linux-x64.rpm 安装包所在目录,执行如下的安装命令:
```
$ sudo rpm -ivh TDengine-server-2.4.0.7-Linux-x64.rpm
Preparing... ################################# [100%]
Updating / installing...
1:tdengine-2.4.0.7-3 ################################# [100%]
Start to install TDengine...
System hostname is: centos7
Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join
OR leave it blank to build one:
Enter your email address for priority support or enter empty to skip:
Created symlink from /etc/systemd/system/multi-user.target.wants/taosd.service to /etc/systemd/system/taosd.service.
To configure TDengine : edit /etc/taos/taos.cfg
To start TDengine : sudo systemctl start taosd
To access TDengine : taos -h centos7 to login into TDengine server
TDengine is installed successfully!
```
</TabItem>
<TabItem label="tar.gz 安装" value="tarinst">
1、从官网下载获得 tar.gz 安装包,例如 TDengine-server-2.4.0.7-Linux-x64.tar.gz
2、进入到 TDengine-server-2.4.0.7-Linux-x64.tar.gz 安装包所在目录,先解压文件后,进入子目录,执行其中的 install.sh 安装脚本:
```
$ tar xvzf TDengine-enterprise-server-2.4.0.7-Linux-x64.tar.gz
TDengine-enterprise-server-2.4.0.7/
TDengine-enterprise-server-2.4.0.7/driver/
TDengine-enterprise-server-2.4.0.7/driver/vercomp.txt
TDengine-enterprise-server-2.4.0.7/driver/libtaos.so.2.4.0.7
TDengine-enterprise-server-2.4.0.7/install.sh
TDengine-enterprise-server-2.4.0.7/examples/
...
$ ll
total 43816
drwxrwxr-x 3 ubuntu ubuntu 4096 Feb 22 09:31 ./
drwxr-xr-x 20 ubuntu ubuntu 4096 Feb 22 09:30 ../
drwxrwxr-x 4 ubuntu ubuntu 4096 Feb 22 09:30 TDengine-enterprise-server-2.4.0.7/
-rw-rw-r-- 1 ubuntu ubuntu 44852544 Feb 22 09:31 TDengine-enterprise-server-2.4.0.7-Linux-x64.tar.gz
$ cd TDengine-enterprise-server-2.4.0.7/
$ ll
total 40784
drwxrwxr-x 4 ubuntu ubuntu 4096 Feb 22 09:30 ./
drwxrwxr-x 3 ubuntu ubuntu 4096 Feb 22 09:31 ../
drwxrwxr-x 2 ubuntu ubuntu 4096 Feb 22 09:30 driver/
drwxrwxr-x 10 ubuntu ubuntu 4096 Feb 22 09:30 examples/
-rwxrwxr-x 1 ubuntu ubuntu 33294 Feb 22 09:30 install.sh*
-rw-rw-r-- 1 ubuntu ubuntu 41704288 Feb 22 09:30 taos.tar.gz
$ sudo ./install.sh
Start to update TDengine...
Created symlink /etc/systemd/system/multi-user.target.wants/taosd.service → /etc/systemd/system/taosd.service.
Nginx for TDengine is updated successfully!
To configure TDengine : edit /etc/taos/taos.cfg
To configure Taos Adapter (if has) : edit /etc/taos/taosadapter.toml
To start TDengine : sudo systemctl start taosd
To access TDengine : use taos -h ubuntu-1804 in shell OR from http://127.0.0.1:6060
TDengine is updated successfully!
Install taoskeeper as a standalone service
taoskeeper is installed, enable it by `systemctl enable taoskeeper`
```
:::info
While it runs, the install.sh script asks for some configuration values through an interactive command-line prompt. If you prefer a non-interactive installation, run install.sh with the -e no parameter. Run `./install.sh -h` for a detailed description of all parameters.
:::
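For example, a fully scripted install of the tar.gz package might look like this (a minimal sketch; the package and directory names are hypothetical, and `-e no` suppresses the interactive prompts):
```bash
# extract the package, enter the extracted directory, and install without prompts
tar xvzf TDengine-server-2.4.0.7-Linux-x64.tar.gz
cd TDengine-server-2.4.0.7/
sudo ./install.sh -e no
```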
</TabItem>
</Tabs>
:::note
When installing the first node, do not enter anything at the "Enter FQDN:" prompt. Only when installing the second or a later node do you need to enter the FQDN of any live node in the existing cluster, so that the new node can join it. You can also leave it blank and instead write it into the new node's configuration file before the node is started.
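For example, a later node could be pointed at an existing cluster through its configuration file before its first start (a minimal sketch; the host names are hypothetical):
```
# /etc/taos/taos.cfg on the new node
firstEp h1.taosdata.com:6030
fqdn    h2.taosdata.com
```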
:::
## Start
After installation, use the `systemctl` command to start the TDengine service process.
```bash
systemctl start taosd
```
Check that the service is working properly:
```bash
systemctl status taosd
```
If the service process is active, the status command shows the following:
```
Active: active (running)
```
If the background service process is stopped, the status command shows the following:
```
Active: inactive (dead)
```
If the TDengine service is running normally, you can access and experience TDengine through its command-line program `taos`.
Summary of systemctl commands:
- Start the service: `systemctl start taosd`
- Stop the service: `systemctl stop taosd`
- Restart the service: `systemctl restart taosd`
- Check the service status: `systemctl status taosd`
:::info
- The systemctl command requires _root_ privileges; if you are not the _root_ user, prefix the command with sudo.
- `systemctl stop taosd` does not stop the TDengine service immediately; it waits until the necessary data has been flushed to disk. With large data volumes this can take a considerable amount of time.
- If the system does not support `systemd`, you can also start the TDengine service by running `/usr/local/taos/bin/taosd` manually.
:::
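On a system without `systemd`, a rough manual equivalent of start and stop could look like this (a sketch only, not the recommended path):
```bash
# start taosd directly in the background
sudo /usr/local/taos/bin/taosd &
# stop it later by sending SIGTERM to the taosd process
sudo pkill -TERM taosd
```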
## TDengine Command Line Interface (CLI)
To make it easy to check TDengine's status and run ad-hoc queries against databases, TDengine provides a command-line application (hereafter the TDengine CLI), `taos`. To enter the TDengine command line, simply run `taos` in a Linux terminal on a machine where TDengine is installed.
```bash
taos
```
If the connection to the server succeeds, a welcome message and version information are printed; otherwise an error message is shown (see the [FAQ](/train-faq/faq) for how to resolve connection failures). The TDengine CLI prompt looks like this:
```cmd
taos>
```
Within the TDengine CLI you can use SQL commands to create and drop databases and tables and to insert and query data. SQL statements entered in the terminal must end with a semicolon to run. For example:
```sql
create database demo;
use demo;
create table t (ts timestamp, speed int);
insert into t values ('2019-07-15 00:00:00', 10);
insert into t values ('2019-07-15 01:00:00', 20);
select * from t;
ts | speed |
========================================
2019-07-15 00:00:00.000 | 10 |
2019-07-15 01:00:00.000 | 20 |
Query OK, 2 row(s) in set (0.003128s)
```
Besides executing SQL statements, a system administrator can use the TDengine CLI to check the system's running status, add and remove user accounts, and so on. The TDengine CLI together with the client driver can also be installed and run standalone on a Linux or Windows machine; see [here](../reference/taos-shell/) for details.
## Experience Insertion Speed with taosBenchmark
After starting the TDengine service, run `taosBenchmark` (formerly named `taosdemo`) in a Linux terminal:
```bash
taosBenchmark
```
This command automatically creates a super table named meters in the database test, containing 10,000 child tables named "d0" through "d9999". Each table holds 10,000 records with four fields (ts, current, voltage, phase); timestamps range from "2017-07-14 10:40:00 000" to "2017-07-14 10:40:09 999". Each table carries the tags location and groupId: groupId is set to 1 through 10, and location is set to "California.SanFrancisco" or "California.LosAngeles".
This command inserts 100 million records very quickly. The exact time depends on the hardware, but even an ordinary PC server usually needs only a dozen or so seconds.
taosBenchmark has many options for configuring the number of tables, the number of records, and more; run `taosBenchmark --help` for the full list, and see [How to use taosBenchmark to test the performance of TDengine](https://www.taosdata.com/2021/10/09/3111.html) for detailed usage.
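For example, to generate a smaller data set than the default you could adjust the table and record counts (a sketch assuming the `-t` and `-n` options; confirm the exact flags with `taosBenchmark --help` for your version):
```bash
# 100 child tables with 1,000 rows each instead of the defaults
taosBenchmark -t 100 -n 1000
```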
## Experience Query Speed with the TDengine CLI
After inserting data with taosBenchmark as above, you can enter queries in the TDengine CLI to experience the query speed.
Query the total number of records under the super table:
```sql
taos> select count(*) from test.meters;
```
Query the average, maximum, and minimum of the 100 million records:
```sql
taos> select avg(current), max(voltage), min(phase) from test.meters;
```
Query the number of records where location="California.SanFrancisco":
```sql
taos> select count(*) from test.meters where location="California.SanFrancisco";
```
Query the average, maximum, and minimum of all records where groupId=10:
```sql
taos> select avg(current), max(voltage), min(phase) from test.meters where groupId=10;
```
Compute the average, maximum, and minimum for table d10 in 10-second intervals:
```sql
taos> select avg(current), max(voltage), min(phase) from test.d10 interval(10s);
```

View File

@ -1,173 +1,14 @@
---
title: Get Started
description: 'Install TDengine quickly from Docker, an installation package, or apt-get, and experience TDengine with the command-line program (TDengine CLI) and the taosdemo tool'
description: 'Set up a TDengine environment quickly and experience its efficient data ingestion and queries'
---
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
import PkgInstall from "./\_pkg_install.mdx";
import AptGetInstall from "./\_apt_get_install.mdx";
## Installation
This chapter explains how to use Docker or an installation package to quickly set up a TDengine environment and experience its efficient data ingestion and queries.
The complete TDengine package includes the server (taosd), taosAdapter (which connects to third-party systems and provides a RESTful interface), the client driver (taosc), the command-line program (CLI, taos), and several tools. Currently the 2.x server taosd and taosAdapter can only be installed and run on Linux; support for Windows, macOS, and other systems will follow. The client driver taosc and the TDengine CLI can be installed and run on Windows or Linux. Besides connectors for multiple languages, TDengine also provides a [RESTful interface](/reference/rest-api) through [taosAdapter](/reference/taosadapter); in versions earlier than 2.4 there was no taosAdapter, and the RESTful interface was served by the HTTP service built into taosd.
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
TDengine supports the X64/ARM64/MIPS64/Alpha64 hardware platforms; support for ARM32, RISC-V, and other CPU architectures will follow.
<Tabs defaultValue="apt-get">
<TabItem value="docker" label="Docker">
If Docker is already installed, just run the command below.
```shell
docker run -d -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
```
Confirm that the container is up and running properly:
```shell
docker ps
```
Enter the container and run bash:
```shell
docker exec -it <container name> bash
```
You can then run Linux commands inside it and access TDengine.
For details, see [Quick Experience with TDengine via Docker](/train-faq/docker).
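For example, you can also run the CLI in one step instead of opening a bash shell first (a minimal sketch; replace `<container name>` with the value shown by `docker ps`):
```shell
docker exec -it <container name> taos -s "show databases;"
```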
:::info
Starting with 2.4.0.10, the Docker image contains not only taosd but also taos, taosAdapter, taosdump, taosBenchmark, the TDinsight install script, and sample code. When the container starts, taosAdapter is started alongside taosd, providing RESTful support.
:::
</TabItem>
<TabItem value="apt-get" label="apt-get">
<AptGetInstall />
</TabItem>
<TabItem value="pkg" label="安装包">
<PkgInstall />
</TabItem>
<TabItem value="src" label="源码">
如果您希望对 TDengine 贡献代码或对内部实现感兴趣,请参考我们的 [TDengine GitHub 主页](https://github.com/taosdata/TDengine) 下载源码构建和安装.
下载其他组件、最新 Beta 版及之前版本的安装包,请点击[这里](https://www.taosdata.com/cn/all-downloads/)。
</TabItem>
</Tabs>
## Start
After installation, use the `systemctl` command to start the TDengine service process.
```bash
systemctl start taosd
```
Check that the service is working properly:
```bash
systemctl status taosd
```
If the TDengine service is running normally, you can access and experience TDengine through its command-line program `taos`.
:::info
- The systemctl command requires _root_ privileges; if you are not the _root_ user, prefix the command with sudo.
- To improve the product based on feedback, TDengine collects basic usage information; you can turn this off by setting the telemetryReporting parameter in the system configuration file taos.cfg to 0.
- TDengine uses the FQDN (normally the hostname) as the node ID. To run properly, the server running taosd must have its FQDN configured, and the machines running the TDengine CLI or applications must have DNS service or a hosts file configured so that the FQDN can be resolved.
- `systemctl stop taosd` does not stop the TDengine service immediately; it waits until the necessary data has been flushed to disk. With large data volumes this can take a considerable amount of time.
TDengine can be installed on Linux systems that use [`systemd`](https://en.wikipedia.org/wiki/Systemd) for service management. Use `which systemctl` to check whether the `systemd` package is present:
```bash
which systemctl
```
If the system does not support `systemd`, you can also start the TDengine service by running `/usr/local/taos/bin/taosd` manually.
:::note
## TDengine Command Line Interface (CLI)
To make it easy to check TDengine's status and run ad-hoc queries against databases, TDengine provides a command-line application (hereafter the TDengine CLI), `taos`. To enter the TDengine command line, simply run `taos` in a Linux terminal on a machine where TDengine is installed.
```bash
taos
```
If the connection to the server succeeds, a welcome message and version information are printed; otherwise an error message is shown (see the [FAQ](/train-faq/faq) for how to resolve connection failures). The TDengine CLI prompt looks like this:
```cmd
taos>
```
Within the TDengine CLI you can use SQL commands to create and drop databases and tables and to insert and query data. SQL statements entered in the terminal must end with a semicolon to run. For example:
```sql
create database demo;
use demo;
create table t (ts timestamp, speed int);
insert into t values ('2019-07-15 00:00:00', 10);
insert into t values ('2019-07-15 01:00:00', 20);
select * from t;
ts | speed |
========================================
2019-07-15 00:00:00.000 | 10 |
2019-07-15 01:00:00.000 | 20 |
Query OK, 2 row(s) in set (0.003128s)
```
Besides executing SQL statements, a system administrator can use the TDengine CLI to check the system's running status, add and remove user accounts, and so on. The TDengine CLI together with the client driver can also be installed and run standalone on a Linux or Windows machine; see [here](../reference/taos-shell/) for details.
## Experience Insertion Speed with taosBenchmark
After starting the TDengine service, run `taosBenchmark` (formerly named `taosdemo`) in a Linux terminal:
```bash
taosBenchmark
```
This command automatically creates a super table named meters in the database test, containing 10,000 child tables named "d0" through "d9999". Each table holds 10,000 records with four fields (ts, current, voltage, phase); timestamps range from "2017-07-14 10:40:00 000" to "2017-07-14 10:40:09 999". Each table carries the tags location and groupId: groupId is set to 1 through 10, and location is set to "California.SanFrancisco" or "California.LosAngeles".
This command inserts 100 million records very quickly. The exact time depends on the hardware, but even an ordinary PC server usually needs only a dozen or so seconds.
taosBenchmark has many options for configuring the number of tables, the number of records, and more; run `taosBenchmark --help` for the full list, and see [How to use taosBenchmark to test the performance of TDengine](https://www.taosdata.com/2021/10/09/3111.html) for detailed usage.
## Experience Query Speed with the TDengine CLI
After inserting data with taosBenchmark as above, you can enter queries in the TDengine CLI to experience the query speed.
Query the total number of records under the super table:
```sql
taos> select count(*) from test.meters;
```
Query the average, maximum, and minimum of the 100 million records:
```sql
taos> select avg(current), max(voltage), min(phase) from test.meters;
```
Query the number of records where location="California.SanFrancisco":
```sql
taos> select count(*) from test.meters where location="California.SanFrancisco";
```
Query the average, maximum, and minimum of all records where groupId=10:
```sql
taos> select avg(current), max(voltage), min(phase) from test.meters where groupId=10;
```
Compute the average, maximum, and minimum for table d10 in 10-second intervals:
```sql
taos> select avg(current), max(voltage), min(phase) from test.d10 interval(10s);
```
<DocCardList items={useCurrentSidebarCategory().items}/>
```

View File

@ -1,105 +0,0 @@
---
title: Data Node Management
---
The preceding sections described how to build a cluster from scratch. Once the cluster is up, you can check the current status of its data nodes at any time, add new data nodes to scale out, remove data nodes, and even manually rebalance the load among data nodes.
:::note
All of the commands below must be executed after logging in to the TDengine system; use root privileges where necessary.
:::
## Viewing Data Nodes
Launch the TDengine CLI program taos and execute:
```sql
SHOW DNODES;
```
It lists all dnodes in the cluster together with each dnode's ID, end_point (fqdn:port), status (ready, offline, and so on), number of vnodes, number of unused vnodes, and other information. Run it after adding or removing a data node to check the result.
The output looks like the following (the exact content depends on the actual cluster configuration):
```
taos> show dnodes;
id | endpoint | vnodes | support_vnodes | status | create_time | note |
============================================================================================================================================
1 | trd01:6030 | 100 | 1024 | ready | 2022-07-15 16:47:47.726 | |
Query OK, 1 rows affected (0.006684s)
```
## Viewing Virtual Node Groups
To make full use of multi-core hardware and provide horizontal scalability, data must be sharded. TDengine therefore splits the data of a DB into multiple shards stored in multiple vnodes. These vnodes may be distributed across multiple data nodes (dnodes), which is how horizontal scaling is achieved. A vnode belongs to exactly one DB, but a DB can have multiple vnodes. The dnode that hosts a vnode is chosen automatically by the mnode based on current system resources, with no manual intervention.
Launch the CLI program taos and execute:
```sql
USE SOME_DATABASE;
SHOW VGROUPS;
```
The output looks like the following (the exact content depends on the actual cluster configuration):
```
taos> use db;
Database changed.
taos> show vgroups;
vgroup_id | db_name | tables | v1_dnode | v1_status | v2_dnode | v2_status | v3_dnode | v3_status | status | nfiles | file_size | tsma |
================================================================================================================================================================================================
2 | db | 0 | 1 | leader | NULL | NULL | NULL | NULL | NULL | NULL | NULL | 0 |
3 | db | 0 | 1 | leader | NULL | NULL | NULL | NULL | NULL | NULL | NULL | 0 |
4 | db | 0 | 1 | leader | NULL | NULL | NULL | NULL | NULL | NULL | NULL | 0 |
Query OK, 8 row(s) in set (0.001154s)
```
## Adding Data Nodes
Launch the CLI program taos and execute:
```sql
CREATE DNODE "fqdn:port";
```
This adds the End Point of the new data node to the cluster's EP list. The "fqdn:port" must be enclosed in double quotes; otherwise an error occurs. The fqdn and port a data node exposes can be configured in taos.cfg and are detected automatically by default. (Relying on automatic detection for the FQDN is strongly discouraged, because the resulting End Point of the data node may not be what you expect.)
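A sketch of the relevant lines in the new data node's taos.cfg when configuring the End Point explicitly instead of relying on auto-detection (the host name is hypothetical):
```
// FQDN this data node announces to the cluster
fqdn h2.taosdata.com
// port this data node serves on; the default is 6030
serverPort 6030
```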
Then start the taosd process on the newly added data node and check the node status with taos:
```
taos> show dnodes;
id | endpoint | vnodes | support_vnodes | status | create_time | note |
============================================================================================================================================
1 | localhost:6030 | 100 | 1024 | ready | 2022-07-15 16:47:47.726 | |
2 | localhost:7030 | 0 | 1024 | ready | 2022-07-15 16:56:13.670 | |
Query OK, 2 rows affected (0.007031s)
```
You can see that both dnodes are in the ready state.
## Removing Data Nodes
First stop the taosd process of the data node to be removed, then launch the CLI program taos and execute:
```sql
DROP DNODE "fqdn:port";
```
or
```sql
DROP DNODE dnodeId;
```
You can identify the node either by "fqdn:port" or by dnodeID, where fqdn is the FQDN of the node being removed, port is its service port, and the dnodeID can be obtained from SHOW DNODES.
:::warning
Once a data node has been dropped it cannot rejoin the cluster; to reuse it, the node must be redeployed (with its data directory emptied). Before the `drop dnode` operation completes, the cluster migrates the data off that dnode.
Note that `drop dnode` and stopping the taosd process are two different things; do not confuse them. Because data must be migrated away before a dnode is removed, the dnode being dropped must stay online; only after the drop operation finishes should the taosd process be stopped.
After a data node is dropped, the other nodes are aware that this dnodeID has been removed, and no node in the cluster will accept requests from that dnodeID again.
dnodeIDs are assigned automatically by the cluster and cannot be specified manually. They are generated in increasing order and are never reused.
:::

View File

@ -1 +0,0 @@
label: Cluster Management

View File

@ -1,5 +1,6 @@
---
title: Cluster Deployment
sidebar_label: Manual Deployment
title: Cluster Deployment and Management
---
## Prerequisites
@ -54,6 +55,8 @@ fqdn h1.taosdata.com
// port this data node listens on; the default is 6030
serverPort 6030
```
The parameters that must be changed are firstEp and fqdn. firstEp must be configured to the same value on every data node, while fqdn must be set to each data node's own value. The other parameters need not be modified unless you know exactly why you are changing them.
For a data node (dnode) to join the cluster, the cluster-related parameters in the table below must be absolutely identical on every node; otherwise the node cannot join the cluster.
@ -67,8 +70,6 @@ serverPort 6030
## Starting the Cluster
### Start the First Data Node
Following the steps in Get Started, start the first data node, for example h1.taosdata.com, then run taos to start the TDengine CLI and execute the command "SHOW DNODES" in it. The output looks like this:
```
@ -78,20 +79,17 @@ Copyright (c) 2022 by TAOS Data, Inc. All rights reserved.
Server is Enterprise trial Edition, ver:3.0.0.0 and will never expire.
taos> show dnodes;
id | endpoint | vnodes | support_vnodes | status | create_time | note |
============================================================================================================================================
1 | h1.taosdata.com:6030 | 0 | 1024 | ready | 2022-07-16 10:50:42.673 | |
Query OK, 1 rows affected (0.007984s)
taos>
```
In the output above you can see that the End Point of the data node just started is h1.taosdata.com:6030, which is the firstEp of this new cluster.
### Start Subsequent Data Nodes
## Add Data Nodes
To add further data nodes to the existing cluster, follow these steps:
@ -125,3 +123,74 @@ firstEp 这个参数仅仅在该数据节点首次加入集群时有作用,加
If two data nodes are both started without the firstEp parameter configured, each runs independently. At that point it is impossible to join one of them to the other to form a cluster, nor can two independent clusters be merged into a new one.
:::
## Viewing Data Nodes
Launch the TDengine CLI program taos and execute:
```sql
SHOW DNODES;
```
It lists all dnodes in the cluster together with each dnode's ID, end_point (fqdn:port), status (ready, offline, and so on), number of vnodes, number of unused vnodes, and other information. Run it after adding or removing a data node to check the result.
The output looks like the following (the exact content depends on the actual cluster configuration):
```
taos> show dnodes;
id | endpoint | vnodes | support_vnodes | status | create_time | note |
============================================================================================================================================
1 | trd01:6030 | 100 | 1024 | ready | 2022-07-15 16:47:47.726 | |
Query OK, 1 rows affected (0.006684s)
```
## Viewing Virtual Node Groups
To make full use of multi-core hardware and provide horizontal scalability, data must be sharded. TDengine therefore splits the data of a DB into multiple shards stored in multiple vnodes. These vnodes may be distributed across multiple data nodes (dnodes), which is how horizontal scaling is achieved. A vnode belongs to exactly one DB, but a DB can have multiple vnodes. The dnode that hosts a vnode is chosen automatically by the mnode based on current system resources, with no manual intervention.
Launch the CLI program taos and execute:
```sql
USE SOME_DATABASE;
SHOW VGROUPS;
```
The output looks like the following (the exact content depends on the actual cluster configuration):
```
taos> use db;
Database changed.
taos> show vgroups;
vgroup_id | db_name | tables | v1_dnode | v1_status | v2_dnode | v2_status | v3_dnode | v3_status | status | nfiles | file_size | tsma |
================================================================================================================================================================================================
2 | db | 0 | 1 | leader | NULL | NULL | NULL | NULL | NULL | NULL | NULL | 0 |
3 | db | 0 | 1 | leader | NULL | NULL | NULL | NULL | NULL | NULL | NULL | 0 |
4 | db | 0 | 1 | leader | NULL | NULL | NULL | NULL | NULL | NULL | NULL | 0 |
Query OK, 8 row(s) in set (0.001154s)
```
## Removing Data Nodes
First stop the taosd process of the data node to be removed, then launch the CLI program taos and execute:
```sql
DROP DNODE "fqdn:port";
```
or
```sql
DROP DNODE dnodeId;
```
You can identify the node either by "fqdn:port" or by dnodeID, where fqdn is the FQDN of the node being removed, port is its service port, and the dnodeID can be obtained from SHOW DNODES.
:::warning
Once a data node has been dropped it cannot rejoin the cluster; to reuse it, the node must be redeployed (with its data directory emptied). Before the `drop dnode` operation completes, the cluster migrates the data off that dnode.
Note that `drop dnode` and stopping the taosd process are two different things; do not confuse them. Because data must be migrated away before a dnode is removed, the dnode being dropped must stay online; only after the drop operation finishes should the taosd process be stopped.
After a data node is dropped, the other nodes are aware that this dnodeID has been removed, and no node in the cluster will accept requests from that dnodeID again.
dnodeIDs are assigned automatically by the cluster and cannot be specified manually. They are generated in increasing order and are never reused.
:::

View File

@ -0,0 +1,452 @@
---
sidebar_label: Kubernetes
title: Deploying a TDengine Cluster on Kubernetes
---
## Configure the ConfigMap
Create `taoscfg.yaml` for TDengine. The settings in this file are passed into the TDengine image as environment variables; updating it causes all TDengine PODs to restart.
```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
name: taoscfg
labels:
app: tdengine
data:
CLUSTER: "1"
TAOS_KEEP: "3650"
TAOS_DEBUG_FLAG: "135"
```
## Configure the Service
Create a service definition file `taosd-service.yaml`. The service name `metadata.name` (here "taosd") is used in the next step. Add all ports that TDengine uses:
```yaml
---
apiVersion: v1
kind: Service
metadata:
name: "taosd"
labels:
app: "tdengine"
spec:
ports:
- name: tcp6030
protocol: "TCP"
port: 6030
- name: tcp6035
protocol: "TCP"
port: 6035
- name: tcp6041
protocol: "TCP"
port: 6041
- name: udp6030
protocol: "UDP"
port: 6030
- name: udp6031
protocol: "UDP"
port: 6031
- name: udp6032
protocol: "UDP"
port: 6032
- name: udp6033
protocol: "UDP"
port: 6033
- name: udp6034
protocol: "UDP"
port: 6034
- name: udp6035
protocol: "UDP"
port: 6035
- name: udp6036
protocol: "UDP"
port: 6036
- name: udp6037
protocol: "UDP"
port: 6037
- name: udp6038
protocol: "UDP"
port: 6038
- name: udp6039
protocol: "UDP"
port: 6039
- name: udp6040
protocol: "UDP"
port: 6040
selector:
app: "tdengine"
```
## StatefulSet
Following Kubernetes' guidance on the different workload types, we use a StatefulSet for TDengine. Create the file `tdengine.yaml`:
```yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: "tdengine"
labels:
app: "tdengine"
spec:
serviceName: "taosd"
replicas: 2
updateStrategy:
type: RollingUpdate
selector:
matchLabels:
app: "tdengine"
template:
metadata:
name: "tdengine"
labels:
app: "tdengine"
spec:
containers:
- name: "tdengine"
image: "zitsen/taosd:develop"
imagePullPolicy: "Always"
envFrom:
- configMapRef:
name: taoscfg
ports:
- name: tcp6030
protocol: "TCP"
containerPort: 6030
- name: tcp6035
protocol: "TCP"
containerPort: 6035
- name: tcp6041
protocol: "TCP"
containerPort: 6041
- name: udp6030
protocol: "UDP"
containerPort: 6030
- name: udp6031
protocol: "UDP"
containerPort: 6031
- name: udp6032
protocol: "UDP"
containerPort: 6032
- name: udp6033
protocol: "UDP"
containerPort: 6033
- name: udp6034
protocol: "UDP"
containerPort: 6034
- name: udp6035
protocol: "UDP"
containerPort: 6035
- name: udp6036
protocol: "UDP"
containerPort: 6036
- name: udp6037
protocol: "UDP"
containerPort: 6037
- name: udp6038
protocol: "UDP"
containerPort: 6038
- name: udp6039
protocol: "UDP"
containerPort: 6039
- name: udp6040
protocol: "UDP"
containerPort: 6040
env:
# POD_NAME for FQDN config
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
# SERVICE_NAME and NAMESPACE for fqdn resolve
- name: SERVICE_NAME
value: "taosd"
- name: STS_NAME
value: "tdengine"
- name: STS_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
# TZ for timezone settings; we recommend always setting it.
- name: TZ
value: "Asia/Shanghai"
# Variables with the TAOS_ prefix are written into taos.cfg: strip the prefix and convert to camelCase.
- name: TAOS_SERVER_PORT
value: "6030"
# Must set if you want a cluster.
- name: TAOS_FIRST_EP
value: "$(STS_NAME)-0.$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local:$(TAOS_SERVER_PORT)"
# TAOS_FQDN should always be set in a k8s environment.
- name: TAOS_FQDN
value: "$(POD_NAME).$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local"
volumeMounts:
- name: taosdata
mountPath: /var/lib/taos
readinessProbe:
exec:
command:
- taos
- -s
- "show mnodes"
initialDelaySeconds: 5
timeoutSeconds: 5000
livenessProbe:
tcpSocket:
port: 6030
initialDelaySeconds: 15
periodSeconds: 20
volumeClaimTemplates:
- metadata:
name: taosdata
spec:
accessModes:
- "ReadWriteOnce"
storageClassName: "csi-rbd-sc"
resources:
requests:
storage: "10Gi"
```
## Start the Cluster
Apply the three files above to the Kubernetes cluster:
```bash
kubectl apply -f taoscfg.yaml
kubectl apply -f taosd-service.yaml
kubectl apply -f tdengine.yaml
```
The configuration above creates a two-node TDengine cluster. The dnodes are configured automatically; you can inspect the current cluster nodes with the `show dnodes` command:
```bash
kubectl exec -i -t tdengine-0 -- taos -s "show dnodes"
kubectl exec -i -t tdengine-1 -- taos -s "show dnodes"
```
The output looks like:
```
Welcome to the TDengine shell from Linux, Client Version:2.1.1.0
Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
taos> show dnodes
id | end_point | vnodes | cores | status | role | create_time | offline reason |
======================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 1 | 40 | ready | any | 2021-06-01 17:13:24.181 | |
2 | tdengine-1.taosd.default.sv... | 0 | 40 | ready | any | 2021-06-01 17:14:09.257 | |
Query OK, 2 row(s) in set (0.000997s)
```
## Scaling Out
A TDengine cluster supports automatic scale-out:
```bash
kubectl scale statefulsets tdengine --replicas=4
```
The `--replicas=4` parameter in the command above scales the TDengine cluster out to 4 nodes. After running it, first check the POD status:
```bash
kubectl get pods -l app=tdengine
```
The output looks like:
```
NAME READY STATUS RESTARTS AGE
tdengine-0 1/1 Running 0 161m
tdengine-1 1/1 Running 0 161m
tdengine-2 1/1 Running 0 32m
tdengine-3 1/1 Running 0 32m
```
At this point the PODs are already in the Running state, but the corresponding dnode only shows up in the TDengine cluster once the POD status becomes `ready`:
```bash
kubectl exec -i -t tdengine-0 -- taos -s "show dnodes"
```
The dnode list of the scaled-out four-node TDengine cluster:
```
Welcome to the TDengine shell from Linux, Client Version:2.1.1.0
Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
taos> show dnodes
id | end_point | vnodes | cores | status | role | create_time | offline reason |
======================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 0 | 40 | ready | any | 2021-06-01 11:58:12.915 | |
2 | tdengine-1.taosd.default.sv... | 0 | 40 | ready | any | 2021-06-01 11:58:33.127 | |
3 | tdengine-2.taosd.default.sv... | 0 | 40 | ready | any | 2021-06-01 14:07:27.078 | |
4 | tdengine-3.taosd.default.sv... | 1 | 40 | ready | any | 2021-06-01 14:07:48.362 | |
Query OK, 4 row(s) in set (0.001293s)
```
## Scaling In
Scaling in is not automated in TDengine. Here we scale a three-node cluster down to two nodes.
First, confirm that the three-node TDengine cluster is working properly by checking the dnode status in the TDengine CLI:
```bash
taos> show dnodes
id | end_point | vnodes | cores | status | role | create_time | offline reason |
======================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 1 | 40 | ready | any | 2021-06-01 16:27:24.852 | |
2 | tdengine-1.taosd.default.sv... | 0 | 40 | ready | any | 2021-06-01 16:27:53.339 | |
3 | tdengine-2.taosd.default.sv... | 0 | 40 | ready | any | 2021-06-01 16:28:49.787 | |
Query OK, 3 row(s) in set (0.001101s)
```
To scale in safely, first remove the node from the dnode list, that is, from the cluster:
```bash
kubectl exec -i -t tdengine-0 -- taos -s "drop dnode 'tdengine-2.taosd.default.svc.cluster.local:6030'"
```
After confirming the removal with the `show dnodes` command, delete the corresponding POD:
```bash
kubectl scale statefulsets tdengine --replicas=2
```
The last POD is deleted. Check the cluster state with `kubectl get pods -l app=tdengine`:
```
NAME READY STATUS RESTARTS AGE
tdengine-0 1/1 Running 0 3h40m
tdengine-1 1/1 Running 0 3h40m
```
After the POD has been deleted, the PVC must be deleted manually; otherwise the next scale-out will reuse the old data and the node will fail to join the cluster.
```bash
kubectl delete pvc taosdata-tdengine-2
```
The cluster is now in a safe state and can be scaled out again whenever needed:
```bash
kubectl scale statefulsets tdengine --replicas=3
```
The `show dnodes` output is now:
```
taos> show dnodes
id | end_point | vnodes | cores | status | role | create_time | offline reason |
======================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 1 | 40 | ready | any | 2021-06-01 16:27:24.852 | |
2 | tdengine-1.taosd.default.sv... | 0 | 40 | ready | any | 2021-06-01 16:27:53.339 | |
4 | tdengine-2.taosd.default.sv... | 0 | 40 | ready | any | 2021-06-01 16:40:49.177 | |
```
## Deleting the Cluster
To remove the TDengine cluster completely, clean up the statefulset, svc, configmap, and pvc respectively:
```bash
kubectl delete statefulset -l app=tdengine
kubectl delete svc -l app=tdengine
kubectl delete pvc -l app=tdengine
kubectl delete configmap taoscfg
```
## Common Errors
### Error One
After scaling out to four nodes and then back in to two nodes, the deleted PODs go into the offline state:
```
Welcome to the TDengine shell from Linux, Client Version:2.1.1.0
Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
taos> show dnodes
id | end_point | vnodes | cores | status | role | create_time | offline reason |
======================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 0 | 40 | ready | any | 2021-06-01 11:58:12.915 | |
2 | tdengine-1.taosd.default.sv... | 0 | 40 | ready | any | 2021-06-01 11:58:33.127 | |
3 | tdengine-2.taosd.default.sv... | 0 | 40 | offline | any | 2021-06-01 14:07:27.078 | status msg timeout |
4 | tdengine-3.taosd.default.sv... | 1 | 40 | offline | any | 2021-06-01 14:07:48.362 | status msg timeout |
Query OK, 4 row(s) in set (0.001236s)
```
At this point `drop dnode` no longer behaves as expected, and after the next cluster restart none of the dnodes can start, because the dropping state cannot be exited.
### Error Two
A TDengine cluster honors the replica parameter; if the number of nodes after scaling in is smaller than this value, the cluster becomes unusable.
Create a database with replica set to 2 and insert some data:
```bash
kubectl exec -i -t tdengine-0 -- \
taos -s \
"create database if not exists test replica 2;
use test;
create table if not exists t1(ts timestamp, n int);
insert into t1 values(now, 1)(now+1s, 2);"
```
Scale in to a single node:
```bash
kubectl scale statefulsets tdengine --replicas=1
```
Every database operation in the taos shell now fails.
```
taos> show dnodes;
id | end_point | vnodes | cores | status | role | create_time | offline reason |
======================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 2 | 40 | ready | any | 2021-06-01 15:55:52.562 | |
2 | tdengine-1.taosd.default.sv... | 1 | 40 | offline | any | 2021-06-01 15:56:07.212 | status msg timeout |
Query OK, 2 row(s) in set (0.000845s)
taos> show dnodes;
id | end_point | vnodes | cores | status | role | create_time | offline reason |
======================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 2 | 40 | ready | any | 2021-06-01 15:55:52.562 | |
2 | tdengine-1.taosd.default.sv... | 1 | 40 | offline | any | 2021-06-01 15:56:07.212 | status msg timeout |
Query OK, 2 row(s) in set (0.000837s)
taos> use test;
Database changed.
taos> insert into t1 values(now, 3);
DB error: Unable to resolve FQDN (0.013874s)
```

View File

@ -0,0 +1,434 @@
---
sidebar_label: Helm
title: Deploying a TDengine Cluster with Helm
---
Helm is the package manager for Kubernetes. Deploying a TDengine cluster with plain Kubernetes, as in the previous section, is already simple enough, but Helm offers even more powerful capabilities.
## Install Helm
```bash
curl -fsSL -o get_helm.sh \
https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod +x get_helm.sh
./get_helm.sh
```
Helm operates Kubernetes through the kubectl and kubeconfig configuration; you can set this up following, for example, the Rancher guide to installing Kubernetes.
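A quick way to confirm that Helm can reach the cluster through the current kubeconfig (a minimal sanity check, not a required step):
```bash
# show which cluster context will be used, then verify helm itself
kubectl config current-context
helm version
```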
## Install the TDengine Chart
The TDengine Chart has not yet been published to a Helm repository; for now, download it directly from GitHub:
```bash
wget https://github.com/taosdata/TDengine-Operator/raw/main/helm/tdengine-0.3.0.tgz
```
Get the storage classes of the current Kubernetes cluster:
```bash
kubectl get storageclass
```
On minikube the default is standard.
Then install it with the helm command:
```bash
helm install tdengine tdengine-0.3.0.tgz \
--set storage.className=<your storage class name>
```
In a minikube environment you can set smaller sizes to avoid exceeding the available disk space:
```bash
helm install tdengine tdengine-0.3.0.tgz \
--set storage.className=standard \
--set storage.dataSize=2Gi \
--set storage.logSize=10Mi
```
After a successful deployment, the TDengine Chart prints instructions for operating TDengine:
```bash
export POD_NAME=$(kubectl get pods --namespace default \
-l "app.kubernetes.io/name=tdengine,app.kubernetes.io/instance=tdengine" \
-o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default exec $POD_NAME -- taos -s "show dnodes; show mnodes"
kubectl --namespace default exec -it $POD_NAME -- taos
```
You can create a table to test it:
```bash
kubectl --namespace default exec $POD_NAME -- \
taos -s "create database test;
use test;
create table t1 (ts timestamp, n int);
insert into t1 values(now, 1)(now + 1s, 2);
select * from t1;"
```
## Configure Values
TDengine supports customization through `values.yaml`.
Run helm show values to get the full list of values supported by the TDengine Chart:
```bash
helm show values tdengine-0.3.0.tgz
```
You can save the output as values.yaml, modify the parameters in it (replica count, storage class name, storage size, TDengine configuration, and so on), and then install the TDengine cluster with:
```bash
helm install tdengine tdengine-0.3.0.tgz -f values.yaml
```
The full set of parameters is as follows:
```yaml
# Default values for tdengine.
# This is a YAML-formatted file.
# Declare variables to be passed into helm templates.
replicaCount: 1
image:
prefix: tdengine/tdengine
#pullPolicy: Always
# Overrides the image tag whose default is the chart appVersion.
#tag: "2.4.0.5"
service:
# ClusterIP is the default service type, use NodeIP only if you know what you are doing.
type: ClusterIP
ports:
# TCP range required
tcp:
[
6030,
6031,
6032,
6033,
6034,
6035,
6036,
6037,
6038,
6039,
6040,
6041,
6042,
6043,
6044,
6045,
6060,
]
# UDP range 6030-6039
udp: [6030, 6031, 6032, 6033, 6034, 6035, 6036, 6037, 6038, 6039]
arbitrator: true
# Set timezone here, not in taoscfg
timezone: "Asia/Shanghai"
resources:
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
storage:
# Set storageClassName for pvc. K8s use default storage class if not set.
#
className: ""
dataSize: "100Gi"
logSize: "10Gi"
nodeSelectors:
taosd:
# node selectors
clusterDomainSuffix: ""
# Config settings in taos.cfg file.
#
# The helm/k8s support will use environment variables for taos.cfg,
# converting an upper-snake-cased variable like `TAOS_DEBUG_FLAG`,
# to a camelCase taos config variable `debugFlag`.
#
# See the variable list at https://www.taosdata.com/cn/documentation/administrator .
#
# Note:
# 1. firstEp/secondEp: should not be set here; they are auto generated at scale-up.
# 2. serverPort: should not be set; the default 6030 is assumed in many places.
# 3. fqdn: will be auto generated in kubernetes, so users need not care about it.
# 4. role: currently role is not supported - every node is able to be mnode and vnode.
#
# Btw, keep quotes "" around the value like below, even the value will be number or not.
taoscfg:
# number of replications, for cluster only
TAOS_REPLICA: "1"
# number of management nodes in the system
TAOS_NUM_OF_MNODES: "1"
# number of days per DB file
# TAOS_DAYS: "10"
# number of days to keep DB file, default is 10 years.
#TAOS_KEEP: "3650"
# cache block size (Mbyte)
#TAOS_CACHE: "16"
# number of cache blocks per vnode
#TAOS_BLOCKS: "6"
# minimum rows of records in file block
#TAOS_MIN_ROWS: "100"
# maximum rows of records in file block
#TAOS_MAX_ROWS: "4096"
#
# TAOS_NUM_OF_THREADS_PER_CORE: number of threads per CPU core
#TAOS_NUM_OF_THREADS_PER_CORE: "1.0"
#
# TAOS_NUM_OF_COMMIT_THREADS: number of threads to commit cache data
#TAOS_NUM_OF_COMMIT_THREADS: "4"
#
# TAOS_RATIO_OF_QUERY_CORES:
# the proportion of total CPU cores available for query processing
# 2.0: the query threads will be set to double of the CPU cores.
# 1.0: all CPU cores are available for query processing [default].
# 0.5: only half of the CPU cores are available for query.
# 0.0: only one core available.
#TAOS_RATIO_OF_QUERY_CORES: "1.0"
#
# TAOS_KEEP_COLUMN_NAME:
# the last_row/first/last aggregator will not change the original column name in the result fields
#TAOS_KEEP_COLUMN_NAME: "0"
# enable/disable backuping vnode directory when removing vnode
#TAOS_VNODE_BAK: "1"
# enable/disable installation / usage report
#TAOS_TELEMETRY_REPORTING: "1"
# enable/disable load balancing
#TAOS_BALANCE: "1"
# max timer control blocks
#TAOS_MAX_TMR_CTRL: "512"
# time interval of system monitor, seconds
#TAOS_MONITOR_INTERVAL: "30"
# number of seconds allowed for a dnode to be offline, for cluster only
#TAOS_OFFLINE_THRESHOLD: "8640000"
# RPC re-try timer, millisecond
#TAOS_RPC_TIMER: "1000"
# RPC maximum time for ack, seconds.
#TAOS_RPC_MAX_TIME: "600"
# time interval of dnode status reporting to mnode, seconds, for cluster only
#TAOS_STATUS_INTERVAL: "1"
# time interval of heart beat from shell to dnode, seconds
#TAOS_SHELL_ACTIVITY_TIMER: "3"
# minimum sliding window time, milli-second
#TAOS_MIN_SLIDING_TIME: "10"
# minimum time window, milli-second
#TAOS_MIN_INTERVAL_TIME: "10"
# maximum delay before launching a stream computation, milli-second
#TAOS_MAX_STREAM_COMP_DELAY: "20000"
# maximum delay before launching a stream computation for the first time, milli-second
#TAOS_MAX_FIRST_STREAM_COMP_DELAY: "10000"
# retry delay when a stream computation fails, milli-second
#TAOS_RETRY_STREAM_COMP_DELAY: "10"
# the delayed time for launching a stream computation, from 0.1(default, 10% of whole computing time window) to 0.9
#TAOS_STREAM_COMP_DELAY_RATIO: "0.1"
# max number of vgroups per db, 0 means configured automatically
#TAOS_MAX_VGROUPS_PER_DB: "0"
# max number of tables per vnode
#TAOS_MAX_TABLES_PER_VNODE: "1000000"
# the number of acknowledgments required for successful data writing
#TAOS_QUORUM: "1"
# enable/disable compression
#TAOS_COMP: "2"
# write ahead log (WAL) level, 0: no wal; 1: write wal, but no fsync; 2: write wal, and call fsync
#TAOS_WAL_LEVEL: "1"
# if walLevel is set to 2, the cycle of fsync being executed, if set to 0, fsync is called right away
#TAOS_FSYNC: "3000"
# the compressed rpc message, option:
# -1 (no compression)
# 0 (all message compressed),
# > 0 (rpc message body which larger than this value will be compressed)
#TAOS_COMPRESS_MSG_SIZE: "-1"
# max length of an SQL
#TAOS_MAX_SQL_LENGTH: "1048576"
# the maximum number of records allowed for super table time sorting
#TAOS_MAX_NUM_OF_ORDERED_RES: "100000"
# max number of connections allowed in dnode
#TAOS_MAX_SHELL_CONNS: "5000"
# max number of connections allowed in client
#TAOS_MAX_CONNECTIONS: "5000"
# stop writing logs when the disk size of the log folder is less than this value
#TAOS_MINIMAL_LOG_DIR_G_B: "0.1"
# stop writing temporary files when the disk size of the tmp folder is less than this value
#TAOS_MINIMAL_TMP_DIR_G_B: "0.1"
# if disk free space is less than this value, taosd service exit directly within startup process
#TAOS_MINIMAL_DATA_DIR_G_B: "0.1"
# One mnode is equal to the number of vnode consumed
#TAOS_MNODE_EQUAL_VNODE_NUM: "4"
# enable/disable http service
#TAOS_HTTP: "1"
# enable/disable system monitor
#TAOS_MONITOR: "1"
# enable/disable recording the SQL statements via restful interface
#TAOS_HTTP_ENABLE_RECORD_SQL: "0"
# number of threads used to process http requests
#TAOS_HTTP_MAX_THREADS: "2"
# maximum number of rows returned by the restful interface
#TAOS_RESTFUL_ROW_LIMIT: "10240"
# The following parameter is used to limit the maximum number of lines in log files.
# max number of lines per log filters
# numOfLogLines 10000000
# enable/disable async log
#TAOS_ASYNC_LOG: "0"
#
# time of keeping log files, days
#TAOS_LOG_KEEP_DAYS: "0"
# The following parameters are used for debug purpose only.
# debugFlag 8 bits mask: FILE-SCREEN-UNUSED-HeartBeat-DUMP-TRACE_WARN-ERROR
# 131: output warning and error
# 135: output debug, warning and error
# 143: output trace, debug, warning and error to log
# 199: output debug, warning and error to both screen and file
# 207: output trace, debug, warning and error to both screen and file
#
# debug flag for all log types, takes effect when non-zero
#TAOS_DEBUG_FLAG: "143"
# enable/disable recording the SQL in taos client
#TAOS_ENABLE_RECORD_SQL: "0"
# generate core file when service crash
#TAOS_ENABLE_CORE_FILE: "1"
# maximum display width of binary and nchar fields in the shell. The parts exceeding this limit will be hidden
#TAOS_MAX_BINARY_DISPLAY_WIDTH: "30"
# enable/disable stream (continuous query)
#TAOS_STREAM: "1"
# in retrieve blocking model, only in 50% query threads will be used in query processing in dnode
#TAOS_RETRIEVE_BLOCKING_MODEL: "0"
# the maximum allowed query buffer size in MB during query processing for each data node
# -1 no limit (default)
# 0 no query allowed, queries are disabled
#TAOS_QUERY_BUFFER_SIZE: "-1"
```
## Scaling Out
For scaling out, see the explanation in the previous section; a few extra steps are needed to obtain the required information from the Helm deployment.
First, get the StatefulSet name from the deployment.
```bash
export STS_NAME=$(kubectl get statefulset \
-l "app.kubernetes.io/name=tdengine" \
-o jsonpath="{.items[0].metadata.name}")
```
Scaling out is then trivial: just increase the replica count. The following command scales TDengine out to three nodes:
```bash
kubectl scale --replicas 3 statefulset/$STS_NAME
```
Use the `show dnodes` and `show mnodes` commands to check whether the scale-out succeeded, as sketched below.
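A sketch of that check, reusing the $STS_NAME variable obtained above (the `-0` pod suffix assumes the default StatefulSet pod naming):
```bash
kubectl exec -it ${STS_NAME}-0 -- taos -s "show dnodes; show mnodes"
```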
## Scaling In
:::warning
Scale-in has not been fully tested and may put data at risk; use it with caution.
:::
Get the list of dnodes to be removed and drop them manually.
```bash
kubectl --namespace default exec $POD_NAME -- \
cat /var/lib/taos/dnode/dnodeEps.json \
| jq '.dnodeInfos[1:] |map(.dnodeFqdn + ":" + (.dnodePort|tostring)) | .[]' -r
kubectl --namespace default exec $POD_NAME -- taos -s "show dnodes"
kubectl --namespace default exec $POD_NAME -- taos -s 'drop dnode "<you dnode in list>"'
```
## Remove the Cluster
Under Helm's management, cleanup is also simple:
```bash
helm uninstall tdengine
```
However, Helm does not remove PVCs automatically; you need to list the PVCs yourself and then delete them.
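A sketch of that manual cleanup, assuming the chart labels its PVCs with `app.kubernetes.io/instance=tdengine` (verify the actual labels first with `kubectl get pvc --show-labels`):
```bash
# list the PVCs left behind by the release, then delete them
kubectl get pvc -l app.kubernetes.io/instance=tdengine
kubectl delete pvc -l app.kubernetes.io/instance=tdengine
```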

View File

@ -0,0 +1 @@
label: Deploy a Cluster

View File

@ -1,10 +1,10 @@
---
title: Cluster Management
title: Deploying a Cluster
---
TDengine supports clustering, which provides horizontal scalability: to obtain more processing power, simply add more nodes. TDengine uses virtual-node technology to virtualize one node into multiple virtual nodes for load balancing; it can also combine virtual nodes on different nodes into virtual node groups and use a multi-replica mechanism to guarantee high availability. TDengine's clustering capability is completely open source.
This chapter mainly covers cluster deployment and maintenance and how to achieve high availability and load balancing.
This chapter mainly describes how to deploy a cluster manually on hosts, and how to deploy a cluster with Kubernetes or Helm.
```mdx-code-block
import DocCardList from '@theme/DocCardList';

View File

@ -6,139 +6,50 @@ description: 安装、卸载、启动、停止和升级
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
The TDengine open-source edition provides installation packages in deb and rpm formats; choose the one that fits your environment. The deb package supports Debian/Ubuntu and derivatives, and the rpm package supports CentOS/RHEL/SUSE and derivatives. A tar.gz package is also provided for enterprise users.
This section covers some deeper aspects of installation and uninstallation, along with notes on upgrading.
## Installation
<Tabs>
<TabItem label="Deb 安装" value="debinst">
关于安装,请参考 [使用安装包立即开始](../get-started/package)
1、从官网下载获得 deb 安装包,例如 TDengine-server-2.4.0.7-Linux-x64.deb
2、进入到 TDengine-server-2.4.0.7-Linux-x64.deb 安装包所在目录,执行如下的安装命令:
## 安装目录说明
TDengine 成功安装后,主安装目录是 /usr/local/taos目录内容如下
```
$ sudo dpkg -i TDengine-server-2.4.0.7-Linux-x64.deb
(Reading database ... 137504 files and directories currently installed.)
Preparing to unpack TDengine-server-2.4.0.7-Linux-x64.deb ...
TDengine is removed successfully!
Unpacking tdengine (2.4.0.7) over (2.4.0.7) ...
Setting up tdengine (2.4.0.7) ...
Start to install TDengine...
System hostname is: ubuntu-1804
Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join
OR leave it blank to build one:
Enter your email address for priority support or enter empty to skip:
Created symlink /etc/systemd/system/multi-user.target.wants/taosd.service → /etc/systemd/system/taosd.service.
To configure TDengine : edit /etc/taos/taos.cfg
To start TDengine : sudo systemctl start taosd
To access TDengine : taos -h ubuntu-1804 to login into TDengine server
TDengine is installed successfully!
```
</TabItem>
<TabItem label="RPM 安装" value="rpminst">
1、从官网下载获得 rpm 安装包,例如 TDengine-server-2.4.0.7-Linux-x64.rpm
2、进入到 TDengine-server-2.4.0.7-Linux-x64.rpm 安装包所在目录,执行如下的安装命令:
```
$ sudo rpm -ivh TDengine-server-2.4.0.7-Linux-x64.rpm
Preparing... ################################# [100%]
Updating / installing...
1:tdengine-2.4.0.7-3 ################################# [100%]
Start to install TDengine...
System hostname is: centos7
Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join
OR leave it blank to build one:
Enter your email address for priority support or enter empty to skip:
Created symlink from /etc/systemd/system/multi-user.target.wants/taosd.service to /etc/systemd/system/taosd.service.
To configure TDengine : edit /etc/taos/taos.cfg
To start TDengine : sudo systemctl start taosd
To access TDengine : taos -h centos7 to login into TDengine server
TDengine is installed successfully!
```
</TabItem>
<TabItem label="tar.gz 安装" value="tarinst">
1、从官网下载获得 tar.gz 安装包,例如 TDengine-server-2.4.0.7-Linux-x64.tar.gz
2、进入到 TDengine-server-2.4.0.7-Linux-x64.tar.gz 安装包所在目录,先解压文件后,进入子目录,执行其中的 install.sh 安装脚本:
```
$ tar xvzf TDengine-enterprise-server-2.4.0.7-Linux-x64.tar.gz
TDengine-enterprise-server-2.4.0.7/
TDengine-enterprise-server-2.4.0.7/driver/
TDengine-enterprise-server-2.4.0.7/driver/vercomp.txt
TDengine-enterprise-server-2.4.0.7/driver/libtaos.so.2.4.0.7
TDengine-enterprise-server-2.4.0.7/install.sh
TDengine-enterprise-server-2.4.0.7/examples/
...
$ cd /usr/local/taos
$ ll
total 43816
drwxrwxr-x 3 ubuntu ubuntu 4096 Feb 22 09:31 ./
drwxr-xr-x 20 ubuntu ubuntu 4096 Feb 22 09:30 ../
drwxrwxr-x 4 ubuntu ubuntu 4096 Feb 22 09:30 TDengine-enterprise-server-2.4.0.7/
-rw-rw-r-- 1 ubuntu ubuntu 44852544 Feb 22 09:31 TDengine-enterprise-server-2.4.0.7-Linux-x64.tar.gz
$ cd TDengine-enterprise-server-2.4.0.7/
$ ll
total 40784
drwxrwxr-x 4 ubuntu ubuntu 4096 Feb 22 09:30 ./
drwxrwxr-x 3 ubuntu ubuntu 4096 Feb 22 09:31 ../
drwxrwxr-x 2 ubuntu ubuntu 4096 Feb 22 09:30 driver/
drwxrwxr-x 10 ubuntu ubuntu 4096 Feb 22 09:30 examples/
-rwxrwxr-x 1 ubuntu ubuntu 33294 Feb 22 09:30 install.sh*
-rw-rw-r-- 1 ubuntu ubuntu 41704288 Feb 22 09:30 taos.tar.gz
$ sudo ./install.sh
Start to update TDengine...
Created symlink /etc/systemd/system/multi-user.target.wants/taosd.service → /etc/systemd/system/taosd.service.
Nginx for TDengine is updated successfully!
To configure TDengine : edit /etc/taos/taos.cfg
To configure Taos Adapter (if has) : edit /etc/taos/taosadapter.toml
To start TDengine : sudo systemctl start taosd
To access TDengine : use taos -h ubuntu-1804 in shell OR from http://127.0.0.1:6060
TDengine is updated successfully!
Install taoskeeper as a standalone service
taoskeeper is installed, enable it by `systemctl enable taoskeeper`
$ ll
total 28
drwxr-xr-x 7 root root 4096 Feb 22 09:34 ./
drwxr-xr-x 12 root root 4096 Feb 22 09:34 ../
drwxr-xr-x 2 root root 4096 Feb 22 09:34 bin/
drwxr-xr-x 2 root root 4096 Feb 22 09:34 cfg/
lrwxrwxrwx 1 root root 13 Feb 22 09:34 data -> /var/lib/taos/
drwxr-xr-x 2 root root 4096 Feb 22 09:34 driver/
drwxr-xr-x 10 root root 4096 Feb 22 09:34 examples/
drwxr-xr-x 2 root root 4096 Feb 22 09:34 include/
lrwxrwxrwx 1 root root 13 Feb 22 09:34 log -> /var/log/taos/
```
:::info
While it runs, the install.sh script asks for some configuration values through an interactive command-line prompt. If you prefer a non-interactive installation, run install.sh with the -e no parameter. Run `./install.sh -h` for a detailed description of all parameters.
:::
</TabItem>
</Tabs>
:::note
When installing the first node, do not enter anything at the "Enter FQDN:" prompt. Only when installing the second or a later node do you need to enter the FQDN of any live node in the existing cluster, so that the new node can join it. You can also leave it blank and instead write it into the new node's configuration file before the node is started.
:::
- The configuration directory, data directory, and log directory are created automatically.
- Default configuration file: /etc/taos/taos.cfg, symlinked to /usr/local/taos/cfg/taos.cfg
- Default data directory: /var/lib/taos, symlinked to /usr/local/taos/data
- Default log directory: /var/log/taos, symlinked to /usr/local/taos/log
- Executables under /usr/local/taos/bin are symlinked into /usr/bin;
- Dynamic libraries under /usr/local/taos/driver are symlinked into /usr/lib;
- Header files under /usr/local/taos/include are symlinked into /usr/include.
## Uninstallation
<Tabs>
<TabItem label="apt-get 卸载" value="aptremove">
内容 TBD
</TabItem>
<TabItem label="Deb 卸载" value="debuninst">
卸载命令如下:
@ -180,88 +91,33 @@ taosKeeper is removed successfully!
</Tabs>
:::info
- TDengine provides several kinds of installation packages, but it is best not to use the tar.gz package together with the deb or rpm packages on the same system; otherwise they interfere with each other and cause problems.
- For a deb installation, if part of the installation directory has been deleted manually by mistake, uninstalling or reinstalling may fail. In that case, clear the TDengine package's installation information by running:
```
$ sudo rm -f /var/lib/dpkg/info/tdengine*
```
Then install again.
- For an rpm installation, if part of the installation directory has been deleted manually by mistake, uninstalling or reinstalling may fail. In that case, clear the TDengine package's installation information by running:
```
$ sudo rpm -e --noscripts tdengine
```
Then install again.
:::
## Installation Directory Layout
After TDengine is installed successfully, the main installation directory is /usr/local/taos, with the following contents:
```
$ cd /usr/local/taos
$ ll
total 28
drwxr-xr-x 7 root root 4096 Feb 22 09:34 ./
drwxr-xr-x 12 root root 4096 Feb 22 09:34 ../
drwxr-xr-x 2 root root 4096 Feb 22 09:34 bin/
drwxr-xr-x 2 root root 4096 Feb 22 09:34 cfg/
lrwxrwxrwx 1 root root 13 Feb 22 09:34 data -> /var/lib/taos/
drwxr-xr-x 2 root root 4096 Feb 22 09:34 driver/
drwxr-xr-x 10 root root 4096 Feb 22 09:34 examples/
drwxr-xr-x 2 root root 4096 Feb 22 09:34 include/
lrwxrwxrwx 1 root root 13 Feb 22 09:34 log -> /var/log/taos/
```
- The configuration directory, data directory, and log directory are created automatically.
- Default configuration file: /etc/taos/taos.cfg, symlinked to /usr/local/taos/cfg/taos.cfg
- Default data directory: /var/lib/taos, symlinked to /usr/local/taos/data
- Default log directory: /var/log/taos, symlinked to /usr/local/taos/log
- Executables under /usr/local/taos/bin are symlinked into /usr/bin;
- Dynamic libraries under /usr/local/taos/driver are symlinked into /usr/lib;
- Header files under /usr/local/taos/include are symlinked into /usr/include.
## Uninstall and Update Notes
When the package is uninstalled, the configuration file, database files, and log files are preserved, i.e. /etc/taos/taos.cfg, /var/lib/taos, and /var/log/taos. If you are sure you no longer need them, you can delete them manually, but be very careful: once deleted, the data is lost permanently and cannot be recovered!
For an update installation, if the default configuration file (/etc/taos/taos.cfg) already exists, the existing file continues to be used, and the configuration file shipped in the package is saved as taos.cfg.orig under /usr/local/taos/cfg/ as a reference for setting parameters; if no configuration file exists, the one shipped in the package is used.
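For example, after an update you can compare your live configuration with the shipped reference to spot newly introduced parameters (a minimal sketch):
```
$ diff /etc/taos/taos.cfg /usr/local/taos/cfg/taos.cfg.orig
```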
## Start and Stop
TDengine uses the Linux systemd/systemctl/service facilities to manage starting, stopping, and restarting the service. TDengine's service process is taosd, and by default it starts automatically after system boot. A DBA can stop, start, or restart the service manually with systemd/systemctl/service.
Taking systemctl as an example, the commands are:
- Start the service: `systemctl start taosd`
- Stop the service: `systemctl stop taosd`
- Restart the service: `systemctl restart taosd`
- Check the service status: `systemctl status taosd`
Note: since version 2.4, TDengine includes a separate component, taosAdapter, whose service must also be started and stopped with the systemctl command, as sketched below.
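A sketch of managing the adapter the same way as taosd (assuming the service unit is named `taosadapter`; check the installed unit files if it differs):
```
$ sudo systemctl start taosadapter
$ sudo systemctl status taosadapter
```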
If the service process is active, the status command shows the following:
```
Active: active (running)
```
If the background service process is stopped, the status command shows the following:
```
Active: inactive (dead)
```
## Upgrade
Upgrading happens at two levels: upgrading the installed package and upgrading a running instance.
@ -280,4 +136,4 @@ TDengine 使用 Linux 系统的 systemd/systemctl/service 来管理系统的启
:::warning
TDengine does not guarantee that older versions are compatible with data written by newer versions, so downgrading is never recommended.
:::

View File

@ -103,6 +103,7 @@ typedef struct SDataBlockInfo {
int16_t hasVarCol;
uint32_t capacity;
// TODO: optimize and remove following
int64_t version; // used for stream, and need serialization
int32_t childId; // used for stream, do not serialize
EStreamType type; // used for stream, do not serialize
STimeWindow calWin; // used for stream, do not serialize

View File

@ -438,7 +438,7 @@ static FORCE_INLINE int32_t tDecodeSSchemaWrapperEx(SDecoder* pDecoder, SSchemaW
return 0;
}
STSchema* tdGetSTSChemaFromSSChema(SSchema** pSchema, int32_t nCols);
STSchema* tdGetSTSChemaFromSSChema(SSchema* pSchema, int32_t nCols, int32_t sver);
typedef struct {
char name[TSDB_TABLE_FNAME_LEN];
@ -1359,6 +1359,7 @@ typedef struct {
int32_t numOfCols;
int64_t skey;
int64_t ekey;
int64_t version; // for stream
char data[];
} SRetrieveTableRsp;

View File

@ -50,6 +50,7 @@ bool tNameIsValid(const SName* name);
const char* tNameGetTableName(const SName* name);
int32_t tNameGetDbName(const SName* name, char* dst);
const char* tNameGetDbNameP(const SName* name);
int32_t tNameGetFullDbName(const SName* name, char* dst);

View File

@ -64,7 +64,7 @@ qTaskInfo_t qCreateStreamExecTaskInfo(void* msg, SReadHandle* readers);
* @param SReadHandle
* @return
*/
qTaskInfo_t qCreateQueueExecTaskInfo(void* msg, SReadHandle* readers, int32_t* numOfCols);
qTaskInfo_t qCreateQueueExecTaskInfo(void* msg, SReadHandle* readers, int32_t* numOfCols, SSchemaWrapper** pSchemaWrapper);
/**
* Set the input data block for the stream scan.

View File

@ -197,6 +197,7 @@ bool fmIsSystemInfoFunc(int32_t funcId);
bool fmIsImplicitTsFunc(int32_t funcId);
bool fmIsClientPseudoColumnFunc(int32_t funcId);
bool fmIsMultiRowsFunc(int32_t funcId);
bool fmIsKeepOrderFunc(int32_t funcId);
int32_t fmGetDistMethod(const SFunctionNode* pFunc, SFunctionNode** pPartialFunc, SFunctionNode** pMergeFunc);

View File

@ -29,9 +29,17 @@ extern "C" {
typedef enum EDataOrderLevel {
DATA_ORDER_LEVEL_NONE = 1,
DATA_ORDER_LEVEL_IN_BLOCK,
DATA_ORDER_LEVEL_IN_GROUP
DATA_ORDER_LEVEL_IN_GROUP,
DATA_ORDER_LEVEL_GLOBAL
} EDataOrderLevel;
typedef enum EGroupAction {
GROUP_ACTION_NONE = 1,
GROUP_ACTION_SET,
GROUP_ACTION_KEEP,
GROUP_ACTION_CLEAR
} EGroupAction;
typedef struct SLogicNode {
ENodeType type;
SNodeList* pTargets; // SColumnNode
@ -44,6 +52,7 @@ typedef struct SLogicNode {
SNode* pSlimit;
EDataOrderLevel requireDataOrder; // requirements for input data
EDataOrderLevel resultDataOrder; // properties of the output data
EGroupAction groupAction;
} SLogicNode;
typedef enum EScanType {

View File

@ -193,7 +193,7 @@ int32_t taosAsyncExec(__async_exec_fn_t execFn, void* execParam, int32_t* code);
void destroySendMsgInfo(SMsgSendInfo* pMsgBody);
int32_t asyncSendMsgToServerExt(void* pTransporter, SEpSet* epSet, int64_t* pTransporterId, const SMsgSendInfo* pInfo,
int32_t asyncSendMsgToServerExt(void* pTransporter, SEpSet* epSet, int64_t* pTransporterId, SMsgSendInfo* pInfo,
bool persistHandle, void* ctx);
/**
@ -205,7 +205,7 @@ int32_t asyncSendMsgToServerExt(void* pTransporter, SEpSet* epSet, int64_t* pTra
* @param pInfo
* @return
*/
int32_t asyncSendMsgToServer(void* pTransporter, SEpSet* epSet, int64_t* pTransporterId, const SMsgSendInfo* pInfo);
int32_t asyncSendMsgToServer(void* pTransporter, SEpSet* epSet, int64_t* pTransporterId, SMsgSendInfo* pInfo);
int32_t queryBuildUseDbOutput(SUseDbOutput* pOut, SUseDbRsp* usedbRsp);
@ -260,6 +260,8 @@ extern int32_t (*queryProcessMsgRsp[TDMT_MAX])(void* output, char* msg, int32_t
#define REQUEST_TOTAL_EXEC_TIMES 2
#define IS_SYS_DBNAME(_dbname) (((*(_dbname) == 'i') && (0 == strcmp(_dbname, TSDB_INFORMATION_SCHEMA_DB))) || ((*(_dbname) == 'p') && (0 == strcmp(_dbname, TSDB_PERFORMANCE_SCHEMA_DB))))
#define qFatal(...) \
do { \
if (qDebugFlag & DEBUG_FATAL) { \

View File

@ -142,6 +142,7 @@ static FORCE_INLINE void* streamQueueNextItem(SStreamQueue* queue) {
ASSERT(queue->qItem != NULL);
return streamQueueCurItem(queue);
} else {
queue->qItem = NULL;
taosGetQitem(queue->qall, &queue->qItem);
if (queue->qItem == NULL) {
taosReadAllQitems(queue->queue, queue->qall);

View File

@ -33,11 +33,12 @@ typedef struct SUpdateInfo {
int64_t watermark;
TSKEY minTS;
SScalableBf* pCloseWinSBF;
SHashObj* pMap;
} SUpdateInfo;
SUpdateInfo *updateInfoInitP(SInterval* pInterval, int64_t watermark);
SUpdateInfo *updateInfoInit(int64_t interval, int32_t precision, int64_t watermark);
bool updateInfoIsUpdated(SUpdateInfo *pInfo, tb_uid_t tableId, TSKEY ts);
bool updateInfoIsUpdated(SUpdateInfo *pInfo, uint64_t tableId, TSKEY ts);
void updateInfoDestroy(SUpdateInfo *pInfo);
void updateInfoAddCloseWindowSBF(SUpdateInfo *pInfo);
void updateInfoDestoryColseWinSBF(SUpdateInfo *pInfo);

View File

@ -124,18 +124,16 @@ void *rpcReallocCont(void *ptr, int32_t contLen);
// Because taosd supports multi-process mode
// These functions should not be used on the server side
// Please use tmsg<xx> functions, which are defined in tmsgcb.h
void rpcSendRequest(void *thandle, const SEpSet *pEpSet, SRpcMsg *pMsg, int64_t *rid);
void rpcSendResponse(const SRpcMsg *pMsg);
void rpcRegisterBrokenLinkArg(SRpcMsg *msg);
void rpcReleaseHandle(void *handle, int8_t type); // just release conn to rpc instance, no close sock
int rpcSendRequest(void *thandle, const SEpSet *pEpSet, SRpcMsg *pMsg, int64_t *rid);
int rpcSendResponse(const SRpcMsg *pMsg);
int rpcRegisterBrokenLinkArg(SRpcMsg *msg);
int rpcReleaseHandle(void *handle, int8_t type); // just release conn to rpc instance, no close sock
// These functions will not be called in the child process
void rpcSendRedirectRsp(void *pConn, const SEpSet *pEpSet);
void rpcSendRequestWithCtx(void *thandle, const SEpSet *pEpSet, SRpcMsg *pMsg, int64_t *rid, SRpcCtx *ctx);
int32_t rpcGetConnInfo(void *thandle, SRpcConnInfo *pInfo);
void rpcSendRecv(void *shandle, SEpSet *pEpSet, SRpcMsg *pReq, SRpcMsg *pRsp);
void rpcSetDefaultAddr(void *thandle, const char *ip, const char *fqdn);
void* rpcAllocHandle();
int rpcSendRequestWithCtx(void *thandle, const SEpSet *pEpSet, SRpcMsg *pMsg, int64_t *rid, SRpcCtx *ctx);
int rpcSendRecv(void *shandle, SEpSet *pEpSet, SRpcMsg *pReq, SRpcMsg *pRsp);
int rpcSetDefaultAddr(void *thandle, const char *ip, const char *fqdn);
void *rpcAllocHandle();
#ifdef __cplusplus
}

View File

@ -44,7 +44,11 @@ extern "C" {
#define SIGBREAK 1234
#endif
#ifdef WINDOWS
typedef BOOL (*FSignalHandler)(DWORD fdwCtrlType);
#else
typedef void (*FSignalHandler)(int32_t signum, void *sigInfo, void *context);
#endif
void taosSetSignal(int32_t signum, FSignalHandler sigfp);
void taosIgnSignal(int32_t signum);
void taosDflSignal(int32_t signum);

View File

@ -33,7 +33,7 @@ typedef struct {
SDiskSize size;
} SDiskSpace;
bool taosCheckSystemIsSmallEnd();
bool taosCheckSystemIsLittleEnd();
void taosGetSystemInfo();
int32_t taosGetEmail(char *email, int32_t maxLen);
int32_t taosGetOsReleaseName(char *releaseName, int32_t maxLen);

View File

@ -29,9 +29,6 @@ extern "C" {
#define tcgetattr TCGETATTR_FUNC_TAOS_FORBID
#endif
#define TAOS_CONSOLE_PROMPT_HEADER "taos> "
#define TAOS_CONSOLE_PROMPT_CONTINUE " -> "
typedef struct TdCmd *TdCmdPtr;
TdCmdPtr taosOpenCmd(const char* cmd);

View File

@ -72,7 +72,6 @@ typedef struct SStmtBindInfo {
typedef struct SStmtExecInfo {
int32_t affectedRows;
SRequestObj* pRequest;
SHashObj* pVgHash;
SHashObj* pBlockHash;
bool autoCreateTbl;
} SStmtExecInfo;
@ -88,6 +87,7 @@ typedef struct SStmtSQLInfo {
SArray* nodeList;
SStmtQueryResInfo queryRes;
bool autoCreateTbl;
SHashObj* pVgHash;
} SStmtSQLInfo;
typedef struct STscStmt {

View File

@ -88,7 +88,7 @@ void closeTransporter(SAppInstInfo *pAppInfo) {
static bool clientRpcRfp(int32_t code, tmsg_t msgType) {
if (NEED_REDIRECT_ERROR(code)) {
if (msgType == TDMT_SCH_QUERY || msgType == TDMT_SCH_MERGE_QUERY || msgType == TDMT_SCH_FETCH ||
msgType == TDMT_SCH_MERGE_FETCH) {
msgType == TDMT_SCH_MERGE_FETCH || msgType == TDMT_SCH_QUERY_HEARTBEAT || msgType == TDMT_SCH_DROP_TASK) {
return false;
}
return true;

View File

@ -590,6 +590,11 @@ int32_t buildAsyncExecNodeList(SRequestObj* pRequest, SArray** pNodeList, SArray
return code;
}
void freeVgList(void *list) {
SArray* pList = *(SArray**)list;
taosArrayDestroy(pList);
}
int32_t buildSyncExecNodeList(SRequestObj* pRequest, SArray** pNodeList, SArray* pMnodeList) {
SArray* pDbVgList = NULL;
SArray* pQnodeList = NULL;
@ -641,7 +646,7 @@ int32_t buildSyncExecNodeList(SRequestObj* pRequest, SArray** pNodeList, SArray*
_return:
taosArrayDestroy(pDbVgList);
taosArrayDestroyEx(pDbVgList, freeVgList);
taosArrayDestroy(pQnodeList);
return code;
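
The fix above destroys the vgroup lists with an element callback instead of freeing
only the outer container; a minimal sketch of that pattern, assuming the SArray
helpers live in "tarray.h".

#include "tarray.h"   // assumed header for taosArrayDestroy()/taosArrayDestroyEx()

// the callback receives a pointer to the stored slot, hence the extra dereference
static void freeInnerArray(void *slot) {
  SArray *inner = *(SArray **)slot;
  taosArrayDestroy(inner);
}

static void destroyArrayOfArrays(SArray *outer) {
  // plain taosArrayDestroy(outer) would free the container but leak every inner array
  taosArrayDestroyEx(outer, freeInnerArray);
}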

View File

@ -6,11 +6,16 @@
#include "clientStmt.h"
static int32_t stmtCreateRequest(STscStmt* pStmt) {
int32_t code = 0;
if (pStmt->exec.pRequest == NULL) {
return buildRequest(pStmt->taos->id, pStmt->sql.sqlStr, pStmt->sql.sqlLen, NULL, false, &pStmt->exec.pRequest);
} else {
return TSDB_CODE_SUCCESS;
code = buildRequest(pStmt->taos->id, pStmt->sql.sqlStr, pStmt->sql.sqlLen, NULL, false, &pStmt->exec.pRequest);
if (TSDB_CODE_SUCCESS == code) {
pStmt->exec.pRequest->syncQuery = true;
}
}
return code;
}
int32_t stmtSwitchStatus(STscStmt* pStmt, STMT_STATUS newStatus) {
@ -155,7 +160,7 @@ int32_t stmtUpdateBindInfo(TAOS_STMT* stmt, STableMeta* pTableMeta, void* tags,
int32_t stmtUpdateExecInfo(TAOS_STMT* stmt, SHashObj* pVgHash, SHashObj* pBlockHash, bool autoCreateTbl) {
STscStmt* pStmt = (STscStmt*)stmt;
pStmt->exec.pVgHash = pVgHash;
pStmt->sql.pVgHash = pVgHash;
pStmt->exec.pBlockHash = pBlockHash;
pStmt->exec.autoCreateTbl = autoCreateTbl;
@ -177,7 +182,7 @@ int32_t stmtUpdateInfo(TAOS_STMT* stmt, STableMeta* pTableMeta, void* tags, char
int32_t stmtGetExecInfo(TAOS_STMT* stmt, SHashObj** pVgHash, SHashObj** pBlockHash) {
STscStmt* pStmt = (STscStmt*)stmt;
*pVgHash = pStmt->exec.pVgHash;
*pVgHash = pStmt->sql.pVgHash;
*pBlockHash = pStmt->exec.pBlockHash;
return TSDB_CODE_SUCCESS;
@ -227,7 +232,7 @@ int32_t stmtParseSql(STscStmt* pStmt) {
};
STMT_ERR_RET(stmtCreateRequest(pStmt));
STMT_ERR_RET(parseSql(pStmt->exec.pRequest, false, &pStmt->sql.pQuery, &stmtCb));
pStmt->bInfo.needParse = false;
@ -308,6 +313,8 @@ int32_t stmtCleanSQLInfo(STscStmt* pStmt) {
taosMemoryFree(pStmt->sql.sqlStr);
qDestroyQuery(pStmt->sql.pQuery);
taosArrayDestroy(pStmt->sql.nodeList);
taosHashCleanup(pStmt->sql.pVgHash);
pStmt->sql.pVgHash = NULL;
void* pIter = taosHashIterate(pStmt->sql.pTableCache, NULL);
while (pIter) {
@ -340,7 +347,7 @@ int32_t stmtRebuildDataBlock(STscStmt* pStmt, STableDataBlocks* pDataBlock, STab
STMT_ERR_RET(catalogGetTableHashVgroup(pStmt->pCatalog, &conn, &pStmt->bInfo.sname, &vgInfo));
STMT_ERR_RET(
taosHashPut(pStmt->exec.pVgHash, (const char*)&vgInfo.vgId, sizeof(vgInfo.vgId), (char*)&vgInfo, sizeof(vgInfo)));
taosHashPut(pStmt->sql.pVgHash, (const char*)&vgInfo.vgId, sizeof(vgInfo.vgId), (char*)&vgInfo, sizeof(vgInfo)));
STMT_ERR_RET(qRebuildStmtDataBlock(newBlock, pDataBlock, uid, vgInfo.vgId));
@ -680,6 +687,7 @@ int stmtBindBatch(TAOS_STMT* stmt, TAOS_MULTI_BIND* bind, int32_t colIdx) {
if (pStmt->sql.pQuery->haveResultSet) {
setResSchemaInfo(&pStmt->exec.pRequest->body.resInfo, pStmt->sql.pQuery->pResSchema,
pStmt->sql.pQuery->numOfResCols);
taosMemoryFreeClear(pStmt->sql.pQuery->pResSchema);
setResPrecision(&pStmt->exec.pRequest->body.resInfo, pStmt->sql.pQuery->precision);
}
@ -804,7 +812,7 @@ int stmtExec(TAOS_STMT* stmt) {
if (STMT_TYPE_QUERY == pStmt->sql.type) {
launchQueryImpl(pStmt->exec.pRequest, pStmt->sql.pQuery, true, NULL);
} else {
STMT_ERR_RET(qBuildStmtOutput(pStmt->sql.pQuery, pStmt->exec.pVgHash, pStmt->exec.pBlockHash));
STMT_ERR_RET(qBuildStmtOutput(pStmt->sql.pQuery, pStmt->sql.pVgHash, pStmt->exec.pBlockHash));
launchQueryImpl(pStmt->exec.pRequest, pStmt->sql.pQuery, true, (autoCreateTbl ? (void**)&pRsp : NULL));
}
@ -847,9 +855,10 @@ _return:
int stmtClose(TAOS_STMT* stmt) {
STscStmt* pStmt = (STscStmt*)stmt;
STMT_RET(stmtCleanSQLInfo(pStmt));
stmtCleanSQLInfo(pStmt);
taosMemoryFree(stmt);
return TSDB_CODE_SUCCESS;
}
const char* stmtErrstr(TAOS_STMT* stmt) {

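For context, a hedged sketch of the public prepared-statement flow these internals
back. The taos_stmt_* calls and the TAOS_MULTI_BIND field names reflect the standard
client API and should be checked against taos.h; the table name and values are
placeholders.

#include <stdio.h>
#include <string.h>
#include "taos.h"

static int insertOneRow(TAOS *conn) {
  int64_t ts = 1658476800000;   // sample timestamp (ms)
  float   v  = 12.5f;
  TAOS_MULTI_BIND params[2];
  memset(params, 0, sizeof(params));

  TAOS_STMT *stmt = taos_stmt_init(conn);
  if (stmt == NULL) return -1;

  if (taos_stmt_prepare(stmt, "insert into d0 values(?, ?)", 0) != 0) goto _fail;

  params[0].buffer_type   = TSDB_DATA_TYPE_TIMESTAMP;
  params[0].buffer        = &ts;
  params[0].buffer_length = sizeof(ts);
  params[0].num           = 1;
  params[1].buffer_type   = TSDB_DATA_TYPE_FLOAT;
  params[1].buffer        = &v;
  params[1].buffer_length = sizeof(v);
  params[1].num           = 1;

  if (taos_stmt_bind_param_batch(stmt, params) != 0) goto _fail;
  if (taos_stmt_add_batch(stmt) != 0) goto _fail;
  if (taos_stmt_execute(stmt) != 0) goto _fail;

  taos_stmt_close(stmt);
  return 0;

_fail:
  fprintf(stderr, "stmt error: %s\n", taos_stmt_errstr(stmt));
  taos_stmt_close(stmt);
  return -1;
}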
View File

@ -826,7 +826,7 @@ TEST(testCase, update_test) {
TAOS* pConn = taos_connect("localhost", "root", "taosdata", NULL, 0);
ASSERT_NE(pConn, nullptr);
TAOS_RES* pRes = taos_query(pConn, "create database if not exists abc1");
TAOS_RES* pRes = taos_query(pConn, "select cast(0 as timestamp)-1y");
if (taos_errno(pRes) != TSDB_CODE_SUCCESS) {
printf("failed to create database, code:%s", taos_errstr(pRes));
taos_free_result(pRes);

View File

@ -1107,6 +1107,11 @@ int32_t blockDataSort_rv(SSDataBlock* pDataBlock, SArray* pOrderInfo, bool nullF
void blockDataCleanup(SSDataBlock* pDataBlock) {
pDataBlock->info.rows = 0;
pDataBlock->info.groupId = 0;
pDataBlock->info.window.ekey = 0;
pDataBlock->info.window.skey = 0;
size_t numOfCols = taosArrayGetSize(pDataBlock->pDataBlock);
for (int32_t i = 0; i < numOfCols; ++i) {
SColumnInfoData* p = taosArrayGet(pDataBlock->pDataBlock, i);
@ -1163,9 +1168,15 @@ static int32_t doEnsureCapacity(SColumnInfoData* pColumn, const SDataBlockInfo*
void colInfoDataCleanup(SColumnInfoData* pColumn, uint32_t numOfRows) {
if (IS_VAR_DATA_TYPE(pColumn->info.type)) {
pColumn->varmeta.length = 0;
if (pColumn->varmeta.offset > 0) {
memset(pColumn->varmeta.offset, 0, sizeof(int32_t) * numOfRows);
}
} else {
if (pColumn->nullbitmap != NULL) {
memset(pColumn->nullbitmap, 0, BitmapLen(numOfRows));
if (pColumn->pData != NULL) {
memset(pColumn->pData, 0, pColumn->info.bytes * numOfRows);
}
}
}
}

View File

@ -51,15 +51,15 @@ int32_t tsNumOfShmThreads = 1;
int32_t tsNumOfRpcThreads = 1;
int32_t tsNumOfCommitThreads = 2;
int32_t tsNumOfTaskQueueThreads = 1;
int32_t tsNumOfMnodeQueryThreads = 2;
int32_t tsNumOfMnodeQueryThreads = 4;
int32_t tsNumOfMnodeFetchThreads = 1;
int32_t tsNumOfMnodeReadThreads = 1;
int32_t tsNumOfVnodeQueryThreads = 2;
int32_t tsNumOfVnodeQueryThreads = 4;
int32_t tsNumOfVnodeStreamThreads = 2;
int32_t tsNumOfVnodeFetchThreads = 4;
int32_t tsNumOfVnodeWriteThreads = 2;
int32_t tsNumOfVnodeSyncThreads = 2;
int32_t tsNumOfQnodeQueryThreads = 2;
int32_t tsNumOfQnodeQueryThreads = 4;
int32_t tsNumOfQnodeFetchThreads = 4;
int32_t tsNumOfSnodeSharedThreads = 2;
int32_t tsNumOfSnodeUniqueThreads = 2;
@ -402,16 +402,16 @@ static int32_t taosAddServerCfg(SConfig *pCfg) {
tsNumOfCommitThreads = TRANGE(tsNumOfCommitThreads, 2, 4);
if (cfgAddInt32(pCfg, "numOfCommitThreads", tsNumOfCommitThreads, 1, 1024, 0) != 0) return -1;
tsNumOfMnodeQueryThreads = tsNumOfCores / 8;
tsNumOfMnodeQueryThreads = TRANGE(tsNumOfMnodeQueryThreads, 1, 4);
tsNumOfMnodeQueryThreads = tsNumOfCores * 2;
tsNumOfMnodeQueryThreads = TRANGE(tsNumOfMnodeQueryThreads, 4, 8);
if (cfgAddInt32(pCfg, "numOfMnodeQueryThreads", tsNumOfMnodeQueryThreads, 1, 1024, 0) != 0) return -1;
tsNumOfMnodeReadThreads = tsNumOfCores / 8;
tsNumOfMnodeReadThreads = TRANGE(tsNumOfMnodeReadThreads, 1, 4);
if (cfgAddInt32(pCfg, "numOfMnodeReadThreads", tsNumOfMnodeReadThreads, 1, 1024, 0) != 0) return -1;
tsNumOfVnodeQueryThreads = tsNumOfCores / 4;
tsNumOfVnodeQueryThreads = TMAX(tsNumOfVnodeQueryThreads, 2);
tsNumOfVnodeQueryThreads = tsNumOfCores * 2;
tsNumOfVnodeQueryThreads = TMAX(tsNumOfVnodeQueryThreads, 4);
if (cfgAddInt32(pCfg, "numOfVnodeQueryThreads", tsNumOfVnodeQueryThreads, 1, 1024, 0) != 0) return -1;
tsNumOfVnodeStreamThreads = tsNumOfCores / 4;
@ -430,8 +430,8 @@ static int32_t taosAddServerCfg(SConfig *pCfg) {
tsNumOfVnodeSyncThreads = TMAX(tsNumOfVnodeSyncThreads, 1);
if (cfgAddInt32(pCfg, "numOfVnodeSyncThreads", tsNumOfVnodeSyncThreads, 1, 1024, 0) != 0) return -1;
tsNumOfQnodeQueryThreads = tsNumOfCores / 2;
tsNumOfQnodeQueryThreads = TMAX(tsNumOfQnodeQueryThreads, 1);
tsNumOfQnodeQueryThreads = tsNumOfCores * 2;
tsNumOfQnodeQueryThreads = TMAX(tsNumOfQnodeQueryThreads, 4);
if (cfgAddInt32(pCfg, "numOfQnodeQueryThreads", tsNumOfQnodeQueryThreads, 1, 1024, 0) != 0) return -1;
tsNumOfQnodeFetchThreads = tsNumOfCores / 2;
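
The new defaults scale up with the core count: on a 4-core host numOfVnodeQueryThreads
now comes out as max(4 * 2, 4) = 8 where the old formula gave max(4 / 4, 2) = 2, and the
mnode/qnode pools move from 1 and 2 to 8 each. The options can still be pinned explicitly
in taos.cfg under the names passed to cfgAddInt32 above. A hypothetical helper mirroring
the arithmetic:

static void deriveQueryThreadDefaults(int32_t nCores, int32_t *pMnode, int32_t *pVnode, int32_t *pQnode) {
  *pMnode = nCores * 2;            // was nCores / 8
  if (*pMnode < 4) *pMnode = 4;
  if (*pMnode > 8) *pMnode = 8;    // TRANGE(x, 4, 8)
  *pVnode = nCores * 2;            // was nCores / 4
  if (*pVnode < 4) *pVnode = 4;    // TMAX(x, 4)
  *pQnode = nCores * 2;            // was nCores / 2
  if (*pQnode < 4) *pQnode = 4;    // TMAX(x, 4)
}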

View File

@ -4941,14 +4941,14 @@ int tDecodeSVCreateStbReq(SDecoder *pCoder, SVCreateStbReq *pReq) {
return 0;
}
STSchema *tdGetSTSChemaFromSSChema(SSchema **pSchema, int32_t nCols) {
STSchema *tdGetSTSChemaFromSSChema(SSchema *pSchema, int32_t nCols, int32_t sver) {
STSchemaBuilder schemaBuilder = {0};
if (tdInitTSchemaBuilder(&schemaBuilder, 1) < 0) {
if (tdInitTSchemaBuilder(&schemaBuilder, sver) < 0) {
return NULL;
}
for (int i = 0; i < nCols; i++) {
SSchema *schema = *pSchema + i;
SSchema *schema = pSchema + i;
if (tdAddColToSchema(&schemaBuilder, schema->type, schema->flags, schema->colId, schema->bytes) < 0) {
tdDestroyTSchemaBuilder(&schemaBuilder);
return NULL;
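
A hypothetical call site updated for the new three-argument form; the column ids, types
and schema version are example values, and the declaring header is assumed to be tmsg.h.

#include "tmsg.h"   // assumed home of SSchema/STSchema and the builder above

static STSchema *buildExampleSchema(void) {
  // two columns: a timestamp key and a float value; bytes follow the fixed type sizes
  SSchema cols[2] = {
      {.colId = 1, .type = TSDB_DATA_TYPE_TIMESTAMP, .flags = 0, .bytes = 8},
      {.colId = 2, .type = TSDB_DATA_TYPE_FLOAT,     .flags = 0, .bytes = 4},
  };
  // the schema array is now passed directly (no extra indirection) plus a schema version
  return tdGetSTSChemaFromSSChema(cols, 2, 1);
}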

View File

@ -190,6 +190,11 @@ int32_t tNameGetDbName(const SName* name, char* dst) {
return 0;
}
const char* tNameGetDbNameP(const SName* name) {
return &name->dbname[0];
}
int32_t tNameGetFullDbName(const SName* name, char* dst) {
assert(name != NULL && dst != NULL);
snprintf(dst, TSDB_DB_FNAME_LEN, "%d.%s", name->acctId, name->dbname);

View File

@ -568,6 +568,7 @@ int32_t tdSTSRowNew(SArray *pArray, STSchema *pTSchema, STSRow **ppRow) {
int32_t maxVarDataLen = 0;
int32_t iColVal = 0;
void *varBuf = NULL;
bool isAlloc = false;
ASSERT(nColVal > 1);
@ -610,8 +611,11 @@ int32_t tdSTSRowNew(SArray *pArray, STSchema *pTSchema, STSRow **ppRow) {
++iColVal;
}
*ppRow = (STSRow *)taosMemoryCalloc(
1, sizeof(STSRow) + pTSchema->flen + varDataLen + TD_BITMAP_BYTES(pTSchema->numOfCols - 1));
if (!(*ppRow)) {
*ppRow = (STSRow *)taosMemoryCalloc(
1, sizeof(STSRow) + pTSchema->flen + varDataLen + TD_BITMAP_BYTES(pTSchema->numOfCols - 1));
isAlloc = true;
}
if (!(*ppRow)) {
terrno = TSDB_CODE_OUT_OF_MEMORY;
@ -621,7 +625,9 @@ int32_t tdSTSRowNew(SArray *pArray, STSchema *pTSchema, STSRow **ppRow) {
if (maxVarDataLen > 0) {
varBuf = taosMemoryMalloc(maxVarDataLen);
if (!varBuf) {
taosMemoryFreeClear(*ppRow);
if(isAlloc) {
taosMemoryFreeClear(*ppRow);
}
terrno = TSDB_CODE_OUT_OF_MEMORY;
return -1;
}
@ -1323,12 +1329,11 @@ void tTSRowGetVal(STSRow *pRow, STSchema *pTSchema, int16_t iCol, SColVal *pColV
SCellVal cv;
SValue value;
ASSERT(iCol > 0);
ASSERT((pTColumn->colId == PRIMARYKEY_TIMESTAMP_COL_ID) || (iCol > 0));
if (TD_IS_TP_ROW(pRow)) {
tdSTpRowGetVal(pRow, pTColumn->colId, pTColumn->type, pTSchema->flen, pTColumn->offset, iCol - 1, &cv);
} else if (TD_IS_KV_ROW(pRow)) {
ASSERT(iCol > 0);
tdSKvRowGetVal(pRow, pTColumn->colId, iCol - 1, &cv);
} else {
ASSERT(0);

View File

@ -116,7 +116,7 @@ STSchema *genSTSchema(int16_t nCols) {
}
STSchema *pResult = NULL;
pResult = tdGetSTSChemaFromSSChema(&pSchema, nCols);
pResult = tdGetSTSChemaFromSSChema(pSchema, nCols, 1);
taosMemoryFree(pSchema);
return pResult;

View File

@ -158,8 +158,8 @@ static void taosCleanupArgs() {
}
int main(int argc, char const *argv[]) {
if (!taosCheckSystemIsSmallEnd()) {
printf("failed to start since on non-small-end machines\n");
if (!taosCheckSystemIsLittleEnd()) {
printf("failed to start since on non-little-end machines\n");
return -1;
}
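
The check itself is a one-byte probe; a self-contained sketch of what such a
little-endian test typically looks like (not taosCheckSystemIsLittleEnd()'s actual
implementation):

#include <stdbool.h>
#include <stdint.h>

static bool isLittleEndian(void) {
  uint16_t probe = 0x0001;                    // low-order byte is stored first on little-endian hosts
  return *(const uint8_t *)&probe == 0x01;
}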

View File

@ -868,7 +868,10 @@ int32_t mndDropSubByTopic(SMnode *pMnode, STrans *pTrans, const char *topicName)
}
// iter all vnode to delete handle
ASSERT(taosHashGetSize(pSub->consumerHash) == 0);
if (taosHashGetSize(pSub->consumerHash) != 0) {
sdbRelease(pSdb, pSub);
return -1;
}
int32_t sz = taosArrayGetSize(pSub->unassignedVgs);
for (int32_t i = 0; i < sz; i++) {
SMqVgEp *pVgEp = taosArrayGetP(pSub->unassignedVgs, i);

View File

@ -583,6 +583,7 @@ static int32_t mndProcessDropTopicReq(SRpcMsg *pReq) {
mndTransSetDbName(pTrans, pTopic->db, NULL);
if (pTrans == NULL) {
mError("topic:%s, failed to drop since %s", pTopic->name, terrstr());
mndReleaseTopic(pMnode, pTopic);
return -1;
}
@ -590,11 +591,17 @@ static int32_t mndProcessDropTopicReq(SRpcMsg *pReq) {
if (mndDropOffsetByTopic(pMnode, pTrans, dropReq.name) < 0) {
ASSERT(0);
mndTransDrop(pTrans);
mndReleaseTopic(pMnode, pTopic);
return -1;
}
// TODO check if rebalancing
if (mndDropSubByTopic(pMnode, pTrans, dropReq.name) < 0) {
ASSERT(0);
/*ASSERT(0);*/
mError("topic:%s, failed to drop since %s", pTopic->name, terrstr());
mndTransDrop(pTrans);
mndReleaseTopic(pMnode, pTopic);
return -1;
}

View File

@ -68,7 +68,7 @@ typedef struct {
typedef struct {
char* qmsg;
qTaskInfo_t task[5];
qTaskInfo_t task;
} STqExecCol;
typedef struct {
@ -82,13 +82,14 @@ typedef struct {
typedef struct {
int8_t subType;
STqReader* pExecReader[5];
STqReader* pExecReader;
union {
STqExecCol execCol;
STqExecTb execTb;
STqExecDb execDb;
};
int32_t numOfCols; // number of out pout column, temporarily used
int32_t numOfCols; // number of out pout column, temporarily used
SSchemaWrapper *pSchemaWrapper; // columns that are involved in query
} STqExecHandle;
typedef struct {
@ -138,8 +139,7 @@ int64_t tqScan(STQ* pTq, const STqHandle* pHandle, SMqDataRsp* pRsp, STqOffsetVa
int64_t tqFetchLog(STQ* pTq, STqHandle* pHandle, int64_t* fetchOffset, SWalCkHead** pHeadWithCkSum);
// tqExec
int32_t tqLogScanExec(STQ* pTq, STqExecHandle* pExec, SSubmitReq* pReq, SMqDataRsp* pRsp, int32_t workerId);
int32_t tqScanSnapshot(STQ* pTq, const STqExecHandle* pExec, SMqDataRsp* pRsp, STqOffsetVal offset, int32_t workerId);
int32_t tqLogScanExec(STQ* pTq, STqExecHandle* pExec, SSubmitReq* pReq, SMqDataRsp* pRsp);
int32_t tqSendDataRsp(STQ* pTq, const SRpcMsg* pMsg, const SMqPollReq* pReq, const SMqDataRsp* pRsp);
// tqMeta

View File

@ -40,12 +40,9 @@ typedef struct SDelIdx SDelIdx;
typedef struct STbData STbData;
typedef struct SMemTable SMemTable;
typedef struct STbDataIter STbDataIter;
typedef struct STable STable;
typedef struct SMapData SMapData;
typedef struct SBlockIdx SBlockIdx;
typedef struct SBlock SBlock;
typedef struct SBlockStatis SBlockStatis;
typedef struct SAggrBlkCol SAggrBlkCol;
typedef struct SColData SColData;
typedef struct SBlockDataHdr SBlockDataHdr;
typedef struct SBlockData SBlockData;
@ -62,8 +59,7 @@ typedef struct SDelFReader SDelFReader;
typedef struct SRowIter SRowIter;
typedef struct STsdbFS STsdbFS;
typedef struct SRowMerger SRowMerger;
typedef struct STsdbFSState STsdbFSState;
typedef struct STsdbSnapHdr STsdbSnapHdr;
typedef struct STsdbReadSnap STsdbReadSnap;
#define TSDB_MAX_SUBBLOCKS 8
#define TSDB_FHDR_SIZE 512
@ -176,8 +172,6 @@ void tsdbMemTableDestroy(SMemTable *pMemTable);
void tsdbGetTbDataFromMemTable(SMemTable *pMemTable, tb_uid_t suid, tb_uid_t uid, STbData **ppTbData);
void tsdbRefMemTable(SMemTable *pMemTable);
void tsdbUnrefMemTable(SMemTable *pMemTable);
int32_t tsdbTakeMemSnapshot(STsdb *pTsdb, SMemTable **ppMem, SMemTable **ppIMem);
void tsdbUntakeMemSnapshot(STsdb *pTsdb, SMemTable *pMem, SMemTable *pIMem);
// STbDataIter
int32_t tsdbTbDataIterCreate(STbData *pTbData, TSDBKEY *pFrom, int8_t backward, STbDataIter **ppIter);
void *tsdbTbDataIterDestroy(STbDataIter *pIter);
@ -188,30 +182,39 @@ bool tsdbTbDataIterNext(STbDataIter *pIter);
int32_t tsdbGetNRowsInTbData(STbData *pTbData);
// tsdbFile.c ==============================================================================================
typedef enum { TSDB_HEAD_FILE = 0, TSDB_DATA_FILE, TSDB_LAST_FILE, TSDB_SMA_FILE } EDataFileT;
void tsdbDataFileName(STsdb *pTsdb, SDFileSet *pDFileSet, EDataFileT ftype, char fname[]);
bool tsdbFileIsSame(SDFileSet *pDFileSet1, SDFileSet *pDFileSet2, EDataFileT ftype);
bool tsdbDelFileIsSame(SDelFile *pDelFile1, SDelFile *pDelFile2);
int32_t tsdbUpdateDFileHdr(TdFilePtr pFD, SDFileSet *pSet, EDataFileT ftype);
int32_t tsdbDFileRollback(STsdb *pTsdb, SDFileSet *pSet, EDataFileT ftype);
int32_t tPutDataFileHdr(uint8_t *p, SDFileSet *pSet, EDataFileT ftype);
int32_t tPutHeadFile(uint8_t *p, SHeadFile *pHeadFile);
int32_t tPutDataFile(uint8_t *p, SDataFile *pDataFile);
int32_t tPutLastFile(uint8_t *p, SLastFile *pLastFile);
int32_t tPutSmaFile(uint8_t *p, SSmaFile *pSmaFile);
int32_t tPutDelFile(uint8_t *p, SDelFile *pDelFile);
int32_t tGetDelFile(uint8_t *p, SDelFile *pDelFile);
int32_t tPutDFileSet(uint8_t *p, SDFileSet *pSet);
int32_t tGetDFileSet(uint8_t *p, SDFileSet *pSet);
void tsdbHeadFileName(STsdb *pTsdb, SDiskID did, int32_t fid, SHeadFile *pHeadF, char fname[]);
void tsdbDataFileName(STsdb *pTsdb, SDiskID did, int32_t fid, SDataFile *pDataF, char fname[]);
void tsdbLastFileName(STsdb *pTsdb, SDiskID did, int32_t fid, SLastFile *pLastF, char fname[]);
void tsdbSmaFileName(STsdb *pTsdb, SDiskID did, int32_t fid, SSmaFile *pSmaF, char fname[]);
// SDelFile
void tsdbDelFileName(STsdb *pTsdb, SDelFile *pFile, char fname[]);
// tsdbFS.c ==============================================================================================
int32_t tsdbFSOpen(STsdb *pTsdb, STsdbFS **ppFS);
int32_t tsdbFSClose(STsdbFS *pFS);
int32_t tsdbFSBegin(STsdbFS *pFS);
int32_t tsdbFSCommit(STsdbFS *pFS);
int32_t tsdbFSOpen(STsdb *pTsdb);
int32_t tsdbFSClose(STsdb *pTsdb);
int32_t tsdbFSCopy(STsdb *pTsdb, STsdbFS *pFS);
void tsdbFSDestroy(STsdbFS *pFS);
int32_t tDFileSetCmprFn(const void *p1, const void *p2);
int32_t tsdbFSCommit1(STsdb *pTsdb, STsdbFS *pFS);
int32_t tsdbFSCommit2(STsdb *pTsdb, STsdbFS *pFS);
int32_t tsdbFSRef(STsdb *pTsdb, STsdbFS *pFS);
void tsdbFSUnref(STsdb *pTsdb, STsdbFS *pFS);
int32_t tsdbFSRollback(STsdbFS *pFS);
int32_t tsdbFSStateUpsertDelFile(STsdbFSState *pState, SDelFile *pDelFile);
int32_t tsdbFSStateUpsertDFileSet(STsdbFSState *pState, SDFileSet *pSet);
void tsdbFSStateDeleteDFileSet(STsdbFSState *pState, int32_t fid);
SDelFile *tsdbFSStateGetDelFile(STsdbFSState *pState);
SDFileSet *tsdbFSStateGetDFileSet(STsdbFSState *pState, int32_t fid, int32_t flag);
int32_t tsdbFSUpsertFSet(STsdbFS *pFS, SDFileSet *pSet);
int32_t tsdbFSUpsertDelFile(STsdbFS *pFS, SDelFile *pDelFile);
// tsdbReaderWriter.c ==============================================================================================
// SDataFWriter
int32_t tsdbDataFWriterOpen(SDataFWriter **ppWriter, STsdb *pTsdb, SDFileSet *pSet);
@ -222,8 +225,7 @@ int32_t tsdbWriteBlock(SDataFWriter *pWriter, SMapData *pMapData, uint8_t **ppBu
int32_t tsdbWriteBlockData(SDataFWriter *pWriter, SBlockData *pBlockData, uint8_t **ppBuf1, uint8_t **ppBuf2,
SBlockIdx *pBlockIdx, SBlock *pBlock, int8_t cmprAlg);
SDFileSet *tsdbDataFWriterGetWSet(SDataFWriter *pWriter);
int32_t tsdbDFileSetCopy(STsdb *pTsdb, SDFileSet *pSetFrom, SDFileSet *pSetTo);
int32_t tsdbDFileSetCopy(STsdb *pTsdb, SDFileSet *pSetFrom, SDFileSet *pSetTo);
// SDataFReader
int32_t tsdbDataFReaderOpen(SDataFReader **ppReader, STsdb *pTsdb, SDFileSet *pSet);
int32_t tsdbDataFReaderClose(SDataFReader **ppReader);
@ -245,6 +247,9 @@ int32_t tsdbDelFReaderOpen(SDelFReader **ppReader, SDelFile *pFile, STsdb *pTsdb
int32_t tsdbDelFReaderClose(SDelFReader **ppReader);
int32_t tsdbReadDelData(SDelFReader *pReader, SDelIdx *pDelIdx, SArray *aDelData, uint8_t **ppBuf);
int32_t tsdbReadDelIdx(SDelFReader *pReader, SArray *aDelIdx, uint8_t **ppBuf);
// tsdbRead.c ==============================================================================================
int32_t tsdbTakeReadSnap(STsdb *pTsdb, STsdbReadSnap **ppSnap);
void tsdbUntakeReadSnap(STsdb *pTsdb, STsdbReadSnap *pSnap);
#define TSDB_CACHE_NO(c) ((c).cacheLast == 0)
#define TSDB_CACHE_LAST_ROW(c) (((c).cacheLast & 1) > 0)
@ -276,6 +281,11 @@ typedef struct {
TSKEY minKey;
} SRtn;
struct STsdbFS {
SDelFile *pDelFile;
SArray *aDFileSet; // SArray<SDFileSet>
};
struct STsdb {
char *path;
SVnode *pVnode;
@ -283,7 +293,7 @@ struct STsdb {
TdThreadRwlock rwLock;
SMemTable *mem;
SMemTable *imem;
STsdbFS *pFS;
STsdbFS fs;
SLRUCache *lruCache;
};
@ -402,16 +412,6 @@ struct SBlock {
SSubBlock aSubBlock[TSDB_MAX_SUBBLOCKS];
};
struct SAggrBlkCol {
int16_t colId;
int16_t maxIndex;
int16_t minIndex;
int16_t numOfNull;
int64_t sum;
int64_t max;
int64_t min;
};
struct SColData {
int16_t cid;
int8_t type;
@ -465,12 +465,6 @@ struct SDelIdx {
int64_t size;
};
struct SDelFile {
int64_t commitID;
int64_t size;
int64_t offset;
};
#pragma pack(push, 1)
struct SBlockDataHdr {
uint32_t delimiter;
@ -479,34 +473,50 @@ struct SBlockDataHdr {
};
#pragma pack(pop)
struct SDelFile {
volatile int32_t nRef;
int64_t commitID;
int64_t size;
int64_t offset;
};
struct SHeadFile {
volatile int32_t nRef;
int64_t commitID;
int64_t size;
int64_t offset;
};
struct SDataFile {
volatile int32_t nRef;
int64_t commitID;
int64_t size;
};
struct SLastFile {
volatile int32_t nRef;
int64_t commitID;
int64_t size;
};
struct SSmaFile {
volatile int32_t nRef;
int64_t commitID;
int64_t size;
};
struct SDFileSet {
SDiskID diskId;
int32_t fid;
SHeadFile fHead;
SDataFile fData;
SLastFile fLast;
SSmaFile fSma;
SDiskID diskId;
int32_t fid;
SHeadFile *pHeadF;
SDataFile *pDataF;
SLastFile *pLastF;
SSmaFile *pSmaF;
};
struct SRowIter {
@ -521,26 +531,33 @@ struct SRowMerger {
SArray *pArray; // SArray<SColVal>
};
struct STsdbFSState {
SDelFile *pDelFile;
SArray *aDFileSet; // SArray<aDFileSet>
SDelFile delFile;
};
struct STsdbFS {
STsdb *pTsdb;
TdThreadRwlock lock;
int8_t inTxn;
STsdbFSState *cState;
STsdbFSState *nState;
};
struct SDelFWriter {
STsdb *pTsdb;
SDelFile fDel;
TdFilePtr pWriteH;
};
struct SDataFWriter {
STsdb *pTsdb;
SDFileSet wSet;
TdFilePtr pHeadFD;
TdFilePtr pDataFD;
TdFilePtr pLastFD;
TdFilePtr pSmaFD;
SHeadFile fHead;
SDataFile fData;
SLastFile fLast;
SSmaFile fSma;
};
struct STsdbReadSnap {
SMemTable *pMem;
SMemTable *pIMem;
STsdbFS fs;
};
#ifdef __cplusplus
}
#endif
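
The read path now pins one consistent view (mem tables plus on-disk file set) through
tsdbTakeReadSnap()/tsdbUntakeReadSnap() instead of the removed tsdbTakeMemSnapshot()
and direct pFS state access. A reduced sketch of the pattern, with error handling
trimmed and suid/uid as placeholders:

static int32_t scanWithReadSnap(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid) {
  STsdbReadSnap *pSnap = NULL;
  int32_t code = tsdbTakeReadSnap(pTsdb, &pSnap);
  if (code) return code;

  STbData *pMem = NULL;
  if (pSnap->pMem) tsdbGetTbDataFromMemTable(pSnap->pMem, suid, uid, &pMem);

  SDelFile *pDelFile = pSnap->fs.pDelFile;                               // tombstones, may be NULL
  int32_t   nFileSet = (int32_t)taosArrayGetSize(pSnap->fs.aDFileSet);   // SArray<SDFileSet>
  // ... scan pMem, pDelFile and the file sets here ...
  (void)pMem; (void)pDelFile; (void)nFileSet;

  tsdbUntakeReadSnap(pTsdb, pSnap);   // always release the snapshot when done
  return 0;
}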

View File

@ -146,7 +146,7 @@ int32_t tqCheckColModifiable(STQ* pTq, int32_t colId);
int32_t tqProcessVgChangeReq(STQ* pTq, char* msg, int32_t msgLen);
int32_t tqProcessVgDeleteReq(STQ* pTq, char* msg, int32_t msgLen);
int32_t tqProcessOffsetCommitReq(STQ* pTq, char* msg, int32_t msgLen);
int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg, int32_t workerId);
int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg);
int32_t tqProcessTaskDeployReq(STQ* pTq, char* msg, int32_t msgLen);
int32_t tqProcessTaskDropReq(STQ* pTq, char* msg, int32_t msgLen);
int32_t tqProcessStreamTrigger(STQ* pTq, SSubmitReq* data);

View File

@ -579,9 +579,11 @@ static int32_t tdRSmaFetchAndSubmitResult(SRSmaInfoItem *pItem, STSchema *pTSche
while (1) {
SSDataBlock *output = NULL;
uint64_t ts;
if (qExecTask(pItem->taskInfo, &output, &ts) < 0) {
smaError("vgId:%d, qExecTask for rsma table %" PRIi64 "l evel %" PRIi8 " failed since %s", SMA_VID(pSma), suid,
pItem->level, terrstr());
int32_t code = qExecTask(pItem->taskInfo, &output, &ts);
if (code < 0) {
smaError("vgId:%d, qExecTask for rsma table %" PRIi64 " level %" PRIi8 " failed since %s", SMA_VID(pSma), suid,
pItem->level, terrstr(code));
goto _err;
}
if (!output) {
@ -597,35 +599,36 @@ static int32_t tdRSmaFetchAndSubmitResult(SRSmaInfoItem *pItem, STSchema *pTSche
}
taosArrayPush(pResult, output);
}
if (taosArrayGetSize(pResult) > 0) {
if (taosArrayGetSize(pResult) > 0) {
#if 1
char flag[10] = {0};
snprintf(flag, 10, "level %" PRIi8, pItem->level);
blockDebugShowDataBlocks(pResult, flag);
char flag[10] = {0};
snprintf(flag, 10, "level %" PRIi8, pItem->level);
blockDebugShowDataBlocks(pResult, flag);
#endif
STsdb *sinkTsdb = (pItem->level == TSDB_RETENTION_L1 ? pSma->pRSmaTsdb[0] : pSma->pRSmaTsdb[1]);
SSubmitReq *pReq = NULL;
// TODO: the schema update should be handled
if (buildSubmitReqFromDataBlock(&pReq, pResult, pTSchema, SMA_VID(pSma), suid) < 0) {
smaError("vgId:%d, build submit req for rsma table %" PRIi64 "l evel %" PRIi8 " failed since %s", SMA_VID(pSma),
suid, pItem->level, terrstr());
goto _err;
}
STsdb *sinkTsdb = (pItem->level == TSDB_RETENTION_L1 ? pSma->pRSmaTsdb[0] : pSma->pRSmaTsdb[1]);
SSubmitReq *pReq = NULL;
// TODO: the schema update should be handled
if (buildSubmitReqFromDataBlock(&pReq, pResult, pTSchema, SMA_VID(pSma), suid) < 0) {
smaError("vgId:%d, build submit req for rsma table %" PRIi64 "l evel %" PRIi8 " failed since %s", SMA_VID(pSma),
suid, pItem->level, terrstr());
goto _err;
}
if (pReq && tdProcessSubmitReq(sinkTsdb, atomic_add_fetch_64(&pStat->submitVer, 1), pReq) < 0) {
taosMemoryFreeClear(pReq);
smaError("vgId:%d, process submit req for rsma table %" PRIi64 " level %" PRIi8 " failed since %s",
SMA_VID(pSma), suid, pItem->level, terrstr());
goto _err;
}
if (pReq && tdProcessSubmitReq(sinkTsdb, atomic_add_fetch_64(&pStat->submitVer, 1), pReq) < 0) {
taosMemoryFreeClear(pReq);
smaError("vgId:%d, process submit req for rsma table %" PRIi64 " level %" PRIi8 " failed since %s", SMA_VID(pSma),
suid, pItem->level, terrstr());
goto _err;
taosArrayClear(pResult);
} else if (terrno == 0) {
smaDebug("vgId:%d, no rsma %" PRIi8 " data fetched yet", SMA_VID(pSma), pItem->level);
} else {
smaDebug("vgId:%d, no rsma %" PRIi8 " data fetched since %s", SMA_VID(pSma), pItem->level, tstrerror(terrno));
}
taosMemoryFreeClear(pReq);
} else if (terrno == 0) {
smaDebug("vgId:%d, no rsma %" PRIi8 " data fetched yet", SMA_VID(pSma), pItem->level);
} else {
smaDebug("vgId:%d, no rsma %" PRIi8 " data fetched since %s", SMA_VID(pSma), pItem->level, tstrerror(terrno));
}
tdDestroySDataBlockArray(pResult);
@ -1364,4 +1367,4 @@ static void tdRSmaFetchTrigger(void *param, void *tmrId) {
_end:
tdReleaseSmaRef(smaMgmt.rsetId, pItem->refId, __func__, __LINE__);
}
}

View File

@ -215,10 +215,10 @@ int32_t tqCheckColModifiable(STQ* pTq, int32_t colId) {
if (pIter == NULL) break;
STqHandle* pExec = (STqHandle*)pIter;
if (pExec->execHandle.subType == TOPIC_SUB_TYPE__COLUMN) {
int32_t sz = taosArrayGetSize(pExec->colIdList);
int32_t sz = pExec->execHandle.pSchemaWrapper->nCols;
for (int32_t i = 0; i < sz; i++) {
int32_t forbidColId = *(int32_t*)taosArrayGet(pExec->colIdList, i);
if (forbidColId == colId) {
SSchema* pSchema = &pExec->execHandle.pSchemaWrapper->pSchema[i];
if (pSchema->colId == colId) {
taosHashCancelIterate(pTq->handles, pIter);
return -1;
}
@ -262,7 +262,7 @@ static int32_t tqInitDataRsp(SMqDataRsp* pRsp, const SMqPollReq* pReq, int8_t su
static int32_t tqInitMetaRsp(SMqMetaRsp* pRsp, const SMqPollReq* pReq) { return 0; }
int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg, int32_t workerId) {
int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg) {
SMqPollReq* pReq = pMsg->pCont;
int64_t consumerId = pReq->consumerId;
int64_t timeout = pReq->timeout;
@ -271,9 +271,6 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg, int32_t workerId) {
STqOffsetVal reqOffset = pReq->reqOffset;
STqOffsetVal fetchOffsetNew;
// todo
workerId = 0;
// 1.find handle
STqHandle* pHandle = taosHashGet(pTq->handles, pReq->subKey, strlen(pReq->subKey));
/*ASSERT(pHandle);*/
@ -405,7 +402,7 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg, int32_t workerId) {
if (pHead->msgType == TDMT_VND_SUBMIT) {
SSubmitReq* pCont = (SSubmitReq*)&pHead->body;
if (tqLogScanExec(pTq, &pHandle->execHandle, pCont, &dataRsp, workerId) < 0) {
if (tqLogScanExec(pTq, &pHandle->execHandle, pCont, &dataRsp) < 0) {
/*ASSERT(0);*/
}
// TODO batch optimization:
@ -518,27 +515,24 @@ int32_t tqProcessVgChangeReq(STQ* pTq, char* msg, int32_t msgLen) {
pHandle->execHandle.execCol.qmsg = req.qmsg;
pHandle->snapshotVer = ver;
req.qmsg = NULL;
for (int32_t i = 0; i < 5; i++) {
SReadHandle handle = {
.meta = pTq->pVnode->pMeta,
.vnode = pTq->pVnode,
.initTableReader = true,
.initTqReader = true,
.version = ver,
};
pHandle->execHandle.execCol.task[i] =
qCreateQueueExecTaskInfo(pHandle->execHandle.execCol.qmsg, &handle, &pHandle->execHandle.numOfCols);
ASSERT(pHandle->execHandle.execCol.task[i]);
void* scanner = NULL;
qExtractStreamScanner(pHandle->execHandle.execCol.task[i], &scanner);
ASSERT(scanner);
pHandle->execHandle.pExecReader[i] = qExtractReaderFromStreamScanner(scanner);
ASSERT(pHandle->execHandle.pExecReader[i]);
}
SReadHandle handle = {
.meta = pTq->pVnode->pMeta,
.vnode = pTq->pVnode,
.initTableReader = true,
.initTqReader = true,
.version = ver,
};
pHandle->execHandle.execCol.task =
qCreateQueueExecTaskInfo(pHandle->execHandle.execCol.qmsg, &handle, &pHandle->execHandle.numOfCols,
&pHandle->execHandle.pSchemaWrapper);
ASSERT(pHandle->execHandle.execCol.task);
void* scanner = NULL;
qExtractStreamScanner(pHandle->execHandle.execCol.task, &scanner);
ASSERT(scanner);
pHandle->execHandle.pExecReader = qExtractReaderFromStreamScanner(scanner);
ASSERT(pHandle->execHandle.pExecReader);
} else if (pHandle->execHandle.subType == TOPIC_SUB_TYPE__DB) {
for (int32_t i = 0; i < 5; i++) {
pHandle->execHandle.pExecReader[i] = tqOpenReader(pTq->pVnode);
}
pHandle->execHandle.pExecReader = tqOpenReader(pTq->pVnode);
pHandle->execHandle.execDb.pFilterOutTbUid =
taosHashInit(64, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), false, HASH_NO_LOCK);
} else if (pHandle->execHandle.subType == TOPIC_SUB_TYPE__TABLE) {
@ -550,10 +544,8 @@ int32_t tqProcessVgChangeReq(STQ* pTq, char* msg, int32_t msgLen) {
int64_t tbUid = *(int64_t*)taosArrayGet(tbUidList, i);
tqDebug("vgId:%d, idx %d, uid:%" PRId64, TD_VID(pTq->pVnode), i, tbUid);
}
for (int32_t i = 0; i < 5; i++) {
pHandle->execHandle.pExecReader[i] = tqOpenReader(pTq->pVnode);
tqReaderSetTbUidList(pHandle->execHandle.pExecReader[i], tbUidList);
}
pHandle->execHandle.pExecReader = tqOpenReader(pTq->pVnode);
tqReaderSetTbUidList(pHandle->execHandle.pExecReader, tbUidList);
taosArrayDestroy(tbUidList);
}
taosHashPut(pTq->handles, req.subKey, strlen(req.subKey), pHandle, sizeof(STqHandle));
@ -634,7 +626,7 @@ int32_t tqProcessTaskDeployReq(STQ* pTq, char* msg, int32_t msgLen) {
ASSERT(pTask->tbSink.pSchemaWrapper->pSchema);
pTask->tbSink.pTSchema =
tdGetSTSChemaFromSSChema(&pTask->tbSink.pSchemaWrapper->pSchema, pTask->tbSink.pSchemaWrapper->nCols);
tdGetSTSChemaFromSSChema(pTask->tbSink.pSchemaWrapper->pSchema, pTask->tbSink.pSchemaWrapper->nCols, 1);
ASSERT(pTask->tbSink.pTSchema);
}

View File

@ -37,8 +37,8 @@ static int32_t tqAddBlockDataToRsp(const SSDataBlock* pBlock, SMqDataRsp* pRsp,
return 0;
}
static int32_t tqAddBlockSchemaToRsp(const STqExecHandle* pExec, int32_t workerId, SMqDataRsp* pRsp) {
SSchemaWrapper* pSW = tCloneSSchemaWrapper(pExec->pExecReader[workerId]->pSchemaWrapper);
static int32_t tqAddBlockSchemaToRsp(const STqExecHandle* pExec, SMqDataRsp* pRsp) {
SSchemaWrapper* pSW = tCloneSSchemaWrapper(pExec->pExecReader->pSchemaWrapper);
if (pSW == NULL) {
return -1;
}
@ -51,6 +51,7 @@ static int32_t tqAddTbNameToRsp(const STQ* pTq, int64_t uid, SMqDataRsp* pRsp) {
metaReaderInit(&mr, pTq->pVnode->pMeta, 0);
// TODO add reference to guarantee success
if (metaGetTableEntryByUid(&mr, uid) < 0) {
metaReaderClear(&mr);
return -1;
}
char* tbName = strdup(mr.me.name);
@ -61,7 +62,7 @@ static int32_t tqAddTbNameToRsp(const STQ* pTq, int64_t uid, SMqDataRsp* pRsp) {
int64_t tqScan(STQ* pTq, const STqHandle* pHandle, SMqDataRsp* pRsp, STqOffsetVal* pOffset) {
const STqExecHandle* pExec = &pHandle->execHandle;
qTaskInfo_t task = pExec->execCol.task[0];
qTaskInfo_t task = pExec->execCol.task;
if (qStreamPrepareScan(task, pOffset) < 0) {
if (pOffset->type == TMQ_OFFSET__LOG) {
@ -89,7 +90,7 @@ int64_t tqScan(STQ* pTq, const STqHandle* pHandle, SMqDataRsp* pRsp, STqOffsetVa
if (pDataBlock != NULL) {
if (pRsp->withTbName) {
if (pOffset->type == TMQ_OFFSET__LOG) {
int64_t uid = pExec->pExecReader[0]->msgIter.uid;
int64_t uid = pExec->pExecReader->msgIter.uid;
if (tqAddTbNameToRsp(pTq, uid, pRsp) < 0) {
continue;
}
@ -108,6 +109,7 @@ int64_t tqScan(STQ* pTq, const STqHandle* pHandle, SMqDataRsp* pRsp, STqOffsetVa
}
if (pRsp->blockNum == 0 && pOffset->type == TMQ_OFFSET__SNAPSHOT_DATA) {
tqDebug("vgId: %d, tsdb consume over, switch to wal, ver %ld", TD_VID(pTq->pVnode), pHandle->snapshotVer + 1);
tqOffsetResetToLog(pOffset, pHandle->snapshotVer);
qStreamPrepareScan(task, pOffset);
continue;
@ -184,12 +186,12 @@ int32_t tqScanSnapshot(STQ* pTq, const STqExecHandle* pExec, SMqDataRsp* pRsp, S
}
#endif
int32_t tqLogScanExec(STQ* pTq, STqExecHandle* pExec, SSubmitReq* pReq, SMqDataRsp* pRsp, int32_t workerId) {
int32_t tqLogScanExec(STQ* pTq, STqExecHandle* pExec, SSubmitReq* pReq, SMqDataRsp* pRsp) {
ASSERT(pExec->subType != TOPIC_SUB_TYPE__COLUMN);
if (pExec->subType == TOPIC_SUB_TYPE__TABLE) {
pRsp->withSchema = 1;
STqReader* pReader = pExec->pExecReader[workerId];
STqReader* pReader = pExec->pExecReader;
tqReaderSetDataMsg(pReader, pReq, 0);
while (tqNextDataBlock(pReader)) {
SSDataBlock block = {0};
@ -197,18 +199,18 @@ int32_t tqLogScanExec(STQ* pTq, STqExecHandle* pExec, SSubmitReq* pReq, SMqDataR
if (terrno == TSDB_CODE_TQ_TABLE_SCHEMA_NOT_FOUND) continue;
}
if (pRsp->withTbName) {
int64_t uid = pExec->pExecReader[workerId]->msgIter.uid;
int64_t uid = pExec->pExecReader->msgIter.uid;
if (tqAddTbNameToRsp(pTq, uid, pRsp) < 0) {
continue;
}
}
tqAddBlockDataToRsp(&block, pRsp, taosArrayGetSize(block.pDataBlock));
tqAddBlockSchemaToRsp(pExec, workerId, pRsp);
tqAddBlockSchemaToRsp(pExec, pRsp);
pRsp->blockNum++;
}
} else if (pExec->subType == TOPIC_SUB_TYPE__DB) {
pRsp->withSchema = 1;
STqReader* pReader = pExec->pExecReader[workerId];
STqReader* pReader = pExec->pExecReader;
tqReaderSetDataMsg(pReader, pReq, 0);
while (tqNextDataBlockFilterOut(pReader, pExec->execDb.pFilterOutTbUid)) {
SSDataBlock block = {0};
@ -216,13 +218,13 @@ int32_t tqLogScanExec(STQ* pTq, STqExecHandle* pExec, SSubmitReq* pReq, SMqDataR
if (terrno == TSDB_CODE_TQ_TABLE_SCHEMA_NOT_FOUND) continue;
}
if (pRsp->withTbName) {
int64_t uid = pExec->pExecReader[workerId]->msgIter.uid;
int64_t uid = pExec->pExecReader->msgIter.uid;
if (tqAddTbNameToRsp(pTq, uid, pRsp) < 0) {
continue;
}
}
tqAddBlockDataToRsp(&block, pRsp, taosArrayGetSize(block.pDataBlock));
tqAddBlockSchemaToRsp(pExec, workerId, pRsp);
tqAddBlockSchemaToRsp(pExec, pRsp);
pRsp->blockNum++;
}
}

View File

@ -80,27 +80,23 @@ int32_t tqMetaOpen(STQ* pTq) {
tDecoderInit(&decoder, (uint8_t*)pVal, vLen);
tDecodeSTqHandle(&decoder, &handle);
handle.pWalReader = walOpenReader(pTq->pVnode->pWal, NULL);
/*for (int32_t i = 0; i < 5; i++) {*/
/*handle.execHandle.pExecReader[i] = tqOpenReader(pTq->pVnode);*/
/*}*/
if (handle.execHandle.subType == TOPIC_SUB_TYPE__COLUMN) {
for (int32_t i = 0; i < 5; i++) {
SReadHandle reader = {
.meta = pTq->pVnode->pMeta,
.vnode = pTq->pVnode,
.initTableReader = true,
.initTqReader = true,
.version = handle.snapshotVer,
};
SReadHandle reader = {
.meta = pTq->pVnode->pMeta,
.vnode = pTq->pVnode,
.initTableReader = true,
.initTqReader = true,
.version = handle.snapshotVer,
};
handle.execHandle.execCol.task[i] = qCreateQueueExecTaskInfo(handle.execHandle.execCol.qmsg, &reader, &handle.execHandle.numOfCols);
ASSERT(handle.execHandle.execCol.task[i]);
void* scanner = NULL;
qExtractStreamScanner(handle.execHandle.execCol.task[i], &scanner);
ASSERT(scanner);
handle.execHandle.pExecReader[i] = qExtractReaderFromStreamScanner(scanner);
ASSERT(handle.execHandle.pExecReader[i]);
}
handle.execHandle.execCol.task =
qCreateQueueExecTaskInfo(handle.execHandle.execCol.qmsg, &reader, &handle.execHandle.numOfCols, &handle.execHandle.pSchemaWrapper);
ASSERT(handle.execHandle.execCol.task);
void* scanner = NULL;
qExtractStreamScanner(handle.execHandle.execCol.task, &scanner);
ASSERT(scanner);
handle.execHandle.pExecReader = qExtractReaderFromStreamScanner(scanner);
ASSERT(handle.execHandle.pExecReader);
} else {
handle.execHandle.execDb.pFilterOutTbUid =
taosHashInit(64, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), false, HASH_NO_LOCK);

View File

@ -249,6 +249,8 @@ int tqPushMsg(STQ* pTq, void* msg, int32_t msgLen, tmsg_t msgType, int64_t ver)
return -1;
}
memcpy(data, msg, msgLen);
SSubmitReq* pReq = (SSubmitReq*)data;
pReq->version = ver;
tqProcessStreamTrigger(pTq, data);
}

View File

@ -314,6 +314,7 @@ int32_t tqRetrieveDataBlock(SSDataBlock* pBlock, STqReader* pReader) {
pBlock->info.uid = pReader->msgIter.uid;
pBlock->info.rows = pReader->msgIter.numOfRows;
pBlock->info.version = pReader->pMsg->version;
while ((row = tGetSubmitBlkNext(&pReader->blkIter)) != NULL) {
tdSTSRowIterReset(&iter, row);
@ -393,10 +394,8 @@ int32_t tqUpdateTbUidList(STQ* pTq, const SArray* tbUidList, bool isAdd) {
if (pIter == NULL) break;
STqHandle* pExec = (STqHandle*)pIter;
if (pExec->execHandle.subType == TOPIC_SUB_TYPE__COLUMN) {
for (int32_t i = 0; i < 5; i++) {
int32_t code = qUpdateQualifiedTableId(pExec->execHandle.execCol.task[i], tbUidList, isAdd);
ASSERT(code == 0);
}
int32_t code = qUpdateQualifiedTableId(pExec->execHandle.execCol.task, tbUidList, isAdd);
ASSERT(code == 0);
} else if (pExec->execHandle.subType == TOPIC_SUB_TYPE__DB) {
if (!isAdd) {
int32_t sz = taosArrayGetSize(tbUidList);

View File

@ -127,6 +127,8 @@ SSubmitReq* tdBlockToSubmit(const SArray* pBlocks, const STSchema* pTSchema, boo
int32_t rows = pDataBlock->info.rows;
tqDebug("tq sink, convert block %d, rows: %d", i, rows);
int32_t dataLen = 0;
void* blkSchema = POINTER_SHIFT(blkHead, sizeof(SSubmitBlk));
@ -178,11 +180,14 @@ void tqTableSink(SStreamTask* pTask, void* vnode, int64_t ver, void* data) {
const SArray* pRes = (const SArray*)data;
SVnode* pVnode = (SVnode*)vnode;
tqDebug("task write into table, vgId %d, block num: %d", pVnode->config.vgId, (int32_t)pRes->size);
tqDebug("vgId:%d, task %d write into table, block num: %d", TD_VID(pVnode), pTask->taskId, (int32_t)pRes->size);
ASSERT(pTask->tbSink.pTSchema);
SSubmitReq* pReq = tdBlockToSubmit(pRes, pTask->tbSink.pTSchema, true, pTask->tbSink.stbUid,
pTask->tbSink.stbFullName, pVnode->config.vgId);
tqDebug("vgId:%d, task %d convert blocks over, put into write-queue", TD_VID(pVnode), pTask->taskId);
/*tPrintFixedSchemaSubmitReq(pReq, pTask->tbSink.pTSchema);*/
// build write msg
SRpcMsg msg = {

View File

@ -464,7 +464,7 @@ static int32_t getNextRowFromFS(void *iter, TSDBROW **ppRow) {
switch (state->state) {
case SFSNEXTROW_FS:
state->aDFileSet = state->pTsdb->pFS->cState->aDFileSet;
// state->aDFileSet = state->pTsdb->pFS->cState->aDFileSet;
state->nFileSet = taosArrayGetSize(state->aDFileSet);
state->iFileSet = state->nFileSet;
@ -793,9 +793,10 @@ typedef struct {
TSDBROW memRow, imemRow, fsRow;
TsdbNextRowState input[3];
SMemTable *pMemTable;
SMemTable *pIMemTable;
STsdb *pTsdb;
// SMemTable *pMemTable;
// SMemTable *pIMemTable;
STsdbReadSnap *pReadSnap;
STsdb *pTsdb;
} CacheNextRowIter;
static int32_t nextRowIterOpen(CacheNextRowIter *pIter, tb_uid_t uid, STsdb *pTsdb) {
@ -803,16 +804,16 @@ static int32_t nextRowIterOpen(CacheNextRowIter *pIter, tb_uid_t uid, STsdb *pTs
tb_uid_t suid = getTableSuidByUid(uid, pTsdb);
tsdbTakeMemSnapshot(pTsdb, &pIter->pMemTable, &pIter->pIMemTable);
tsdbTakeReadSnap(pTsdb, &pIter->pReadSnap);
STbData *pMem = NULL;
if (pIter->pMemTable) {
tsdbGetTbDataFromMemTable(pIter->pMemTable, suid, uid, &pMem);
if (pIter->pReadSnap->pMem) {
tsdbGetTbDataFromMemTable(pIter->pReadSnap->pMem, suid, uid, &pMem);
}
STbData *pIMem = NULL;
if (pIter->pIMemTable) {
tsdbGetTbDataFromMemTable(pIter->pIMemTable, suid, uid, &pIMem);
if (pIter->pReadSnap->pIMem) {
tsdbGetTbDataFromMemTable(pIter->pReadSnap->pIMem, suid, uid, &pIMem);
}
pIter->pTsdb = pTsdb;
@ -821,7 +822,7 @@ static int32_t nextRowIterOpen(CacheNextRowIter *pIter, tb_uid_t uid, STsdb *pTs
SDelIdx delIdx;
SDelFile *pDelFile = tsdbFSStateGetDelFile(pTsdb->pFS->cState);
SDelFile *pDelFile = pIter->pReadSnap->fs.pDelFile;
if (pDelFile) {
SDelFReader *pDelFReader;
@ -846,6 +847,7 @@ static int32_t nextRowIterOpen(CacheNextRowIter *pIter, tb_uid_t uid, STsdb *pTs
pIter->fsState.state = SFSNEXTROW_FS;
pIter->fsState.pTsdb = pTsdb;
pIter->fsState.aDFileSet = pIter->pReadSnap->fs.aDFileSet;
pIter->fsState.pBlockIdxExp = &pIter->idx;
pIter->input[0] = (TsdbNextRowState){&pIter->memRow, true, false, &pIter->memState, getNextRowFromMem, NULL};
@ -885,7 +887,7 @@ static int32_t nextRowIterClose(CacheNextRowIter *pIter) {
taosArrayDestroy(pIter->pSkyline);
}
tsdbUntakeMemSnapshot(pIter->pTsdb, pIter->pMemTable, pIter->pIMemTable);
tsdbUntakeReadSnap(pIter->pTsdb, pIter->pReadSnap);
return code;
_err:
@ -1172,480 +1174,480 @@ _err:
return code;
}
static int32_t mergeLastRow(tb_uid_t uid, STsdb *pTsdb, bool *dup, STSRow **ppRow) {
int32_t code = 0;
SArray *pSkyline = NULL;
// static int32_t mergeLastRow(tb_uid_t uid, STsdb *pTsdb, bool *dup, STSRow **ppRow) {
// int32_t code = 0;
// SArray *pSkyline = NULL;
STSchema *pTSchema = metaGetTbTSchema(pTsdb->pVnode->pMeta, uid, -1);
int16_t nCol = pTSchema->numOfCols;
SArray *pColArray = taosArrayInit(nCol, sizeof(SColVal));
// STSchema *pTSchema = metaGetTbTSchema(pTsdb->pVnode->pMeta, uid, -1);
// int16_t nCol = pTSchema->numOfCols;
// SArray *pColArray = taosArrayInit(nCol, sizeof(SColVal));
tb_uid_t suid = getTableSuidByUid(uid, pTsdb);
// tb_uid_t suid = getTableSuidByUid(uid, pTsdb);
STbData *pMem = NULL;
if (pTsdb->mem) {
tsdbGetTbDataFromMemTable(pTsdb->mem, suid, uid, &pMem);
}
// STbData *pMem = NULL;
// if (pTsdb->mem) {
// tsdbGetTbDataFromMemTable(pTsdb->mem, suid, uid, &pMem);
// }
STbData *pIMem = NULL;
if (pTsdb->imem) {
tsdbGetTbDataFromMemTable(pTsdb->imem, suid, uid, &pIMem);
}
// STbData *pIMem = NULL;
// if (pTsdb->imem) {
// tsdbGetTbDataFromMemTable(pTsdb->imem, suid, uid, &pIMem);
// }
*ppRow = NULL;
// *ppRow = NULL;
pSkyline = taosArrayInit(32, sizeof(TSDBKEY));
// pSkyline = taosArrayInit(32, sizeof(TSDBKEY));
SDelIdx delIdx;
// SDelIdx delIdx;
SDelFile *pDelFile = tsdbFSStateGetDelFile(pTsdb->pFS->cState);
if (pDelFile) {
SDelFReader *pDelFReader;
// SDelFile *pDelFile = tsdbFSStateGetDelFile(pTsdb->pFS->cState);
// if (pDelFile) {
// SDelFReader *pDelFReader;
code = tsdbDelFReaderOpen(&pDelFReader, pDelFile, pTsdb, NULL);
if (code) goto _err;
// code = tsdbDelFReaderOpen(&pDelFReader, pDelFile, pTsdb, NULL);
// if (code) goto _err;
code = getTableDelIdx(pDelFReader, suid, uid, &delIdx);
if (code) goto _err;
// code = getTableDelIdx(pDelFReader, suid, uid, &delIdx);
// if (code) goto _err;
code = getTableDelSkyline(pMem, pIMem, pDelFReader, &delIdx, pSkyline);
if (code) goto _err;
// code = getTableDelSkyline(pMem, pIMem, pDelFReader, &delIdx, pSkyline);
// if (code) goto _err;
tsdbDelFReaderClose(&pDelFReader);
} else {
code = getTableDelSkyline(pMem, pIMem, NULL, NULL, pSkyline);
if (code) goto _err;
}
// tsdbDelFReaderClose(&pDelFReader);
// } else {
// code = getTableDelSkyline(pMem, pIMem, NULL, NULL, pSkyline);
// if (code) goto _err;
// }
int64_t iSkyline = taosArrayGetSize(pSkyline) - 1;
// int64_t iSkyline = taosArrayGetSize(pSkyline) - 1;
SBlockIdx idx = {.suid = suid, .uid = uid};
// SBlockIdx idx = {.suid = suid, .uid = uid};
SFSNextRowIter fsState = {0};
fsState.state = SFSNEXTROW_FS;
fsState.pTsdb = pTsdb;
fsState.pBlockIdxExp = &idx;
// SFSNextRowIter fsState = {0};
// fsState.state = SFSNEXTROW_FS;
// fsState.pTsdb = pTsdb;
// fsState.pBlockIdxExp = &idx;
SMemNextRowIter memState = {0};
SMemNextRowIter imemState = {0};
TSDBROW memRow, imemRow, fsRow;
// SMemNextRowIter memState = {0};
// SMemNextRowIter imemState = {0};
// TSDBROW memRow, imemRow, fsRow;
TsdbNextRowState input[3] = {{&memRow, true, false, &memState, getNextRowFromMem, NULL},
{&imemRow, true, false, &imemState, getNextRowFromMem, NULL},
{&fsRow, false, true, &fsState, getNextRowFromFS, clearNextRowFromFS}};
// TsdbNextRowState input[3] = {{&memRow, true, false, &memState, getNextRowFromMem, NULL},
// {&imemRow, true, false, &imemState, getNextRowFromMem, NULL},
// {&fsRow, false, true, &fsState, getNextRowFromFS, clearNextRowFromFS}};
if (pMem) {
memState.pMem = pMem;
memState.state = SMEMNEXTROW_ENTER;
input[0].stop = false;
input[0].next = true;
}
if (pIMem) {
imemState.pMem = pIMem;
imemState.state = SMEMNEXTROW_ENTER;
input[1].stop = false;
input[1].next = true;
}
// if (pMem) {
// memState.pMem = pMem;
// memState.state = SMEMNEXTROW_ENTER;
// input[0].stop = false;
// input[0].next = true;
// }
// if (pIMem) {
// imemState.pMem = pIMem;
// imemState.state = SMEMNEXTROW_ENTER;
// input[1].stop = false;
// input[1].next = true;
// }
int16_t nilColCount = nCol - 1; // count of null & none cols
int iCol = 0; // index of first nil col index from left to right
bool setICol = false;
// int16_t nilColCount = nCol - 1; // count of null & none cols
// int iCol = 0; // index of first nil col index from left to right
// bool setICol = false;
do {
for (int i = 0; i < 3; ++i) {
if (input[i].next && !input[i].stop) {
if (input[i].pRow == NULL) {
code = input[i].nextRowFn(input[i].iter, &input[i].pRow);
if (code) goto _err;
// do {
// for (int i = 0; i < 3; ++i) {
// if (input[i].next && !input[i].stop) {
// if (input[i].pRow == NULL) {
// code = input[i].nextRowFn(input[i].iter, &input[i].pRow);
// if (code) goto _err;
if (input[i].pRow == NULL) {
input[i].stop = true;
input[i].next = false;
}
}
}
}
// if (input[i].pRow == NULL) {
// input[i].stop = true;
// input[i].next = false;
// }
// }
// }
// }
if (input[0].stop && input[1].stop && input[2].stop) {
break;
}
// if (input[0].stop && input[1].stop && input[2].stop) {
// break;
// }
// select maxpoint(s) from mem, imem, fs
TSDBROW *max[3] = {0};
int iMax[3] = {-1, -1, -1};
int nMax = 0;
TSKEY maxKey = TSKEY_MIN;
// // select maxpoint(s) from mem, imem, fs
// TSDBROW *max[3] = {0};
// int iMax[3] = {-1, -1, -1};
// int nMax = 0;
// TSKEY maxKey = TSKEY_MIN;
for (int i = 0; i < 3; ++i) {
if (!input[i].stop && input[i].pRow != NULL) {
TSDBKEY key = TSDBROW_KEY(input[i].pRow);
// for (int i = 0; i < 3; ++i) {
// if (!input[i].stop && input[i].pRow != NULL) {
// TSDBKEY key = TSDBROW_KEY(input[i].pRow);
// merging & deduplicating on client side
if (maxKey <= key.ts) {
if (maxKey < key.ts) {
nMax = 0;
maxKey = key.ts;
}
// // merging & deduplicating on client side
// if (maxKey <= key.ts) {
// if (maxKey < key.ts) {
// nMax = 0;
// maxKey = key.ts;
// }
iMax[nMax] = i;
max[nMax++] = input[i].pRow;
}
}
}
// iMax[nMax] = i;
// max[nMax++] = input[i].pRow;
// }
// }
// }
// delete detection
TSDBROW *merge[3] = {0};
int iMerge[3] = {-1, -1, -1};
int nMerge = 0;
for (int i = 0; i < nMax; ++i) {
TSDBKEY maxKey = TSDBROW_KEY(max[i]);
// // delete detection
// TSDBROW *merge[3] = {0};
// int iMerge[3] = {-1, -1, -1};
// int nMerge = 0;
// for (int i = 0; i < nMax; ++i) {
// TSDBKEY maxKey = TSDBROW_KEY(max[i]);
bool deleted = tsdbKeyDeleted(&maxKey, pSkyline, &iSkyline);
if (!deleted) {
iMerge[nMerge] = i;
merge[nMerge++] = max[i];
}
// bool deleted = tsdbKeyDeleted(&maxKey, pSkyline, &iSkyline);
// if (!deleted) {
// iMerge[nMerge] = i;
// merge[nMerge++] = max[i];
// }
input[iMax[i]].next = deleted;
}
// input[iMax[i]].next = deleted;
// }
// merge if nMerge > 1
if (nMerge > 0) {
*dup = false;
// // merge if nMerge > 1
// if (nMerge > 0) {
// *dup = false;
if (nMerge == 1) {
code = tsRowFromTsdbRow(pTSchema, merge[nMerge - 1], ppRow);
if (code) goto _err;
} else {
// merge 2 or 3 rows
SRowMerger merger = {0};
// if (nMerge == 1) {
// code = tsRowFromTsdbRow(pTSchema, merge[nMerge - 1], ppRow);
// if (code) goto _err;
// } else {
// // merge 2 or 3 rows
// SRowMerger merger = {0};
tRowMergerInit(&merger, merge[0], pTSchema);
for (int i = 1; i < nMerge; ++i) {
tRowMerge(&merger, merge[i]);
}
tRowMergerGetRow(&merger, ppRow);
tRowMergerClear(&merger);
}
}
// tRowMergerInit(&merger, merge[0], pTSchema);
// for (int i = 1; i < nMerge; ++i) {
// tRowMerge(&merger, merge[i]);
// }
// tRowMergerGetRow(&merger, ppRow);
// tRowMergerClear(&merger);
// }
// }
} while (1);
// } while (1);
for (int i = 0; i < 3; ++i) {
if (input[i].nextRowClearFn) {
input[i].nextRowClearFn(input[i].iter);
}
}
if (pSkyline) {
taosArrayDestroy(pSkyline);
}
taosMemoryFreeClear(pTSchema);
// for (int i = 0; i < 3; ++i) {
// if (input[i].nextRowClearFn) {
// input[i].nextRowClearFn(input[i].iter);
// }
// }
// if (pSkyline) {
// taosArrayDestroy(pSkyline);
// }
// taosMemoryFreeClear(pTSchema);
return code;
_err:
for (int i = 0; i < 3; ++i) {
if (input[i].nextRowClearFn) {
input[i].nextRowClearFn(input[i].iter);
}
}
if (pSkyline) {
taosArrayDestroy(pSkyline);
}
taosMemoryFreeClear(pTSchema);
tsdbError("vgId:%d merge last_row failed since %s", TD_VID(pTsdb->pVnode), tstrerror(code));
return code;
}
// return code;
// _err:
// for (int i = 0; i < 3; ++i) {
// if (input[i].nextRowClearFn) {
// input[i].nextRowClearFn(input[i].iter);
// }
// }
// if (pSkyline) {
// taosArrayDestroy(pSkyline);
// }
// taosMemoryFreeClear(pTSchema);
// tsdbError("vgId:%d merge last_row failed since %s", TD_VID(pTsdb->pVnode), tstrerror(code));
// return code;
// }
// static int32_t mergeLast(tb_uid_t uid, STsdb *pTsdb, STSRow **ppRow) {
static int32_t mergeLast(tb_uid_t uid, STsdb *pTsdb, SArray **ppLastArray) {
int32_t code = 0;
SArray *pSkyline = NULL;
STSRow *pRow = NULL;
STSRow **ppRow = &pRow;
// static int32_t mergeLast(tb_uid_t uid, STsdb *pTsdb, SArray **ppLastArray) {
// int32_t code = 0;
// SArray *pSkyline = NULL;
// STSRow *pRow = NULL;
// STSRow **ppRow = &pRow;
STSchema *pTSchema = metaGetTbTSchema(pTsdb->pVnode->pMeta, uid, -1);
int16_t nCol = pTSchema->numOfCols;
// SArray *pColArray = taosArrayInit(nCol, sizeof(SColVal));
SArray *pColArray = taosArrayInit(nCol, sizeof(SLastCol));
// STSchema *pTSchema = metaGetTbTSchema(pTsdb->pVnode->pMeta, uid, -1);
// int16_t nCol = pTSchema->numOfCols;
// // SArray *pColArray = taosArrayInit(nCol, sizeof(SColVal));
// SArray *pColArray = taosArrayInit(nCol, sizeof(SLastCol));
tb_uid_t suid = getTableSuidByUid(uid, pTsdb);
// tb_uid_t suid = getTableSuidByUid(uid, pTsdb);
STbData *pMem = NULL;
if (pTsdb->mem) {
tsdbGetTbDataFromMemTable(pTsdb->mem, suid, uid, &pMem);
}
// STbData *pMem = NULL;
// if (pTsdb->mem) {
// tsdbGetTbDataFromMemTable(pTsdb->mem, suid, uid, &pMem);
// }
STbData *pIMem = NULL;
if (pTsdb->imem) {
tsdbGetTbDataFromMemTable(pTsdb->imem, suid, uid, &pIMem);
}
// STbData *pIMem = NULL;
// if (pTsdb->imem) {
// tsdbGetTbDataFromMemTable(pTsdb->imem, suid, uid, &pIMem);
// }
*ppLastArray = NULL;
// *ppLastArray = NULL;
pSkyline = taosArrayInit(32, sizeof(TSDBKEY));
// pSkyline = taosArrayInit(32, sizeof(TSDBKEY));
SDelIdx delIdx;
// SDelIdx delIdx;
SDelFile *pDelFile = tsdbFSStateGetDelFile(pTsdb->pFS->cState);
if (pDelFile) {
SDelFReader *pDelFReader;
// SDelFile *pDelFile = tsdbFSStateGetDelFile(pTsdb->pFS->cState);
// if (pDelFile) {
// SDelFReader *pDelFReader;
code = tsdbDelFReaderOpen(&pDelFReader, pDelFile, pTsdb, NULL);
if (code) goto _err;
// code = tsdbDelFReaderOpen(&pDelFReader, pDelFile, pTsdb, NULL);
// if (code) goto _err;
code = getTableDelIdx(pDelFReader, suid, uid, &delIdx);
if (code) goto _err;
// code = getTableDelIdx(pDelFReader, suid, uid, &delIdx);
// if (code) goto _err;
code = getTableDelSkyline(pMem, pIMem, pDelFReader, &delIdx, pSkyline);
if (code) goto _err;
// code = getTableDelSkyline(pMem, pIMem, pDelFReader, &delIdx, pSkyline);
// if (code) goto _err;
tsdbDelFReaderClose(&pDelFReader);
} else {
code = getTableDelSkyline(pMem, pIMem, NULL, NULL, pSkyline);
if (code) goto _err;
}
// tsdbDelFReaderClose(&pDelFReader);
// } else {
// code = getTableDelSkyline(pMem, pIMem, NULL, NULL, pSkyline);
// if (code) goto _err;
// }
int64_t iSkyline = taosArrayGetSize(pSkyline) - 1;
// int64_t iSkyline = taosArrayGetSize(pSkyline) - 1;
SBlockIdx idx = {.suid = suid, .uid = uid};
// SBlockIdx idx = {.suid = suid, .uid = uid};
SFSNextRowIter fsState = {0};
fsState.state = SFSNEXTROW_FS;
fsState.pTsdb = pTsdb;
fsState.pBlockIdxExp = &idx;
// SFSNextRowIter fsState = {0};
// fsState.state = SFSNEXTROW_FS;
// fsState.pTsdb = pTsdb;
// fsState.pBlockIdxExp = &idx;
SMemNextRowIter memState = {0};
SMemNextRowIter imemState = {0};
TSDBROW memRow, imemRow, fsRow;
// SMemNextRowIter memState = {0};
// SMemNextRowIter imemState = {0};
// TSDBROW memRow, imemRow, fsRow;
TsdbNextRowState input[3] = {{&memRow, true, false, &memState, getNextRowFromMem, NULL},
{&imemRow, true, false, &imemState, getNextRowFromMem, NULL},
{&fsRow, false, true, &fsState, getNextRowFromFS, clearNextRowFromFS}};
// TsdbNextRowState input[3] = {{&memRow, true, false, &memState, getNextRowFromMem, NULL},
// {&imemRow, true, false, &imemState, getNextRowFromMem, NULL},
// {&fsRow, false, true, &fsState, getNextRowFromFS, clearNextRowFromFS}};
if (pMem) {
memState.pMem = pMem;
memState.state = SMEMNEXTROW_ENTER;
input[0].stop = false;
input[0].next = true;
}
if (pIMem) {
imemState.pMem = pIMem;
imemState.state = SMEMNEXTROW_ENTER;
input[1].stop = false;
input[1].next = true;
}
// if (pMem) {
// memState.pMem = pMem;
// memState.state = SMEMNEXTROW_ENTER;
// input[0].stop = false;
// input[0].next = true;
// }
// if (pIMem) {
// imemState.pMem = pIMem;
// imemState.state = SMEMNEXTROW_ENTER;
// input[1].stop = false;
// input[1].next = true;
// }
int16_t nilColCount = nCol - 1; // count of null & none cols
int iCol = 0; // index of first nil col index from left to right
bool setICol = false;
// int16_t nilColCount = nCol - 1; // count of null & none cols
// int iCol = 0; // index of first nil col index from left to right
// bool setICol = false;
do {
for (int i = 0; i < 3; ++i) {
if (input[i].next && !input[i].stop) {
code = input[i].nextRowFn(input[i].iter, &input[i].pRow);
if (code) goto _err;
// do {
// for (int i = 0; i < 3; ++i) {
// if (input[i].next && !input[i].stop) {
// code = input[i].nextRowFn(input[i].iter, &input[i].pRow);
// if (code) goto _err;
if (input[i].pRow == NULL) {
input[i].stop = true;
input[i].next = false;
}
}
}
// if (input[i].pRow == NULL) {
// input[i].stop = true;
// input[i].next = false;
// }
// }
// }
if (input[0].stop && input[1].stop && input[2].stop) {
break;
}
// if (input[0].stop && input[1].stop && input[2].stop) {
// break;
// }
// select maxpoint(s) from mem, imem, fs
TSDBROW *max[3] = {0};
int iMax[3] = {-1, -1, -1};
int nMax = 0;
TSKEY maxKey = TSKEY_MIN;
// // select maxpoint(s) from mem, imem, fs
// TSDBROW *max[3] = {0};
// int iMax[3] = {-1, -1, -1};
// int nMax = 0;
// TSKEY maxKey = TSKEY_MIN;
for (int i = 0; i < 3; ++i) {
if (!input[i].stop && input[i].pRow != NULL) {
TSDBKEY key = TSDBROW_KEY(input[i].pRow);
// for (int i = 0; i < 3; ++i) {
// if (!input[i].stop && input[i].pRow != NULL) {
// TSDBKEY key = TSDBROW_KEY(input[i].pRow);
// merging & deduplicating on client side
if (maxKey <= key.ts) {
if (maxKey < key.ts) {
nMax = 0;
maxKey = key.ts;
}
// // merging & deduplicating on client side
// if (maxKey <= key.ts) {
// if (maxKey < key.ts) {
// nMax = 0;
// maxKey = key.ts;
// }
iMax[nMax] = i;
max[nMax++] = input[i].pRow;
}
}
}
// iMax[nMax] = i;
// max[nMax++] = input[i].pRow;
// }
// }
// }
// delete detection
TSDBROW *merge[3] = {0};
int iMerge[3] = {-1, -1, -1};
int nMerge = 0;
for (int i = 0; i < nMax; ++i) {
TSDBKEY maxKey = TSDBROW_KEY(max[i]);
// // delete detection
// TSDBROW *merge[3] = {0};
// int iMerge[3] = {-1, -1, -1};
// int nMerge = 0;
// for (int i = 0; i < nMax; ++i) {
// TSDBKEY maxKey = TSDBROW_KEY(max[i]);
bool deleted = tsdbKeyDeleted(&maxKey, pSkyline, &iSkyline);
if (!deleted) {
iMerge[nMerge] = iMax[i];
merge[nMerge++] = max[i];
}
// bool deleted = tsdbKeyDeleted(&maxKey, pSkyline, &iSkyline);
// if (!deleted) {
// iMerge[nMerge] = iMax[i];
// merge[nMerge++] = max[i];
// }
input[iMax[i]].next = deleted;
}
// input[iMax[i]].next = deleted;
// }
// merge if nMerge > 1
if (nMerge > 0) {
if (nMerge == 1) {
code = tsRowFromTsdbRow(pTSchema, merge[nMerge - 1], ppRow);
if (code) goto _err;
} else {
// merge 2 or 3 rows
SRowMerger merger = {0};
// // merge if nMerge > 1
// if (nMerge > 0) {
// if (nMerge == 1) {
// code = tsRowFromTsdbRow(pTSchema, merge[nMerge - 1], ppRow);
// if (code) goto _err;
// } else {
// // merge 2 or 3 rows
// SRowMerger merger = {0};
tRowMergerInit(&merger, merge[0], pTSchema);
for (int i = 1; i < nMerge; ++i) {
tRowMerge(&merger, merge[i]);
}
tRowMergerGetRow(&merger, ppRow);
tRowMergerClear(&merger);
}
} else {
/* *ppRow = NULL; */
/* return code; */
continue;
}
// tRowMergerInit(&merger, merge[0], pTSchema);
// for (int i = 1; i < nMerge; ++i) {
// tRowMerge(&merger, merge[i]);
// }
// tRowMergerGetRow(&merger, ppRow);
// tRowMergerClear(&merger);
// }
// } else {
// /* *ppRow = NULL; */
// /* return code; */
// continue;
// }
if (iCol == 0) {
STColumn *pTColumn = &pTSchema->columns[0];
SColVal *pColVal = &(SColVal){0};
// if (iCol == 0) {
// STColumn *pTColumn = &pTSchema->columns[0];
// SColVal *pColVal = &(SColVal){0};
*pColVal = COL_VAL_VALUE(pTColumn->colId, pTColumn->type, (SValue){.ts = maxKey});
// *pColVal = COL_VAL_VALUE(pTColumn->colId, pTColumn->type, (SValue){.ts = maxKey});
// if (taosArrayPush(pColArray, pColVal) == NULL) {
if (taosArrayPush(pColArray, &(SLastCol){.ts = maxKey, .colVal = *pColVal}) == NULL) {
code = TSDB_CODE_OUT_OF_MEMORY;
goto _err;
}
++iCol;
setICol = false;
for (int16_t i = iCol; i < nCol; ++i) {
// tsdbRowGetColVal(*ppRow, pTSchema, i, pColVal);
tTSRowGetVal(*ppRow, pTSchema, i, pColVal);
// if (taosArrayPush(pColArray, pColVal) == NULL) {
if (taosArrayPush(pColArray, &(SLastCol){.ts = maxKey, .colVal = *pColVal}) == NULL) {
code = TSDB_CODE_OUT_OF_MEMORY;
goto _err;
}
if (pColVal->isNull || pColVal->isNone) {
for (int j = 0; j < nMerge; ++j) {
SColVal jColVal = {0};
tsdbRowGetColVal(merge[j], pTSchema, i, &jColVal);
if (jColVal.isNull || jColVal.isNone) {
input[iMerge[j]].next = true;
}
}
if (!setICol) {
iCol = i;
setICol = true;
}
} else {
--nilColCount;
}
}
if (*ppRow) {
taosMemoryFreeClear(*ppRow);
}
continue;
}
setICol = false;
for (int16_t i = iCol; i < nCol; ++i) {
SColVal colVal = {0};
tTSRowGetVal(*ppRow, pTSchema, i, &colVal);
TSKEY rowTs = (*ppRow)->ts;
// SColVal *tColVal = (SColVal *)taosArrayGet(pColArray, i);
SLastCol *tTsVal = (SLastCol *)taosArrayGet(pColArray, i);
SColVal *tColVal = &tTsVal->colVal;
if (!colVal.isNone && !colVal.isNull) {
if (tColVal->isNull || tColVal->isNone) {
// taosArraySet(pColArray, i, &colVal);
taosArraySet(pColArray, i, &(SLastCol){.ts = rowTs, .colVal = colVal});
--nilColCount;
}
} else {
if ((tColVal->isNull || tColVal->isNone) && !setICol) {
iCol = i;
setICol = true;
for (int j = 0; j < nMerge; ++j) {
SColVal jColVal = {0};
tsdbRowGetColVal(merge[j], pTSchema, i, &jColVal);
if (jColVal.isNull || jColVal.isNone) {
input[iMerge[j]].next = true;
}
}
}
}
}
if (*ppRow) {
taosMemoryFreeClear(*ppRow);
}
} while (nilColCount > 0);
// if () new ts row from pColArray if non empty
/* if (taosArrayGetSize(pColArray) == nCol) { */
/* code = tdSTSRowNew(pColArray, pTSchema, ppRow); */
/* if (code) goto _err; */
/* } */
/* taosArrayDestroy(pColArray); */
if (taosArrayGetSize(pColArray) <= 0) {
*ppLastArray = NULL;
taosArrayDestroy(pColArray);
} else {
*ppLastArray = pColArray;
}
if (*ppRow) {
taosMemoryFreeClear(*ppRow);
}
for (int i = 0; i < 3; ++i) {
if (input[i].nextRowClearFn) {
input[i].nextRowClearFn(input[i].iter);
}
}
if (pSkyline) {
taosArrayDestroy(pSkyline);
}
taosMemoryFreeClear(pTSchema);
return code;
_err:
taosArrayDestroy(pColArray);
if (*ppRow) {
taosMemoryFreeClear(*ppRow);
}
for (int i = 0; i < 3; ++i) {
if (input[i].nextRowClearFn) {
input[i].nextRowClearFn(input[i].iter);
}
}
if (pSkyline) {
taosArrayDestroy(pSkyline);
}
taosMemoryFreeClear(pTSchema);
tsdbError("vgId:%d merge last_row failed since %s", TD_VID(pTsdb->pVnode), tstrerror(code));
return code;
}
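The loop above walks the mem, imem, and file iterators in lockstep: it picks the row(s) carrying the largest timestamp, skips keys covered by the delete skyline, merges duplicates with SRowMerger, and keeps filling pColArray until no NULL/None columns remain. A minimal standalone sketch of just the max-timestamp selection step is shown below; Src, peek_ts, and the sample data are hypothetical stand-ins for the tsdb iterators, not the real structures.

```c
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
  const int64_t *ts;  /* timestamps in descending order, newest first */
  int            n;   /* rows remaining                               */
} Src;

/* Peek the newest timestamp of a source; return false when exhausted. */
static bool peek_ts(const Src *s, int64_t *ts) {
  if (s->n <= 0) return false;
  *ts = s->ts[0];
  return true;
}

int main(void) {
  int64_t mem[]  = {1700000003, 1700000001};
  int64_t imem[] = {1700000003, 1700000000};
  int64_t file[] = {1700000002};
  Src in[3] = {{mem, 2}, {imem, 2}, {file, 1}};

  /* Select the source(s) holding the maximum timestamp, like max[]/iMax[]. */
  int     iMax[3], nMax = 0;
  int64_t maxKey = INT64_MIN;
  for (int i = 0; i < 3; ++i) {
    int64_t ts;
    if (!peek_ts(&in[i], &ts)) continue;
    if (ts > maxKey) {
      maxKey = ts;
      nMax = 0;
    }
    if (ts == maxKey) iMax[nMax++] = i;
  }

  printf("newest ts %" PRId64 " found in %d source(s):", maxKey, nMax);
  for (int i = 0; i < nMax; ++i) printf(" #%d", iMax[i]);
  printf("\n");  /* duplicates across sources would be merged into one row */
  return 0;
}
```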
int32_t tsdbCacheGetLastrowH(SLRUCache *pCache, tb_uid_t uid, STsdb *pTsdb, LRUHandle **handle) {
int32_t code = 0;

View File

@ -29,6 +29,7 @@ typedef struct {
int32_t minRow;
int32_t maxRow;
int8_t cmprAlg;
STsdbFS fs;
// --------------
TSKEY nextKey; // reset by each table commit
int32_t commitFid;
@ -119,9 +120,6 @@ int32_t tsdbCommit(STsdb *pTsdb) {
code = tsdbCommitDel(&commith);
if (code) goto _err;
code = tsdbCommitCache(&commith);
if (code) goto _err;
// end commit
code = tsdbEndCommit(&commith, 0);
if (code) goto _err;
@ -158,7 +156,7 @@ static int32_t tsdbCommitDelStart(SCommitter *pCommitter) {
goto _err;
}
SDelFile *pDelFileR = pTsdb->pFS->nState->pDelFile;
SDelFile *pDelFileR = pCommitter->fs.pDelFile;
if (pDelFileR) {
code = tsdbDelFReaderOpen(&pCommitter->pDelFReader, pDelFileR, pTsdb, NULL);
if (code) goto _err;
@ -247,7 +245,7 @@ static int32_t tsdbCommitDelEnd(SCommitter *pCommitter) {
code = tsdbUpdateDelFileHdr(pCommitter->pDelFWriter);
if (code) goto _err;
code = tsdbFSStateUpsertDelFile(pTsdb->pFS->nState, &pCommitter->pDelFWriter->fDel);
code = tsdbFSUpsertDelFile(&pCommitter->fs, &pCommitter->pDelFWriter->fDel);
if (code) goto _err;
code = tsdbDelFWriterClose(&pCommitter->pDelFWriter, 1);
@ -273,7 +271,6 @@ static int32_t tsdbCommitFileDataStart(SCommitter *pCommitter) {
int32_t code = 0;
STsdb *pTsdb = pCommitter->pTsdb;
SDFileSet *pRSet = NULL;
SDFileSet wSet;
// memory
pCommitter->nextKey = TSKEY_MAX;
@ -282,7 +279,8 @@ static int32_t tsdbCommitFileDataStart(SCommitter *pCommitter) {
taosArrayClear(pCommitter->aBlockIdx);
tMapDataReset(&pCommitter->oBlockMap);
tBlockDataReset(&pCommitter->oBlockData);
pRSet = tsdbFSStateGetDFileSet(pTsdb->pFS->nState, pCommitter->commitFid, TD_EQ);
pRSet = (SDFileSet *)taosArraySearch(pCommitter->fs.aDFileSet, &(SDFileSet){.fid = pCommitter->commitFid},
tDFileSetCmprFn, TD_EQ);
if (pRSet) {
code = tsdbDataFReaderOpen(&pCommitter->pReader, pTsdb, pRSet);
if (code) goto _err;
@ -292,23 +290,29 @@ static int32_t tsdbCommitFileDataStart(SCommitter *pCommitter) {
}
// new
SHeadFile fHead;
SDataFile fData;
SLastFile fLast;
SSmaFile fSma;
SDFileSet wSet = {.pHeadF = &fHead, .pDataF = &fData, .pLastF = &fLast, .pSmaF = &fSma};
taosArrayClear(pCommitter->aBlockIdxN);
tMapDataReset(&pCommitter->nBlockMap);
tBlockDataReset(&pCommitter->nBlockData);
if (pRSet) {
wSet = (SDFileSet){.diskId = pRSet->diskId,
.fid = pCommitter->commitFid,
.fHead = {.commitID = pCommitter->commitID, .offset = 0, .size = 0},
.fData = pRSet->fData,
.fLast = {.commitID = pCommitter->commitID, .size = 0},
.fSma = pRSet->fSma};
wSet.diskId = pRSet->diskId;
wSet.fid = pCommitter->commitFid;
fHead = (SHeadFile){.commitID = pCommitter->commitID, .offset = 0, .size = 0};
fData = *pRSet->pDataF;
fLast = (SLastFile){.commitID = pCommitter->commitID, .size = 0};
fSma = *pRSet->pSmaF;
} else {
wSet = (SDFileSet){.diskId = (SDiskID){.level = 0, .id = 0},
.fid = pCommitter->commitFid,
.fHead = {.commitID = pCommitter->commitID, .offset = 0, .size = 0},
.fData = {.commitID = pCommitter->commitID, .size = 0},
.fLast = {.commitID = pCommitter->commitID, .size = 0},
.fSma = {.commitID = pCommitter->commitID, .size = 0}};
wSet.diskId = (SDiskID){.level = 0, .id = 0};
wSet.fid = pCommitter->commitFid;
fHead = (SHeadFile){.commitID = pCommitter->commitID, .offset = 0, .size = 0};
fData = (SDataFile){.commitID = pCommitter->commitID, .size = 0};
fLast = (SLastFile){.commitID = pCommitter->commitID, .size = 0};
fSma = (SSmaFile){.commitID = pCommitter->commitID, .size = 0};
}
code = tsdbDataFWriterOpen(&pCommitter->pWriter, pTsdb, &wSet);
if (code) goto _err;
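In the rewritten committer, SDFileSet no longer embeds its per-file metadata; it carries pointers (pHeadF, pDataF, pLastF, pSmaF) to SHeadFile/SDataFile/SLastFile/SSmaFile structs that live outside the set — here, stack variables the committer fills in before handing the set to tsdbDataFWriterOpen. A simplified, self-contained sketch of the embedded-value versus pointer layout is shown below; the struct names are hypothetical.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical, simplified stand-ins for SHeadFile/SDataFile/... */
typedef struct { int64_t commitID; int64_t size; } SubFile;

/* Old style: the set embeds its sub-files by value. */
typedef struct { SubFile fHead, fData; } FileSetByValue;

/* New style: the set only points at sub-files owned by the caller. */
typedef struct { SubFile *pHeadF, *pDataF; } FileSetByRef;

int main(void) {
  SubFile fHead = {.commitID = 11, .size = 0};
  SubFile fData = {.commitID = 11, .size = 4096};

  FileSetByValue v = {.fHead = fHead, .fData = fData};     /* copies     */
  FileSetByRef   r = {.pHeadF = &fHead, .pDataF = &fData}; /* references */

  fData.size += 512;  /* grows while writing */

  /* The by-value copy is stale; the by-reference view sees the update. */
  printf("by value: %lld, by reference: %lld\n",
         (long long)v.fData.size, (long long)r.pDataF->size);
  return 0;
}
```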
@ -855,7 +859,7 @@ static int32_t tsdbCommitFileDataEnd(SCommitter *pCommitter) {
if (code) goto _err;
// upsert SDFileSet
code = tsdbFSStateUpsertDFileSet(pCommitter->pTsdb->pFS->nState, tsdbDataFWriterGetWSet(pCommitter->pWriter));
code = tsdbFSUpsertFSet(&pCommitter->fs, &pCommitter->pWriter->wSet);
if (code) goto _err;
// close and sync
@ -973,7 +977,7 @@ static int32_t tsdbStartCommit(STsdb *pTsdb, SCommitter *pCommitter) {
pCommitter->maxRow = pTsdb->pVnode->config.tsdbCfg.maxRows;
pCommitter->cmprAlg = pTsdb->pVnode->config.tsdbCfg.compression;
code = tsdbFSBegin(pTsdb->pFS);
code = tsdbFSCopy(pTsdb, &pCommitter->fs);
if (code) goto _err;
return code;
@ -1142,28 +1146,33 @@ _err:
return code;
}
static int32_t tsdbCommitCache(SCommitter *pCommitter) {
int32_t code = 0;
// TODO
return code;
}
static int32_t tsdbEndCommit(SCommitter *pCommitter, int32_t eno) {
int32_t code = 0;
STsdb *pTsdb = pCommitter->pTsdb;
SMemTable *pMemTable = pTsdb->imem;
if (eno == 0) {
code = tsdbFSCommit(pTsdb->pFS);
} else {
code = tsdbFSRollback(pTsdb->pFS);
ASSERT(eno == 0);
code = tsdbFSCommit1(pTsdb, &pCommitter->fs);
if (code) goto _err;
// lock
taosThreadRwlockWrlock(&pTsdb->rwLock);
// commit or rollback
code = tsdbFSCommit2(pTsdb, &pCommitter->fs);
if (code) {
taosThreadRwlockUnlock(&pTsdb->rwLock);
goto _err;
}
taosThreadRwlockWrlock(&pTsdb->rwLock);
pTsdb->imem = NULL;
// unlock
taosThreadRwlockUnlock(&pTsdb->rwLock);
tsdbUnrefMemTable(pMemTable);
tsdbFSDestroy(&pCommitter->fs);
tsdbInfo("vgId:%d tsdb end commit", TD_VID(pTsdb->pVnode));
return code;
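tsdbEndCommit now splits the file-system update in two: tsdbFSCommit1 does the heavy work without holding the tsdb lock, and tsdbFSCommit2 publishes the new state (and clears imem) under the write lock that readers take in read mode when they snapshot. A minimal pthread sketch of this "prepare outside the lock, publish under the lock" pattern is shown below; State and the two helpers are hypothetical.

```c
#include <pthread.h>
#include <stdio.h>

typedef struct { int version; } State;

static State            current = {0};
static pthread_rwlock_t lock    = PTHREAD_RWLOCK_INITIALIZER;

/* Phase 1: build the new state without blocking readers. */
static State prepare(void) {
  State next = current;  /* single-threaded here; the real code copies the FS it owns */
  next.version += 1;     /* ... the expensive file IO would happen in this phase      */
  return next;
}

/* Phase 2: publish the new state under the write lock. */
static void publish(const State *next) {
  pthread_rwlock_wrlock(&lock);
  current = *next;
  pthread_rwlock_unlock(&lock);
}

int main(void) {
  State next = prepare();        /* like tsdbFSCommit1 */
  publish(&next);                /* like tsdbFSCommit2 */

  pthread_rwlock_rdlock(&lock);  /* readers snapshot under the read lock */
  printf("committed version %d\n", current.version);
  pthread_rwlock_unlock(&lock);
  return 0;
}
```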

File diff suppressed because it is too large

View File

@ -15,7 +15,7 @@
#include "tsdb.h"
static int32_t tPutHeadFile(uint8_t *p, SHeadFile *pHeadFile) {
int32_t tPutHeadFile(uint8_t *p, SHeadFile *pHeadFile) {
int32_t n = 0;
n += tPutI64v(p ? p + n : p, pHeadFile->commitID);
@ -35,7 +35,7 @@ static int32_t tGetHeadFile(uint8_t *p, SHeadFile *pHeadFile) {
return n;
}
static int32_t tPutDataFile(uint8_t *p, SDataFile *pDataFile) {
int32_t tPutDataFile(uint8_t *p, SDataFile *pDataFile) {
int32_t n = 0;
n += tPutI64v(p ? p + n : p, pDataFile->commitID);
@ -53,7 +53,7 @@ static int32_t tGetDataFile(uint8_t *p, SDataFile *pDataFile) {
return n;
}
static int32_t tPutLastFile(uint8_t *p, SLastFile *pLastFile) {
int32_t tPutLastFile(uint8_t *p, SLastFile *pLastFile) {
int32_t n = 0;
n += tPutI64v(p ? p + n : p, pLastFile->commitID);
@ -71,7 +71,7 @@ static int32_t tGetLastFile(uint8_t *p, SLastFile *pLastFile) {
return n;
}
static int32_t tPutSmaFile(uint8_t *p, SSmaFile *pSmaFile) {
int32_t tPutSmaFile(uint8_t *p, SSmaFile *pSmaFile) {
int32_t n = 0;
n += tPutI64v(p ? p + n : p, pSmaFile->commitID);
@ -90,90 +90,53 @@ static int32_t tGetSmaFile(uint8_t *p, SSmaFile *pSmaFile) {
}
// EXPOSED APIS ==================================================
void tsdbDataFileName(STsdb *pTsdb, SDFileSet *pDFileSet, EDataFileT ftype, char fname[]) {
STfs *pTfs = pTsdb->pVnode->pTfs;
switch (ftype) {
case TSDB_HEAD_FILE:
snprintf(fname, TSDB_FILENAME_LEN - 1, "%s%s%s%sv%df%dver%" PRId64 "%s", tfsGetDiskPath(pTfs, pDFileSet->diskId),
TD_DIRSEP, pTsdb->path, TD_DIRSEP, TD_VID(pTsdb->pVnode), pDFileSet->fid, pDFileSet->fHead.commitID,
".head");
break;
case TSDB_DATA_FILE:
snprintf(fname, TSDB_FILENAME_LEN - 1, "%s%s%s%sv%df%dver%" PRId64 "%s", tfsGetDiskPath(pTfs, pDFileSet->diskId),
TD_DIRSEP, pTsdb->path, TD_DIRSEP, TD_VID(pTsdb->pVnode), pDFileSet->fid, pDFileSet->fData.commitID,
".data");
break;
case TSDB_LAST_FILE:
snprintf(fname, TSDB_FILENAME_LEN - 1, "%s%s%s%sv%df%dver%" PRId64 "%s", tfsGetDiskPath(pTfs, pDFileSet->diskId),
TD_DIRSEP, pTsdb->path, TD_DIRSEP, TD_VID(pTsdb->pVnode), pDFileSet->fid, pDFileSet->fLast.commitID,
".last");
break;
case TSDB_SMA_FILE:
snprintf(fname, TSDB_FILENAME_LEN - 1, "%s%s%s%sv%df%dver%" PRId64 "%s", tfsGetDiskPath(pTfs, pDFileSet->diskId),
TD_DIRSEP, pTsdb->path, TD_DIRSEP, TD_VID(pTsdb->pVnode), pDFileSet->fid, pDFileSet->fSma.commitID,
".sma");
break;
default:
ASSERT(0);
break;
}
void tsdbHeadFileName(STsdb *pTsdb, SDiskID did, int32_t fid, SHeadFile *pHeadF, char fname[]) {
snprintf(fname, TSDB_FILENAME_LEN - 1, "%s%s%s%sv%df%dver%" PRId64 "%s", tfsGetDiskPath(pTsdb->pVnode->pTfs, did),
TD_DIRSEP, pTsdb->path, TD_DIRSEP, TD_VID(pTsdb->pVnode), fid, pHeadF->commitID, ".head");
}
bool tsdbFileIsSame(SDFileSet *pDFileSet1, SDFileSet *pDFileSet2, EDataFileT ftype) {
if (pDFileSet1->diskId.level != pDFileSet2->diskId.level || pDFileSet1->diskId.id != pDFileSet2->diskId.id) {
return false;
}
void tsdbDataFileName(STsdb *pTsdb, SDiskID did, int32_t fid, SDataFile *pDataF, char fname[]) {
snprintf(fname, TSDB_FILENAME_LEN - 1, "%s%s%s%sv%df%dver%" PRId64 "%s", tfsGetDiskPath(pTsdb->pVnode->pTfs, did),
TD_DIRSEP, pTsdb->path, TD_DIRSEP, TD_VID(pTsdb->pVnode), fid, pDataF->commitID, ".data");
}
switch (ftype) {
case TSDB_HEAD_FILE:
return pDFileSet1->fHead.commitID == pDFileSet2->fHead.commitID;
case TSDB_DATA_FILE:
return pDFileSet1->fData.commitID == pDFileSet2->fData.commitID;
case TSDB_LAST_FILE:
return pDFileSet1->fLast.commitID == pDFileSet2->fLast.commitID;
case TSDB_SMA_FILE:
return pDFileSet1->fSma.commitID == pDFileSet2->fSma.commitID;
default:
ASSERT(0);
break;
}
void tsdbLastFileName(STsdb *pTsdb, SDiskID did, int32_t fid, SLastFile *pLastF, char fname[]) {
snprintf(fname, TSDB_FILENAME_LEN - 1, "%s%s%s%sv%df%dver%" PRId64 "%s", tfsGetDiskPath(pTsdb->pVnode->pTfs, did),
TD_DIRSEP, pTsdb->path, TD_DIRSEP, TD_VID(pTsdb->pVnode), fid, pLastF->commitID, ".last");
}
void tsdbSmaFileName(STsdb *pTsdb, SDiskID did, int32_t fid, SSmaFile *pSmaF, char fname[]) {
snprintf(fname, TSDB_FILENAME_LEN - 1, "%s%s%s%sv%df%dver%" PRId64 "%s", tfsGetDiskPath(pTsdb->pVnode->pTfs, did),
TD_DIRSEP, pTsdb->path, TD_DIRSEP, TD_VID(pTsdb->pVnode), fid, pSmaF->commitID, ".sma");
}
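All four helpers above format the same pattern — disk path, tsdb path, vnode id, file id, commit version, and a per-type suffix — so a head file ends up as <diskPath>/<tsdbPath>/v<vgId>f<fid>ver<commitID>.head. A standalone snprintf sketch of that pattern with made-up values is shown below (the real code uses TD_DIRSEP and tfsGetDiskPath).

```c
#include <inttypes.h>
#include <stdio.h>

int main(void) {
  const char *diskPath = "/var/lib/taos";      /* what tfsGetDiskPath would return */
  const char *tsdbPath = "vnode/vnode2/tsdb";
  int         vgId = 2, fid = 1950;
  int64_t     commitID = 7;

  char fname[256];
  snprintf(fname, sizeof(fname), "%s/%s/v%df%dver%" PRId64 "%s",
           diskPath, tsdbPath, vgId, fid, commitID, ".head");

  /* Prints: /var/lib/taos/vnode/vnode2/tsdb/v2f1950ver7.head */
  printf("%s\n", fname);
  return 0;
}
```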
bool tsdbDelFileIsSame(SDelFile *pDelFile1, SDelFile *pDelFile2) { return pDelFile1->commitID == pDelFile2->commitID; }
int32_t tsdbUpdateDFileHdr(TdFilePtr pFD, SDFileSet *pSet, EDataFileT ftype) {
int32_t code = 0;
int64_t n;
char hdr[TSDB_FHDR_SIZE];
memset(hdr, 0, TSDB_FHDR_SIZE);
tPutDataFileHdr(hdr, pSet, ftype);
taosCalcChecksumAppend(0, hdr, TSDB_FHDR_SIZE);
n = taosLSeekFile(pFD, 0, SEEK_SET);
if (n < 0) {
code = TAOS_SYSTEM_ERROR(errno);
goto _exit;
}
n = taosWriteFile(pFD, hdr, TSDB_FHDR_SIZE);
if (n < 0) {
code = TAOS_SYSTEM_ERROR(errno);
goto _exit;
}
_exit:
return code;
}
int32_t tsdbDFileRollback(STsdb *pTsdb, SDFileSet *pSet, EDataFileT ftype) {
int32_t code = 0;
int64_t size;
int64_t n;
TdFilePtr pFD;
char fname[TSDB_FILENAME_LEN];
char hdr[TSDB_FHDR_SIZE] = {0};
tsdbDataFileName(pTsdb, pSet, ftype, fname);
// truncate
switch (ftype) {
case TSDB_DATA_FILE:
size = pSet->pDataF->size;
tsdbDataFileName(pTsdb, pSet->diskId, pSet->fid, pSet->pDataF, fname);
tPutDataFile(hdr, pSet->pDataF);
break;
case TSDB_SMA_FILE:
size = pSet->pSmaF->size;
tsdbSmaFileName(pTsdb, pSet->diskId, pSet->fid, pSet->pSmaF, fname);
tPutSmaFile(hdr, pSet->pSmaF);
break;
default:
ASSERT(0);
}
taosCalcChecksumAppend(0, hdr, TSDB_FHDR_SIZE);
// open
pFD = taosOpenFile(fname, TD_FILE_WRITE);
@ -182,31 +145,24 @@ int32_t tsdbDFileRollback(STsdb *pTsdb, SDFileSet *pSet, EDataFileT ftype) {
goto _err;
}
// truncate
switch (ftype) {
case TSDB_HEAD_FILE:
size = pSet->fHead.size;
break;
case TSDB_DATA_FILE:
size = pSet->fData.size;
break;
case TSDB_LAST_FILE:
size = pSet->fLast.size;
break;
case TSDB_SMA_FILE:
size = pSet->fSma.size;
break;
default:
ASSERT(0);
}
// ftruncate
if (taosFtruncateFile(pFD, size) < 0) {
code = TAOS_SYSTEM_ERROR(errno);
goto _err;
}
// update header
code = tsdbUpdateDFileHdr(pFD, pSet, ftype);
if (code) goto _err;
n = taosLSeekFile(pFD, 0, SEEK_SET);
if (n < 0) {
code = TAOS_SYSTEM_ERROR(errno);
goto _err;
}
n = taosWriteFile(pFD, hdr, TSDB_FHDR_SIZE);
if (n < 0) {
code = TAOS_SYSTEM_ERROR(errno);
goto _err;
}
// sync
if (taosFsyncFile(pFD) < 0) {
@ -220,42 +176,20 @@ int32_t tsdbDFileRollback(STsdb *pTsdb, SDFileSet *pSet, EDataFileT ftype) {
return code;
_err:
tsdbError("vgId:%d tsdb rollback file failed since %s", TD_VID(pTsdb->pVnode), tstrerror(code));
return code;
}
int32_t tPutDataFileHdr(uint8_t *p, SDFileSet *pSet, EDataFileT ftype) {
int32_t n = 0;
switch (ftype) {
case TSDB_HEAD_FILE:
n += tPutHeadFile(p ? p + n : p, &pSet->fHead);
break;
case TSDB_DATA_FILE:
n += tPutDataFile(p ? p + n : p, &pSet->fData);
break;
case TSDB_LAST_FILE:
n += tPutLastFile(p ? p + n : p, &pSet->fLast);
break;
case TSDB_SMA_FILE:
n += tPutSmaFile(p ? p + n : p, &pSet->fSma);
break;
default:
ASSERT(0);
}
return n;
}
int32_t tPutDFileSet(uint8_t *p, SDFileSet *pSet) {
int32_t n = 0;
n += tPutI32v(p ? p + n : p, pSet->diskId.level);
n += tPutI32v(p ? p + n : p, pSet->diskId.id);
n += tPutI32v(p ? p + n : p, pSet->fid);
n += tPutHeadFile(p ? p + n : p, &pSet->fHead);
n += tPutDataFile(p ? p + n : p, &pSet->fData);
n += tPutLastFile(p ? p + n : p, &pSet->fLast);
n += tPutSmaFile(p ? p + n : p, &pSet->fSma);
n += tPutHeadFile(p ? p + n : p, pSet->pHeadF);
n += tPutDataFile(p ? p + n : p, pSet->pDataF);
n += tPutLastFile(p ? p + n : p, pSet->pLastF);
n += tPutSmaFile(p ? p + n : p, pSet->pSmaF);
return n;
}
@ -266,20 +200,18 @@ int32_t tGetDFileSet(uint8_t *p, SDFileSet *pSet) {
n += tGetI32v(p + n, &pSet->diskId.level);
n += tGetI32v(p + n, &pSet->diskId.id);
n += tGetI32v(p + n, &pSet->fid);
n += tGetHeadFile(p + n, &pSet->fHead);
n += tGetDataFile(p + n, &pSet->fData);
n += tGetLastFile(p + n, &pSet->fLast);
n += tGetSmaFile(p + n, &pSet->fSma);
n += tGetHeadFile(p + n, pSet->pHeadF);
n += tGetDataFile(p + n, pSet->pDataF);
n += tGetLastFile(p + n, pSet->pLastF);
n += tGetSmaFile(p + n, pSet->pSmaF);
return n;
}
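tPutDFileSet and tGetDFileSet follow the codebase's two-pass convention: the tPut* helpers are first called with a NULL buffer to measure the encoded size (hence the `p ? p + n : p` argument), then called again with a real buffer to write. A self-contained sketch of that convention with a hypothetical fixed-width put_u32 helper (instead of the real varint encoders) is shown below.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Writes v at p (if p != NULL) and returns the number of bytes used. */
static int put_u32(uint8_t *p, uint32_t v) {
  if (p) memcpy(p, &v, sizeof(v));
  return (int)sizeof(v);
}

static int put_fileset(uint8_t *p, uint32_t level, uint32_t id, uint32_t fid) {
  int n = 0;
  n += put_u32(p ? p + n : p, level);
  n += put_u32(p ? p + n : p, id);
  n += put_u32(p ? p + n : p, fid);
  return n;
}

int main(void) {
  int      size = put_fileset(NULL, 0, 1, 1950);  /* pass 1: measure */
  uint8_t *buf  = malloc((size_t)size);
  if (!buf) return 1;
  put_fileset(buf, 0, 1, 1950);                   /* pass 2: encode  */
  printf("encoded %d bytes\n", size);
  free(buf);
  return 0;
}
```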
// SDelFile ===============================================
void tsdbDelFileName(STsdb *pTsdb, SDelFile *pFile, char fname[]) {
STfs *pTfs = pTsdb->pVnode->pTfs;
snprintf(fname, TSDB_FILENAME_LEN - 1, "%s%s%s%sv%dver%" PRId64 "%s", tfsGetPrimaryPath(pTfs), TD_DIRSEP, pTsdb->path,
TD_DIRSEP, TD_VID(pTsdb->pVnode), pFile->commitID, ".del");
snprintf(fname, TSDB_FILENAME_LEN - 1, "%s%s%s%sv%dver%" PRId64 "%s", tfsGetPrimaryPath(pTsdb->pVnode->pTfs),
TD_DIRSEP, pTsdb->path, TD_DIRSEP, TD_VID(pTsdb->pVnode), pFile->commitID, ".del");
}
int32_t tPutDelFile(uint8_t *p, SDelFile *pDelFile) {

View File

@ -93,7 +93,11 @@ static int32_t tbDataPCmprFn(const void *p1, const void *p2) {
}
void tsdbGetTbDataFromMemTable(SMemTable *pMemTable, tb_uid_t suid, tb_uid_t uid, STbData **ppTbData) {
STbData *pTbData = &(STbData){.suid = suid, .uid = uid};
void *p = taosArraySearch(pMemTable->aTbData, &pTbData, tbDataPCmprFn, TD_EQ);
taosRLockLatch(&pMemTable->latch);
void *p = taosArraySearch(pMemTable->aTbData, &pTbData, tbDataPCmprFn, TD_EQ);
taosRUnLockLatch(&pMemTable->latch);
*ppTbData = p ? *(STbData **)p : NULL;
}
@ -363,10 +367,13 @@ static int32_t tsdbGetOrCreateTbData(SMemTable *pMemTable, tb_uid_t suid, tb_uid
void *p;
if (idx < 0) {
p = taosArrayPush(pMemTable->aTbData, &pTbData);
} else {
p = taosArrayInsert(pMemTable->aTbData, idx, &pTbData);
idx = taosArrayGetSize(pMemTable->aTbData);
}
taosWLockLatch(&pMemTable->latch);
p = taosArrayInsert(pMemTable->aTbData, idx, &pTbData);
taosWUnLockLatch(&pMemTable->latch);
if (p == NULL) {
code = TSDB_CODE_OUT_OF_MEMORY;
goto _err;
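The memtable now guards aTbData with a latch: tsdbGetTbDataFromMemTable searches under the read latch, while tsdbGetOrCreateTbData inserts under the write latch, so readers never observe the array mid-resize. A small pthread sketch of the same read-lookup / write-insert discipline over a plain array is shown below; the types and helpers are hypothetical.

```c
#include <pthread.h>
#include <stdio.h>

#define CAP 8

typedef struct {
  pthread_rwlock_t latch;
  int              uids[CAP];
  int              n;
} TbDataArray;

/* Readers: search under the read lock. */
static int find_uid(TbDataArray *a, int uid) {
  int idx = -1;
  pthread_rwlock_rdlock(&a->latch);
  for (int i = 0; i < a->n; ++i) {
    if (a->uids[i] == uid) { idx = i; break; }
  }
  pthread_rwlock_unlock(&a->latch);
  return idx;
}

/* Writer: insert under the write lock so readers never see a torn array. */
static int add_uid(TbDataArray *a, int uid) {
  pthread_rwlock_wrlock(&a->latch);
  int ok = (a->n < CAP);
  if (ok) a->uids[a->n++] = uid;
  pthread_rwlock_unlock(&a->latch);
  return ok;
}

int main(void) {
  TbDataArray a = {.latch = PTHREAD_RWLOCK_INITIALIZER, .n = 0};
  add_uid(&a, 1001);
  printf("uid 1001 at index %d\n", find_uid(&a, 1001));
  return 0;
}
```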
@ -605,46 +612,3 @@ void tsdbUnrefMemTable(SMemTable *pMemTable) {
tsdbMemTableDestroy(pMemTable);
}
}
int32_t tsdbTakeMemSnapshot(STsdb *pTsdb, SMemTable **ppMem, SMemTable **ppIMem) {
int32_t code = 0;
// lock
code = taosThreadRwlockRdlock(&pTsdb->rwLock);
if (code) {
code = TAOS_SYSTEM_ERROR(code);
goto _exit;
}
// take snapshot
*ppMem = pTsdb->mem;
*ppIMem = pTsdb->imem;
if (*ppMem) {
tsdbRefMemTable(*ppMem);
}
if (*ppIMem) {
tsdbRefMemTable(*ppIMem);
}
// unlock
code = taosThreadRwlockUnlock(&pTsdb->rwLock);
if (code) {
code = TAOS_SYSTEM_ERROR(code);
goto _exit;
}
_exit:
return code;
}
void tsdbUntakeMemSnapshot(STsdb *pTsdb, SMemTable *pMem, SMemTable *pIMem) {
if (pMem) {
tsdbUnrefMemTable(pMem);
}
if (pIMem) {
tsdbUnrefMemTable(pIMem);
}
}

View File

@ -66,7 +66,7 @@ int tsdbOpen(SVnode *pVnode, STsdb **ppTsdb, const char *dir, STsdbKeepCfg *pKee
tfsMkdir(pVnode->pTfs, pTsdb->path);
// open tsdb
if (tsdbFSOpen(pTsdb, &pTsdb->pFS) < 0) {
if (tsdbFSOpen(pTsdb) < 0) {
goto _err;
}
@ -88,7 +88,7 @@ _err:
int tsdbClose(STsdb **pTsdb) {
if (*pTsdb) {
taosThreadRwlockDestroy(&(*pTsdb)->rwLock);
tsdbFSClose((*pTsdb)->pFS);
tsdbFSClose(*pTsdb);
tsdbCloseCache((*pTsdb)->lruCache);
taosMemoryFreeClear(*pTsdb);
}

View File

@ -118,8 +118,7 @@ struct STsdbReader {
char* idStr; // query info handle, for debug purpose
int32_t type; // query type: 1. retrieve all data blocks, 2. retrieve direct prev|next rows
SBlockLoadSuppInfo suppInfo;
SMemTable* pMem;
SMemTable* pIMem;
STsdbReadSnap* pReadSnap;
SIOCostSummary cost;
STSchema* pSchema;
@ -275,20 +274,18 @@ static void limitOutputBufferSize(const SQueryTableDataCond* pCond, int32_t* cap
}
// init file iterator
static int32_t initFilesetIterator(SFilesetIter* pIter, const STsdbFSState* pFState, int32_t order, const char* idstr) {
size_t numOfFileset = taosArrayGetSize(pFState->aDFileSet);
static int32_t initFilesetIterator(SFilesetIter* pIter, SArray* aDFileSet, int32_t order, const char* idstr) {
size_t numOfFileset = taosArrayGetSize(aDFileSet);
pIter->index = ASCENDING_TRAVERSE(order) ? -1 : numOfFileset;
pIter->order = order;
pIter->pFileList = taosArrayDup(pFState->aDFileSet);
pIter->pFileList = aDFileSet;
pIter->numOfFiles = numOfFileset;
tsdbDebug("init fileset iterator, total files:%d %s", pIter->numOfFiles, idstr);
return TSDB_CODE_SUCCESS;
}
static void cleanupFilesetIterator(SFilesetIter* pIter) { taosArrayDestroy(pIter->pFileList); }
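Because the file list now comes from the read snapshot (pReadSnap->fs.aDFileSet), the iterator borrows the array instead of duplicating it with taosArrayDup, and cleanupFilesetIterator — which destroyed the private copy — is removed; the snapshot's unref path owns the memory. A tiny sketch of the owned-copy versus borrowed-view trade-off is shown below; the names are hypothetical.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct { const int *files; int n; int owned; } FilesetIter;

/* Old style: duplicate the list; the iterator must free it in cleanup(). */
static int init_owned(FilesetIter *it, const int *files, int n) {
  int *copy = malloc((size_t)n * sizeof(int));
  if (!copy) return -1;
  memcpy(copy, files, (size_t)n * sizeof(int));
  it->files = copy; it->n = n; it->owned = 1;
  return 0;
}

/* New style: borrow the snapshot's list; the snapshot keeps ownership. */
static void init_borrowed(FilesetIter *it, const int *files, int n) {
  it->files = files; it->n = n; it->owned = 0;
}

static void cleanup(FilesetIter *it) {
  if (it->owned) free((void *)it->files);
}

int main(void) {
  int fids[] = {1948, 1949, 1950};
  FilesetIter a, b;
  if (init_owned(&a, fids, 3) != 0) return 1;
  init_borrowed(&b, fids, 3);
  printf("owned copy: %d files, borrowed view: %d files\n", a.n, b.n);
  cleanup(&a);  /* frees the private copy          */
  cleanup(&b);  /* no-op: fids is still owned here */
  return 0;
}
```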
static bool filesetIteratorNext(SFilesetIter* pIter, STsdbReader* pReader) {
bool asc = ASCENDING_TRAVERSE(pIter->order);
int32_t step = asc ? 1 : -1;
@ -788,11 +785,11 @@ static int32_t copyBlockDataToSDataBlock(STsdbReader* pReader, STableBlockScanIn
doCopyColVal(pColData, rowIndex++, i, &cv, pSupInfo);
}
colIndex += 1;
ASSERT(rowIndex == remain);
} else { // the specified column does not exist in file block, fill with null data
colDataAppendNNULL(pColData, 0, remain);
}
ASSERT(rowIndex == remain);
i += 1;
}
@ -1881,8 +1878,8 @@ static int32_t initMemDataIterator(STableBlockScanInfo* pBlockScanInfo, STsdbRea
int32_t backward = (!ASCENDING_TRAVERSE(pReader->order));
STbData* d = NULL;
if (pReader->pMem != NULL) {
tsdbGetTbDataFromMemTable(pReader->pMem, pReader->suid, pBlockScanInfo->uid, &d);
if (pReader->pReadSnap->pMem != NULL) {
tsdbGetTbDataFromMemTable(pReader->pReadSnap->pMem, pReader->suid, pBlockScanInfo->uid, &d);
if (d != NULL) {
code = tsdbTbDataIterCreate(d, &startKey, backward, &pBlockScanInfo->iter.iter);
if (code == TSDB_CODE_SUCCESS) {
@ -1902,8 +1899,8 @@ static int32_t initMemDataIterator(STableBlockScanInfo* pBlockScanInfo, STsdbRea
}
STbData* di = NULL;
if (pReader->pIMem != NULL) {
tsdbGetTbDataFromMemTable(pReader->pIMem, pReader->suid, pBlockScanInfo->uid, &di);
if (pReader->pReadSnap->pIMem != NULL) {
tsdbGetTbDataFromMemTable(pReader->pReadSnap->pIMem, pReader->suid, pBlockScanInfo->uid, &di);
if (di != NULL) {
code = tsdbTbDataIterCreate(di, &startKey, backward, &pBlockScanInfo->iiter.iter);
if (code == TSDB_CODE_SUCCESS) {
@ -1939,21 +1936,24 @@ int32_t initDelSkylineIterator(STableBlockScanInfo* pBlockScanInfo, STsdbReader*
SArray* pDelData = taosArrayInit(4, sizeof(SDelData));
SDelFile* pDelFile = tsdbFSStateGetDelFile(pTsdb->pFS->cState);
SDelFile* pDelFile = pReader->pReadSnap->fs.pDelFile;
if (pDelFile) {
SDelFReader* pDelFReader = NULL;
code = tsdbDelFReaderOpen(&pDelFReader, pDelFile, pTsdb, NULL);
if (code) {
if (code != TSDB_CODE_SUCCESS) {
goto _err;
}
SArray* aDelIdx = taosArrayInit(4, sizeof(SDelIdx));
if (aDelIdx == NULL) {
tsdbDelFReaderClose(&pDelFReader);
goto _err;
}
code = tsdbReadDelIdx(pDelFReader, aDelIdx, NULL);
if (code) {
if (code != TSDB_CODE_SUCCESS) {
taosArrayDestroy(aDelIdx);
tsdbDelFReaderClose(&pDelFReader);
goto _err;
}
@ -1962,9 +1962,13 @@ int32_t initDelSkylineIterator(STableBlockScanInfo* pBlockScanInfo, STsdbReader*
if (pIdx != NULL) {
code = tsdbReadDelData(pDelFReader, pIdx, pDelData, NULL);
if (code != TSDB_CODE_SUCCESS) {
goto _err;
}
}
taosArrayDestroy(aDelIdx);
tsdbDelFReaderClose(&pDelFReader);
if (code != TSDB_CODE_SUCCESS) {
goto _err;
}
}
@ -2532,8 +2536,7 @@ static int32_t checkForNeighborFileBlock(STsdbReader* pReader, STableBlockScanIn
pDumpInfo->rowIndex =
doMergeRowsInFileBlockImpl(pBlockData, pDumpInfo->rowIndex, key, pMerger, &pReader->verRange, step);
if (pDumpInfo->rowIndex >= pBlock->nRow) {
if (pDumpInfo->rowIndex >= pDumpInfo->totalRows) {
*state = CHECK_FILEBLOCK_CONT;
}
}
@ -2830,8 +2833,10 @@ int32_t tsdbReaderOpen(SVnode* pVnode, SQueryTableDataCond* pCond, SArray* pTabl
SDataBlockIter* pBlockIter = &pReader->status.blockIter;
STsdbFSState* pFState = pReader->pTsdb->pFS->cState;
initFilesetIterator(&pReader->status.fileIter, pFState, pReader->order, pReader->idStr);
code = tsdbTakeReadSnap(pReader->pTsdb, &pReader->pReadSnap);
if (code) goto _err;
initFilesetIterator(&pReader->status.fileIter, (*ppReader)->pReadSnap->fs.aDFileSet, pReader->order, pReader->idStr);
resetDataBlockIterator(&pReader->status.blockIter, pReader->order);
// no data in files, let's try buffer in memory
@ -2844,8 +2849,6 @@ int32_t tsdbReaderOpen(SVnode* pVnode, SQueryTableDataCond* pCond, SArray* pTabl
}
}
tsdbTakeMemSnapshot(pReader->pTsdb, &pReader->pMem, &pReader->pIMem);
tsdbDebug("%p total numOfTable:%d in this query %s", pReader, numOfTables, pReader->idStr);
return code;
@ -2861,7 +2864,7 @@ void tsdbReaderClose(STsdbReader* pReader) {
SBlockLoadSuppInfo* pSupInfo = &pReader->suppInfo;
tsdbUntakeMemSnapshot(pReader->pTsdb, pReader->pMem, pReader->pIMem);
tsdbUntakeReadSnap(pReader->pTsdb, pReader->pReadSnap);
taosMemoryFreeClear(pSupInfo->plist);
taosMemoryFree(pSupInfo->colIds);
@ -2874,7 +2877,6 @@ void tsdbReaderClose(STsdbReader* pReader) {
}
taosMemoryFree(pSupInfo->buildBuf);
cleanupFilesetIterator(&pReader->status.fileIter);
cleanupDataBlockIterator(&pReader->status.blockIter);
destroyBlockScanInfo(pReader->status.pTableMap);
blockDataDestroy(pReader->pResBlock);
@ -3081,8 +3083,7 @@ int32_t tsdbReaderReset(STsdbReader* pReader, SQueryTableDataCond* pCond) {
tsdbDataFReaderClose(&pReader->pFileReader);
STsdbFSState* pFState = pReader->pTsdb->pFS->cState;
initFilesetIterator(&pReader->status.fileIter, pFState, pReader->order, pReader->idStr);
initFilesetIterator(&pReader->status.fileIter, pReader->pReadSnap->fs.aDFileSet, pReader->order, pReader->idStr);
resetDataBlockIterator(&pReader->status.blockIter, pReader->order);
resetDataBlockScanInfo(pReader->status.pTableMap);
@ -3244,3 +3245,68 @@ int32_t tsdbGetTableSchema(SVnode* pVnode, int64_t uid, STSchema** pSchema, int6
return TSDB_CODE_SUCCESS;
}
int32_t tsdbTakeReadSnap(STsdb* pTsdb, STsdbReadSnap** ppSnap) {
int32_t code = 0;
// alloc
*ppSnap = (STsdbReadSnap*)taosMemoryCalloc(1, sizeof(STsdbReadSnap));
if (*ppSnap == NULL) {
code = TSDB_CODE_OUT_OF_MEMORY;
goto _exit;
}
// lock
code = taosThreadRwlockRdlock(&pTsdb->rwLock);
if (code) {
code = TAOS_SYSTEM_ERROR(code);
goto _exit;
}
// take snapshot
(*ppSnap)->pMem = pTsdb->mem;
(*ppSnap)->pIMem = pTsdb->imem;
if ((*ppSnap)->pMem) {
tsdbRefMemTable((*ppSnap)->pMem);
}
if ((*ppSnap)->pIMem) {
tsdbRefMemTable((*ppSnap)->pIMem);
}
// fs
code = tsdbFSRef(pTsdb, &(*ppSnap)->fs);
if (code) {
taosThreadRwlockUnlock(&pTsdb->rwLock);
goto _exit;
}
// unlock
code = taosThreadRwlockUnlock(&pTsdb->rwLock);
if (code) {
code = TAOS_SYSTEM_ERROR(code);
goto _exit;
}
tsdbTrace("vgId:%d take read snapshot", TD_VID(pTsdb->pVnode));
_exit:
return code;
}
void tsdbUntakeReadSnap(STsdb* pTsdb, STsdbReadSnap* pSnap) {
if (pSnap) {
if (pSnap->pMem) {
tsdbUnrefMemTable(pSnap->pMem);
}
if (pSnap->pIMem) {
tsdbUnrefMemTable(pSnap->pIMem);
}
tsdbFSUnref(pTsdb, &pSnap->fs);
taosMemoryFree(pSnap);
}
tsdbTrace("vgId:%d untake read snapshot", TD_VID(pTsdb->pVnode));
}
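tsdbTakeReadSnap pins the current mem, imem, and file-system state under the read lock by taking references, and tsdbUntakeReadSnap drops those references when the reader closes, so a concurrent commit can swap in new state without freeing what an open query is still using. A minimal reference-counting sketch of the pin/unpin idea is shown below; MemTable and its helpers are hypothetical.

```c
#include <stdio.h>
#include <stdlib.h>

typedef struct { int nRef; } MemTable;

static void ref(MemTable *m)   { ++m->nRef; }
static void unref(MemTable *m) { if (--m->nRef == 0) free(m); }

typedef struct { MemTable *pMem; } ReadSnap;

/* Take: pin the current memtable so a commit cannot free it under us. */
static ReadSnap *take_snap(MemTable *current) {
  ReadSnap *s = malloc(sizeof(*s));
  if (!s) return NULL;
  s->pMem = current;
  ref(s->pMem);
  return s;
}

/* Untake: drop the pin; the last reference frees the memtable. */
static void untake_snap(ReadSnap *s) {
  if (!s) return;
  unref(s->pMem);
  free(s);
}

int main(void) {
  MemTable *mem = malloc(sizeof(*mem));
  if (!mem) return 1;
  mem->nRef = 1;                   /* owned by the "tsdb" itself        */

  ReadSnap *snap = take_snap(mem); /* reader pins it (nRef == 2)        */
  if (!snap) return 1;
  unref(mem);                      /* "commit" releases its own ref     */
  printf("snapshot still holds %d reference(s)\n", snap->pMem->nRef);
  untake_snap(snap);               /* last unref frees the memtable     */
  return 0;
}
```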

View File

@ -459,7 +459,7 @@ int32_t tsdbDataFReaderOpen(SDataFReader **ppReader, STsdb *pTsdb, SDFileSet *pS
// open impl
// head
tsdbDataFileName(pTsdb, pSet, TSDB_HEAD_FILE, fname);
tsdbHeadFileName(pTsdb, pSet->diskId, pSet->fid, pSet->pHeadF, fname);
pReader->pHeadFD = taosOpenFile(fname, TD_FILE_READ);
if (pReader->pHeadFD == NULL) {
code = TAOS_SYSTEM_ERROR(errno);
@ -467,7 +467,7 @@ int32_t tsdbDataFReaderOpen(SDataFReader **ppReader, STsdb *pTsdb, SDFileSet *pS
}
// data
tsdbDataFileName(pTsdb, pSet, TSDB_DATA_FILE, fname);
tsdbDataFileName(pTsdb, pSet->diskId, pSet->fid, pSet->pDataF, fname);
pReader->pDataFD = taosOpenFile(fname, TD_FILE_READ);
if (pReader->pDataFD == NULL) {
code = TAOS_SYSTEM_ERROR(errno);
@ -475,7 +475,7 @@ int32_t tsdbDataFReaderOpen(SDataFReader **ppReader, STsdb *pTsdb, SDFileSet *pS
}
// last
tsdbDataFileName(pTsdb, pSet, TSDB_LAST_FILE, fname);
tsdbLastFileName(pTsdb, pSet->diskId, pSet->fid, pSet->pLastF, fname);
pReader->pLastFD = taosOpenFile(fname, TD_FILE_READ);
if (pReader->pLastFD == NULL) {
code = TAOS_SYSTEM_ERROR(errno);
@ -483,7 +483,7 @@ int32_t tsdbDataFReaderOpen(SDataFReader **ppReader, STsdb *pTsdb, SDFileSet *pS
}
// sma
tsdbDataFileName(pTsdb, pSet, TSDB_SMA_FILE, fname);
tsdbSmaFileName(pTsdb, pSet->diskId, pSet->fid, pSet->pSmaF, fname);
pReader->pSmaFD = taosOpenFile(fname, TD_FILE_READ);
if (pReader->pSmaFD == NULL) {
code = TAOS_SYSTEM_ERROR(errno);
@ -536,8 +536,8 @@ _err:
int32_t tsdbReadBlockIdx(SDataFReader *pReader, SArray *aBlockIdx, uint8_t **ppBuf) {
int32_t code = 0;
int64_t offset = pReader->pSet->fHead.offset;
int64_t size = pReader->pSet->fHead.size - offset;
int64_t offset = pReader->pSet->pHeadF->offset;
int64_t size = pReader->pSet->pHeadF->size - offset;
uint8_t *pBuf = NULL;
int64_t n;
uint32_t delimiter;
@ -1211,17 +1211,6 @@ _err:
}
// SDataFWriter ====================================================
struct SDataFWriter {
STsdb *pTsdb;
SDFileSet wSet;
TdFilePtr pHeadFD;
TdFilePtr pDataFD;
TdFilePtr pLastFD;
TdFilePtr pSmaFD;
};
SDFileSet *tsdbDataFWriterGetWSet(SDataFWriter *pWriter) { return &pWriter->wSet; }
int32_t tsdbDataFWriterOpen(SDataFWriter **ppWriter, STsdb *pTsdb, SDFileSet *pSet) {
int32_t code = 0;
int32_t flag;
@ -1237,12 +1226,20 @@ int32_t tsdbDataFWriterOpen(SDataFWriter **ppWriter, STsdb *pTsdb, SDFileSet *pS
goto _err;
}
pWriter->pTsdb = pTsdb;
pWriter->wSet = *pSet;
pSet = &pWriter->wSet;
pWriter->wSet = (SDFileSet){.diskId = pSet->diskId,
.fid = pSet->fid,
.pHeadF = &pWriter->fHead,
.pDataF = &pWriter->fData,
.pLastF = &pWriter->fLast,
.pSmaF = &pWriter->fSma};
pWriter->fHead = *pSet->pHeadF;
pWriter->fData = *pSet->pDataF;
pWriter->fLast = *pSet->pLastF;
pWriter->fSma = *pSet->pSmaF;
// head
flag = TD_FILE_WRITE | TD_FILE_CREATE | TD_FILE_TRUNC;
tsdbDataFileName(pTsdb, pSet, TSDB_HEAD_FILE, fname);
tsdbHeadFileName(pTsdb, pWriter->wSet.diskId, pWriter->wSet.fid, &pWriter->fHead, fname);
pWriter->pHeadFD = taosOpenFile(fname, flag);
if (pWriter->pHeadFD == NULL) {
code = TAOS_SYSTEM_ERROR(errno);
@ -1257,28 +1254,28 @@ int32_t tsdbDataFWriterOpen(SDataFWriter **ppWriter, STsdb *pTsdb, SDFileSet *pS
ASSERT(n == TSDB_FHDR_SIZE);
pSet->fHead.size += TSDB_FHDR_SIZE;
pWriter->fHead.size += TSDB_FHDR_SIZE;
// data
if (pSet->fData.size == 0) {
if (pWriter->fData.size == 0) {
flag = TD_FILE_WRITE | TD_FILE_CREATE | TD_FILE_TRUNC;
} else {
flag = TD_FILE_WRITE;
}
tsdbDataFileName(pTsdb, pSet, TSDB_DATA_FILE, fname);
tsdbDataFileName(pTsdb, pWriter->wSet.diskId, pWriter->wSet.fid, &pWriter->fData, fname);
pWriter->pDataFD = taosOpenFile(fname, flag);
if (pWriter->pDataFD == NULL) {
code = TAOS_SYSTEM_ERROR(errno);
goto _err;
}
if (pSet->fData.size == 0) {
if (pWriter->fData.size == 0) {
n = taosWriteFile(pWriter->pDataFD, hdr, TSDB_FHDR_SIZE);
if (n < 0) {
code = TAOS_SYSTEM_ERROR(errno);
goto _err;
}
pSet->fData.size += TSDB_FHDR_SIZE;
pWriter->fData.size += TSDB_FHDR_SIZE;
} else {
n = taosLSeekFile(pWriter->pDataFD, 0, SEEK_END);
if (n < 0) {
@ -1286,29 +1283,29 @@ int32_t tsdbDataFWriterOpen(SDataFWriter **ppWriter, STsdb *pTsdb, SDFileSet *pS
goto _err;
}
ASSERT(n == pSet->fData.size);
ASSERT(n == pWriter->fData.size);
}
// last
if (pSet->fLast.size == 0) {
if (pWriter->fLast.size == 0) {
flag = TD_FILE_WRITE | TD_FILE_CREATE | TD_FILE_TRUNC;
} else {
flag = TD_FILE_WRITE;
}
tsdbDataFileName(pTsdb, pSet, TSDB_LAST_FILE, fname);
tsdbLastFileName(pTsdb, pWriter->wSet.diskId, pWriter->wSet.fid, &pWriter->fLast, fname);
pWriter->pLastFD = taosOpenFile(fname, flag);
if (pWriter->pLastFD == NULL) {
code = TAOS_SYSTEM_ERROR(errno);
goto _err;
}
if (pSet->fLast.size == 0) {
if (pWriter->fLast.size == 0) {
n = taosWriteFile(pWriter->pLastFD, hdr, TSDB_FHDR_SIZE);
if (n < 0) {
code = TAOS_SYSTEM_ERROR(errno);
goto _err;
}
pSet->fLast.size += TSDB_FHDR_SIZE;
pWriter->fLast.size += TSDB_FHDR_SIZE;
} else {
n = taosLSeekFile(pWriter->pLastFD, 0, SEEK_END);
if (n < 0) {
@ -1316,29 +1313,29 @@ int32_t tsdbDataFWriterOpen(SDataFWriter **ppWriter, STsdb *pTsdb, SDFileSet *pS
goto _err;
}
ASSERT(n == pSet->fLast.size);
ASSERT(n == pWriter->fLast.size);
}
// sma
if (pSet->fSma.size == 0) {
if (pWriter->fSma.size == 0) {
flag = TD_FILE_WRITE | TD_FILE_CREATE | TD_FILE_TRUNC;
} else {
flag = TD_FILE_WRITE;
}
tsdbDataFileName(pTsdb, pSet, TSDB_SMA_FILE, fname);
tsdbSmaFileName(pTsdb, pWriter->wSet.diskId, pWriter->wSet.fid, &pWriter->fSma, fname);
pWriter->pSmaFD = taosOpenFile(fname, flag);
if (pWriter->pSmaFD == NULL) {
code = TAOS_SYSTEM_ERROR(errno);
goto _err;
}
if (pSet->fSma.size == 0) {
if (pWriter->fSma.size == 0) {
n = taosWriteFile(pWriter->pSmaFD, hdr, TSDB_FHDR_SIZE);
if (n < 0) {
code = TAOS_SYSTEM_ERROR(errno);
goto _err;
}
pSet->fSma.size += TSDB_FHDR_SIZE;
pWriter->fSma.size += TSDB_FHDR_SIZE;
} else {
n = taosLSeekFile(pWriter->pSmaFD, 0, SEEK_END);
if (n < 0) {
@ -1346,7 +1343,7 @@ int32_t tsdbDataFWriterOpen(SDataFWriter **ppWriter, STsdb *pTsdb, SDFileSet *pS
goto _err;
}
ASSERT(n == pSet->fSma.size);
ASSERT(n == pWriter->fSma.size);
}
*ppWriter = pWriter;
@ -1418,22 +1415,76 @@ _err:
int32_t tsdbUpdateDFileSetHeader(SDataFWriter *pWriter) {
int32_t code = 0;
int64_t n;
char hdr[TSDB_FHDR_SIZE];
// head ==============
code = tsdbUpdateDFileHdr(pWriter->pHeadFD, &pWriter->wSet, TSDB_HEAD_FILE);
if (code) goto _err;
memset(hdr, 0, TSDB_FHDR_SIZE);
tPutHeadFile(hdr, &pWriter->fHead);
taosCalcChecksumAppend(0, hdr, TSDB_FHDR_SIZE);
n = taosLSeekFile(pWriter->pHeadFD, 0, SEEK_SET);
if (n < 0) {
code = TAOS_SYSTEM_ERROR(errno);
goto _err;
}
n = taosWriteFile(pWriter->pHeadFD, hdr, TSDB_FHDR_SIZE);
if (n < 0) {
code = TAOS_SYSTEM_ERROR(errno);
goto _err;
}
// data ==============
code = tsdbUpdateDFileHdr(pWriter->pHeadFD, &pWriter->wSet, TSDB_DATA_FILE);
if (code) goto _err;
memset(hdr, 0, TSDB_FHDR_SIZE);
tPutDataFile(hdr, &pWriter->fData);
taosCalcChecksumAppend(0, hdr, TSDB_FHDR_SIZE);
n = taosLSeekFile(pWriter->pDataFD, 0, SEEK_SET);
if (n < 0) {
code = TAOS_SYSTEM_ERROR(errno);
goto _err;
}
n = taosWriteFile(pWriter->pDataFD, hdr, TSDB_FHDR_SIZE);
if (n < 0) {
code = TAOS_SYSTEM_ERROR(errno);
goto _err;
}
// last ==============
code = tsdbUpdateDFileHdr(pWriter->pHeadFD, &pWriter->wSet, TSDB_LAST_FILE);
if (code) goto _err;
memset(hdr, 0, TSDB_FHDR_SIZE);
tPutLastFile(hdr, &pWriter->fLast);
taosCalcChecksumAppend(0, hdr, TSDB_FHDR_SIZE);
n = taosLSeekFile(pWriter->pLastFD, 0, SEEK_SET);
if (n < 0) {
code = TAOS_SYSTEM_ERROR(errno);
goto _err;
}
n = taosWriteFile(pWriter->pLastFD, hdr, TSDB_FHDR_SIZE);
if (n < 0) {
code = TAOS_SYSTEM_ERROR(errno);
goto _err;
}
// sma ==============
code = tsdbUpdateDFileHdr(pWriter->pHeadFD, &pWriter->wSet, TSDB_SMA_FILE);
if (code) goto _err;
memset(hdr, 0, TSDB_FHDR_SIZE);
tPutSmaFile(hdr, &pWriter->fSma);
taosCalcChecksumAppend(0, hdr, TSDB_FHDR_SIZE);
n = taosLSeekFile(pWriter->pSmaFD, 0, SEEK_SET);
if (n < 0) {
code = TAOS_SYSTEM_ERROR(errno);
goto _err;
}
n = taosWriteFile(pWriter->pSmaFD, hdr, TSDB_FHDR_SIZE);
if (n < 0) {
code = TAOS_SYSTEM_ERROR(errno);
goto _err;
}
return code;
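The rewritten tsdbUpdateDFileSetHeader repeats one sequence per file: serialize the per-file struct into a fixed-size header buffer, append a checksum, seek to offset 0, and overwrite the reserved TSDB_FHDR_SIZE bytes. A standard-library sketch of that fixed-slot header rewrite (with a toy checksum in place of taosCalcChecksumAppend) is shown below.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define FHDR_SIZE 64  /* stand-in for TSDB_FHDR_SIZE */

/* Toy checksum in place of taosCalcChecksumAppend. */
static uint32_t checksum(const uint8_t *p, size_t n) {
  uint32_t s = 0;
  for (size_t i = 0; i < n; ++i) s = s * 31 + p[i];
  return s;
}

static int rewrite_header(FILE *fp, int64_t commitID, int64_t size) {
  uint8_t hdr[FHDR_SIZE] = {0};
  memcpy(hdr, &commitID, sizeof(commitID));           /* serialize fields  */
  memcpy(hdr + 8, &size, sizeof(size));
  uint32_t ck = checksum(hdr, FHDR_SIZE - sizeof(ck));
  memcpy(hdr + FHDR_SIZE - sizeof(ck), &ck, sizeof(ck));

  if (fseek(fp, 0, SEEK_SET) != 0) return -1;         /* header lives at 0 */
  if (fwrite(hdr, 1, FHDR_SIZE, fp) != FHDR_SIZE) return -1;
  return 0;
}

int main(void) {
  FILE *fp = tmpfile();
  if (!fp) return 1;
  int rc = rewrite_header(fp, /*commitID=*/7, /*size=*/4096);
  printf("header rewrite %s\n", rc == 0 ? "ok" : "failed");
  fclose(fp);
  return 0;
}
```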
@ -1444,7 +1495,7 @@ _err:
int32_t tsdbWriteBlockIdx(SDataFWriter *pWriter, SArray *aBlockIdx, uint8_t **ppBuf) {
int32_t code = 0;
SHeadFile *pHeadFile = &pWriter->wSet.fHead;
SHeadFile *pHeadFile = &pWriter->fHead;
uint8_t *pBuf = NULL;
int64_t size;
int64_t n;
@ -1494,7 +1545,7 @@ _err:
int32_t tsdbWriteBlock(SDataFWriter *pWriter, SMapData *mBlock, uint8_t **ppBuf, SBlockIdx *pBlockIdx) {
int32_t code = 0;
SHeadFile *pHeadFile = &pWriter->wSet.fHead;
SHeadFile *pHeadFile = &pWriter->fHead;
SBlockDataHdr hdr = {.delimiter = TSDB_FILE_DLMT, .suid = pBlockIdx->suid, .uid = pBlockIdx->uid};
uint8_t *pBuf = NULL;
int64_t size;
@ -1831,9 +1882,9 @@ int32_t tsdbWriteBlockData(SDataFWriter *pWriter, SBlockData *pBlockData, uint8_
pSubBlock->nRow = pBlockData->nRow;
pSubBlock->cmprAlg = cmprAlg;
if (pBlock->last) {
pSubBlock->offset = pWriter->wSet.fLast.size;
pSubBlock->offset = pWriter->fLast.size;
} else {
pSubBlock->offset = pWriter->wSet.fData.size;
pSubBlock->offset = pWriter->fData.size;
}
// ======================= BLOCK DATA =======================
@ -1881,9 +1932,9 @@ int32_t tsdbWriteBlockData(SDataFWriter *pWriter, SBlockData *pBlockData, uint8_
pSubBlock->szBlock = pSubBlock->szBlockCol + sizeof(TSCKSUM) + nData;
if (pBlock->last) {
pWriter->wSet.fLast.size += pSubBlock->szBlock;
pWriter->fLast.size += pSubBlock->szBlock;
} else {
pWriter->wSet.fData.size += pSubBlock->szBlock;
pWriter->fData.size += pSubBlock->szBlock;
}
// ======================= BLOCK SMA =======================
@ -1896,8 +1947,8 @@ int32_t tsdbWriteBlockData(SDataFWriter *pWriter, SBlockData *pBlockData, uint8_
if (code) goto _err;
if (pSubBlock->nSma > 0) {
pSubBlock->sOffset = pWriter->wSet.fSma.size;
pWriter->wSet.fSma.size += (sizeof(SColumnDataAgg) * pSubBlock->nSma + sizeof(TSCKSUM));
pSubBlock->sOffset = pWriter->fSma.size;
pWriter->fSma.size += (sizeof(SColumnDataAgg) * pSubBlock->nSma + sizeof(TSCKSUM));
}
_exit:
@ -1924,8 +1975,8 @@ int32_t tsdbDFileSetCopy(STsdb *pTsdb, SDFileSet *pSetFrom, SDFileSet *pSetTo) {
char fNameTo[TSDB_FILENAME_LEN];
// head
tsdbDataFileName(pTsdb, pSetFrom, TSDB_HEAD_FILE, fNameFrom);
tsdbDataFileName(pTsdb, pSetTo, TSDB_HEAD_FILE, fNameTo);
tsdbHeadFileName(pTsdb, pSetFrom->diskId, pSetFrom->fid, pSetFrom->pHeadF, fNameFrom);
tsdbHeadFileName(pTsdb, pSetTo->diskId, pSetTo->fid, pSetTo->pHeadF, fNameTo);
pOutFD = taosOpenFile(fNameTo, TD_FILE_WRITE | TD_FILE_CREATE | TD_FILE_TRUNC);
if (pOutFD == NULL) {
@ -1939,7 +1990,7 @@ int32_t tsdbDFileSetCopy(STsdb *pTsdb, SDFileSet *pSetFrom, SDFileSet *pSetTo) {
goto _err;
}
n = taosFSendFile(pOutFD, PInFD, 0, pSetFrom->fHead.size);
n = taosFSendFile(pOutFD, PInFD, 0, pSetFrom->pHeadF->size);
if (n < 0) {
code = TAOS_SYSTEM_ERROR(errno);
goto _err;
@ -1948,8 +1999,8 @@ int32_t tsdbDFileSetCopy(STsdb *pTsdb, SDFileSet *pSetFrom, SDFileSet *pSetTo) {
taosCloseFile(&PInFD);
// data
tsdbDataFileName(pTsdb, pSetFrom, TSDB_DATA_FILE, fNameFrom);
tsdbDataFileName(pTsdb, pSetTo, TSDB_DATA_FILE, fNameTo);
tsdbDataFileName(pTsdb, pSetFrom->diskId, pSetFrom->fid, pSetFrom->pDataF, fNameFrom);
tsdbDataFileName(pTsdb, pSetTo->diskId, pSetTo->fid, pSetTo->pDataF, fNameTo);
pOutFD = taosOpenFile(fNameTo, TD_FILE_WRITE | TD_FILE_CREATE | TD_FILE_TRUNC);
if (pOutFD == NULL) {
@ -1963,7 +2014,7 @@ int32_t tsdbDFileSetCopy(STsdb *pTsdb, SDFileSet *pSetFrom, SDFileSet *pSetTo) {
goto _err;
}
n = taosFSendFile(pOutFD, PInFD, 0, pSetFrom->fData.size);
n = taosFSendFile(pOutFD, PInFD, 0, pSetFrom->pDataF->size);
if (n < 0) {
code = TAOS_SYSTEM_ERROR(errno);
goto _err;
@ -1972,8 +2023,9 @@ int32_t tsdbDFileSetCopy(STsdb *pTsdb, SDFileSet *pSetFrom, SDFileSet *pSetTo) {
taosCloseFile(&PInFD);
// last
tsdbDataFileName(pTsdb, pSetFrom, TSDB_LAST_FILE, fNameFrom);
tsdbDataFileName(pTsdb, pSetTo, TSDB_LAST_FILE, fNameTo);
tsdbLastFileName(pTsdb, pSetFrom->diskId, pSetFrom->fid, pSetFrom->pLastF, fNameFrom);
tsdbLastFileName(pTsdb, pSetTo->diskId, pSetTo->fid, pSetTo->pLastF, fNameTo);
pOutFD = taosOpenFile(fNameTo, TD_FILE_WRITE | TD_FILE_CREATE | TD_FILE_TRUNC);
if (pOutFD == NULL) {
code = TAOS_SYSTEM_ERROR(errno);
@ -1986,7 +2038,7 @@ int32_t tsdbDFileSetCopy(STsdb *pTsdb, SDFileSet *pSetFrom, SDFileSet *pSetTo) {
goto _err;
}
n = taosFSendFile(pOutFD, PInFD, 0, pSetFrom->fLast.size);
n = taosFSendFile(pOutFD, PInFD, 0, pSetFrom->pLastF->size);
if (n < 0) {
code = TAOS_SYSTEM_ERROR(errno);
goto _err;
@ -1995,8 +2047,8 @@ int32_t tsdbDFileSetCopy(STsdb *pTsdb, SDFileSet *pSetFrom, SDFileSet *pSetTo) {
taosCloseFile(&PInFD);
// sma
tsdbDataFileName(pTsdb, pSetFrom, TSDB_SMA_FILE, fNameFrom);
tsdbDataFileName(pTsdb, pSetTo, TSDB_SMA_FILE, fNameTo);
tsdbSmaFileName(pTsdb, pSetFrom->diskId, pSetFrom->fid, pSetFrom->pSmaF, fNameFrom);
tsdbSmaFileName(pTsdb, pSetTo->diskId, pSetTo->fid, pSetTo->pSmaF, fNameTo);
pOutFD = taosOpenFile(fNameTo, TD_FILE_WRITE | TD_FILE_CREATE | TD_FILE_TRUNC);
if (pOutFD == NULL) {
@ -2010,7 +2062,7 @@ int32_t tsdbDFileSetCopy(STsdb *pTsdb, SDFileSet *pSetFrom, SDFileSet *pSetTo) {
goto _err;
}
n = taosFSendFile(pOutFD, PInFD, 0, pSetFrom->fSma.size);
n = taosFSendFile(pOutFD, PInFD, 0, pSetFrom->pSmaF->size);
if (n < 0) {
code = TAOS_SYSTEM_ERROR(errno);
goto _err;

View File

@ -15,90 +15,99 @@
#include "tsdb.h"
static int32_t tsdbDoRetentionImpl(STsdb *pTsdb, int64_t now, int8_t try, int8_t *canDo) {
int32_t code = 0;
STsdbFSState *pState;
if (try) {
pState = pTsdb->pFS->cState;
*canDo = 0;
} else {
pState = pTsdb->pFS->nState;
}
for (int32_t iSet = 0; iSet < taosArrayGetSize(pState->aDFileSet); iSet++) {
SDFileSet *pDFileSet = (SDFileSet *)taosArrayGet(pState->aDFileSet, iSet);
int32_t expLevel = tsdbFidLevel(pDFileSet->fid, &pTsdb->keepCfg, now);
static bool tsdbShouldDoRetention(STsdb *pTsdb, int64_t now) {
for (int32_t iSet = 0; iSet < taosArrayGetSize(pTsdb->fs.aDFileSet); iSet++) {
SDFileSet *pSet = (SDFileSet *)taosArrayGet(pTsdb->fs.aDFileSet, iSet);
int32_t expLevel = tsdbFidLevel(pSet->fid, &pTsdb->keepCfg, now);
SDiskID did;
// check
if (expLevel == pDFileSet->diskId.id) continue;
if (expLevel == pSet->diskId.level) continue;
// delete or move
if (expLevel < 0) {
if (try) {
*canDo = 1;
} else {
tsdbFSStateDeleteDFileSet(pState, pDFileSet->fid);
iSet--;
}
return true;
} else {
if (tfsAllocDisk(pTsdb->pVnode->pTfs, expLevel, &did) < 0) {
return false;
}
if (did.level == pSet->diskId.level) continue;
return true;
}
}
return false;
}
int32_t tsdbDoRetention(STsdb *pTsdb, int64_t now) {
int32_t code = 0;
if (!tsdbShouldDoRetention(pTsdb, now)) {
return code;
}
// do retention
STsdbFS fs;
code = tsdbFSCopy(pTsdb, &fs);
if (code) goto _err;
for (int32_t iSet = 0; iSet < taosArrayGetSize(fs.aDFileSet); iSet++) {
SDFileSet *pSet = (SDFileSet *)taosArrayGet(pTsdb->fs.aDFileSet, iSet);
int32_t expLevel = tsdbFidLevel(pSet->fid, &pTsdb->keepCfg, now);
SDiskID did;
if (expLevel < 0) {
taosMemoryFree(pSet->pHeadF);
taosMemoryFree(pSet->pDataF);
taosMemoryFree(pSet->pLastF);
taosMemoryFree(pSet->pSmaF);
taosArrayRemove(fs.aDFileSet, iSet);
iSet--;
} else {
// alloc
if (tfsAllocDisk(pTsdb->pVnode->pTfs, expLevel, &did) < 0) {
code = terrno;
goto _exit;
}
if (did.level == pDFileSet->diskId.level) continue;
if (did.level == pSet->diskId.level) continue;
if (try) {
*canDo = 1;
} else {
// copy the file to new disk
// copy file to new disk (todo)
SDFileSet fSet = *pSet;
fSet.diskId = did;
SDFileSet nDFileSet = *pDFileSet;
nDFileSet.diskId = did;
code = tsdbDFileSetCopy(pTsdb, pSet, &fSet);
if (code) goto _err;
tfsMkdirRecurAt(pTsdb->pVnode->pTfs, pTsdb->path, did);
code = tsdbDFileSetCopy(pTsdb, pDFileSet, &nDFileSet);
if (code) goto _exit;
code = tsdbFSStateUpsertDFileSet(pState, &nDFileSet);
if (code) goto _exit;
}
code = tsdbFSUpsertFSet(&fs, &fSet);
if (code) goto _err;
}
/* code */
}
_exit:
return code;
}
int32_t tsdbDoRetention(STsdb *pTsdb, int64_t now) {
int32_t code = 0;
int8_t canDo;
// try
tsdbDoRetentionImpl(pTsdb, now, 1, &canDo);
if (!canDo) goto _exit;
// begin
code = tsdbFSBegin(pTsdb->pFS);
// do change fs
code = tsdbFSCommit1(pTsdb, &fs);
if (code) goto _err;
// do retention
code = tsdbDoRetentionImpl(pTsdb, now, 0, NULL);
if (code) goto _err;
taosThreadRwlockWrlock(&pTsdb->rwLock);
// commit
code = tsdbFSCommit(pTsdb->pFS);
if (code) goto _err;
code = tsdbFSCommit2(pTsdb, &fs);
if (code) {
taosThreadRwlockUnlock(&pTsdb->rwLock);
goto _err;
}
taosThreadRwlockUnlock(&pTsdb->rwLock);
tsdbFSDestroy(&fs);
_exit:
return code;
_err:
tsdbError("vgId:%d tsdb do retention failed since %s", TD_VID(pTsdb->pVnode), tstrerror(code));
tsdbFSRollback(pTsdb->pFS);
ASSERT(0);
// tsdbFSRollback(pTsdb->pFS);
return code;
}
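The new retention path first asks tsdbShouldDoRetention whether anything needs to change, then works on a private copy of the file system: expired file sets are dropped from the copy, misplaced ones are copied to the target disk level and upserted, and the copy is committed with the same two-phase tsdbFSCommit1/tsdbFSCommit2 sequence as a normal commit. A simplified sketch of the per-fileset decision (drop, move, or keep) is shown below; expire_level is a hypothetical stand-in for tsdbFidLevel.

```c
#include <stdio.h>

typedef struct { int fid; int level; } FileSet;

/* Hypothetical policy: negative means expired, otherwise the target level. */
static int expire_level(int fid, int nowFid) {
  int age = nowFid - fid;
  if (age > 10) return -1;  /* older than the keep window: delete */
  if (age > 5)  return 1;   /* old enough for the cold tier       */
  return 0;                 /* keep on the hot tier               */
}

int main(void) {
  FileSet fs[] = {{1940, 0}, {1946, 0}, {1950, 0}};
  int     n = 3, nowFid = 1952;

  for (int i = 0; i < n; ++i) {
    int exp = expire_level(fs[i].fid, nowFid);
    if (exp < 0) {
      printf("fid %d: drop from the copied fs\n", fs[i].fid);
    } else if (exp != fs[i].level) {
      printf("fid %d: copy files to level %d, then upsert\n", fs[i].fid, exp);
    } else {
      printf("fid %d: keep\n", fs[i].fid);
    }
  }
  return 0;
}
```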

View File

@ -20,6 +20,7 @@ struct STsdbSnapReader {
STsdb* pTsdb;
int64_t sver;
int64_t ever;
STsdbFS fs;
// for data file
int8_t dataDone;
int32_t fid;
@ -45,7 +46,8 @@ static int32_t tsdbSnapReadData(STsdbSnapReader* pReader, uint8_t** ppData) {
while (true) {
if (pReader->pDataFReader == NULL) {
SDFileSet* pSet = tsdbFSStateGetDFileSet(pTsdb->pFS->cState, pReader->fid, TD_GT);
SDFileSet* pSet =
taosArraySearch(pReader->fs.aDFileSet, &(SDFileSet){.fid = pReader->fid}, tDFileSetCmprFn, TD_GT);
if (pSet == NULL) goto _exit;
@ -159,7 +161,7 @@ _err:
static int32_t tsdbSnapReadDel(STsdbSnapReader* pReader, uint8_t** ppData) {
int32_t code = 0;
STsdb* pTsdb = pReader->pTsdb;
SDelFile* pDelFile = pTsdb->pFS->cState->pDelFile;
SDelFile* pDelFile = pReader->fs.pDelFile;
if (pReader->pDelFReader == NULL) {
if (pDelFile == NULL) {
@ -254,6 +256,24 @@ int32_t tsdbSnapReaderOpen(STsdb* pTsdb, int64_t sver, int64_t ever, STsdbSnapRe
pReader->sver = sver;
pReader->ever = ever;
code = taosThreadRwlockRdlock(&pTsdb->rwLock);
if (code) {
code = TAOS_SYSTEM_ERROR(code);
goto _err;
}
code = tsdbFSRef(pTsdb, &pReader->fs);
if (code) {
taosThreadRwlockUnlock(&pTsdb->rwLock);
goto _err;
}
code = taosThreadRwlockUnlock(&pTsdb->rwLock);
if (code) {
code = TAOS_SYSTEM_ERROR(code);
goto _err;
}
pReader->fid = INT32_MIN;
pReader->aBlockIdx = taosArrayInit(0, sizeof(SBlockIdx));
if (pReader->aBlockIdx == NULL) {
@ -305,6 +325,8 @@ int32_t tsdbSnapReaderClose(STsdbSnapReader** ppReader) {
taosArrayDestroy(pReader->aDelIdx);
taosArrayDestroy(pReader->aDelData);
tsdbFSUnref(pReader->pTsdb, &pReader->fs);
tsdbInfo("vgId:%d vnode snapshot tsdb reader closed", TD_VID(pReader->pTsdb->pVnode));
taosMemoryFree(pReader);
@ -358,6 +380,7 @@ struct STsdbSnapWriter {
STsdb* pTsdb;
int64_t sver;
int64_t ever;
STsdbFS fs;
// config
int32_t minutes;
@ -798,7 +821,7 @@ static int32_t tsdbSnapWriteDataEnd(STsdbSnapWriter* pWriter) {
code = tsdbWriteBlockIdx(pWriter->pDataFWriter, pWriter->aBlockIdxW, NULL);
if (code) goto _err;
code = tsdbFSStateUpsertDFileSet(pTsdb->pFS->nState, tsdbDataFWriterGetWSet(pWriter->pDataFWriter));
code = tsdbFSUpsertFSet(&pWriter->fs, &pWriter->pDataFWriter->wSet);
if (code) goto _err;
code = tsdbDataFWriterClose(&pWriter->pDataFWriter, 1);
@ -843,7 +866,7 @@ static int32_t tsdbSnapWriteData(STsdbSnapWriter* pWriter, uint8_t* pData, uint3
pWriter->fid = fid;
// read
SDFileSet* pSet = tsdbFSStateGetDFileSet(pTsdb->pFS->nState, fid, TD_EQ);
SDFileSet* pSet = taosArraySearch(pWriter->fs.aDFileSet, &(SDFileSet){.fid = fid}, tDFileSetCmprFn, TD_EQ);
if (pSet) {
code = tsdbDataFReaderOpen(&pWriter->pDataFReader, pTsdb, pSet);
if (code) goto _err;
@ -863,22 +886,26 @@ static int32_t tsdbSnapWriteData(STsdbSnapWriter* pWriter, uint8_t* pData, uint3
tBlockDataReset(&pWriter->bDataR);
// write
SDFileSet wSet;
SHeadFile fHead;
SDataFile fData;
SLastFile fLast;
SSmaFile fSma;
SDFileSet wSet = {.pHeadF = &fHead, .pDataF = &fData, .pLastF = &fLast, .pSmaF = &fSma};
if (pSet) {
wSet = (SDFileSet){.diskId = pSet->diskId,
.fid = fid,
.fHead = {.commitID = pWriter->commitID, .offset = 0, .size = 0},
.fData = pSet->fData,
.fLast = {.commitID = pWriter->commitID, .size = 0},
.fSma = pSet->fSma};
wSet.diskId = pSet->diskId;
wSet.fid = fid;
fHead = (SHeadFile){.commitID = pWriter->commitID, .offset = 0, .size = 0};
fData = *pSet->pDataF;
fLast = (SLastFile){.commitID = pWriter->commitID, .size = 0};
fSma = *pSet->pSmaF;
} else {
wSet = (SDFileSet){.diskId = (SDiskID){.level = 0, .id = 0},
.fid = fid,
.fHead = {.commitID = pWriter->commitID, .offset = 0, .size = 0},
.fData = {.commitID = pWriter->commitID, .size = 0},
.fLast = {.commitID = pWriter->commitID, .size = 0},
.fSma = {.commitID = pWriter->commitID, .size = 0}};
wSet.diskId = (SDiskID){.level = 0, .id = 0};
wSet.fid = fid;
fHead = (SHeadFile){.commitID = pWriter->commitID, .offset = 0, .size = 0};
fData = (SDataFile){.commitID = pWriter->commitID, .size = 0};
fLast = (SLastFile){.commitID = pWriter->commitID, .size = 0};
fSma = (SSmaFile){.commitID = pWriter->commitID, .size = 0};
}
code = tsdbDataFWriterOpen(&pWriter->pDataFWriter, pTsdb, &wSet);
@ -907,7 +934,7 @@ static int32_t tsdbSnapWriteDel(STsdbSnapWriter* pWriter, uint8_t* pData, uint32
STsdb* pTsdb = pWriter->pTsdb;
if (pWriter->pDelFWriter == NULL) {
SDelFile* pDelFile = tsdbFSStateGetDelFile(pTsdb->pFS->nState);
SDelFile* pDelFile = pWriter->fs.pDelFile;
// reader
if (pDelFile) {
@ -1017,7 +1044,7 @@ static int32_t tsdbSnapWriteDelEnd(STsdbSnapWriter* pWriter) {
code = tsdbUpdateDelFileHdr(pWriter->pDelFWriter);
if (code) goto _err;
code = tsdbFSStateUpsertDelFile(pTsdb->pFS->nState, &pWriter->pDelFWriter->fDel);
code = tsdbFSUpsertDelFile(&pWriter->fs, &pWriter->pDelFWriter->fDel);
if (code) goto _err;
code = tsdbDelFWriterClose(&pWriter->pDelFWriter, 1);
@ -1051,6 +1078,9 @@ int32_t tsdbSnapWriterOpen(STsdb* pTsdb, int64_t sver, int64_t ever, STsdbSnapWr
pWriter->sver = sver;
pWriter->ever = ever;
code = tsdbFSCopy(pTsdb, &pWriter->fs);
if (code) goto _err;
// config
pWriter->minutes = pTsdb->keepCfg.days;
pWriter->precision = pTsdb->keepCfg.precision;
@ -1096,9 +1126,6 @@ int32_t tsdbSnapWriterOpen(STsdb* pTsdb, int64_t sver, int64_t ever, STsdbSnapWr
goto _err;
}
code = tsdbFSBegin(pTsdb->pFS);
if (code) goto _err;
*ppWriter = pWriter;
return code;
@ -1113,8 +1140,9 @@ int32_t tsdbSnapWriterClose(STsdbSnapWriter** ppWriter, int8_t rollback) {
STsdbSnapWriter* pWriter = *ppWriter;
if (rollback) {
code = tsdbFSRollback(pWriter->pTsdb->pFS);
if (code) goto _err;
ASSERT(0);
// code = tsdbFSRollback(pWriter->pTsdb->pFS);
// if (code) goto _err;
} else {
code = tsdbSnapWriteDataEnd(pWriter);
if (code) goto _err;
@ -1122,7 +1150,10 @@ int32_t tsdbSnapWriterClose(STsdbSnapWriter** ppWriter, int8_t rollback) {
code = tsdbSnapWriteDelEnd(pWriter);
if (code) goto _err;
code = tsdbFSCommit(pWriter->pTsdb->pFS);
code = tsdbFSCommit1(pWriter->pTsdb, &pWriter->fs);
if (code) goto _err;
code = tsdbFSCommit2(pWriter->pTsdb, &pWriter->fs);
if (code) goto _err;
}

View File

@ -316,7 +316,7 @@ int32_t vnodeProcessFetchMsg(SVnode *pVnode, SRpcMsg *pMsg, SQueueInfo *pInfo) {
case TDMT_VND_TABLE_CFG:
return vnodeGetTableCfg(pVnode, pMsg);
case TDMT_VND_CONSUME:
return tqProcessPollReq(pVnode->pTq, pMsg, pInfo->workerId);
return tqProcessPollReq(pVnode->pTq, pMsg);
case TDMT_STREAM_TASK_RUN:
return tqProcessTaskRunReq(pVnode->pTq, pMsg);
case TDMT_STREAM_TASK_DISPATCH:
@ -773,6 +773,7 @@ static int32_t vnodeProcessSubmitReq(SVnode *pVnode, int64_t version, void *pReq
terrno = TSDB_CODE_SUCCESS;
pRsp->code = 0;
pSubmitReq->version = version;
#ifdef TD_DEBUG_PRINT_ROW
vnodeDebugPrintSubmitMsg(pVnode, pReq, __func__);
@ -791,7 +792,7 @@ static int32_t vnodeProcessSubmitReq(SVnode *pVnode, int64_t version, void *pReq
submitRsp.pArray = taosArrayInit(msgIter.numOfBlocks, sizeof(SSubmitBlkRsp));
newTbUids = taosArrayInit(msgIter.numOfBlocks, sizeof(int64_t));
if (!submitRsp.pArray) {
if (!submitRsp.pArray || !newTbUids) {
pRsp->code = TSDB_CODE_OUT_OF_MEMORY;
goto _exit;
}

View File

@ -460,8 +460,6 @@ typedef struct SCtgOperation {
#define CTG_FLAG_MAKE_STB(_isStb) (((_isStb) == 1) ? CTG_FLAG_STB : ((_isStb) == 0 ? CTG_FLAG_NOT_STB : CTG_FLAG_UNKNOWN_STB))
#define CTG_FLAG_MATCH_STB(_flag, tbType) (CTG_FLAG_IS_UNKNOWN_STB(_flag) || (CTG_FLAG_IS_STB(_flag) && (tbType) == TSDB_SUPER_TABLE) || (CTG_FLAG_IS_NOT_STB(_flag) && (tbType) != TSDB_SUPER_TABLE))
#define CTG_IS_SYS_DBNAME(_dbname) (((*(_dbname) == 'i') && (0 == strcmp(_dbname, TSDB_INFORMATION_SCHEMA_DB))) || ((*(_dbname) == 'p') && (0 == strcmp(_dbname, TSDB_PERFORMANCE_SCHEMA_DB))))
#define CTG_META_SIZE(pMeta) (sizeof(STableMeta) + ((pMeta)->tableInfo.numOfTags + (pMeta)->tableInfo.numOfColumns) * sizeof(SSchema))
#define CTG_TABLE_NOT_EXIST(code) (code == CTG_ERR_CODE_TABLE_NOT_EXIST)

View File

@ -865,7 +865,7 @@ int32_t catalogChkTbMetaVersion(SCatalog* pCtg, SRequestConnInfo *pConn, SArray*
tNameFromString(&name, pTb->tbFName, T_NAME_ACCT | T_NAME_DB | T_NAME_TABLE);
if (CTG_IS_SYS_DBNAME(name.dbname)) {
if (IS_SYS_DBNAME(name.dbname)) {
continue;
}
@ -936,7 +936,7 @@ int32_t catalogGetTableDistVgInfo(SCatalog* pCtg, SRequestConnInfo *pConn, const
CTG_API_LEAVE(TSDB_CODE_CTG_INVALID_INPUT);
}
if (CTG_IS_SYS_DBNAME(pTableName->dbname)) {
if (IS_SYS_DBNAME(pTableName->dbname)) {
ctgError("no valid vgInfo for db, dbname:%s", pTableName->dbname);
CTG_API_LEAVE(TSDB_CODE_CTG_INVALID_INPUT);
}
@ -947,7 +947,7 @@ int32_t catalogGetTableDistVgInfo(SCatalog* pCtg, SRequestConnInfo *pConn, const
int32_t catalogGetTableHashVgroup(SCatalog *pCtg, SRequestConnInfo *pConn, const SName *pTableName, SVgroupInfo *pVgroup) {
CTG_API_ENTER();
if (CTG_IS_SYS_DBNAME(pTableName->dbname)) {
if (IS_SYS_DBNAME(pTableName->dbname)) {
ctgError("no valid vgInfo for db, dbname:%s", pTableName->dbname);
CTG_API_LEAVE(TSDB_CODE_CTG_INVALID_INPUT);
}

View File

@ -132,7 +132,7 @@ void ctgReleaseDBCache(SCatalog *pCtg, SCtgDBCache *dbCache) {
int32_t ctgAcquireDBCacheImpl(SCatalog* pCtg, const char *dbFName, SCtgDBCache **pCache, bool acquire) {
char *p = strchr(dbFName, '.');
if (p && CTG_IS_SYS_DBNAME(p + 1)) {
if (p && IS_SYS_DBNAME(p + 1)) {
dbFName = p + 1;
}
@ -694,7 +694,7 @@ int32_t ctgDropDbCacheEnqueue(SCatalog* pCtg, const char *dbFName, int64_t dbId)
}
char *p = strchr(dbFName, '.');
if (p && CTG_IS_SYS_DBNAME(p + 1)) {
if (p && IS_SYS_DBNAME(p + 1)) {
dbFName = p + 1;
}
@ -727,7 +727,7 @@ int32_t ctgDropDbVgroupEnqueue(SCatalog* pCtg, const char *dbFName, bool syncOp)
}
char *p = strchr(dbFName, '.');
if (p && CTG_IS_SYS_DBNAME(p + 1)) {
if (p && IS_SYS_DBNAME(p + 1)) {
dbFName = p + 1;
}
@ -823,7 +823,7 @@ int32_t ctgUpdateVgroupEnqueue(SCatalog* pCtg, const char *dbFName, int64_t dbId
}
char *p = strchr(dbFName, '.');
if (p && CTG_IS_SYS_DBNAME(p + 1)) {
if (p && IS_SYS_DBNAME(p + 1)) {
dbFName = p + 1;
}
@ -859,7 +859,7 @@ int32_t ctgUpdateTbMetaEnqueue(SCatalog* pCtg, STableMetaOutput *output, bool sy
}
char *p = strchr(output->dbFName, '.');
if (p && CTG_IS_SYS_DBNAME(p + 1)) {
if (p && IS_SYS_DBNAME(p + 1)) {
memmove(output->dbFName, p + 1, strlen(p + 1));
}
@ -2123,7 +2123,7 @@ int32_t ctgStartUpdateThread() {
int32_t ctgGetTbMetaFromCache(SCatalog* pCtg, SRequestConnInfo *pConn, SCtgTbMetaCtx* ctx, STableMeta** pTableMeta) {
if (CTG_IS_SYS_DBNAME(ctx->pName->dbname)) {
if (IS_SYS_DBNAME(ctx->pName->dbname)) {
CTG_FLAG_SET_SYS_DB(ctx->flag);
}
@ -2177,7 +2177,7 @@ _return:
}
int32_t ctgGetTbHashVgroupFromCache(SCatalog *pCtg, const SName *pTableName, SVgroupInfo **pVgroup) {
if (CTG_IS_SYS_DBNAME(pTableName->dbname)) {
if (IS_SYS_DBNAME(pTableName->dbname)) {
ctgError("no valid vgInfo for db, dbname:%s", pTableName->dbname);
CTG_ERR_RET(TSDB_CODE_CTG_INVALID_INPUT);
}

View File

@ -375,6 +375,8 @@ int32_t ctgGetQnodeListFromMnode(SCatalog* pCtg, SRequestConnInfo *pConn, SArray
CTG_ERR_RET(ctgProcessRspMsg(out, reqType, rpcRsp.pCont, rpcRsp.contLen, rpcRsp.code, NULL));
rpcFreeCont(rpcRsp.pCont);
return TSDB_CODE_SUCCESS;
}
@ -408,6 +410,8 @@ int32_t ctgGetDnodeListFromMnode(SCatalog* pCtg, SRequestConnInfo *pConn, SArray
CTG_ERR_RET(ctgProcessRspMsg(out, reqType, rpcRsp.pCont, rpcRsp.contLen, rpcRsp.code, NULL));
rpcFreeCont(rpcRsp.pCont);
return TSDB_CODE_SUCCESS;
}
@ -447,6 +451,8 @@ int32_t ctgGetDBVgInfoFromMnode(SCatalog* pCtg, SRequestConnInfo *pConn, SBuildU
CTG_ERR_RET(ctgProcessRspMsg(out, reqType, rpcRsp.pCont, rpcRsp.contLen, rpcRsp.code, input->db));
rpcFreeCont(rpcRsp.pCont);
return TSDB_CODE_SUCCESS;
}
@ -485,6 +491,8 @@ int32_t ctgGetDBCfgFromMnode(SCatalog* pCtg, SRequestConnInfo *pConn, const char
CTG_ERR_RET(ctgProcessRspMsg(out, reqType, rpcRsp.pCont, rpcRsp.contLen, rpcRsp.code, (char*)dbFName));
rpcFreeCont(rpcRsp.pCont);
return TSDB_CODE_SUCCESS;
}
@ -522,6 +530,8 @@ int32_t ctgGetIndexInfoFromMnode(SCatalog* pCtg, SRequestConnInfo *pConn, const
rpcSendRecv(pConn->pTrans, &pConn->mgmtEps, &rpcMsg, &rpcRsp);
CTG_ERR_RET(ctgProcessRspMsg(out, reqType, rpcRsp.pCont, rpcRsp.contLen, rpcRsp.code, (char*)indexName));
rpcFreeCont(rpcRsp.pCont);
return TSDB_CODE_SUCCESS;
}
@ -563,6 +573,8 @@ int32_t ctgGetTbIndexFromMnode(SCatalog* pCtg, SRequestConnInfo *pConn, SName *n
rpcSendRecv(pConn->pTrans, &pConn->mgmtEps, &rpcMsg, &rpcRsp);
CTG_ERR_RET(ctgProcessRspMsg(out, reqType, rpcRsp.pCont, rpcRsp.contLen, rpcRsp.code, (char*)tbFName));
rpcFreeCont(rpcRsp.pCont);
return TSDB_CODE_SUCCESS;
}
@ -602,6 +614,8 @@ int32_t ctgGetUdfInfoFromMnode(SCatalog* pCtg, SRequestConnInfo *pConn, const ch
CTG_ERR_RET(ctgProcessRspMsg(out, reqType, rpcRsp.pCont, rpcRsp.contLen, rpcRsp.code, (char*)funcName));
rpcFreeCont(rpcRsp.pCont);
return TSDB_CODE_SUCCESS;
}
@ -639,6 +653,8 @@ int32_t ctgGetUserDbAuthFromMnode(SCatalog* pCtg, SRequestConnInfo *pConn, const
rpcSendRecv(pConn->pTrans, &pConn->mgmtEps, &rpcMsg, &rpcRsp);
CTG_ERR_RET(ctgProcessRspMsg(out, reqType, rpcRsp.pCont, rpcRsp.contLen, rpcRsp.code, (char*)user));
rpcFreeCont(rpcRsp.pCont);
return TSDB_CODE_SUCCESS;
}
@ -683,6 +699,8 @@ int32_t ctgGetTbMetaFromMnodeImpl(SCatalog* pCtg, SRequestConnInfo *pConn, char
CTG_ERR_RET(ctgProcessRspMsg(out, reqType, rpcRsp.pCont, rpcRsp.contLen, rpcRsp.code, tbFName));
rpcFreeCont(rpcRsp.pCont);
return TSDB_CODE_SUCCESS;
}
@ -740,6 +758,8 @@ int32_t ctgGetTbMetaFromVnode(SCatalog* pCtg, SRequestConnInfo *pConn, const SNa
CTG_ERR_RET(ctgProcessRspMsg(out, reqType, rpcRsp.pCont, rpcRsp.contLen, rpcRsp.code, tbFName));
rpcFreeCont(rpcRsp.pCont);
return TSDB_CODE_SUCCESS;
}
@ -784,6 +804,8 @@ int32_t ctgGetTableCfgFromVnode(SCatalog* pCtg, SRequestConnInfo *pConn, const S
rpcSendRecv(pConn->pTrans, &vgroupInfo->epSet, &rpcMsg, &rpcRsp);
CTG_ERR_RET(ctgProcessRspMsg(out, reqType, rpcRsp.pCont, rpcRsp.contLen, rpcRsp.code, (char*)tbFName));
rpcFreeCont(rpcRsp.pCont);
return TSDB_CODE_SUCCESS;
}
@ -824,6 +846,8 @@ int32_t ctgGetTableCfgFromMnode(SCatalog* pCtg, SRequestConnInfo *pConn, const S
rpcSendRecv(pConn->pTrans, &pConn->mgmtEps, &rpcMsg, &rpcRsp);
CTG_ERR_RET(ctgProcessRspMsg(out, reqType, rpcRsp.pCont, rpcRsp.contLen, rpcRsp.code, (char*)tbFName));
rpcFreeCont(rpcRsp.pCont);
return TSDB_CODE_SUCCESS;
}
@ -858,6 +882,8 @@ int32_t ctgGetSvrVerFromMnode(SCatalog* pCtg, SRequestConnInfo *pConn, char **ou
rpcSendRecv(pConn->pTrans, &pConn->mgmtEps, &rpcMsg, &rpcRsp);
CTG_ERR_RET(ctgProcessRspMsg(out, reqType, rpcRsp.pCont, rpcRsp.contLen, rpcRsp.code, NULL));
rpcFreeCont(rpcRsp.pCont);
return TSDB_CODE_SUCCESS;
}
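Each of the ctgGet*FromMnode/Vnode hunks above adds an rpcFreeCont(rpcRsp.pCont) call after the response has been processed, so the RPC payload is released before returning. A simplified mock of that send/process/free pattern, using stand-in types rather than the real TDengine rpc API:

```c
#include <stdlib.h>

typedef struct { void* pCont; int contLen; int code; } SRpcResp; /* stand-in */

static void sendRecv(SRpcResp* pRsp) {        /* fake a response payload */
  pRsp->contLen = 16;
  pRsp->pCont = calloc(1, (size_t)pRsp->contLen);
  pRsp->code = 0;
}

static int processRsp(const SRpcResp* pRsp) { /* consume the payload */
  return pRsp->code;
}

int getFromMnode(void) {
  SRpcResp rsp = {0};
  sendRecv(&rsp);
  int code = processRsp(&rsp);
  free(rsp.pCont);                            /* always release after use */
  return code;
}

int main(void) { return getFromMnode(); }
```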

View File

@ -401,8 +401,6 @@ int32_t qExplainResNodeToRowsImpl(SExplainResNode *pResNode, SExplainCtx *ctx, i
QRY_ERR_RET(qExplainBufAppendExecInfo(pResNode->pExecInfo, tbuf, &tlen));
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
}
EXPLAIN_ROW_APPEND(EXPLAIN_COLUMNS_FORMAT, pTagScanNode->pScanCols->length);
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
if (pTagScanNode->pScanPseudoCols) {
EXPLAIN_ROW_APPEND(EXPLAIN_PSEUDO_COLUMNS_FORMAT, pTagScanNode->pScanPseudoCols->length);
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);

View File

@ -164,6 +164,7 @@ typedef struct {
char* dbname;
int32_t tversion;
SSchemaWrapper* sw;
SSchemaWrapper* qsw;
} SSchemaInfo;
typedef struct SExecTaskInfo {
@ -868,7 +869,7 @@ SOperatorInfo* createDataBlockInfoScanOperator(void* dataReader, SReadHandle* re
SExecTaskInfo* pTaskInfo);
SOperatorInfo* createStreamScanOperatorInfo(SReadHandle* pHandle, STableScanPhysiNode* pTableScanNode, SNode* pTagCond,
SExecTaskInfo* pTaskInfo, STimeWindowAggSupp* pTwSup);
STimeWindowAggSupp* pTwAggSup, SExecTaskInfo* pTaskInfo);
SOperatorInfo* createFillOperatorInfo(SOperatorInfo* downstream, SFillPhysiNode* pPhyFillNode, SExecTaskInfo* pTaskInfo);

View File

@ -120,7 +120,8 @@ int32_t qSetMultiStreamInput(qTaskInfo_t tinfo, const void* pBlocks, size_t numO
return code;
}
qTaskInfo_t qCreateQueueExecTaskInfo(void* msg, SReadHandle* readers, int32_t* numOfCols) {
qTaskInfo_t qCreateQueueExecTaskInfo(void* msg, SReadHandle* readers, int32_t* numOfCols,
SSchemaWrapper** pSchemaWrapper) {
if (msg == NULL) {
// TODO create raw scan
return NULL;
@ -146,7 +147,7 @@ qTaskInfo_t qCreateQueueExecTaskInfo(void* msg, SReadHandle* readers, int32_t* n
SDataBlockDescNode* pDescNode = pPlan->pNode->pOutputDataBlockDesc;
*numOfCols = 0;
SNode* pNode;
SNode* pNode;
FOREACH(pNode, pDescNode->pSlots) {
SSlotDescNode* pSlotDesc = (SSlotDescNode*)pNode;
if (pSlotDesc->output) {
@ -154,6 +155,7 @@ qTaskInfo_t qCreateQueueExecTaskInfo(void* msg, SReadHandle* readers, int32_t* n
}
}
*pSchemaWrapper = tCloneSSchemaWrapper(((SExecTaskInfo*)pTaskInfo)->schemaInfo.qsw);
return pTaskInfo;
}
@ -248,9 +250,11 @@ int32_t qUpdateQualifiedTableId(qTaskInfo_t tinfo, const SArray* tableIdList, bo
// add to qTaskInfo
// todo refactor STableList
for(int32_t i = 0; i < taosArrayGetSize(qa); ++i) {
for (int32_t i = 0; i < taosArrayGetSize(qa); ++i) {
uint64_t* uid = taosArrayGet(qa, i);
qDebug("table %ld added to task info", *uid);
STableKeyInfo keyInfo = {.uid = *uid, .groupId = 0};
taosArrayPush(pTaskInfo->tableqinfoList.pTableList, &keyInfo);
}

View File

@ -199,6 +199,10 @@ int32_t qAsyncKillTask(qTaskInfo_t qinfo) {
void qDestroyTask(qTaskInfo_t qTaskHandle) {
SExecTaskInfo* pTaskInfo = (SExecTaskInfo*)qTaskHandle;
if (pTaskInfo == NULL) {
return;
}
qDebug("%s execTask completed, numOfRows:%" PRId64, GET_TASKID(pTaskInfo), pTaskInfo->pRoot->resultInfo.totalRows);
queryCostStatis(pTaskInfo); // print the query cost summary
@ -311,6 +315,9 @@ int32_t qStreamPrepareScan(qTaskInfo_t tinfo, const STqOffsetVal* pOffset) {
if (type == QUERY_NODE_PHYSICAL_PLAN_STREAM_SCAN) {
SStreamScanInfo* pInfo = pOperator->info;
if (pOffset->type == TMQ_OFFSET__LOG) {
STableScanInfo* pTSInfo = pInfo->pTableScanOp->info;
tsdbReaderClose(pTSInfo->dataReader);
pTSInfo->dataReader = NULL;
#if 0
if (tOffsetEqual(pOffset, &pTaskInfo->streamInfo.lastStatus) &&
pInfo->tqReader->pWalReader->curVersion != pOffset->version) {
@ -337,11 +344,20 @@ int32_t qStreamPrepareScan(qTaskInfo_t tinfo, const STqOffsetVal* pOffset) {
return -1;
}
}
/*if (pTaskInfo->streamInfo.lastStatus.type != TMQ_OFFSET__SNAPSHOT_DATA ||*/
/*pTaskInfo->streamInfo.lastStatus.uid != uid || pTaskInfo->streamInfo.lastStatus.ts != ts) {*/
STableScanInfo* pTableScanInfo = pInfo->pTableScanOp->info;
int32_t tableSz = taosArrayGetSize(pTaskInfo->tableqinfoList.pTableList);
bool found = false;
#ifndef NDEBUG
qDebug("switch to next table %ld (cursor %d), %ld rows returned", uid, pTableScanInfo->currentTable,
pInfo->pTableScanOp->resultInfo.totalRows);
pInfo->pTableScanOp->resultInfo.totalRows = 0;
#endif
bool found = false;
for (int32_t i = 0; i < tableSz; i++) {
STableKeyInfo* pTableInfo = taosArrayGet(pTaskInfo->tableqinfoList.pTableList, i);
if (pTableInfo->uid == uid) {
@ -354,6 +370,14 @@ int32_t qStreamPrepareScan(qTaskInfo_t tinfo, const STqOffsetVal* pOffset) {
// TODO after dropping a table, the table may not be found
ASSERT(found);
if (pTableScanInfo->dataReader == NULL) {
if (tsdbReaderOpen(pTableScanInfo->readHandle.vnode, &pTableScanInfo->cond,
pTaskInfo->tableqinfoList.pTableList, &pTableScanInfo->dataReader, NULL) < 0 ||
pTableScanInfo->dataReader == NULL) {
ASSERT(0);
}
}
tsdbSetTableId(pTableScanInfo->dataReader, uid);
int64_t oldSkey = pTableScanInfo->cond.twindows.skey;
pTableScanInfo->cond.twindows.skey = ts + 1;

View File

@ -376,9 +376,7 @@ void initExecTimeWindowInfo(SColumnInfoData* pColData, STimeWindow* pQueryWindow
colDataAppendInt64(pColData, 4, &pQueryWindow->ekey);
}
void cleanupExecTimeWindowInfo(SColumnInfoData* pColData) {
colDataDestroy(pColData);
}
void cleanupExecTimeWindowInfo(SColumnInfoData* pColData) { colDataDestroy(pColData); }
void doApplyFunctions(SExecTaskInfo* taskInfo, SqlFunctionCtx* pCtx, STimeWindow* pWin,
SColumnInfoData* pTimeWindowData, int32_t offset, int32_t forwardStep, TSKEY* tsCol,
@ -524,8 +522,8 @@ static int32_t doSetInputDataBlock(SOperatorInfo* pOperator, SqlFunctionCtx* pCt
// NOTE: the last parameter is the primary timestamp column
// todo: refactor this
if (fmIsTimelineFunc(pCtx[i].functionId) && (j == pOneExpr->base.numOfParams - 1)) {
pInput->pPTS = pInput->pData[j]; // in case of merge function, this is not always the ts column data.
// ASSERT(pInput->pPTS->info.type == TSDB_DATA_TYPE_TIMESTAMP);
pInput->pPTS = pInput->pData[j]; // in case of merge function, this is not always the ts column data.
// ASSERT(pInput->pPTS->info.type == TSDB_DATA_TYPE_TIMESTAMP);
}
ASSERT(pInput->pData[j] != NULL);
} else if (pFuncParam->type == FUNC_PARAM_TYPE_VALUE) {
@ -633,7 +631,7 @@ int32_t projectApplyFunctions(SExprInfo* pExpr, SSDataBlock* pResult, SSDataBloc
ASSERT(pResult->info.capacity > 0);
colDataMergeCol(pResColData, startOffset, &pResult->info.capacity, &idata, dest.numOfRows);
colDataDestroy(&idata);
numOfRows = dest.numOfRows;
taosArrayDestroy(pBlockList);
} else if (pExpr[k].pExpr->nodeType == QUERY_NODE_FUNCTION) {
@ -835,7 +833,7 @@ void setTaskKilled(SExecTaskInfo* pTaskInfo) { pTaskInfo->code = TSDB_CODE_TSC_Q
/////////////////////////////////////////////////////////////////////////////////////////////
STimeWindow getAlignQueryTimeWindow(SInterval* pInterval, int32_t precision, int64_t key) {
STimeWindow win = {0};
STimeWindow win = {0};
win.skey = taosTimeTruncate(key, pInterval, precision);
/*
@ -1336,12 +1334,10 @@ void setResultRowInitCtx(SResultRow* pResult, SqlFunctionCtx* pCtx, int32_t numO
static void extractQualifiedTupleByFilterResult(SSDataBlock* pBlock, const int8_t* rowRes, bool keep);
void doFilter(const SNode* pFilterNode, SSDataBlock* pBlock) {
if (pFilterNode == NULL) {
return;
}
if (pBlock->info.rows == 0) {
if (pFilterNode == NULL || pBlock->info.rows == 0) {
return;
}
SFilterInfo* filter = NULL;
// todo move to the initialization function
@ -1358,8 +1354,6 @@ void doFilter(const SNode* pFilterNode, SSDataBlock* pBlock) {
filterFreeInfo(filter);
extractQualifiedTupleByFilterResult(pBlock, rowRes, keep);
blockDataUpdateTsWindow(pBlock, 0);
taosMemoryFree(rowRes);
}
@ -1647,11 +1641,6 @@ static int32_t compressQueryColData(SColumnInfoData* pColRes, int32_t numOfRows,
colSize + COMP_OVERFLOW_BYTES, compressed, NULL, 0);
}
int32_t doFillTimeIntervalGapsInResults(struct SFillInfo* pFillInfo, SSDataBlock* pBlock, int32_t capacity) {
int32_t numOfRows = (int32_t)taosFillResultDataBlock(pFillInfo, pBlock, capacity - pBlock->info.rows);
return pBlock->info.rows;
}
void queryCostStatis(SExecTaskInfo* pTaskInfo) {
STaskCostInfo* pSummary = &pTaskInfo->cost;
@ -2383,7 +2372,7 @@ static SSDataBlock* doLoadRemoteData(SOperatorInfo* pOperator) {
return NULL;
}
while(1) {
while (1) {
SSDataBlock* pBlock = doLoadRemoteDataImpl(pOperator);
if (pBlock == NULL) {
return NULL;
@ -2879,7 +2868,7 @@ int32_t getTableScanInfo(SOperatorInfo* pOperator, int32_t* order, int32_t* scan
*order = TSDB_ORDER_ASC;
*scanFlag = MAIN_SCAN;
return TSDB_CODE_SUCCESS;
} else if (type == QUERY_NODE_PHYSICAL_PLAN_TABLE_SCAN) {
} else if (type == QUERY_NODE_PHYSICAL_PLAN_TABLE_SCAN || type == QUERY_NODE_PHYSICAL_PLAN_TABLE_MERGE_SCAN) {
STableScanInfo* pTableScanInfo = pOperator->info;
*order = pTableScanInfo->cond.order;
*scanFlag = pTableScanInfo->scanFlag;
@ -3419,6 +3408,7 @@ static SSDataBlock* doFillImpl(SOperatorInfo* pOperator) {
doHandleRemainBlockFromNewGroup(pInfo, pResultInfo, pTaskInfo);
if (pResBlock->info.rows > pResultInfo->threshold || pResBlock->info.rows > 0) {
pResBlock->info.groupId = pInfo->curGroupId;
return pResBlock;
}
@ -3436,13 +3426,13 @@ static SSDataBlock* doFillImpl(SOperatorInfo* pOperator) {
blockDataUpdateTsWindow(pBlock, pInfo->primaryTsCol);
if (pInfo->curGroupId == 0 || pInfo->curGroupId == pBlock->info.groupId) {
pInfo->curGroupId = pBlock->info.groupId; // the first data block
pInfo->curGroupId = pBlock->info.groupId; // the first data block
pInfo->totalInputRows += pBlock->info.rows;
taosFillSetStartInfo(pInfo->pFillInfo, pBlock->info.rows, pBlock->info.window.ekey);
taosFillSetInputDataBlock(pInfo->pFillInfo, pBlock);
} else if (pInfo->curGroupId != pBlock->info.groupId) { // the new group data block
} else if (pInfo->curGroupId != pBlock->info.groupId) { // the new group data block
pInfo->existNewGroupBlock = pBlock;
// Fill the previous group's data block before handling the data block of the new group.
@ -3461,17 +3451,20 @@ static SSDataBlock* doFillImpl(SOperatorInfo* pOperator) {
// 1. The result of the current group has not reached the output threshold, continue
// 2. If multiple group results are not allowed in one SSDataBlock, return immediately
if (pResBlock->info.rows > pResultInfo->threshold || pBlock == NULL || pInfo->existNewGroupBlock != NULL) {
pResBlock->info.groupId = pInfo->curGroupId;
return pResBlock;
}
doHandleRemainBlockFromNewGroup(pInfo, pResultInfo, pTaskInfo);
if (pResBlock->info.rows >= pOperator->resultInfo.threshold || pBlock == NULL) {
pResBlock->info.groupId = pInfo->curGroupId;
return pResBlock;
}
} else if (pInfo->existNewGroupBlock) { // try next group
assert(pBlock != NULL);
doHandleRemainBlockForNewGroupImpl(pInfo, pResultInfo, pTaskInfo);
if (pResBlock->info.rows > pResultInfo->threshold) {
pResBlock->info.groupId = pInfo->curGroupId;
return pResBlock;
}
} else {
@ -3491,23 +3484,19 @@ static SSDataBlock* doFill(SOperatorInfo* pOperator) {
SSDataBlock* fillResult = NULL;
while (true) {
fillResult = doFillImpl(pOperator);
if (fillResult != NULL) {
doFilter(pInfo->pCondition, fillResult);
}
if (fillResult == NULL) {
doSetOperatorCompleted(pOperator);
break;
}
doFilter(pInfo->pCondition, fillResult);
if (fillResult->info.rows > 0) {
break;
}
}
if (fillResult != NULL) {
size_t rows = fillResult->info.rows;
pOperator->resultInfo.totalRows += rows;
pOperator->resultInfo.totalRows += fillResult->info.rows;
}
return fillResult;
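The rewritten doFill loop above keeps producing blocks until the upstream is exhausted or a block still has rows after filtering. A toy sketch of that control flow, with produceBlock/applyFilter and the Block type as stand-ins for the executor API:

```c
#include <stdio.h>
#include <stdlib.h>

typedef struct { int rows; } Block;            /* stand-in result block */

static Block* produceBlock(int* budget) {
  if (*budget <= 0) return NULL;               /* upstream exhausted */
  Block* b = malloc(sizeof(Block));
  b->rows = ((*budget)--) % 2;                 /* every other block ends up empty */
  return b;
}

static void applyFilter(Block* b) { (void)b; } /* may drop rows; no-op here */

static Block* doFillLike(int* budget) {
  Block* b = NULL;
  while (1) {
    b = produceBlock(budget);
    if (b == NULL) break;                      /* done: operator completed */
    applyFilter(b);
    if (b->rows > 0) break;                    /* keep the first non-empty block */
    free(b);                                   /* empty after filtering, try again */
  }
  return b;
}

int main(void) {
  int budget = 4;
  Block* b = doFillLike(&budget);
  printf("rows=%d\n", b ? b->rows : -1);
  free(b);
  return 0;
}
```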
@ -3516,7 +3505,7 @@ static SSDataBlock* doFill(SOperatorInfo* pOperator) {
static void destroyExprInfo(SExprInfo* pExpr, int32_t numOfExprs) {
for (int32_t i = 0; i < numOfExprs; ++i) {
SExprInfo* pExprInfo = &pExpr[i];
for(int32_t j = 0; j < pExprInfo->base.numOfParams; ++j) {
for (int32_t j = 0; j < pExprInfo->base.numOfParams; ++j) {
if (pExprInfo->base.pParam[j].type == FUNC_PARAM_TYPE_COLUMN) {
taosMemoryFreeClear(pExprInfo->base.pParam[j].pCol);
}
@ -3609,7 +3598,7 @@ int32_t initAggInfo(SExprSupp* pSup, SAggSupporter* pAggSup, SExprInfo* pExprInf
return TSDB_CODE_SUCCESS;
}
void initResultSizeInfo(SResultInfo * pResultInfo, int32_t numOfRows) {
void initResultSizeInfo(SResultInfo* pResultInfo, int32_t numOfRows) {
ASSERT(numOfRows != 0);
pResultInfo->capacity = numOfRows;
pResultInfo->threshold = numOfRows * 0.75;
@ -3729,7 +3718,6 @@ void destroyBasicOperatorInfo(void* param, int32_t numOfOutput) {
taosMemoryFreeClear(param);
}
static void freeItem(void* pItem) {
void** p = pItem;
if (*p != NULL) {
@ -3963,6 +3951,8 @@ static SSDataBlock* doApplyIndefinitFunction(SOperatorInfo* pOperator) {
size_t rows = pInfo->pRes->info.rows;
if (rows > 0 || pOperator->status == OP_EXEC_DONE) {
break;
} else {
blockDataCleanup(pInfo->pRes);
}
}
@ -4054,8 +4044,8 @@ static int32_t initFillInfo(SFillOperatorInfo* pInfo, SExprInfo* pExpr, int32_t
w = getFirstQualifiedTimeWindow(win.skey, &w, pInterval, TSDB_ORDER_ASC);
int32_t order = TSDB_ORDER_ASC;
pInfo->pFillInfo = taosCreateFillInfo(order, w.skey, 0, capacity, numOfCols, pInterval,
fillType, pColInfo, pInfo->primaryTsCol, id);
pInfo->pFillInfo =
taosCreateFillInfo(order, w.skey, 0, capacity, numOfCols, pInterval, fillType, pColInfo, pInfo->primaryTsCol, id);
pInfo->win = win;
pInfo->p = taosMemoryCalloc(numOfCols, POINTER_BYTES);
@ -4069,7 +4059,8 @@ static int32_t initFillInfo(SFillOperatorInfo* pInfo, SExprInfo* pExpr, int32_t
}
}
SOperatorInfo* createFillOperatorInfo(SOperatorInfo* downstream, SFillPhysiNode* pPhyFillNode, SExecTaskInfo* pTaskInfo) {
SOperatorInfo* createFillOperatorInfo(SOperatorInfo* downstream, SFillPhysiNode* pPhyFillNode,
SExecTaskInfo* pTaskInfo) {
SFillOperatorInfo* pInfo = taosMemoryCalloc(1, sizeof(SFillOperatorInfo));
SOperatorInfo* pOperator = taosMemoryCalloc(1, sizeof(SOperatorInfo));
if (pInfo == NULL || pOperator == NULL) {
@ -4145,35 +4136,62 @@ static STsdbReader* doCreateDataReader(STableScanPhysiNode* pTableScanNode, SRea
static SArray* extractColumnInfo(SNodeList* pNodeList);
int32_t extractTableSchemaInfo(SReadHandle* pHandle, uint64_t uid, SExecTaskInfo* pTaskInfo) {
SSchemaWrapper* extractQueriedColumnSchema(SScanPhysiNode* pScanNode);
int32_t extractTableSchemaInfo(SReadHandle* pHandle, SScanPhysiNode* pScanNode, SExecTaskInfo* pTaskInfo) {
SMetaReader mr = {0};
metaReaderInit(&mr, pHandle->meta, 0);
int32_t code = metaGetTableEntryByUid(&mr, uid);
int32_t code = metaGetTableEntryByUid(&mr, pScanNode->uid);
if (code != TSDB_CODE_SUCCESS) {
qError("failed to get the table meta, uid:0x%" PRIx64 ", suid:0x%" PRIx64 ", %s", pScanNode->uid, pScanNode->suid,
GET_TASKID(pTaskInfo));
metaReaderClear(&mr);
return terrno;
}
pTaskInfo->schemaInfo.tablename = strdup(mr.me.name);
SSchemaInfo* pSchemaInfo = &pTaskInfo->schemaInfo;
pSchemaInfo->tablename = strdup(mr.me.name);
if (mr.me.type == TSDB_SUPER_TABLE) {
pTaskInfo->schemaInfo.sw = tCloneSSchemaWrapper(&mr.me.stbEntry.schemaRow);
pTaskInfo->schemaInfo.tversion = mr.me.stbEntry.schemaTag.version;
pSchemaInfo->sw = tCloneSSchemaWrapper(&mr.me.stbEntry.schemaRow);
pSchemaInfo->tversion = mr.me.stbEntry.schemaTag.version;
} else if (mr.me.type == TSDB_CHILD_TABLE) {
tDecoderClear(&mr.coder);
tb_uid_t suid = mr.me.ctbEntry.suid;
metaGetTableEntryByUid(&mr, suid);
pTaskInfo->schemaInfo.sw = tCloneSSchemaWrapper(&mr.me.stbEntry.schemaRow);
pTaskInfo->schemaInfo.tversion = mr.me.stbEntry.schemaTag.version;
pSchemaInfo->sw = tCloneSSchemaWrapper(&mr.me.stbEntry.schemaRow);
pSchemaInfo->tversion = mr.me.stbEntry.schemaTag.version;
} else {
pTaskInfo->schemaInfo.sw = tCloneSSchemaWrapper(&mr.me.ntbEntry.schemaRow);
pSchemaInfo->sw = tCloneSSchemaWrapper(&mr.me.ntbEntry.schemaRow);
}
metaReaderClear(&mr);
pSchemaInfo->qsw = extractQueriedColumnSchema(pScanNode);
return TSDB_CODE_SUCCESS;
}
SSchemaWrapper* extractQueriedColumnSchema(SScanPhysiNode* pScanNode) {
int32_t numOfCols = LIST_LENGTH(pScanNode->pScanCols);
SSchemaWrapper* pqSw = taosMemoryCalloc(1, sizeof(SSchemaWrapper));
pqSw->pSchema = taosMemoryCalloc(numOfCols, sizeof(SSchema));
for (int32_t i = 0; i < numOfCols; ++i) {
STargetNode* pNode = (STargetNode*)nodesListGetNode(pScanNode->pScanCols, i);
SColumnNode* pColNode = (SColumnNode*)pNode->pExpr;
SSchema* pSchema = &pqSw->pSchema[pqSw->nCols++];
pSchema->colId = pColNode->colId;
pSchema->type = pColNode->node.resType.type;
pSchema->bytes = pColNode->node.resType.bytes;
strncpy(pSchema->name, pColNode->colName, tListLen(pSchema->name));
}
return pqSw;
}
static void cleanupTableSchemaInfo(SSchemaInfo* pSchemaInfo) {
taosMemoryFreeClear(pSchemaInfo->dbname);
if (pSchemaInfo->sw == NULL) {
@ -4181,8 +4199,8 @@ static void cleanupTableSchemaInfo(SSchemaInfo* pSchemaInfo) {
}
taosMemoryFree(pSchemaInfo->tablename);
taosMemoryFree(pSchemaInfo->sw->pSchema);
taosMemoryFree(pSchemaInfo->sw);
tDeleteSSchemaWrapper(pSchemaInfo->sw);
tDeleteSSchemaWrapper(pSchemaInfo->qsw);
}
static int32_t sortTableGroup(STableListInfo* pTableListInfo, int32_t groupNum) {
@ -4291,7 +4309,7 @@ int32_t generateGroupIdMap(STableListInfo* pTableListInfo, SReadHandle* pHandle,
REPLACE_NODE(pNew);
} else {
taosMemoryFree(keyBuf);
nodesClearList(groupNew);
nodesDestroyList(groupNew);
metaReaderClear(&mr);
return code;
}
@ -4309,7 +4327,7 @@ int32_t generateGroupIdMap(STableListInfo* pTableListInfo, SReadHandle* pHandle,
if (tTagIsJson(data)) {
terrno = TSDB_CODE_QRY_JSON_IN_GROUP_ERROR;
taosMemoryFree(keyBuf);
nodesClearList(groupNew);
nodesDestroyList(groupNew);
metaReaderClear(&mr);
return terrno;
}
@ -4332,7 +4350,7 @@ int32_t generateGroupIdMap(STableListInfo* pTableListInfo, SReadHandle* pHandle,
info->groupId = groupId;
groupNum++;
nodesClearList(groupNew);
nodesDestroyList(groupNew);
metaReaderClear(&mr);
}
taosMemoryFree(keyBuf);
@ -4363,27 +4381,29 @@ static int32_t initTableblockDistQueryCond(uint64_t uid, SQueryTableDataCond* pC
pCond->suid = uid;
pCond->type = BLOCK_LOAD_OFFSET_ORDER;
pCond->startVersion = -1;
pCond->endVersion = -1;
pCond->endVersion = -1;
return TSDB_CODE_SUCCESS;
}
SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo, SReadHandle* pHandle,
STableListInfo* pTableListInfo, SNode* pTagCond, SNode* pTagIndexCond, const char* pUser) {
STableListInfo* pTableListInfo, SNode* pTagCond, SNode* pTagIndexCond,
const char* pUser) {
int32_t type = nodeType(pPhyNode);
if (pPhyNode->pChildren == NULL || LIST_LENGTH(pPhyNode->pChildren) == 0) {
if (QUERY_NODE_PHYSICAL_PLAN_TABLE_SCAN == type) {
STableScanPhysiNode* pTableScanNode = (STableScanPhysiNode*)pPhyNode;
int32_t code = createScanTableListInfo(&pTableScanNode->scan, pTableScanNode->pGroupTags,
pTableScanNode->groupSort, pHandle, pTableListInfo, pTagCond, pTagIndexCond, GET_TASKID(pTaskInfo));
int32_t code =
createScanTableListInfo(&pTableScanNode->scan, pTableScanNode->pGroupTags, pTableScanNode->groupSort, pHandle,
pTableListInfo, pTagCond, pTagIndexCond, GET_TASKID(pTaskInfo));
if (code) {
pTaskInfo->code = code;
return NULL;
}
code = extractTableSchemaInfo(pHandle, pTableScanNode->scan.uid, pTaskInfo);
code = extractTableSchemaInfo(pHandle, &pTableScanNode->scan, pTaskInfo);
if (code) {
pTaskInfo->code = terrno;
return NULL;
@ -4396,21 +4416,21 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo
} else if (QUERY_NODE_PHYSICAL_PLAN_TABLE_MERGE_SCAN == type) {
STableMergeScanPhysiNode* pTableScanNode = (STableMergeScanPhysiNode*)pPhyNode;
int32_t code = createScanTableListInfo(&pTableScanNode->scan, pTableScanNode->pGroupTags,
pTableScanNode->groupSort, pHandle, pTableListInfo, pTagCond, pTagIndexCond, GET_TASKID(pTaskInfo));
int32_t code =
createScanTableListInfo(&pTableScanNode->scan, pTableScanNode->pGroupTags, pTableScanNode->groupSort, pHandle,
pTableListInfo, pTagCond, pTagIndexCond, GET_TASKID(pTaskInfo));
if (code) {
pTaskInfo->code = code;
return NULL;
}
code = extractTableSchemaInfo(pHandle, pTableScanNode->scan.uid, pTaskInfo);
code = extractTableSchemaInfo(pHandle, &pTableScanNode->scan, pTaskInfo);
if (code) {
pTaskInfo->code = terrno;
return NULL;
}
SOperatorInfo* pOperator =
createTableMergeScanOperatorInfo(pTableScanNode, pTableListInfo, pHandle, pTaskInfo);
SOperatorInfo* pOperator = createTableMergeScanOperatorInfo(pTableScanNode, pTableListInfo, pHandle, pTaskInfo);
STableScanInfo* pScanInfo = pOperator->info;
pTaskInfo->cost.pRecoder = &pScanInfo->readRecorder;
@ -4420,21 +4440,32 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo
return createExchangeOperatorInfo(pHandle->pMsgCb->clientRpc, (SExchangePhysiNode*)pPhyNode, pTaskInfo);
} else if (QUERY_NODE_PHYSICAL_PLAN_STREAM_SCAN == type) {
STableScanPhysiNode* pTableScanNode = (STableScanPhysiNode*)pPhyNode;
STimeWindowAggSupp twSup = {
.waterMark = pTableScanNode->watermark,
.calTrigger = pTableScanNode->triggerType,
.maxTs = INT64_MIN,
STimeWindowAggSupp aggSup = (STimeWindowAggSupp){
.waterMark = pTableScanNode->watermark,
.calTrigger = pTableScanNode->triggerType,
.maxTs = INT64_MIN,
};
if (pHandle->vnode) {
int32_t code = createScanTableListInfo(&pTableScanNode->scan, pTableScanNode->pGroupTags,
pTableScanNode->groupSort, pHandle, pTableListInfo, pTagCond, pTagIndexCond, GET_TASKID(pTaskInfo));
int32_t code =
createScanTableListInfo(&pTableScanNode->scan, pTableScanNode->pGroupTags, pTableScanNode->groupSort,
pHandle, pTableListInfo, pTagCond, pTagIndexCond, GET_TASKID(pTaskInfo));
if (code) {
pTaskInfo->code = code;
return NULL;
}
}
SOperatorInfo* pOperator = createStreamScanOperatorInfo(pHandle, pTableScanNode, pTagCond, pTaskInfo, &twSup);
#ifndef NDEBUG
int32_t sz = taosArrayGetSize(pTableListInfo->pTableList);
for (int32_t i = 0; i < sz; i++) {
STableKeyInfo* pKeyInfo = taosArrayGet(pTableListInfo->pTableList, i);
qDebug("creating stream task: add table %ld", pKeyInfo->uid);
}
}
#endif
pTaskInfo->schemaInfo.qsw = extractQueriedColumnSchema(&pTableScanNode->scan);
SOperatorInfo* pOperator = createStreamScanOperatorInfo(pHandle, pTableScanNode, pTagCond, &aggSup, pTaskInfo);
return pOperator;
} else if (QUERY_NODE_PHYSICAL_PLAN_SYSTABLE_SCAN == type) {
@ -4466,7 +4497,7 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo
}
SQueryTableDataCond cond = {0};
int32_t code = initTableblockDistQueryCond(pBlockNode->suid, &cond);
int32_t code = initTableblockDistQueryCond(pBlockNode->suid, &cond);
if (code != TSDB_CODE_SUCCESS) {
return NULL;
}
@ -4479,13 +4510,14 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo
} else if (QUERY_NODE_PHYSICAL_PLAN_LAST_ROW_SCAN == type) {
SLastRowScanPhysiNode* pScanNode = (SLastRowScanPhysiNode*)pPhyNode;
int32_t code = createScanTableListInfo(&pScanNode->scan, pScanNode->pGroupTags, true, pHandle, pTableListInfo, pTagCond, pTagIndexCond, GET_TASKID(pTaskInfo));
int32_t code = createScanTableListInfo(&pScanNode->scan, pScanNode->pGroupTags, true, pHandle, pTableListInfo,
pTagCond, pTagIndexCond, GET_TASKID(pTaskInfo));
if (code != TSDB_CODE_SUCCESS) {
pTaskInfo->code = code;
return NULL;
}
code = extractTableSchemaInfo(pHandle, pScanNode->scan.uid, pTaskInfo);
code = extractTableSchemaInfo(pHandle, &pScanNode->scan, pTaskInfo);
if (code != TSDB_CODE_SUCCESS) {
pTaskInfo->code = code;
return NULL;
@ -4941,7 +4973,8 @@ int32_t createExecTaskInfoImpl(SSubplan* pPlan, SExecTaskInfo** pTaskInfo, SRead
(*pTaskInfo)->sql = sql;
(*pTaskInfo)->pSubplan = pPlan;
(*pTaskInfo)->pRoot = createOperatorTree(pPlan->pNode, *pTaskInfo, pHandle, &(*pTaskInfo)->tableqinfoList, pPlan->pTagCond, pPlan->pTagIndexCond, pPlan->user);
(*pTaskInfo)->pRoot = createOperatorTree(pPlan->pNode, *pTaskInfo, pHandle, &(*pTaskInfo)->tableqinfoList,
pPlan->pTagCond, pPlan->pTagIndexCond, pPlan->user);
if (NULL == (*pTaskInfo)->pRoot) {
code = (*pTaskInfo)->code;

View File

@ -359,6 +359,7 @@ void setTbNameColData(void* pMeta, const SSDataBlock* pBlock, SColumnInfoData* p
SScalarParam param = {.columnData = pColInfoData};
fpSet.process(&srcParam, 1, &param);
colDataDestroy(&infoData);
}
static SSDataBlock* doTableScanImpl(SOperatorInfo* pOperator) {
@ -739,7 +740,7 @@ static SSDataBlock* doBlockInfoScan(SOperatorInfo* pOperator) {
static void destroyBlockDistScanOperatorInfo(void* param, int32_t numOfOutput) {
SBlockDistInfo* pDistInfo = (SBlockDistInfo*)param;
blockDataDestroy(pDistInfo->pResBlock);
tsdbReaderClose(pDistInfo->pHandle);
taosMemoryFreeClear(param);
}
@ -981,6 +982,9 @@ static SSDataBlock* doRangeScan(SStreamScanInfo* pInfo, SSDataBlock* pSDB, int32
if (!pResult) {
blockDataCleanup(pSDB);
*pRowIndex = 0;
STableScanInfo* pTableScanInfo = pInfo->pTableScanOp->info;
tsdbReaderClose(pTableScanInfo->dataReader);
pTableScanInfo->dataReader = NULL;
return NULL;
}
@ -1002,6 +1006,9 @@ static SSDataBlock* doDataScan(SStreamScanInfo* pInfo, SSDataBlock* pSDB, int32_
}
if (!pResult) {
pInfo->updateWin = (STimeWindow){.skey = INT64_MIN, .ekey = INT64_MAX};
STableScanInfo* pTableScanInfo = pInfo->pTableScanOp->info;
tsdbReaderClose(pTableScanInfo->dataReader);
pTableScanInfo->dataReader = NULL;
return NULL;
}
@ -1524,7 +1531,7 @@ static void destroyStreamScanOperatorInfo(void* param, int32_t numOfOutput) {
}
SOperatorInfo* createStreamScanOperatorInfo(SReadHandle* pHandle, STableScanPhysiNode* pTableScanNode, SNode* pTagCond,
SExecTaskInfo* pTaskInfo, STimeWindowAggSupp* pTwSup) {
STimeWindowAggSupp* pTwSup, SExecTaskInfo* pTaskInfo) {
SStreamScanInfo* pInfo = taosMemoryCalloc(1, sizeof(SStreamScanInfo));
SOperatorInfo* pOperator = taosMemoryCalloc(1, sizeof(SOperatorInfo));
@ -1538,6 +1545,8 @@ SOperatorInfo* createStreamScanOperatorInfo(SReadHandle* pHandle, STableScanPhys
pInfo->pTagCond = pTagCond;
pInfo->twAggSup = *pTwSup;
int32_t numOfCols = 0;
pInfo->pColMatchInfo = extractColMatchInfo(pScanPhyNode->pScanCols, pDescNode, &numOfCols, COL_MATCH_FROM_COL_ID);
@ -1590,7 +1599,7 @@ SOperatorInfo* createStreamScanOperatorInfo(SReadHandle* pHandle, STableScanPhys
}
if (pTSInfo->interval.interval > 0) {
pInfo->pUpdateInfo = updateInfoInitP(&pTSInfo->interval, pTwSup->waterMark);
pInfo->pUpdateInfo = updateInfoInitP(&pTSInfo->interval, pInfo->twAggSup.waterMark);
} else {
pInfo->pUpdateInfo = NULL;
}
@ -1630,7 +1639,6 @@ SOperatorInfo* createStreamScanOperatorInfo(SReadHandle* pHandle, STableScanPhys
pInfo->deleteDataIndex = 0;
pInfo->pDeleteDataRes = createPullDataBlock();
pInfo->updateWin = (STimeWindow){.skey = INT64_MAX, .ekey = INT64_MAX};
pInfo->twAggSup = *pTwSup;
pOperator->name = "StreamScanOperator";
pOperator->operatorType = QUERY_NODE_PHYSICAL_PLAN_STREAM_SCAN;
@ -2045,8 +2053,8 @@ static SSDataBlock* sysTableScanUserTables(SOperatorInfo* pOperator) {
uint64_t suid = pInfo->pCur->mr.me.ctbEntry.suid;
int32_t code = metaGetTableEntryByUid(&mr, suid);
if (code != TSDB_CODE_SUCCESS) {
qError("failed to get super table meta, uid:0x%" PRIx64 ", code:%s, %s", suid, tstrerror(terrno),
GET_TASKID(pTaskInfo));
qError("failed to get super table meta, cname:%s, suid:0x%" PRIx64 ", code:%s, %s", pInfo->pCur->mr.me.name,
suid, tstrerror(terrno), GET_TASKID(pTaskInfo));
metaReaderClear(&mr);
metaCloseTbCursor(pInfo->pCur);
pInfo->pCur = NULL;
@ -2152,16 +2160,39 @@ static SSDataBlock* sysTableScanUserTables(SOperatorInfo* pOperator) {
}
}
static SSDataBlock* sysTableScanUserSTables(SOperatorInfo* pOperator) {
SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
SSysTableScanInfo* pInfo = pOperator->info;
if (pOperator->status == OP_EXEC_DONE) {
return NULL;
}
pInfo->pRes->info.rows = 0;
pOperator->status = OP_EXEC_DONE;
pInfo->loadInfo.totalRows += pInfo->pRes->info.rows;
return (pInfo->pRes->info.rows == 0) ? NULL : pInfo->pRes;
}
static SSDataBlock* doSysTableScan(SOperatorInfo* pOperator) {
// build message and send to mnode to fetch the content of system tables.
SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
SSysTableScanInfo* pInfo = pOperator->info;
const char* name = tNameGetTableName(&pInfo->name);
if (pInfo->showRewrite) {
char dbName[TSDB_DB_NAME_LEN] = {0};
getDBNameFromCondition(pInfo->pCondition, dbName);
sprintf(pInfo->req.db, "%d.%s", pInfo->accountId, dbName);
}
if (strncasecmp(name, TSDB_INS_TABLE_USER_TABLES, TSDB_TABLE_FNAME_LEN) == 0) {
return sysTableScanUserTables(pOperator);
} else if (strncasecmp(name, TSDB_INS_TABLE_USER_TAGS, TSDB_TABLE_FNAME_LEN) == 0) {
return sysTableScanUserTags(pOperator);
} else if (strncasecmp(name, TSDB_INS_TABLE_USER_STABLES, TSDB_TABLE_FNAME_LEN) == 0 &&
IS_SYS_DBNAME(pInfo->req.db)) {
return sysTableScanUserSTables(pOperator);
} else { // load the meta from mnode of the given epset
if (pOperator->status == OP_EXEC_DONE) {
return NULL;
@ -2172,12 +2203,6 @@ static SSDataBlock* doSysTableScan(SOperatorInfo* pOperator) {
strncpy(pInfo->req.tb, tNameGetTableName(&pInfo->name), tListLen(pInfo->req.tb));
strcpy(pInfo->req.user, pInfo->pUser);
if (pInfo->showRewrite) {
char dbName[TSDB_DB_NAME_LEN] = {0};
getDBNameFromCondition(pInfo->pCondition, dbName);
sprintf(pInfo->req.db, "%d.%s", pInfo->accountId, dbName);
}
int32_t contLen = tSerializeSRetrieveTableReq(NULL, 0, &pInfo->req);
char* buf1 = taosMemoryCalloc(1, contLen);
tSerializeSRetrieveTableReq(buf1, contLen, &pInfo->req);

View File

@ -66,12 +66,32 @@ static void setNullRow(SSDataBlock* pBlock, int64_t ts, int32_t rowIndex) {
static void doSetVal(SColumnInfoData* pDstColInfoData, int32_t rowIndex, const SGroupKeys* pKey);
static void doSetUserSpecifiedValue(SColumnInfoData* pDst, SVariant* pVar, int32_t rowIndex, int64_t currentKey) {
if (pDst->info.type == TSDB_DATA_TYPE_FLOAT) {
float v = 0;
GET_TYPED_DATA(v, float, pVar->nType, &pVar->i);
colDataAppend(pDst, rowIndex, (char*)&v, false);
} else if (pDst->info.type == TSDB_DATA_TYPE_DOUBLE) {
double v = 0;
GET_TYPED_DATA(v, double, pVar->nType, &pVar->i);
colDataAppend(pDst, rowIndex, (char*)&v, false);
} else if (IS_SIGNED_NUMERIC_TYPE(pDst->info.type)) {
int64_t v = 0;
GET_TYPED_DATA(v, int64_t, pVar->nType, &pVar->i);
colDataAppend(pDst, rowIndex, (char*)&v, false);
} else if (pDst->info.type == TSDB_DATA_TYPE_TIMESTAMP) {
colDataAppend(pDst, rowIndex, (const char*)&currentKey, false);
} else { // varchar/nchar data
colDataAppendNULL(pDst, rowIndex);
}
}
static void doFillOneRow(SFillInfo* pFillInfo, SSDataBlock* pBlock, SSDataBlock* pSrcBlock, int64_t ts,
bool outOfBound) {
SPoint point1, point2, point;
int32_t step = GET_FORWARD_DIRECTION_FACTOR(pFillInfo->order);
// set the primary timestamp column value
// set the primary timestamp column value
int32_t index = pBlock->info.rows;
// set the other values
@ -160,30 +180,13 @@ static void doFillOneRow(SFillInfo* pFillInfo, SSDataBlock* pBlock, SSDataBlock*
} else { // fill with user specified value for each column
for (int32_t i = 0; i < pFillInfo->numOfCols; ++i) {
SFillColInfo* pCol = &pFillInfo->pFillCol[i];
if (TSDB_COL_IS_TAG(pCol->flag) /* || IS_VAR_DATA_TYPE(pCol->schema.type)*/) {
if (TSDB_COL_IS_TAG(pCol->flag)) {
continue;
}
SVariant* pVar = &pFillInfo->pFillCol[i].fillVal;
SColumnInfoData* pDst = taosArrayGet(pBlock->pDataBlock, i);
if (pDst->info.type == TSDB_DATA_TYPE_FLOAT) {
float v = 0;
GET_TYPED_DATA(v, float, pVar->nType, &pVar->i);
colDataAppend(pDst, index, (char*)&v, false);
} else if (pDst->info.type == TSDB_DATA_TYPE_DOUBLE) {
double v = 0;
GET_TYPED_DATA(v, double, pVar->nType, &pVar->i);
colDataAppend(pDst, index, (char*)&v, false);
} else if (IS_SIGNED_NUMERIC_TYPE(pDst->info.type)) {
int64_t v = 0;
GET_TYPED_DATA(v, int64_t, pVar->nType, &pVar->i);
colDataAppend(pDst, index, (char*)&v, false);
} else if (pDst->info.type == TSDB_DATA_TYPE_TIMESTAMP) {
colDataAppend(pDst, index, (const char*)&pFillInfo->currentKey, false);
} else { // varchar/nchar data
colDataAppendNULL(pDst, index);
}
doSetUserSpecifiedValue(pDst, pVar, index, pFillInfo->currentKey);
}
}
@ -273,7 +276,7 @@ static int32_t fillResultImpl(SFillInfo* pFillInfo, SSDataBlock* pBlock, int32_t
return outputRows;
}
} else {
assert(pFillInfo->currentKey == ts);
ASSERT(pFillInfo->currentKey == ts);
int32_t index = pBlock->info.rows;
if (pFillInfo->type == TSDB_FILL_NEXT && (pFillInfo->index + 1) < pFillInfo->numOfRows) {
@ -295,27 +298,32 @@ static int32_t fillResultImpl(SFillInfo* pFillInfo, SSDataBlock* pBlock, int32_t
SColumnInfoData* pSrc = taosArrayGet(pFillInfo->pSrcBlock->pDataBlock, srcSlotId);
char* src = colDataGetData(pSrc, pFillInfo->index);
if (i == 0 || (/*pCol->functionId != FUNCTION_COUNT &&*/ !colDataIsNull_s(pSrc, pFillInfo->index)) /*||
(pCol->functionId == FUNCTION_COUNT && GET_INT64_VAL(src) != 0)*/) {
if (/*i == 0 || (*/!colDataIsNull_s(pSrc, pFillInfo->index)) {
bool isNull = colDataIsNull_s(pSrc, pFillInfo->index);
colDataAppend(pDst, index, src, isNull);
saveColData(pFillInfo->prev, i, src, isNull);
} else { // i > 0 and data is null , do interpolation
if (pFillInfo->type == TSDB_FILL_PREV) {
SGroupKeys* pKey = taosArrayGet(pFillInfo->prev, i);
doSetVal(pDst, index, pKey);
} else if (pFillInfo->type == TSDB_FILL_LINEAR) {
bool isNull = colDataIsNull_s(pSrc, pFillInfo->index);
colDataAppend(pDst, index, src, isNull);
saveColData(pFillInfo->prev, i, src, isNull);
} else if (pFillInfo->type == TSDB_FILL_NULL) {
colDataAppendNULL(pDst, index);
} else if (pFillInfo->type == TSDB_FILL_NEXT) {
SGroupKeys* pKey = taosArrayGet(pFillInfo->next, i);
doSetVal(pDst, index, pKey);
} else {
SVariant* pVar = &pFillInfo->pFillCol[i].fillVal;
colDataAppend(pDst, index, (char*)&pVar->i, false);
} else {
if (pDst->info.type == TSDB_DATA_TYPE_TIMESTAMP) {
colDataAppend(pDst, index, (const char*)&pFillInfo->currentKey, false);
} else { // i > 0 and data is null , do interpolation
if (pFillInfo->type == TSDB_FILL_PREV) {
SArray* p = FILL_IS_ASC_FILL(pFillInfo) ? pFillInfo->prev : pFillInfo->next;
SGroupKeys* pKey = taosArrayGet(p, i);
doSetVal(pDst, index, pKey);
} else if (pFillInfo->type == TSDB_FILL_LINEAR) {
bool isNull = colDataIsNull_s(pSrc, pFillInfo->index);
colDataAppend(pDst, index, src, isNull);
saveColData(pFillInfo->prev, i, src, isNull); // todo:
} else if (pFillInfo->type == TSDB_FILL_NULL) {
colDataAppendNULL(pDst, index);
} else if (pFillInfo->type == TSDB_FILL_NEXT) {
SArray* p = FILL_IS_ASC_FILL(pFillInfo) ? pFillInfo->next : pFillInfo->prev;
SGroupKeys* pKey = taosArrayGet(p, i);
doSetVal(pDst, index, pKey);
} else {
SVariant* pVar = &pFillInfo->pFillCol[i].fillVal;
doSetUserSpecifiedValue(pDst, pVar, index, pFillInfo->currentKey);
}
}
}
}
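The tfill.c hunks above factor the per-type handling of a user-specified FILL(VALUE, ...) constant into doSetUserSpecifiedValue: the value is converted to the column type, a timestamp column takes the current window key, and variable-length columns fall back to NULL. A standalone sketch of that dispatch, with a simplified column representation rather than the real SColumnInfoData/SVariant types:

```c
#include <stdint.h>
#include <stdio.h>

typedef enum { COL_FLOAT, COL_DOUBLE, COL_BIGINT, COL_TIMESTAMP, COL_VARCHAR } ColType;

typedef struct {        /* stand-in for one cell of a column */
  ColType type;
  int     isNull;
  double  d;            /* numeric payload */
  int64_t i;            /* integer / timestamp payload */
} ColVal;

static void setUserSpecifiedValue(ColVal* dst, int64_t fillVal, int64_t currentKey) {
  dst->isNull = 0;
  if (dst->type == COL_FLOAT || dst->type == COL_DOUBLE) {
    dst->d = (double)fillVal;
  } else if (dst->type == COL_BIGINT) {
    dst->i = fillVal;
  } else if (dst->type == COL_TIMESTAMP) {
    dst->i = currentKey;            /* timestamp column takes the window key */
  } else {                          /* varchar/nchar: no sensible cast, use NULL */
    dst->isNull = 1;
  }
}

int main(void) {
  ColVal ts = {.type = COL_TIMESTAMP};
  setUserSpecifiedValue(&ts, 99, 1658476800000LL);
  printf("%lld null=%d\n", (long long)ts.i, ts.isNull);
  return 0;
}
```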

View File

@ -47,6 +47,7 @@ extern "C" {
#define FUNC_MGT_SYSTEM_INFO_FUNC FUNC_MGT_FUNC_CLASSIFICATION_MASK(18)
#define FUNC_MGT_CLIENT_PC_FUNC FUNC_MGT_FUNC_CLASSIFICATION_MASK(19)
#define FUNC_MGT_MULTI_ROWS_FUNC FUNC_MGT_FUNC_CLASSIFICATION_MASK(20)
#define FUNC_MGT_KEEP_ORDER_FUNC FUNC_MGT_FUNC_CLASSIFICATION_MASK(21)
#define FUNC_MGT_TEST_MASK(val, mask) (((val) & (mask)) != 0)

View File

@ -2088,7 +2088,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
.classification = FUNC_MGT_AGG_FUNC,
.translateFunc = translateApercentileMerge,
.getEnvFunc = getApercentileFuncEnv,
.initFunc = functionSetup,
.initFunc = apercentileFunctionSetup,
.processFunc = apercentileFunctionMerge,
.finalizeFunc = apercentileFinalize,
.invertFunc = NULL,
@ -2097,7 +2097,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
{
.name = "top",
.type = FUNCTION_TYPE_TOP,
.classification = FUNC_MGT_AGG_FUNC | FUNC_MGT_SELECT_FUNC | FUNC_MGT_MULTI_ROWS_FUNC | FUNC_MGT_FORBID_STREAM_FUNC,
.classification = FUNC_MGT_AGG_FUNC | FUNC_MGT_SELECT_FUNC | FUNC_MGT_MULTI_ROWS_FUNC | FUNC_MGT_KEEP_ORDER_FUNC | FUNC_MGT_FORBID_STREAM_FUNC,
.translateFunc = translateTopBot,
.getEnvFunc = getTopBotFuncEnv,
.initFunc = topBotFunctionSetup,
@ -2112,7 +2112,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
{
.name = "bottom",
.type = FUNCTION_TYPE_BOTTOM,
.classification = FUNC_MGT_AGG_FUNC | FUNC_MGT_SELECT_FUNC | FUNC_MGT_MULTI_ROWS_FUNC | FUNC_MGT_FORBID_STREAM_FUNC,
.classification = FUNC_MGT_AGG_FUNC | FUNC_MGT_SELECT_FUNC | FUNC_MGT_MULTI_ROWS_FUNC | FUNC_MGT_KEEP_ORDER_FUNC | FUNC_MGT_FORBID_STREAM_FUNC,
.translateFunc = translateTopBot,
.getEnvFunc = getTopBotFuncEnv,
.initFunc = topBotFunctionSetup,
@ -2480,7 +2480,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
{
.name = "sample",
.type = FUNCTION_TYPE_SAMPLE,
.classification = FUNC_MGT_AGG_FUNC | FUNC_MGT_SELECT_FUNC | FUNC_MGT_MULTI_ROWS_FUNC | FUNC_MGT_FORBID_STREAM_FUNC,
.classification = FUNC_MGT_AGG_FUNC | FUNC_MGT_SELECT_FUNC | FUNC_MGT_MULTI_ROWS_FUNC | FUNC_MGT_KEEP_ORDER_FUNC | FUNC_MGT_FORBID_STREAM_FUNC,
.translateFunc = translateSample,
.getEnvFunc = getSampleFuncEnv,
.initFunc = sampleFunctionSetup,
@ -2906,7 +2906,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
{
.name = "_select_value",
.type = FUNCTION_TYPE_SELECT_VALUE,
.classification = FUNC_MGT_AGG_FUNC | FUNC_MGT_SELECT_FUNC,
.classification = FUNC_MGT_AGG_FUNC | FUNC_MGT_SELECT_FUNC | FUNC_MGT_KEEP_ORDER_FUNC,
.translateFunc = translateSelectValue,
.getEnvFunc = getSelectivityFuncEnv, // todo remove this function later.
.initFunc = functionSetup,

View File

@ -2592,6 +2592,7 @@ int32_t apercentileFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
SAPercentileInfo* pInfo = (SAPercentileInfo*)GET_ROWCELL_INTERBUF(pResInfo);
if (pInfo->algo == APERCT_ALGO_TDIGEST) {
buildTDigestInfo(pInfo);
if (pInfo->pTDigest->size > 0) {
pInfo->result = tdigestQuantile(pInfo->pTDigest, pInfo->percent / 100);
} else { // no need to free
@ -2599,6 +2600,7 @@ int32_t apercentileFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
return TSDB_CODE_SUCCESS;
}
} else {
buildHistogramInfo(pInfo);
if (pInfo->pHisto->numOfElems > 0) {
qDebug("get the final res:%d, elements:%"PRId64", entry:%d", pInfo->pHisto->numOfElems, pInfo->pHisto->numOfEntries);

View File

@ -183,6 +183,8 @@ bool fmIsClientPseudoColumnFunc(int32_t funcId) { return isSpecificClassifyFunc(
bool fmIsMultiRowsFunc(int32_t funcId) { return isSpecificClassifyFunc(funcId, FUNC_MGT_MULTI_ROWS_FUNC); }
bool fmIsKeepOrderFunc(int32_t funcId) { return isSpecificClassifyFunc(funcId, FUNC_MGT_KEEP_ORDER_FUNC); }
bool fmIsInterpFunc(int32_t funcId) {
if (funcId < 0 || funcId >= funcMgtBuiltinsNum) {
return false;
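FUNC_MGT_KEEP_ORDER_FUNC above is one more bit in the per-function classification word, and fmIsKeepOrderFunc reduces to a mask test over the builtin table. A toy version of that scheme, where the two-entry table is a stand-in rather than the real builtin list:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define FUNC_CLS_MASK(n)              (1ULL << (n))
#define FUNC_MGT_AGG_FUNC             FUNC_CLS_MASK(0)
#define FUNC_MGT_SELECT_FUNC          FUNC_CLS_MASK(1)
#define FUNC_MGT_KEEP_ORDER_FUNC      FUNC_CLS_MASK(21)
#define FUNC_MGT_TEST_MASK(val, mask) (((val) & (mask)) != 0)

typedef struct { const char* name; uint64_t classification; } FuncDef;

static const FuncDef kFuncs[] = {
    {"top", FUNC_MGT_AGG_FUNC | FUNC_MGT_SELECT_FUNC | FUNC_MGT_KEEP_ORDER_FUNC},
    {"sum", FUNC_MGT_AGG_FUNC},
};

static bool isKeepOrderFunc(int funcId) {
  if (funcId < 0 || funcId >= (int)(sizeof(kFuncs) / sizeof(kFuncs[0]))) return false;
  return FUNC_MGT_TEST_MASK(kFuncs[funcId].classification, FUNC_MGT_KEEP_ORDER_FUNC);
}

int main(void) {
  printf("top keeps order: %d\n", isKeepOrderFunc(0));
  printf("sum keeps order: %d\n", isKeepOrderFunc(1));
  return 0;
}
```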

View File

@ -913,8 +913,8 @@ void udfdConnectMnodeThreadFunc(void *args) {
}
int main(int argc, char *argv[]) {
if (!taosCheckSystemIsSmallEnd()) {
printf("failed to start since on non-small-end machines\n");
if (!taosCheckSystemIsLittleEnd()) {
printf("failed to start since on non-little-end machines\n");
return -1;
}

View File

@ -707,6 +707,8 @@ static int32_t sifCalculate(SNode *pNode, SIFParam *pDst) {
sifFreeParam(res);
taosHashRemove(ctx.pRes, (void *)&pNode, POINTER_BYTES);
}
sifFreeRes(ctx.pRes);
SIF_RET(code);
}

View File

@ -332,6 +332,9 @@ static int32_t logicNodeCopy(const SLogicNode* pSrc, SLogicNode* pDst) {
COPY_SCALAR_FIELD(precision);
CLONE_NODE_FIELD(pLimit);
CLONE_NODE_FIELD(pSlimit);
COPY_SCALAR_FIELD(requireDataOrder);
COPY_SCALAR_FIELD(resultDataOrder);
COPY_SCALAR_FIELD(groupAction);
return TSDB_CODE_SUCCESS;
}

View File

@ -504,6 +504,9 @@ static const char* jkLogicPlanConditions = "Conditions";
static const char* jkLogicPlanChildren = "Children";
static const char* jkLogicPlanLimit = "Limit";
static const char* jkLogicPlanSlimit = "SLimit";
static const char* jkLogicPlanRequireDataOrder = "RequireDataOrder";
static const char* jkLogicPlanResultDataOrder = "ResultDataOrder";
static const char* jkLogicPlanGroupAction = "GroupAction";
static int32_t logicPlanNodeToJson(const void* pObj, SJson* pJson) {
const SLogicNode* pNode = (const SLogicNode*)pObj;
@ -521,6 +524,15 @@ static int32_t logicPlanNodeToJson(const void* pObj, SJson* pJson) {
if (TSDB_CODE_SUCCESS == code) {
code = tjsonAddObject(pJson, jkLogicPlanSlimit, nodeToJson, pNode->pSlimit);
}
if (TSDB_CODE_SUCCESS == code) {
code = tjsonAddIntegerToObject(pJson, jkLogicPlanRequireDataOrder, pNode->requireDataOrder);
}
if (TSDB_CODE_SUCCESS == code) {
code = tjsonAddIntegerToObject(pJson, jkLogicPlanResultDataOrder, pNode->resultDataOrder);
}
if (TSDB_CODE_SUCCESS == code) {
code = tjsonAddIntegerToObject(pJson, jkLogicPlanGroupAction, pNode->groupAction);
}
return code;
}
@ -541,6 +553,15 @@ static int32_t jsonToLogicPlanNode(const SJson* pJson, void* pObj) {
if (TSDB_CODE_SUCCESS == code) {
code = jsonToNodeObject(pJson, jkLogicPlanSlimit, &pNode->pSlimit);
}
if (TSDB_CODE_SUCCESS == code) {
tjsonGetNumberValue(pJson, jkLogicPlanRequireDataOrder, pNode->requireDataOrder, code);
}
if (TSDB_CODE_SUCCESS == code) {
tjsonGetNumberValue(pJson, jkLogicPlanResultDataOrder, pNode->resultDataOrder, code);
}
if (TSDB_CODE_SUCCESS == code) {
tjsonGetNumberValue(pJson, jkLogicPlanGroupAction, pNode->groupAction, code);
}
return code;
}

View File

@ -434,8 +434,12 @@ static FORCE_INLINE int32_t checkAndTrimValue(SToken* pToken, char* tmpTokenBuf,
}
static bool isNullStr(SToken* pToken) {
return (pToken->type == TK_NULL) || ((pToken->type == TK_NK_STRING) && (pToken->n != 0) &&
(strncasecmp(TSDB_DATA_NULL_STR_L, pToken->z, pToken->n) == 0));
return ((pToken->type == TK_NK_STRING) && (pToken->n != 0) &&
(strncasecmp(TSDB_DATA_NULL_STR_L, pToken->z, pToken->n) == 0));
}
static bool isNullValue(int8_t dataType, SToken* pToken) {
return TK_NULL == pToken->type || (!IS_STR_DATA_TYPE(dataType) && isNullStr(pToken));
}
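The new isNullValue above narrows the old behaviour: the NULL keyword is always a null value, but the quoted string "null" only counts as null for non-string column types, so it can still be inserted verbatim into varchar/nchar columns. A toy illustration with stand-in token and type definitions:

```c
#include <stdbool.h>
#include <stdio.h>
#include <strings.h>

enum { TK_NULL, TK_STRING };
enum { TYPE_INT, TYPE_VARCHAR };

typedef struct { int type; const char* z; } Token;   /* stand-in token */

static bool isNullStr(const Token* t) {
  return t->type == TK_STRING && t->z != NULL && strcasecmp(t->z, "null") == 0;
}

static bool isNullValue(int dataType, const Token* t) {
  return t->type == TK_NULL || (dataType != TYPE_VARCHAR && isNullStr(t));
}

int main(void) {
  Token quotedNull = {TK_STRING, "null"};
  printf("int column:     %d\n", isNullValue(TYPE_INT, &quotedNull));     /* 1 */
  printf("varchar column: %d\n", isNullValue(TYPE_VARCHAR, &quotedNull)); /* 0 */
  return 0;
}
```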
static FORCE_INLINE int32_t toDouble(SToken* pToken, double* value, char** endPtr) {
@ -461,7 +465,7 @@ static int32_t parseValueToken(char** end, SToken* pToken, SSchema* pSchema, int
return code;
}
if (isNullStr(pToken)) {
if (isNullValue(pSchema->type, pToken)) {
if (TSDB_DATA_TYPE_TIMESTAMP == pSchema->type && PRIMARYKEY_TIMESTAMP_COL_ID == pSchema->colId) {
return buildSyntaxErrMsg(pMsgBuf, "primary timestamp should not be null", pToken->z);
}
@ -735,11 +739,12 @@ static int32_t parseBoundColumns(SInsertParseContext* pCxt, SParsedDataColInfo*
return TSDB_CODE_SUCCESS;
}
static void buildCreateTbReq(SVCreateTbReq* pTbReq, const char* tname, STag* pTag, int64_t suid, const char* sname, SArray* tagName) {
static void buildCreateTbReq(SVCreateTbReq* pTbReq, const char* tname, STag* pTag, int64_t suid, const char* sname,
SArray* tagName) {
pTbReq->type = TD_CHILD_TABLE;
pTbReq->name = strdup(tname);
pTbReq->ctb.suid = suid;
if(sname) pTbReq->ctb.name = strdup(sname);
if (sname) pTbReq->ctb.name = strdup(sname);
pTbReq->ctb.pTag = (uint8_t*)pTag;
pTbReq->ctb.tagName = taosArrayDup(tagName);
pTbReq->commentLen = -1;
@ -753,7 +758,7 @@ static int32_t parseTagToken(char** end, SToken* pToken, SSchema* pSchema, int16
uint64_t uv;
char* endptr = NULL;
if (isNullStr(pToken)) {
if (isNullValue(pSchema->type, pToken)) {
if (TSDB_DATA_TYPE_TIMESTAMP == pSchema->type && PRIMARYKEY_TIMESTAMP_COL_ID == pSchema->colId) {
return buildSyntaxErrMsg(pMsgBuf, "primary timestamp should not be null", pToken->z);
}
@ -761,7 +766,7 @@ static int32_t parseTagToken(char** end, SToken* pToken, SSchema* pSchema, int16
return TSDB_CODE_SUCCESS;
}
// strcpy(val->colName, pSchema->name);
// strcpy(val->colName, pSchema->name);
val->cid = pSchema->colId;
val->type = pSchema->type;
@ -971,7 +976,7 @@ static int32_t parseTagsClause(SInsertParseContext* pCxt, SSchema* pSchema, uint
goto end;
}
if (!isNullStr(&sToken)) {
if (!isNullValue(pTagSchema->type, &sToken)) {
taosArrayPush(tagName, pTagSchema->name);
}
if (pTagSchema->type == TSDB_DATA_TYPE_JSON) {
@ -980,7 +985,7 @@ static int32_t parseTagsClause(SInsertParseContext* pCxt, SSchema* pSchema, uint
taosMemoryFree(tmpTokenBuf);
goto end;
}
if (isNullStr(&sToken)) {
if (isNullValue(pTagSchema->type, &sToken)) {
code = tTagNew(pTagVals, 1, true, &pTag);
} else {
code = parseJsontoTagData(sToken.z, pTagVals, &pTag, &pCxt->msg);
@ -1018,7 +1023,7 @@ static int32_t parseTagsClause(SInsertParseContext* pCxt, SSchema* pSchema, uint
end:
for (int i = 0; i < taosArrayGetSize(pTagVals); ++i) {
STagVal* p = (STagVal*)taosArrayGet(pTagVals, i);
if (IS_VAR_DATA_TYPE(p->type)) {
if (p->type == TSDB_DATA_TYPE_NCHAR) {
taosMemoryFree(p->pData);
}
}
@ -1200,7 +1205,7 @@ static int parseOneRow(SInsertParseContext* pCxt, STableDataBlocks* pDataBlocks,
*gotRow = true;
#ifdef TD_DEBUG_PRINT_ROW
STSchema* pSTSchema = tdGetSTSChemaFromSSChema(&schema, spd->numOfCols);
STSchema* pSTSchema = tdGetSTSChemaFromSSChema(schema, spd->numOfCols, 1);
tdSRowPrint(row, pSTSchema, __func__);
taosMemoryFree(pSTSchema);
#endif
@ -1321,7 +1326,11 @@ static int32_t parseCsvFile(SInsertParseContext* pCxt, TdFilePtr fp, STableDataB
static int32_t parseDataFromFile(SInsertParseContext* pCxt, SToken filePath, STableDataBlocks* dataBuf) {
char filePathStr[TSDB_FILENAME_LEN] = {0};
strncpy(filePathStr, filePath.z, filePath.n);
if (TK_NK_STRING == filePath.type) {
trimString(filePath.z, filePath.n, filePathStr, sizeof(filePathStr));
} else {
strncpy(filePathStr, filePath.z, filePath.n);
}
TdFilePtr fp = taosOpenFile(filePathStr, TD_FILE_READ | TD_FILE_STREAM);
if (NULL == fp) {
return TAOS_SYSTEM_ERROR(errno);
@ -1497,7 +1506,6 @@ static int32_t parseInsertBody(SInsertParseContext* pCxt) {
memset(&pCxt->tags, 0, sizeof(pCxt->tags));
pCxt->pVgroupsHashObj = NULL;
pCxt->pTableBlockHashObj = NULL;
pCxt->pTableMeta = NULL;
return TSDB_CODE_SUCCESS;
}
@ -1554,7 +1562,10 @@ int32_t parseInsertSql(SParseContext* pContext, SQuery** pQuery, SParseMetaCache
if (NULL == *pQuery) {
return TSDB_CODE_OUT_OF_MEMORY;
}
} else {
nodesDestroyNode((*pQuery)->pRoot);
}
(*pQuery)->execMode = QUERY_EXEC_MODE_SCHEDULE;
(*pQuery)->haveResultSet = false;
(*pQuery)->msgType = TDMT_VND_SUBMIT;
@ -1802,8 +1813,8 @@ int32_t qBuildStmtOutput(SQuery* pQuery, SHashObj* pVgHash, SHashObj* pBlockHash
return TSDB_CODE_SUCCESS;
}
int32_t qBindStmtTagsValue(void* pBlock, void* boundTags, int64_t suid, const char* sTableName, char* tName, TAOS_MULTI_BIND* bind,
char* msgBuf, int32_t msgBufLen) {
int32_t qBindStmtTagsValue(void* pBlock, void* boundTags, int64_t suid, const char* sTableName, char* tName,
TAOS_MULTI_BIND* bind, char* msgBuf, int32_t msgBufLen) {
STableDataBlocks* pDataBlock = (STableDataBlocks*)pBlock;
SMsgBuf pBuf = {.buf = msgBuf, .len = msgBufLen};
SParsedDataColInfo* tags = (SParsedDataColInfo*)boundTags;
@ -1854,7 +1865,7 @@ int32_t qBindStmtTagsValue(void* pBlock, void* boundTags, int64_t suid, const ch
}
} else {
STagVal val = {.cid = pTagSchema->colId, .type = pTagSchema->type};
// strcpy(val.colName, pTagSchema->name);
// strcpy(val.colName, pTagSchema->name);
if (pTagSchema->type == TSDB_DATA_TYPE_BINARY) {
val.pData = (uint8_t*)bind[c].buffer;
val.nData = colLen;
@ -1970,7 +1981,7 @@ int32_t qBindStmtColsValue(void* pBlock, TAOS_MULTI_BIND* bind, char* msgBuf, in
}
}
#ifdef TD_DEBUG_PRINT_ROW
STSchema* pSTSchema = tdGetSTSChemaFromSSChema(&pSchema, spd->numOfCols);
STSchema* pSTSchema = tdGetSTSChemaFromSSChema(pSchema, spd->numOfCols, 1);
tdSRowPrint(row, pSTSchema, __func__);
taosMemoryFree(pSTSchema);
#endif
@ -2055,7 +2066,7 @@ int32_t qBindStmtSingleColValue(void* pBlock, TAOS_MULTI_BIND* bind, char* msgBu
#ifdef TD_DEBUG_PRINT_ROW
if (rowEnd) {
STSchema* pSTSchema = tdGetSTSChemaFromSSChema(&pSchema, spd->numOfCols);
STSchema* pSTSchema = tdGetSTSChemaFromSSChema(pSchema, spd->numOfCols, 1);
tdSRowPrint(row, pSTSchema, __func__);
taosMemoryFree(pSTSchema);
}
@ -2245,7 +2256,8 @@ static int32_t smlBoundColumnData(SArray* cols, SParsedDataColInfo* pColList, SS
* @param msg
* @return int32_t
*/
static int32_t smlBuildTagRow(SArray* cols, SParsedDataColInfo* tags, SSchema* pSchema, STag** ppTag, SArray** tagName, SMsgBuf* msg) {
static int32_t smlBuildTagRow(SArray* cols, SParsedDataColInfo* tags, SSchema* pSchema, STag** ppTag, SArray** tagName,
SMsgBuf* msg) {
SArray* pTagArray = taosArrayInit(tags->numOfBound, sizeof(STagVal));
if (!pTagArray) {
return TSDB_CODE_TSC_OUT_OF_MEMORY;
@ -2262,7 +2274,7 @@ static int32_t smlBuildTagRow(SArray* cols, SParsedDataColInfo* tags, SSchema* p
taosArrayPush(*tagName, pTagSchema->name);
STagVal val = {.cid = pTagSchema->colId, .type = pTagSchema->type};
// strcpy(val.colName, pTagSchema->name);
// strcpy(val.colName, pTagSchema->name);
if (pTagSchema->type == TSDB_DATA_TYPE_BINARY) {
val.pData = (uint8_t*)kv->value;
val.nData = kv->length;
@ -2318,7 +2330,7 @@ int32_t smlBindData(void* handle, SArray* tags, SArray* colsSchema, SArray* cols
buildInvalidOperationMsg(&pBuf, "bound tags error");
return ret;
}
STag* pTag = NULL;
STag* pTag = NULL;
SArray* tagName = NULL;
ret = smlBuildTagRow(tags, &smlHandle->tableExecHandle.tags, pTagsSchema, &pTag, &tagName, &pBuf);
if (ret != TSDB_CODE_SUCCESS) {
@ -2402,9 +2414,9 @@ int32_t smlBindData(void* handle, SArray* tags, SArray* colsSchema, SArray* cols
} else {
int32_t colLen = kv->length;
if (pColSchema->type == TSDB_DATA_TYPE_TIMESTAMP) {
// uError("SML:data before:%ld, precision:%d", kv->i, pTableMeta->tableInfo.precision);
// uError("SML:data before:%ld, precision:%d", kv->i, pTableMeta->tableInfo.precision);
kv->i = convertTimePrecision(kv->i, TSDB_TIME_PRECISION_NANO, pTableMeta->tableInfo.precision);
// uError("SML:data after:%ld, precision:%d", kv->i, pTableMeta->tableInfo.precision);
// uError("SML:data after:%ld, precision:%d", kv->i, pTableMeta->tableInfo.precision);
}
if (IS_VAR_DATA_TYPE(kv->type)) {

View File

@ -19,6 +19,7 @@
#include "parInt.h"
#include "parUtil.h"
#include "querynodes.h"
#include "tRealloc.h"
#define IS_RAW_PAYLOAD(t) \
(((int)(t)) == PAYLOAD_TYPE_RAW) // 0: K-V payload for non-prepare insert, 1: rawPayload for prepare insert
@ -34,6 +35,32 @@ typedef struct SBlockKeyInfo {
SBlockKeyTuple* pKeyTuple;
} SBlockKeyInfo;
typedef struct {
int32_t index;
SArray* rowArray; // array of merged rows(mem allocated by tRealloc/free by tFree)
STSchema* pSchema;
int64_t tbUid; // suid for child table, uid for normal table
} SBlockRowMerger;
static FORCE_INLINE void tdResetSBlockRowMerger(SBlockRowMerger* pMerger) {
if (pMerger) {
pMerger->index = -1;
}
}
static void tdFreeSBlockRowMerger(SBlockRowMerger* pMerger) {
if (pMerger) {
int32_t size = taosArrayGetSize(pMerger->rowArray);
for (int32_t i = 0; i < size; ++i) {
tFree(*(void**)taosArrayGet(pMerger->rowArray, i));
}
taosArrayDestroy(pMerger->rowArray);
taosMemoryFreeClear(pMerger->pSchema);
taosMemoryFree(pMerger);
}
}
static int32_t rowDataCompar(const void* lhs, const void* rhs) {
TSKEY left = *(TSKEY*)lhs;
TSKEY right = *(TSKEY*)rhs;
@ -328,7 +355,7 @@ void sortRemoveDataBlockDupRowsRaw(STableDataBlocks* dataBuf) {
}
// data block is disordered, sort it in ascending order
int sortRemoveDataBlockDupRows(STableDataBlocks* dataBuf, SBlockKeyInfo* pBlkKeyInfo) {
static int sortRemoveDataBlockDupRows(STableDataBlocks* dataBuf, SBlockKeyInfo* pBlkKeyInfo) {
SSubmitBlk* pBlocks = (SSubmitBlk*)dataBuf->pData;
int16_t nRows = pBlocks->numOfRows;
@ -396,6 +423,201 @@ int sortRemoveDataBlockDupRows(STableDataBlocks* dataBuf, SBlockKeyInfo* pBlkKey
return 0;
}
static void* tdGetCurRowFromBlockMerger(SBlockRowMerger* pBlkRowMerger) {
if (pBlkRowMerger && (pBlkRowMerger->index >= 0)) {
ASSERT(pBlkRowMerger->index < taosArrayGetSize(pBlkRowMerger->rowArray));
return *(void**)taosArrayGet(pBlkRowMerger->rowArray, pBlkRowMerger->index);
}
return NULL;
}
static int32_t tdBlockRowMerge(STableMeta* pTableMeta, SBlockKeyTuple* pEndKeyTp, int32_t nDupRows,
SBlockRowMerger** pBlkRowMerger, int32_t rowSize) {
ASSERT(nDupRows > 1);
SBlockKeyTuple* pStartKeyTp = pEndKeyTp - (nDupRows - 1);
ASSERT(pStartKeyTp->skey == pEndKeyTp->skey);
// TODO: optimization if end row is all normal
#if 0
STSRow* pEndRow = (STSRow*)pEndKeyTp->payloadAddr;
if(isNormal(pEndRow)) { // set the end row if it is normal and return directly
pStartKeyTp->payloadAddr = pEndKeyTp->payloadAddr;
return TSDB_CODE_SUCCESS;
}
#endif
if (!(*pBlkRowMerger)) {
(*pBlkRowMerger) = taosMemoryCalloc(1, sizeof(**pBlkRowMerger));
if (!(*pBlkRowMerger)) {
terrno = TSDB_CODE_OUT_OF_MEMORY;
return TSDB_CODE_FAILED;
}
(*pBlkRowMerger)->index = -1;
if (!(*pBlkRowMerger)->rowArray) {
(*pBlkRowMerger)->rowArray = taosArrayInit(1, sizeof(void*));
if (!(*pBlkRowMerger)->rowArray) {
terrno = TSDB_CODE_OUT_OF_MEMORY;
return TSDB_CODE_FAILED;
}
}
}
if ((*pBlkRowMerger)->pSchema) {
if ((*pBlkRowMerger)->pSchema->version != pTableMeta->sversion) {
taosMemoryFreeClear((*pBlkRowMerger)->pSchema);
} else {
if ((*pBlkRowMerger)->tbUid != (pTableMeta->suid > 0 ? pTableMeta->suid : pTableMeta->uid)) {
taosMemoryFreeClear((*pBlkRowMerger)->pSchema);
}
}
}
if (!(*pBlkRowMerger)->pSchema) {
(*pBlkRowMerger)->pSchema =
tdGetSTSChemaFromSSChema(pTableMeta->schema, pTableMeta->tableInfo.numOfColumns, pTableMeta->sversion);
if (!(*pBlkRowMerger)->pSchema) {
terrno = TSDB_CODE_OUT_OF_MEMORY;
return TSDB_CODE_FAILED;
}
(*pBlkRowMerger)->tbUid = pTableMeta->suid > 0 ? pTableMeta->suid : pTableMeta->uid;
}
void* pDestRow = NULL;
++((*pBlkRowMerger)->index);
if ((*pBlkRowMerger)->index < taosArrayGetSize((*pBlkRowMerger)->rowArray)) {
void* pAlloc = *(void**)taosArrayGet((*pBlkRowMerger)->rowArray, (*pBlkRowMerger)->index);
if (tRealloc((uint8_t**)&pAlloc, rowSize) != 0) {
return TSDB_CODE_FAILED;
}
pDestRow = pAlloc;
} else {
if (tRealloc((uint8_t**)&pDestRow, rowSize) != 0) {
return TSDB_CODE_FAILED;
}
taosArrayPush((*pBlkRowMerger)->rowArray, &pDestRow);
}
// merge rows to pDestRow
STSchema* pSchema = (*pBlkRowMerger)->pSchema;
SArray* pArray = taosArrayInit(pSchema->numOfCols, sizeof(SColVal));
for (int32_t i = 0; i < pSchema->numOfCols; ++i) {
SColVal colVal = {0};
for (int32_t j = 0; j < nDupRows; ++j) {
tTSRowGetVal((pEndKeyTp - j)->payloadAddr, pSchema, i, &colVal);
if (!colVal.isNone) {
break;
}
}
taosArrayPush(pArray, &colVal);
}
if (tdSTSRowNew(pArray, pSchema, (STSRow**)&pDestRow) < 0) {
taosArrayDestroy(pArray);
return TSDB_CODE_FAILED;
}
taosArrayDestroy(pArray);
return TSDB_CODE_SUCCESS;
}
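// Illustrative example of the merge above: if two duplicates are written in the
// order (ts=100, c1=1, c2=NONE) then (ts=100, c1=NONE, c2=5), the merged row is
// (ts=100, c1=1, c2=5): each column keeps the newest non-NONE value.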
// data block is disordered, sort it in ascending order, and merge dup rows if exists
static int sortMergeDataBlockDupRows(STableDataBlocks* dataBuf, SBlockKeyInfo* pBlkKeyInfo,
SBlockRowMerger** ppBlkRowMerger) {
SSubmitBlk* pBlocks = (SSubmitBlk*)dataBuf->pData;
STableMeta* pTableMeta = dataBuf->pTableMeta;
int16_t nRows = pBlocks->numOfRows;
// the final block size may be smaller than the original, since duplicate rows may be merged away.
// allocate memory
size_t nAlloc = nRows * sizeof(SBlockKeyTuple);
if (pBlkKeyInfo->pKeyTuple == NULL || pBlkKeyInfo->maxBytesAlloc < nAlloc) {
char* tmp = taosMemoryRealloc(pBlkKeyInfo->pKeyTuple, nAlloc);
if (tmp == NULL) {
return TSDB_CODE_TSC_OUT_OF_MEMORY;
}
pBlkKeyInfo->pKeyTuple = (SBlockKeyTuple*)tmp;
pBlkKeyInfo->maxBytesAlloc = (int32_t)nAlloc;
}
memset(pBlkKeyInfo->pKeyTuple, 0, nAlloc);
tdResetSBlockRowMerger(*ppBlkRowMerger);
int32_t extendedRowSize = getExtendedRowSize(dataBuf);
SBlockKeyTuple* pBlkKeyTuple = pBlkKeyInfo->pKeyTuple;
char* pBlockData = pBlocks->data + pBlocks->schemaLen;
int n = 0;
while (n < nRows) {
pBlkKeyTuple->skey = TD_ROW_KEY((STSRow*)pBlockData);
pBlkKeyTuple->payloadAddr = pBlockData;
pBlkKeyTuple->index = n;
// next loop
pBlockData += extendedRowSize;
++pBlkKeyTuple;
++n;
}
if (!dataBuf->ordered) {
pBlkKeyTuple = pBlkKeyInfo->pKeyTuple;
taosSort(pBlkKeyTuple, nRows, sizeof(SBlockKeyTuple), rowDataComparStable);
pBlkKeyTuple = pBlkKeyInfo->pKeyTuple;
bool hasDup = false;
int32_t nextPos = 0;
int32_t i = 0;
int32_t j = 1;
while (j < nRows) {
TSKEY ti = (pBlkKeyTuple + i)->skey;
TSKEY tj = (pBlkKeyTuple + j)->skey;
if (ti == tj) {
++j;
continue;
}
if ((j - i) > 1) {
if (tdBlockRowMerge(pTableMeta, (pBlkKeyTuple + j - 1), j - i, ppBlkRowMerger, extendedRowSize) < 0) {
return TSDB_CODE_FAILED;
}
(pBlkKeyTuple + nextPos)->payloadAddr = tdGetCurRowFromBlockMerger(*ppBlkRowMerger);
if (!hasDup) {
hasDup = true;
}
i = j;
} else {
if (hasDup) {
memmove(pBlkKeyTuple + nextPos, pBlkKeyTuple + i, sizeof(SBlockKeyTuple));
}
++i;
}
++nextPos;
++j;
}
if ((j - i) > 1) {
ASSERT((pBlkKeyTuple + i)->skey == (pBlkKeyTuple + j - 1)->skey);
if (tdBlockRowMerge(pTableMeta, (pBlkKeyTuple + j - 1), j - i, ppBlkRowMerger, extendedRowSize) < 0) {
return TSDB_CODE_FAILED;
}
(pBlkKeyTuple + nextPos)->payloadAddr = tdGetCurRowFromBlockMerger(*ppBlkRowMerger);
} else if (hasDup) {
memmove(pBlkKeyTuple + nextPos, pBlkKeyTuple + i, sizeof(SBlockKeyTuple));
}
dataBuf->ordered = true;
pBlocks->numOfRows = nextPos + 1;
}
dataBuf->size = sizeof(SSubmitBlk) + pBlocks->numOfRows * extendedRowSize;
dataBuf->prevTS = INT64_MIN;
return TSDB_CODE_SUCCESS;
}
// Erase the empty space reserved for binary data
static int trimDataBlock(void* pDataBlock, STableDataBlocks* pTableDataBlock, SBlockKeyTuple* blkKeyTuple,
bool isRawPayload) {
@ -464,6 +686,8 @@ int32_t mergeTableDataBlocks(SHashObj* pHashObj, uint8_t payloadType, SArray** p
STableDataBlocks** p = taosHashIterate(pHashObj, NULL);
STableDataBlocks* pOneTableBlock = *p;
SBlockKeyInfo blkKeyInfo = {0}; // share by pOneTableBlock
SBlockRowMerger *pBlkRowMerger = NULL;
while (pOneTableBlock) {
SSubmitBlk* pBlocks = (SSubmitBlk*)pOneTableBlock->pData;
if (pBlocks->numOfRows > 0) {
@ -473,6 +697,7 @@ int32_t mergeTableDataBlocks(SHashObj* pHashObj, uint8_t payloadType, SArray** p
getDataBlockFromList(pVnodeDataBlockHashList, &pOneTableBlock->vgId, sizeof(pOneTableBlock->vgId), TSDB_PAYLOAD_SIZE, INSERT_HEAD_SIZE, 0,
pOneTableBlock->pTableMeta, &dataBuf, pVnodeDataBlockList, NULL);
if (ret != TSDB_CODE_SUCCESS) {
tdFreeSBlockRowMerger(pBlkRowMerger);
taosHashCleanup(pVnodeDataBlockHashList);
destroyBlockArrayList(pVnodeDataBlockList);
taosMemoryFreeClear(blkKeyInfo.pKeyTuple);
@ -490,6 +715,7 @@ int32_t mergeTableDataBlocks(SHashObj* pHashObj, uint8_t payloadType, SArray** p
if (tmp != NULL) {
dataBuf->pData = tmp;
} else { // failed to allocate memory, free already allocated memory and return error code
tdFreeSBlockRowMerger(pBlkRowMerger);
taosHashCleanup(pVnodeDataBlockHashList);
destroyBlockArrayList(pVnodeDataBlockList);
taosMemoryFreeClear(dataBuf->pData);
@ -501,7 +727,8 @@ int32_t mergeTableDataBlocks(SHashObj* pHashObj, uint8_t payloadType, SArray** p
if (isRawPayload) {
sortRemoveDataBlockDupRowsRaw(pOneTableBlock);
} else {
if ((code = sortRemoveDataBlockDupRows(pOneTableBlock, &blkKeyInfo)) != 0) {
if ((code = sortMergeDataBlockDupRows(pOneTableBlock, &blkKeyInfo, &pBlkRowMerger)) != 0) {
tdFreeSBlockRowMerger(pBlkRowMerger);
taosHashCleanup(pVnodeDataBlockHashList);
destroyBlockArrayList(pVnodeDataBlockList);
taosMemoryFreeClear(dataBuf->pData);
@ -529,6 +756,7 @@ int32_t mergeTableDataBlocks(SHashObj* pHashObj, uint8_t payloadType, SArray** p
}
// free the table data blocks;
tdFreeSBlockRowMerger(pBlkRowMerger);
taosHashCleanup(pVnodeDataBlockHashList);
taosMemoryFreeClear(blkKeyInfo.pKeyTuple);
*pVgDataBlocks = pVnodeDataBlockList;
@ -678,6 +906,7 @@ void qFreeStmtDataBlock(void* pDataBlock) {
return;
}
taosMemoryFreeClear(((STableDataBlocks*)pDataBlock)->pTableMeta);
taosMemoryFreeClear(((STableDataBlocks*)pDataBlock)->pData);
taosMemoryFreeClear(pDataBlock);
}

View File

@ -1089,7 +1089,7 @@ static int32_t translateScanPseudoColumnFunc(STranslateContext* pCxt, SFunctionN
return TSDB_CODE_SUCCESS;
}
if (0 == LIST_LENGTH(pFunc->pParameterList)) {
if (!isSelectStmt(pCxt->pCurrStmt) ||
if (!isSelectStmt(pCxt->pCurrStmt) || NULL == ((SSelectStmt*)pCxt->pCurrStmt)->pFromTable ||
QUERY_NODE_REAL_TABLE != nodeType(((SSelectStmt*)pCxt->pCurrStmt)->pFromTable)) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_TBNAME);
}

View File

@ -866,13 +866,15 @@ STableCfg* tableCfgDup(STableCfg* pCfg) {
memcpy(pNew, pCfg, sizeof(*pNew));
if (NULL != pNew->pComment) {
pNew->pComment = strdup(pNew->pComment);
pNew->pComment = taosMemoryCalloc(pNew->commentLen + 1, 1);
memcpy(pNew->pComment, pCfg->pComment, pNew->commentLen);
}
if (NULL != pNew->pFuncs) {
pNew->pFuncs = taosArrayDup(pNew->pFuncs);
}
if (NULL != pNew->pTags) {
pNew->pTags = strdup(pNew->pTags);
pNew->pTags = taosMemoryCalloc(pNew->tagsLen + 1, 1);
memcpy(pNew->pTags, pCfg->pTags, pNew->tagsLen);
}
int32_t schemaSize = (pCfg->numOfColumns + pCfg->numOfTags) * sizeof(SSchema);

View File

@ -36,7 +36,7 @@ bool qIsInsertValuesSql(const char* pStr, size_t length) {
pStr += index;
index = 0;
t = tStrGetToken((char*)pStr, &index, false);
if (TK_USING == t.type || TK_VALUES == t.type) {
if (TK_USING == t.type || TK_VALUES == t.type || TK_FILE == t.type) {
return true;
} else if (TK_SELECT == t.type) {
return false;
@ -82,11 +82,16 @@ static int32_t parseSqlSyntax(SParseContext* pCxt, SQuery** pQuery, SParseMetaCa
}
static int32_t setValueByBindParam(SValueNode* pVal, TAOS_MULTI_BIND* pParam) {
if (IS_VAR_DATA_TYPE(pVal->node.resType.type)) {
taosMemoryFreeClear(pVal->datum.p);
}
if (pParam->is_null && 1 == *(pParam->is_null)) {
pVal->node.resType.type = TSDB_DATA_TYPE_NULL;
pVal->node.resType.bytes = tDataTypes[TSDB_DATA_TYPE_NULL].bytes;
return TSDB_CODE_SUCCESS;
}
int32_t inputSize = (NULL != pParam->length ? *(pParam->length) : tDataTypes[pParam->buffer_type].bytes);
pVal->node.resType.type = pParam->buffer_type;
pVal->node.resType.bytes = inputSize;

View File

@ -444,4 +444,11 @@ TEST_F(ParserSelectTest, withoutFrom) {
run("SELECT USER()");
}
TEST_F(ParserSelectTest, withoutFromSemanticCheck) {
useDb("root", "test");
run("SELECT c1", TSDB_CODE_PAR_INVALID_COLUMN);
run("SELECT TBNAME", TSDB_CODE_PAR_INVALID_TBNAME);
}
} // namespace ParserTest

View File

@ -35,6 +35,7 @@ int32_t generateUsageErrMsg(char* pBuf, int32_t len, int32_t errCode, ...);
int32_t createColumnByRewriteExprs(SNodeList* pExprs, SNodeList** pList);
int32_t createColumnByRewriteExpr(SNode* pExpr, SNodeList** pList);
int32_t replaceLogicNode(SLogicSubplan* pSubplan, SLogicNode* pOld, SLogicNode* pNew);
int32_t adjustLogicNodeDataRequirement(SLogicNode* pNode, EDataOrderLevel requirement);
int32_t createLogicPlan(SPlanContext* pCxt, SLogicSubplan** pLogicSubplan);
int32_t optimizeLogicPlan(SPlanContext* pCxt, SLogicSubplan* pLogicSubplan);

View File

@ -250,6 +250,9 @@ static int32_t createScanLogicNode(SLogicPlanContext* pCxt, SSelectStmt* pSelect
SScanLogicNode* pScan = NULL;
int32_t code = makeScanLogicNode(pCxt, pRealTable, pSelect->hasRepeatScanFuncs, (SLogicNode**)&pScan);
pScan->node.groupAction = GROUP_ACTION_NONE;
pScan->node.resultDataOrder = DATA_ORDER_LEVEL_IN_BLOCK;
// set columns to scan
if (TSDB_CODE_SUCCESS == code) {
code = nodesCollectColumns(pSelect, SQL_CLAUSE_FROM, pRealTable->table.tableAlias, COLLECT_COL_TYPE_COL,
@ -336,6 +339,9 @@ static int32_t createJoinLogicNode(SLogicPlanContext* pCxt, SSelectStmt* pSelect
pJoin->joinType = pJoinTable->joinType;
pJoin->isSingleTableJoin = pJoinTable->table.singleTable;
pJoin->node.groupAction = GROUP_ACTION_CLEAR;
pJoin->node.requireDataOrder = DATA_ORDER_LEVEL_GLOBAL;
pJoin->node.resultDataOrder = DATA_ORDER_LEVEL_GLOBAL;
int32_t code = TSDB_CODE_SUCCESS;
@ -472,6 +478,9 @@ static int32_t createAggLogicNode(SLogicPlanContext* pCxt, SSelectStmt* pSelect,
}
pAgg->hasLastRow = pSelect->hasLastRowFunc;
pAgg->node.groupAction = GROUP_ACTION_SET;
pAgg->node.requireDataOrder = DATA_ORDER_LEVEL_NONE;
pAgg->node.resultDataOrder = DATA_ORDER_LEVEL_NONE;
int32_t code = TSDB_CODE_SUCCESS;
@ -540,6 +549,10 @@ static int32_t createIndefRowsFuncLogicNode(SLogicPlanContext* pCxt, SSelectStmt
pIdfRowsFunc->isTailFunc = pSelect->hasTailFunc;
pIdfRowsFunc->isUniqueFunc = pSelect->hasUniqueFunc;
pIdfRowsFunc->isTimeLineFunc = pSelect->hasTimeLineFunc;
pIdfRowsFunc->node.groupAction = GROUP_ACTION_KEEP;
pIdfRowsFunc->node.requireDataOrder =
pIdfRowsFunc->isTimeLineFunc ? DATA_ORDER_LEVEL_IN_GROUP : DATA_ORDER_LEVEL_NONE;
pIdfRowsFunc->node.resultDataOrder = pIdfRowsFunc->node.requireDataOrder;
// indefinite rows functions and _select_values functions
int32_t code = nodesCollectFuncs(pSelect, SQL_CLAUSE_SELECT, fmIsVectorFunc, &pIdfRowsFunc->pFuncs);
@ -571,6 +584,10 @@ static int32_t createInterpFuncLogicNode(SLogicPlanContext* pCxt, SSelectStmt* p
return TSDB_CODE_OUT_OF_MEMORY;
}
pInterpFunc->node.groupAction = GROUP_ACTION_KEEP;
pInterpFunc->node.requireDataOrder = DATA_ORDER_LEVEL_IN_GROUP;
pInterpFunc->node.resultDataOrder = pInterpFunc->node.requireDataOrder;
int32_t code = nodesCollectFuncs(pSelect, SQL_CLAUSE_SELECT, fmIsInterpFunc, &pInterpFunc->pFuncs);
if (TSDB_CODE_SUCCESS == code) {
code = rewriteExprsForSelect(pInterpFunc->pFuncs, pSelect, SQL_CLAUSE_SELECT);
@ -642,10 +659,12 @@ static int32_t createWindowLogicNodeByState(SLogicPlanContext* pCxt, SStateWindo
}
pWindow->winType = WINDOW_TYPE_STATE;
pWindow->node.groupAction = GROUP_ACTION_KEEP;
pWindow->node.requireDataOrder = pCxt->pPlanCxt->streamQuery ? DATA_ORDER_LEVEL_IN_BLOCK : DATA_ORDER_LEVEL_IN_GROUP;
pWindow->node.resultDataOrder = DATA_ORDER_LEVEL_IN_GROUP;
pWindow->pStateExpr = nodesCloneNode(pState->pExpr);
pWindow->pTspk = nodesCloneNode(pState->pCol);
if (NULL == pWindow->pTspk) {
if (NULL == pWindow->pStateExpr || NULL == pWindow->pTspk) {
nodesDestroyNode((SNode*)pWindow);
return TSDB_CODE_OUT_OF_MEMORY;
}
@ -663,6 +682,9 @@ static int32_t createWindowLogicNodeBySession(SLogicPlanContext* pCxt, SSessionW
pWindow->winType = WINDOW_TYPE_SESSION;
pWindow->sessionGap = ((SValueNode*)pSession->pGap)->datum.i;
pWindow->windowAlgo = pCxt->pPlanCxt->streamQuery ? SESSION_ALGO_STREAM_SINGLE : SESSION_ALGO_MERGE;
pWindow->node.groupAction = GROUP_ACTION_KEEP;
pWindow->node.requireDataOrder = pCxt->pPlanCxt->streamQuery ? DATA_ORDER_LEVEL_IN_BLOCK : DATA_ORDER_LEVEL_IN_GROUP;
pWindow->node.resultDataOrder = DATA_ORDER_LEVEL_IN_GROUP;
pWindow->pTspk = nodesCloneNode((SNode*)pSession->pCol);
if (NULL == pWindow->pTspk) {
@ -689,6 +711,9 @@ static int32_t createWindowLogicNodeByInterval(SLogicPlanContext* pCxt, SInterva
pWindow->slidingUnit =
(NULL != pInterval->pSliding ? ((SValueNode*)pInterval->pSliding)->unit : pWindow->intervalUnit);
pWindow->windowAlgo = pCxt->pPlanCxt->streamQuery ? INTERVAL_ALGO_STREAM_SINGLE : INTERVAL_ALGO_HASH;
pWindow->node.groupAction = GROUP_ACTION_KEEP;
pWindow->node.requireDataOrder = DATA_ORDER_LEVEL_IN_BLOCK;
pWindow->node.resultDataOrder = DATA_ORDER_LEVEL_IN_GROUP;
pWindow->pTspk = nodesCloneNode(pInterval->pCol);
if (NULL == pWindow->pTspk) {
@ -734,6 +759,10 @@ static int32_t createFillLogicNode(SLogicPlanContext* pCxt, SSelectStmt* pSelect
return TSDB_CODE_OUT_OF_MEMORY;
}
pFill->node.groupAction = GROUP_ACTION_KEEP;
pFill->node.requireDataOrder = DATA_ORDER_LEVEL_IN_GROUP;
pFill->node.resultDataOrder = DATA_ORDER_LEVEL_IN_GROUP;
int32_t code = nodesCollectColumns(pSelect, SQL_CLAUSE_WINDOW, NULL, COLLECT_COL_TYPE_ALL, &pFill->node.pTargets);
if (TSDB_CODE_SUCCESS == code && NULL == pFill->node.pTargets) {
code = nodesListMakeStrictAppend(&pFill->node.pTargets,
@ -768,6 +797,9 @@ static int32_t createSortLogicNode(SLogicPlanContext* pCxt, SSelectStmt* pSelect
}
pSort->groupSort = pSelect->groupSort;
pSort->node.groupAction = pSort->groupSort ? GROUP_ACTION_KEEP : GROUP_ACTION_CLEAR;
pSort->node.requireDataOrder = DATA_ORDER_LEVEL_NONE;
pSort->node.resultDataOrder = pSort->groupSort ? DATA_ORDER_LEVEL_IN_GROUP : DATA_ORDER_LEVEL_GLOBAL;
int32_t code = nodesCollectColumns(pSelect, SQL_CLAUSE_ORDER_BY, NULL, COLLECT_COL_TYPE_ALL, &pSort->node.pTargets);
if (TSDB_CODE_SUCCESS == code && NULL == pSort->node.pTargets) {
@ -818,6 +850,9 @@ static int32_t createProjectLogicNode(SLogicPlanContext* pCxt, SSelectStmt* pSel
TSWAP(pProject->node.pLimit, pSelect->pLimit);
TSWAP(pProject->node.pSlimit, pSelect->pSlimit);
pProject->node.groupAction = GROUP_ACTION_CLEAR;
pProject->node.requireDataOrder = DATA_ORDER_LEVEL_NONE;
pProject->node.resultDataOrder = DATA_ORDER_LEVEL_NONE;
int32_t code = TSDB_CODE_SUCCESS;
@ -850,6 +885,10 @@ static int32_t createPartitionLogicNode(SLogicPlanContext* pCxt, SSelectStmt* pS
return TSDB_CODE_OUT_OF_MEMORY;
}
pPartition->node.groupAction = GROUP_ACTION_SET;
pPartition->node.requireDataOrder = DATA_ORDER_LEVEL_NONE;
pPartition->node.resultDataOrder = DATA_ORDER_LEVEL_NONE;
int32_t code =
nodesCollectColumns(pSelect, SQL_CLAUSE_PARTITION_BY, NULL, COLLECT_COL_TYPE_ALL, &pPartition->node.pTargets);
if (TSDB_CODE_SUCCESS == code && NULL == pPartition->node.pTargets) {
@ -882,11 +921,23 @@ static int32_t createDistinctLogicNode(SLogicPlanContext* pCxt, SSelectStmt* pSe
return TSDB_CODE_OUT_OF_MEMORY;
}
pAgg->node.groupAction = GROUP_ACTION_SET;
pAgg->node.requireDataOrder = DATA_ORDER_LEVEL_NONE;
pAgg->node.resultDataOrder = DATA_ORDER_LEVEL_NONE;
int32_t code = TSDB_CODE_SUCCESS;
// set group keys, agg funcs and having conditions
pAgg->pGroupKeys = nodesCloneList(pSelect->pProjectionList);
if (NULL == pAgg->pGroupKeys) {
code = TSDB_CODE_OUT_OF_MEMORY;
SNodeList* pGroupKeys = NULL;
SNode* pProjection = NULL;
FOREACH(pProjection, pSelect->pProjectionList) {
code = nodesListMakeStrictAppend(&pGroupKeys, createGroupingSetNode(pProjection));
if (TSDB_CODE_SUCCESS != code) {
nodesDestroyList(pGroupKeys);
break;
}
}
if (TSDB_CODE_SUCCESS == code) {
pAgg->pGroupKeys = pGroupKeys;
}
// rewrite the expression in subsequent clauses
@ -1361,6 +1412,7 @@ int32_t createLogicPlan(SPlanContext* pCxt, SLogicSubplan** pLogicSubplan) {
if (TSDB_CODE_SUCCESS == code) {
setLogicNodeParent(pSubplan->pNode);
setLogicSubplanType(cxt.hasScan, pSubplan);
code = adjustLogicNodeDataRequirement(pSubplan->pNode, DATA_ORDER_LEVEL_NONE);
}
if (TSDB_CODE_SUCCESS == code) {

View File

@ -1579,6 +1579,34 @@ static bool eliminateProjOptMayBeOptimized(SLogicNode* pNode) {
return eliminateProjOptCheckProjColumnNames(pProjectNode);
}
typedef struct CheckNewChildTargetsCxt {
SNodeList* pNewChildTargets;
bool canUse;
} CheckNewChildTargetsCxt;
static EDealRes eliminateProjOptCanUseNewChildTargetsImpl(SNode* pNode, void* pContext) {
  if (QUERY_NODE_COLUMN == nodeType(pNode)) {
    CheckNewChildTargetsCxt* pCxt = pContext;
    SNode* pTarget = NULL;
    FOREACH(pTarget, pCxt->pNewChildTargets) {
      if (nodesEqualNode(pTarget, pNode)) {
        return DEAL_RES_CONTINUE;
      }
    }
    // the column referenced by the child's conditions no longer appears in the new target list
    pCxt->canUse = false;
    return DEAL_RES_END;
  }
  return DEAL_RES_CONTINUE;
}
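// The new, trimmed target list can be adopted (and the projection eliminated) only
// if every column referenced by the child's filter conditions is still present in it.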
static bool eliminateProjOptCanUseNewChildTargets(SLogicNode* pChild, SNodeList* pNewChildTargets) {
if (NULL == pChild->pConditions) {
return true;
}
CheckNewChildTargetsCxt cxt = {.pNewChildTargets = pNewChildTargets, .canUse = true};
nodesWalkExpr(pChild->pConditions, eliminateProjOptCanUseNewChildTargetsImpl, &cxt);
return cxt.canUse;
}
static int32_t eliminateProjOptimizeImpl(SOptimizeContext* pCxt, SLogicSubplan* pLogicSubplan,
SProjectLogicNode* pProjectNode) {
SLogicNode* pChild = (SLogicNode*)nodesListGetNode(pProjectNode->node.pChildren, 0);
@ -1588,14 +1616,19 @@ static int32_t eliminateProjOptimizeImpl(SOptimizeContext* pCxt, SLogicSubplan*
FOREACH(pProjection, pProjectNode->pProjections) {
SNode* pChildTarget = NULL;
FOREACH(pChildTarget, pChild->pTargets) {
if (strcmp(((SColumnNode*)pProjection)->colName, ((SColumnNode*)pChildTarget)->colName) == 0) {
if (0 == strcmp(((SColumnNode*)pProjection)->colName, ((SColumnNode*)pChildTarget)->colName)) {
nodesListAppend(pNewChildTargets, nodesCloneNode(pChildTarget));
break;
}
}
}
nodesDestroyList(pChild->pTargets);
pChild->pTargets = pNewChildTargets;
if (eliminateProjOptCanUseNewChildTargets(pChild, pNewChildTargets)) {
nodesDestroyList(pChild->pTargets);
pChild->pTargets = pNewChildTargets;
} else {
nodesDestroyList(pNewChildTargets);
return TSDB_CODE_SUCCESS;
}
int32_t code = replaceLogicNode(pLogicSubplan, (SLogicNode*)pProjectNode, pChild);
if (TSDB_CODE_SUCCESS == code) {
@ -1873,6 +1906,8 @@ static int32_t rewriteUniqueOptCreateAgg(SIndefRowsFuncLogicNode* pIndef, SLogic
TSWAP(pAgg->node.pChildren, pIndef->node.pChildren);
optResetParent((SLogicNode*)pAgg);
pAgg->node.precision = pIndef->node.precision;
pAgg->node.requireDataOrder = DATA_ORDER_LEVEL_IN_BLOCK; // first function requirement
pAgg->node.resultDataOrder = DATA_ORDER_LEVEL_NONE;
int32_t code = TSDB_CODE_SUCCESS;
bool hasSelectPrimaryKey = false;
@ -1945,6 +1980,8 @@ static int32_t rewriteUniqueOptCreateProject(SIndefRowsFuncLogicNode* pIndef, SL
TSWAP(pProject->node.pTargets, pIndef->node.pTargets);
pProject->node.precision = pIndef->node.precision;
pProject->node.requireDataOrder = DATA_ORDER_LEVEL_NONE;
pProject->node.resultDataOrder = DATA_ORDER_LEVEL_NONE;
int32_t code = TSDB_CODE_SUCCESS;
SNode* pNode = NULL;
@ -1973,12 +2010,17 @@ static int32_t rewriteUniqueOptimizeImpl(SOptimizeContext* pCxt, SLogicSubplan*
}
if (TSDB_CODE_SUCCESS == code) {
code = nodesListMakeAppend(&pProject->pChildren, (SNode*)pAgg);
pAgg->pParent = pProject;
pAgg = NULL;
}
if (TSDB_CODE_SUCCESS == code) {
pAgg->pParent = pProject;
pAgg = NULL;
code = replaceLogicNode(pLogicSubplan, (SLogicNode*)pIndef, pProject);
}
if (TSDB_CODE_SUCCESS == code) {
code = adjustLogicNodeDataRequirement(
pProject, NULL == pProject->pParent ? DATA_ORDER_LEVEL_NONE : pProject->pParent->requireDataOrder);
pProject = NULL;
}
if (TSDB_CODE_SUCCESS == code) {
nodesDestroyNode((SNode*)pIndef);
} else {
@ -2145,11 +2187,20 @@ static bool tagScanMayBeOptimized(SLogicNode* pNode) {
}
SAggLogicNode* pAgg = (SAggLogicNode*)(pNode->pParent);
if (NULL == pAgg->pGroupKeys || NULL != pAgg->pAggFuncs ||
planOptNodeListHasCol(pAgg->pGroupKeys) || !planOptNodeListHasTbname(pAgg->pGroupKeys)) {
if (NULL == pAgg->pGroupKeys || NULL != pAgg->pAggFuncs || planOptNodeListHasCol(pAgg->pGroupKeys) ||
!planOptNodeListHasTbname(pAgg->pGroupKeys)) {
return false;
}
SNode* pGroupKey = NULL;
FOREACH(pGroupKey, pAgg->pGroupKeys) {
SNode* pGroup = NULL;
FOREACH(pGroup, ((SGroupingSetNode*)pGroupKey)->pParameterList) {
if (QUERY_NODE_COLUMN != nodeType(pGroup)) {
return false;
}
}
}
return true;
}
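// A query shape that passes the checks above (illustrative example, assuming a
// super table named st1): SELECT TBNAME FROM st1 GROUP BY TBNAME, i.e. the group
// keys hold only tbname, with no aggregate functions and no normal columns.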
@ -2162,15 +2213,32 @@ static int32_t tagScanOptimize(SOptimizeContext* pCxt, SLogicSubplan* pLogicSubp
pScanNode->scanType = SCAN_TYPE_TAG;
SNode* pTarget = NULL;
FOREACH(pTarget, pScanNode->node.pTargets) {
if (PRIMARYKEY_TIMESTAMP_COL_ID == ((SColumnNode*)(pTarget))->colId) {
ERASE_NODE(pScanNode->node.pTargets);
break;
}
if (PRIMARYKEY_TIMESTAMP_COL_ID == ((SColumnNode*)(pTarget))->colId) {
ERASE_NODE(pScanNode->node.pTargets);
break;
}
}
NODES_DESTORY_LIST(pScanNode->pScanCols);
SLogicNode* pAgg = pScanNode->node.pParent;
if (NULL == pAgg->pParent) {
SNodeList* pScanTargets = nodesMakeList();
SNode* pAggTarget = NULL;
FOREACH(pAggTarget, pAgg->pTargets) {
SNode* pScanTarget = NULL;
FOREACH(pScanTarget, pScanNode->node.pTargets) {
if (0 == strcmp(((SColumnNode*)pAggTarget)->colName, ((SColumnNode*)pScanTarget)->colName)) {
nodesListAppend(pScanTargets, nodesCloneNode(pScanTarget));
break;
}
}
}
nodesDestroyList(pScanNode->node.pTargets);
pScanNode->node.pTargets = pScanTargets;
}
int32_t code = replaceLogicNode(pLogicSubplan, pAgg, (SLogicNode*)pScanNode);
if (TSDB_CODE_SUCCESS == code) {
NODES_CLEAR_LIST(pAgg->pChildren);

View File

@ -974,6 +974,17 @@ static int32_t createInterpFuncPhysiNode(SPhysiPlanContext* pCxt, SNodeList* pCh
return code;
}
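// A project node may merge its input data blocks only when it does not need to
// preserve any output order, or when its single child already delivers globally
// ordered data.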
static bool projectCanMergeDataBlock(SProjectLogicNode* pProject) {
if (DATA_ORDER_LEVEL_NONE == pProject->node.resultDataOrder) {
return true;
}
if (1 != LIST_LENGTH(pProject->node.pChildren)) {
return false;
}
SLogicNode* pChild = (SLogicNode*)nodesListGetNode(pProject->node.pChildren, 0);
return DATA_ORDER_LEVEL_GLOBAL == pChild->resultDataOrder;
}
static int32_t createProjectPhysiNode(SPhysiPlanContext* pCxt, SNodeList* pChildren,
SProjectLogicNode* pProjectLogicNode, SPhysiNode** pPhyNode) {
SProjectPhysiNode* pProject =
@ -982,6 +993,8 @@ static int32_t createProjectPhysiNode(SPhysiPlanContext* pCxt, SNodeList* pChild
return TSDB_CODE_OUT_OF_MEMORY;
}
pProject->mergeDataBlock = projectCanMergeDataBlock(pProjectLogicNode);
int32_t code = TSDB_CODE_SUCCESS;
if (0 == LIST_LENGTH(pChildren)) {
pProject->pProjections = nodesCloneList(pProjectLogicNode->pProjections);

View File

@ -657,6 +657,9 @@ static int32_t stbSplSplitWindowForPartTable(SSplitContext* pCxt, SStableSplitIn
return TSDB_CODE_SUCCESS;
}
if (NULL != pInfo->pSplitNode->pParent && QUERY_NODE_LOGIC_PLAN_FILL == nodeType(pInfo->pSplitNode->pParent)) {
pInfo->pSplitNode = pInfo->pSplitNode->pParent;
}
SExchangeLogicNode* pExchange = NULL;
int32_t code = splCreateExchangeNode(pCxt, pInfo->pSplitNode, &pExchange);
if (TSDB_CODE_SUCCESS == code) {

View File

@ -13,6 +13,7 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include "functionMgt.h"
#include "planInt.h"
static char* getUsageErrFormat(int32_t errCode) {
@ -121,3 +122,185 @@ int32_t replaceLogicNode(SLogicSubplan* pSubplan, SLogicNode* pOld, SLogicNode*
}
return TSDB_CODE_PLAN_INTERNAL_ERROR;
}
static int32_t adjustScanDataRequirement(SScanLogicNode* pScan, EDataOrderLevel requirement) {
if (SCAN_TYPE_TABLE != pScan->scanType && SCAN_TYPE_TABLE_MERGE != pScan->scanType) {
return TSDB_CODE_SUCCESS;
}
// The lowest sort level of scan output data is DATA_ORDER_LEVEL_IN_BLOCK
if (requirement < DATA_ORDER_LEVEL_IN_BLOCK) {
requirement = DATA_ORDER_LEVEL_IN_BLOCK;
}
if (DATA_ORDER_LEVEL_IN_BLOCK == requirement) {
pScan->scanType = SCAN_TYPE_TABLE;
} else if (TSDB_SUPER_TABLE == pScan->tableType) {
pScan->scanType = SCAN_TYPE_TABLE_MERGE;
}
pScan->node.resultDataOrder = requirement;
return TSDB_CODE_SUCCESS;
}
static int32_t adjustJoinDataRequirement(SJoinLogicNode* pJoin, EDataOrderLevel requirement) {
// The lowest sort level of join input and output data is DATA_ORDER_LEVEL_GLOBAL
return TSDB_CODE_SUCCESS;
}
static bool isKeepOrderAggFunc(SNodeList* pFuncs) {
SNode* pFunc = NULL;
FOREACH(pFunc, pFuncs) {
if (!fmIsKeepOrderFunc(((SFunctionNode*)pFunc)->funcId)) {
return false;
}
}
return true;
}
static int32_t adjustAggDataRequirement(SAggLogicNode* pAgg, EDataOrderLevel requirement) {
// The sort level of agg with group by output data can only be DATA_ORDER_LEVEL_NONE
if (requirement > DATA_ORDER_LEVEL_NONE && (NULL != pAgg->pGroupKeys || !isKeepOrderAggFunc(pAgg->pAggFuncs))) {
return TSDB_CODE_PLAN_INTERNAL_ERROR;
}
pAgg->node.resultDataOrder = requirement;
pAgg->node.requireDataOrder = requirement;
return TSDB_CODE_SUCCESS;
}
static int32_t adjustProjectDataRequirement(SProjectLogicNode* pProject, EDataOrderLevel requirement) {
pProject->node.resultDataOrder = requirement;
pProject->node.requireDataOrder = requirement;
return TSDB_CODE_SUCCESS;
}
static int32_t adjustIntervalDataRequirement(SWindowLogicNode* pWindow, EDataOrderLevel requirement) {
// The lowest sort level of interval output data is DATA_ORDER_LEVEL_IN_GROUP
if (requirement < DATA_ORDER_LEVEL_IN_GROUP) {
requirement = DATA_ORDER_LEVEL_IN_GROUP;
}
// The sort level of interval input data is always DATA_ORDER_LEVEL_IN_BLOCK
pWindow->node.resultDataOrder = requirement;
return TSDB_CODE_SUCCESS;
}
static int32_t adjustSessionDataRequirement(SWindowLogicNode* pWindow, EDataOrderLevel requirement) {
if (requirement <= pWindow->node.resultDataOrder) {
return TSDB_CODE_SUCCESS;
}
pWindow->node.resultDataOrder = requirement;
pWindow->node.requireDataOrder = requirement;
return TSDB_CODE_SUCCESS;
}
static int32_t adjustStateDataRequirement(SWindowLogicNode* pWindow, EDataOrderLevel requirement) {
if (requirement <= pWindow->node.resultDataOrder) {
return TSDB_CODE_SUCCESS;
}
pWindow->node.resultDataOrder = requirement;
pWindow->node.requireDataOrder = requirement;
return TSDB_CODE_SUCCESS;
}
static int32_t adjustWindowDataRequirement(SWindowLogicNode* pWindow, EDataOrderLevel requirement) {
switch (pWindow->winType) {
case WINDOW_TYPE_INTERVAL:
return adjustIntervalDataRequirement(pWindow, requirement);
case WINDOW_TYPE_SESSION:
return adjustSessionDataRequirement(pWindow, requirement);
case WINDOW_TYPE_STATE:
return adjustStateDataRequirement(pWindow, requirement);
default:
break;
}
return TSDB_CODE_PLAN_INTERNAL_ERROR;
}
static int32_t adjustFillDataRequirement(SFillLogicNode* pFill, EDataOrderLevel requirement) {
if (requirement <= pFill->node.requireDataOrder) {
return TSDB_CODE_SUCCESS;
}
pFill->node.resultDataOrder = requirement;
pFill->node.requireDataOrder = requirement;
return TSDB_CODE_SUCCESS;
}
static int32_t adjustSortDataRequirement(SSortLogicNode* pSort, EDataOrderLevel requirement) {
return TSDB_CODE_SUCCESS;
}
static int32_t adjustPartitionDataRequirement(SPartitionLogicNode* pPart, EDataOrderLevel requirement) {
if (DATA_ORDER_LEVEL_GLOBAL == requirement) {
return TSDB_CODE_PLAN_INTERNAL_ERROR;
}
pPart->node.resultDataOrder = requirement;
pPart->node.requireDataOrder = requirement;
return TSDB_CODE_SUCCESS;
}
static int32_t adjustIndefRowsDataRequirement(SIndefRowsFuncLogicNode* pIndef, EDataOrderLevel requirement) {
if (requirement <= pIndef->node.resultDataOrder) {
return TSDB_CODE_SUCCESS;
}
pIndef->node.resultDataOrder = requirement;
pIndef->node.requireDataOrder = requirement;
return TSDB_CODE_SUCCESS;
}
static int32_t adjustInterpDataRequirement(SInterpFuncLogicNode* pInterp, EDataOrderLevel requirement) {
if (requirement <= pInterp->node.requireDataOrder) {
return TSDB_CODE_SUCCESS;
}
pInterp->node.resultDataOrder = requirement;
pInterp->node.requireDataOrder = requirement;
return TSDB_CODE_SUCCESS;
}
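// Top-down pass over the logic plan: each node reconciles the order level required
// by its parent with its own capabilities, then propagates its own requireDataOrder
// to its children.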
int32_t adjustLogicNodeDataRequirement(SLogicNode* pNode, EDataOrderLevel requirement) {
int32_t code = TSDB_CODE_SUCCESS;
switch (nodeType(pNode)) {
case QUERY_NODE_LOGIC_PLAN_SCAN:
code = adjustScanDataRequirement((SScanLogicNode*)pNode, requirement);
break;
case QUERY_NODE_LOGIC_PLAN_JOIN:
code = adjustJoinDataRequirement((SJoinLogicNode*)pNode, requirement);
break;
case QUERY_NODE_LOGIC_PLAN_AGG:
code = adjustAggDataRequirement((SAggLogicNode*)pNode, requirement);
break;
case QUERY_NODE_LOGIC_PLAN_PROJECT:
code = adjustProjectDataRequirement((SProjectLogicNode*)pNode, requirement);
break;
case QUERY_NODE_LOGIC_PLAN_VNODE_MODIFY:
case QUERY_NODE_LOGIC_PLAN_EXCHANGE:
case QUERY_NODE_LOGIC_PLAN_MERGE:
break;
case QUERY_NODE_LOGIC_PLAN_WINDOW:
code = adjustWindowDataRequirement((SWindowLogicNode*)pNode, requirement);
break;
case QUERY_NODE_LOGIC_PLAN_FILL:
code = adjustFillDataRequirement((SFillLogicNode*)pNode, requirement);
break;
case QUERY_NODE_LOGIC_PLAN_SORT:
code = adjustSortDataRequirement((SSortLogicNode*)pNode, requirement);
break;
case QUERY_NODE_LOGIC_PLAN_PARTITION:
code = adjustPartitionDataRequirement((SPartitionLogicNode*)pNode, requirement);
break;
case QUERY_NODE_LOGIC_PLAN_INDEF_ROWS_FUNC:
code = adjustIndefRowsDataRequirement((SIndefRowsFuncLogicNode*)pNode, requirement);
break;
case QUERY_NODE_LOGIC_PLAN_INTERP_FUNC:
code = adjustInterpDataRequirement((SInterpFuncLogicNode*)pNode, requirement);
break;
default:
break;
}
if (TSDB_CODE_SUCCESS == code) {
SNode* pChild = NULL;
FOREACH(pChild, pNode->pChildren) {
code = adjustLogicNodeDataRequirement((SLogicNode*)pChild, pNode->requireDataOrder);
if (TSDB_CODE_SUCCESS != code) {
break;
}
}
}
return code;
}

View File

@ -38,9 +38,15 @@ TEST_F(PlanIntervalTest, fill) {
run("SELECT COUNT(*) FROM t1 WHERE ts > TIMESTAMP '2022-04-01 00:00:00' and ts < TIMESTAMP '2022-04-30 23:59:59' "
"INTERVAL(10s) FILL(LINEAR)");
run("SELECT COUNT(*) FROM st1 WHERE ts > TIMESTAMP '2022-04-01 00:00:00' and ts < TIMESTAMP '2022-04-30 23:59:59' "
"INTERVAL(10s) FILL(LINEAR)");
run("SELECT COUNT(*), SUM(c1) FROM t1 "
"WHERE ts > TIMESTAMP '2022-04-01 00:00:00' and ts < TIMESTAMP '2022-04-30 23:59:59' "
"INTERVAL(10s) FILL(VALUE, 10, 20)");
run("SELECT COUNT(*) FROM st1 WHERE ts > TIMESTAMP '2022-04-01 00:00:00' and ts < TIMESTAMP '2022-04-30 23:59:59' "
"PARTITION BY TBNAME interval(10s) fill(prev)");
}
TEST_F(PlanIntervalTest, selectFunc) {

View File

@ -48,10 +48,28 @@ TEST_F(PlanSubqeuryTest, doubleGroupBy) {
"WHERE a > 100 GROUP BY b");
}
TEST_F(PlanSubqeuryTest, withSetOperator) {
TEST_F(PlanSubqeuryTest, innerSetOperator) {
useDb("root", "test");
run("SELECT c1 FROM (SELECT c1 FROM t1 UNION ALL SELECT c1 FROM t1)");
run("SELECT c1 FROM (SELECT c1 FROM t1 UNION SELECT c1 FROM t1)");
}
TEST_F(PlanSubqeuryTest, innerFill) {
useDb("root", "test");
run("SELECT cnt FROM (SELECT _WSTART ts, COUNT(*) cnt FROM t1 "
"WHERE ts > '2022-04-01 00:00:00' and ts < '2022-04-30 23:59:59' INTERVAL(10s) FILL(LINEAR)) "
"WHERE ts > '2022-04-06 00:00:00'");
}
TEST_F(PlanSubqeuryTest, outerInterval) {
useDb("root", "test");
run("SELECT COUNT(*) FROM (SELECT * FROM st1) INTERVAL(5s)");
run("SELECT COUNT(*) + SUM(c1) FROM (SELECT * FROM st1) INTERVAL(5s)");
run("SELECT COUNT(*) FROM (SELECT ts, TOP(c1, 10) FROM st1s1) INTERVAL(5s)");
}

View File

@ -148,11 +148,12 @@ void destroySendMsgInfo(SMsgSendInfo* pMsgBody) {
taosMemoryFreeClear(pMsgBody);
}
int32_t asyncSendMsgToServerExt(void* pTransporter, SEpSet* epSet, int64_t* pTransporterId, const SMsgSendInfo* pInfo,
int32_t asyncSendMsgToServerExt(void* pTransporter, SEpSet* epSet, int64_t* pTransporterId, SMsgSendInfo* pInfo,
bool persistHandle, void* rpcCtx) {
char* pMsg = rpcMallocCont(pInfo->msgInfo.len);
if (NULL == pMsg) {
qError("0x%" PRIx64 " msg:%s malloc failed", pInfo->requestId, TMSG_INFO(pInfo->msgType));
destroySendMsgInfo(pInfo);
terrno = TSDB_CODE_TSC_OUT_OF_MEMORY;
return terrno;
}
@ -167,13 +168,15 @@ int32_t asyncSendMsgToServerExt(void* pTransporter, SEpSet* epSet, int64_t* pTra
.info.persistHandle = persistHandle,
.code = 0
};
assert(pInfo->fp != NULL);
TRACE_SET_ROOTID(&rpcMsg.info.traceId, pInfo->requestId);
rpcSendRequestWithCtx(pTransporter, epSet, &rpcMsg, pTransporterId, rpcCtx);
return TSDB_CODE_SUCCESS;
int code = rpcSendRequestWithCtx(pTransporter, epSet, &rpcMsg, pTransporterId, rpcCtx);
if (code) {
destroySendMsgInfo(pInfo);
}
return code;
}
int32_t asyncSendMsgToServer(void* pTransporter, SEpSet* epSet, int64_t* pTransporterId, const SMsgSendInfo* pInfo) {
int32_t asyncSendMsgToServer(void* pTransporter, SEpSet* epSet, int64_t* pTransporterId, SMsgSendInfo* pInfo) {
return asyncSendMsgToServerExt(pTransporter, epSet, pTransporterId, pInfo, false, NULL);
}

View File

@ -509,7 +509,7 @@ int32_t schGenerateCallBackInfo(SSchJob *pJob, SSchTask *pTask, void *msg, uint3
SMsgSendInfo *msgSendInfo = taosMemoryCalloc(1, sizeof(SMsgSendInfo));
if (NULL == msgSendInfo) {
SCH_TASK_ELOG("calloc %d failed", (int32_t)sizeof(SMsgSendInfo));
SCH_ERR_RET(TSDB_CODE_QRY_OUT_OF_MEMORY);
SCH_ERR_JRET(TSDB_CODE_QRY_OUT_OF_MEMORY);
}
msgSendInfo->paramFreeFp = taosMemoryFree;
@ -535,8 +535,12 @@ int32_t schGenerateCallBackInfo(SSchJob *pJob, SSchTask *pTask, void *msg, uint3
_return:
destroySendMsgInfo(msgSendInfo);
if (msgSendInfo) {
destroySendMsgInfo(msgSendInfo);
}
taosMemoryFree(msg);
SCH_RET(code);
}
@ -843,6 +847,7 @@ int32_t schAsyncSendMsg(SSchJob *pJob, SSchTask *pTask, SSchTrans *trans, SQuery
int64_t transporterId = 0;
code = asyncSendMsgToServerExt(trans->pTrans, epSet, &transporterId, pMsgSendInfo, persistHandle, ctx);
pMsgSendInfo = NULL;
if (code) {
SCH_ERR_JRET(code);
}
@ -919,7 +924,9 @@ int32_t schBuildAndSendHbMsg(SQueryNodeEpId *nodeEpId, SArray *taskAction) {
addr.epSet.numOfEps = 1;
memcpy(&addr.epSet.eps[0], &nodeEpId->ep, sizeof(nodeEpId->ep));
SCH_ERR_JRET(schAsyncSendMsg(NULL, NULL, &trans, &addr, msgType, msg, msgSize, true, &rpcCtx));
code = schAsyncSendMsg(NULL, NULL, &trans, &addr, msgType, msg, msgSize, true, &rpcCtx);
msg = NULL;
SCH_ERR_JRET(code);
return TSDB_CODE_SUCCESS;
@ -1087,9 +1094,10 @@ int32_t schBuildAndSendMsg(SSchJob *pJob, SSchTask *pTask, SQueryNodeAddr *addr,
}
SSchTrans trans = {.pTrans = pJob->conn.pTrans, .pHandle = SCH_GET_TASK_HANDLE(pTask)};
SCH_ERR_JRET(
schAsyncSendMsg(pJob, pTask, &trans, addr, msgType, msg, msgSize, persistHandle, (rpcCtx.args ? &rpcCtx : NULL)));
schAsyncSendMsg(pJob, pTask, &trans, addr, msgType, msg, msgSize, persistHandle, (rpcCtx.args ? &rpcCtx : NULL));
msg = NULL;
SCH_ERR_JRET(code);
if (msgType == TDMT_SCH_QUERY || msgType == TDMT_SCH_MERGE_QUERY) {
SCH_ERR_RET(schAppendTaskExecNode(pJob, pTask, addr, pTask->execId));
}

View File

@ -102,14 +102,14 @@ int32_t schRecordTaskSucceedNode(SSchJob *pJob, SSchTask *pTask) {
}
int32_t schAppendTaskExecNode(SSchJob *pJob, SSchTask *pTask, SQueryNodeAddr *addr, int32_t execId) {
SSchNodeInfo nodeInfo = {.addr = *addr, .handle = NULL};
SSchNodeInfo nodeInfo = {.addr = *addr, .handle = SCH_GET_TASK_HANDLE(pTask)};
if (taosHashPut(pTask->execNodes, &execId, sizeof(execId), &nodeInfo, sizeof(nodeInfo))) {
SCH_TASK_ELOG("taosHashPut nodeInfo to execNodes failed, errno:%d", errno);
SCH_ERR_RET(TSDB_CODE_QRY_OUT_OF_MEMORY);
}
SCH_TASK_DLOG("task execNode added, execId:%d", execId);
SCH_TASK_DLOG("task execNode added, execId:%d, handle:%p", execId, nodeInfo.handle);
return TSDB_CODE_SUCCESS;
}
@ -752,12 +752,18 @@ void schDropTaskOnExecNode(SSchJob *pJob, SSchTask *pTask) {
return;
}
int32_t i = 0;
SSchNodeInfo *nodeInfo = taosHashIterate(pTask->execNodes, NULL);
while (nodeInfo) {
SCH_SET_TASK_HANDLE(pTask, nodeInfo->handle);
schBuildAndSendMsg(pJob, pTask, &nodeInfo->addr, TDMT_SCH_DROP_TASK);
if (nodeInfo->handle) {
SCH_SET_TASK_HANDLE(pTask, nodeInfo->handle);
schBuildAndSendMsg(pJob, pTask, &nodeInfo->addr, TDMT_SCH_DROP_TASK);
SCH_TASK_DLOG("start to drop task's %dth execNode", i);
} else {
SCH_TASK_DLOG("no need to drop task %dth execNode", i);
}
++i;
nodeInfo = taosHashIterate(pTask->execNodes, nodeInfo);
}

View File

@ -34,6 +34,7 @@ int32_t streamDispatchReqToData(const SStreamDispatchReq* pReq, SStreamDataBlock
// TODO: refactor
pDataBlock->info.window.skey = be64toh(pRetrieve->skey);
pDataBlock->info.window.ekey = be64toh(pRetrieve->ekey);
pDataBlock->info.version = be64toh(pRetrieve->version);
pDataBlock->info.type = pRetrieve->streamBlockType;
pDataBlock->info.childId = pReq->upstreamChildId;
@ -54,6 +55,7 @@ int32_t streamRetrieveReqToData(const SStreamRetrieveReq* pReq, SStreamDataBlock
// TODO: refactor
pDataBlock->info.window.skey = be64toh(pRetrieve->skey);
pDataBlock->info.window.ekey = be64toh(pRetrieve->ekey);
pDataBlock->info.version = be64toh(pRetrieve->version);
pDataBlock->info.type = pRetrieve->streamBlockType;

Some files were not shown because too many files have changed in this diff.