Merge branch '3.0' of github.com:taosdata/TDengine into test/chr/TD-14699
This commit is contained in:
commit
a9f4b09146
@@ -22,6 +22,7 @@ mac/
.mypy_cache
*.tmp
*.swp
*.swo
*.orig
src/connector/nodejs/node_modules/
src/connector/nodejs/out/

@@ -3,11 +3,31 @@ sidebar_label: Docker
title: Get Started with TDengine Using Docker
---

Although deploying the TDengine service with Docker is not recommended for production, Docker does a good job of hiding differences in the underlying operating system, which makes it well suited for installing and running TDengine during development, testing, or a first trial. In particular, Docker makes it easy to try TDengine on macOS and Windows without installing a virtual machine or renting an extra Linux server. In addition, starting from version 2.0.14.0, the published TDengine images support the X86-64, X86, arm64, and arm32 platforms, so devices that can run Docker, such as NAS boxes, Raspberry Pi, and embedded development boards, can also try TDengine by following this document.

This section first describes how to quickly try out TDengine via Docker, and then how to experience TDengine's data ingestion and queries in a Docker environment.

The following walks through, step by step, how to set up a single-node TDengine runtime environment with Docker for development and testing.

## Start TDengine

## Download Docker

If Docker is already installed, just run the following command.

```shell
docker run -d -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
```

Make sure the container has started and is running properly:

```shell
docker ps
```

Enter the container and start a bash shell:

```shell
docker exec -it <container name> bash
```

You can then run Linux commands in the container and access TDengine.

:::info

To install Docker itself, please refer to the [official Docker documentation](https://docs.docker.com/get-docker/).

@@ -18,95 +38,49 @@ $ docker -v
Docker version 20.10.3, build 48d30b5
```

## Run TDengine in a Docker Container

:::

### Run the TDengine server in a Docker container

## Run the TDengine CLI

```bash
$ docker run -d -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
526aa188da767ae94b244226a2b2eec2b5f17dd8eff592893d9ec0cd0f3a1ccd
```

This command starts a Docker container running the TDengine server and maps the container's ports 6030 to 6049 to the same ports on the host. If a TDengine server is already running on the host and occupying these ports, map the container's ports to a different, unused port range instead (see the [TDengine 2.0 port description](/train-faq/faq#port) for details). Both the TCP and UDP ports must be open so that TDengine clients can reach the TDengine server.

- **docker run**: run a container with Docker
- **-d**: run the container in the background
- **-p**: specify the port mapping. Note: without port mapping you can still use the TDengine service or develop applications inside the container, but the container cannot serve clients outside of it
- **tdengine/tdengine**: the official TDengine image to pull
- **526aa188da767ae94b244226a2b2eec2b5f17dd8eff592893d9ec0cd0f3a1ccd**: the long string returned is the container ID, which can also be used to refer to the container

Going further, you can start the TDengine server container with the `--name` option to name the container `tdengine`, the `--hostname` option to set its hostname to `tdengine-server`, and `-v` to mount local directories into the container so that data stays in sync between the host and the container and is not lost when the container is removed.

```bash
docker run -d --name tdengine --hostname="tdengine-server" -v ~/work/taos/log:/var/log/taos -v ~/work/taos/data:/var/lib/taos -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
```

- **--name tdengine**: set the container name; the container can then be referred to by this name
- **--hostname=tdengine-server**: set the hostname of the Linux system inside the container; mapping the hostname to an IP address avoids problems caused by the container IP changing
- **-v**: mount host directories to directories inside the container so that data is not lost when the container is removed

### Use docker ps to confirm the container is running correctly

```bash
docker ps
```

Example output:

```
CONTAINER ID   IMAGE               COMMAND   CREATED          STATUS          ···
c452519b0f9b   tdengine/tdengine   "taosd"   14 minutes ago   Up 14 minutes   ···
```

- **docker ps**: list all containers that are currently running.
- **CONTAINER ID**: the container ID.
- **IMAGE**: the image in use.
- **COMMAND**: the command that started the container.
- **CREATED**: when the container was created.
- **STATUS**: the container status; Up means it is running.

### Use docker exec to enter the container for development

```bash
$ docker exec -it tdengine /bin/bash
root@tdengine-server:~/TDengine-server-2.4.0.4#
```

- **docker exec**: enter the container with docker exec; the container keeps running after you exit.
- **-i**: interactive mode.
- **-t**: allocate a terminal.
- **tdengine**: the container name; change it according to the output of docker ps.
- **/bin/bash**: run bash for interactive use after entering the container.

After entering the container, run the taos shell client program.

```bash
root@tdengine-server:~/TDengine-server-2.4.0.4# taos

Welcome to the TDengine shell from Linux, Client Version:2.4.0.4
Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.

taos>
```

The TDengine CLI connects to the server successfully and prints a welcome message and version information. If the connection fails, an error message is printed instead.

In the TDengine CLI, you can create and drop databases, tables, and supertables with SQL commands, and insert and query data. See the [TAOS SQL documentation](/taos-sql/) for details.

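For instance, a minimal session in the TDengine CLI could look like the following; the database and table names here (demo, t) are made up purely for illustration:

```sql
CREATE DATABASE demo;
USE demo;
CREATE TABLE t (ts TIMESTAMP, speed INT);
INSERT INTO t VALUES (NOW, 10);
SELECT * FROM t;
```
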
### Access the TDengine server in the container from the host

Once the TDengine Docker container has been started with the correct ports mapped via the -p option, you can use the taos shell on the host to access the TDengine instance running inside the container.

There are two ways to access TDengine with the TDengine CLI (taos) in a Docker environment:

- enter the container and run taos, or
- access it from the host through the ports mapped from the container: `taos -h <hostname> -P <port>`

```
$ taos
Welcome to the TDengine shell from Linux, Client Version:3.0.0.0
Copyright (c) 2022 by TAOS Data, Inc. All rights reserved.

Welcome to the TDengine shell from Linux, Client Version:2.4.0.4
Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
Server is Enterprise trial Edition, ver:3.0.0.0 and will expire at 2022-09-24 15:29:46.

taos>

taos>
```

You can also use curl on the host to access the TDengine server inside the Docker container through its RESTful port.

## Start the REST Service

taosAdapter is the TDengine component that provides the REST service. The following command starts both the `taosd` and `taosadapter` service components in one container.

```bash
docker run -d --name tdengine-all -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
```

To start only `taosadapter`:

```bash
docker run -d --name tdengine-taosa -p 6041-6049:6041-6049 -p 6041-6049:6041-6049/udp -e TAOS_FIRST_EP=tdengine-all tdengine/tdengine:3.0.0.0 taosadapter
```

To start only `taosd`:

```bash
docker run -d --name tdengine-taosd -p 6030-6042:6030-6042 -p 6030-6042:6030-6042/udp -e TAOS_DISABLE_ADAPTER=true tdengine/tdengine:3.0.0.0
```

## Access the REST Interface

You can use curl on the host to access the TDengine server inside the Docker container through its RESTful port.

```
curl -L -u root:taosdata -d "show databases" 127.0.0.1:6041/rest/sql
```

@@ -115,217 +89,60 @@ curl -L -u root:taosdata -d "show databases" 127.0.0.1:6041/rest/sql

Example output:

```
{"status":"succ","head":["name","created_time","ntables","vgroups","replica","quorum","days","keep0,keep1,keep(D)","cache(MB)","blocks","minrows","maxrows","wallevel","fsync","comp","cachelast","precision","update","status"],"column_meta":[["name",8,32],["created_time",9,8],["ntables",4,4],["vgroups",4,4],["replica",3,2],["quorum",3,2],["days",3,2],["keep0,keep1,keep(D)",8,24],["cache(MB)",4,4],["blocks",4,4],["minrows",4,4],["maxrows",4,4],["wallevel",2,1],["fsync",4,4],["comp",2,1],["cachelast",2,1],["precision",8,3],["update",2,1],["status",8,10]],"data":[["test","2021-08-18 06:01:11.021",10000,4,1,1,10,"3650,3650,3650",16,6,100,4096,1,3000,2,0,"ms",0,"ready"],["log","2021-08-18 05:51:51.065",4,1,1,1,10,"30,30,30",1,3,100,4096,1,3000,2,0,"us",0,"ready"]],"rows":2}
{"code":0,"column_meta":[["name","VARCHAR",64],["create_time","TIMESTAMP",8],["vgroups","SMALLINT",2],["ntables","BIGINT",8],["replica","TINYINT",1],["strict","VARCHAR",4],["duration","VARCHAR",10],["keep","VARCHAR",32],["buffer","INT",4],["pagesize","INT",4],["pages","INT",4],["minrows","INT",4],["maxrows","INT",4],["wal","TINYINT",1],["fsync","INT",4],["comp","TINYINT",1],["cacheModel","VARCHAR",11],["precision","VARCHAR",2],["single_stable","BOOL",1],["status","VARCHAR",10],["retention","VARCHAR",60]],"data":[["information_schema",null,null,14,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,"ready"],["performance_schema",null,null,3,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,"ready"]],"rows":2}
```

This command accesses the TDengine server through the REST API; here it connects to port 6041 on the local machine, and the response shows that the connection works.
This command accesses the TDengine server through the REST API; here it connects to port 6041, which is mapped from the container to the host.

For details on the TDengine REST API, see the [official documentation](/reference/rest-api/).

### Run the TDengine server and taosAdapter with Docker containers

## Insert Data

Starting with TDengine 2.4.0.0, the Docker image provides taosAdapter as an independently running component that replaces the HTTP server previously built into the taosd process. taosAdapter supports writing to and querying the TDengine server over the RESTful interface, and it provides ingestion interfaces compatible with InfluxDB/OpenTSDB so that InfluxDB/OpenTSDB applications can be ported to TDengine seamlessly. In the new Docker images taosAdapter is enabled by default; it can be disabled by setting TAOS_DISABLE_ADAPTER=true in the docker run command, and taosAdapter can also be run on its own in a docker run command without taosd.

You can use taosBenchmark, a tool that ships with TDengine, to quickly experience writing data to TDengine.

Note: if taosAdapter runs inside the container, additional ports may need to be mapped as required; for the default port configuration and how to change it, see the [taosAdapter documentation](/reference/taosadapter/).

Run the TDengine 2.4.0.4 image with Docker (taosd + taosAdapter):

```bash
docker run -d --name tdengine-all -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine:2.4.0.4
```

Run the TDengine 2.4.0.4 image with Docker (taosAdapter only; the firstEp configuration item or the TAOS_FIRST_EP environment variable must be set):

```bash
docker run -d --name tdengine-taosa -p 6041-6049:6041-6049 -p 6041-6049:6041-6049/udp -e TAOS_FIRST_EP=tdengine-all tdengine/tdengine:2.4.0.4 taosadapter
```

Run the TDengine 2.4.0.4 image with Docker (taosd only):

```bash
docker run -d --name tdengine-taosd -p 6030-6042:6030-6042 -p 6030-6042:6030-6042/udp -e TAOS_DISABLE_ADAPTER=true tdengine/tdengine:2.4.0.4
```

Verify that the RESTful interface works with curl:

```bash
curl -L -H "Authorization: Basic cm9vdDp0YW9zZGF0YQ==" -d "show databases;" 127.0.0.1:6041/rest/sql
```
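
The Basic token in the Authorization header above is simply the base64 encoding of the default `root:taosdata` credentials, so it can be reproduced on the host with:

```shell
# encode user:password for the REST Authorization header
echo -n "root:taosdata" | base64
# cm9vdDp0YW9zZGF0YQ==
```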

Example output:

```
{"status":"succ","head":["name","created_time","ntables","vgroups","replica","quorum","days","keep","cache(MB)","blocks","minrows","maxrows","wallevel","fsync","comp","cachelast","precision","update","status"],"column_meta":[["name",8,32],["created_time",9,8],["ntables",4,4],["vgroups",4,4],["replica",3,2],["quorum",3,2],["days",3,2],["keep",8,24],["cache(MB)",4,4],["blocks",4,4],["minrows",4,4],["maxrows",4,4],["wallevel",2,1],["fsync",4,4],["comp",2,1],["cachelast",2,1],["precision",8,3],["update",2,1],["status",8,10]],"data":[["log","2021-12-28 09:18:55.765",10,1,1,1,10,"30",1,3,100,4096,1,3000,2,0,"us",0,"ready"]],"rows":1}
```

### Application example: write data from the host into the TDengine server in the container with taosBenchmark

1. Run taosBenchmark (formerly named taosdemo) from the host command line to write data into the TDengine server in the Docker container.
Assuming port 6030 of the container was mapped to port 6030 on the host when the container was started, you can start taosBenchmark directly from the host command line, or run it after entering the container:

```bash
$ taosBenchmark

taosBenchmark is simulating data generated by power equipments monitoring...

host:                       127.0.0.1:6030
user:                       root
password:                   taosdata
configDir:
resultFile:                 ./output.txt
thread num of insert data:  10
thread num of create table: 10
top insert interval:        0
number of records per req:  30000
max sql length:             1048576
database count:             1
database[0]:
  database[0] name:      test
  drop:                  yes
  replica:               1
  precision:             ms
  super table count:     1
  super table[0]:
      stbName:           meters
      autoCreateTable:   no
      childTblExists:    no
      childTblCount:     10000
      childTblPrefix:    d
      dataSource:        rand
      iface:             taosc
      insertRows:        10000
      interlaceRows:     0
      disorderRange:     1000
      disorderRatio:     0
      maxSqlLen:         1048576
      timeStampStep:     1
      startTimestamp:    2017-07-14 10:40:00.000
      sampleFormat:
      sampleFile:
      tagsFile:
      columnCount:       3
column[0]:FLOAT column[1]:INT column[2]:FLOAT
      tagCount:            2
        tag[0]:INT tag[1]:BINARY(16)

Press enter key to continue or Ctrl-C to stop

```

After you press Enter, the command automatically creates a supertable named meters in the test database. Under this supertable there are 10,000 tables named "d0" to "d9999", each with 10,000 records. Every record has four fields (ts, current, voltage, phase), with timestamps from "2017-07-14 10:40:00 000" to "2017-07-14 10:40:09 999". Each table carries the tags location and groupId: groupId is set to 1 through 10 and location is set to "California.SanFrancisco" or "California.SanDiego".
The command automatically creates a supertable named meters in the test database. Under this supertable there are 10,000 tables named "d0" to "d9999", each with 10,000 records. Every record has four fields (ts, current, voltage, phase), with timestamps from "2017-07-14 10:40:00 000" to "2017-07-14 10:40:09 999". Each table carries the tags location and groupId: groupId is set to 1 through 10 and location is set to "California.SanFrancisco" or "California.LosAngeles".

In total, 100 million records are inserted.
The command finishes inserting the 100 million records quickly; the exact time depends on the hardware.

2. Enter the TDengine CLI and inspect the data generated by taosBenchmark.
taosBenchmark itself has many options for configuring the number of tables, the number of records, and so on; you can experiment with different settings, which `taosBenchmark --help` lists in detail. For full usage, see the [taosBenchmark reference](../reference/taosbenchmark).

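As a quick illustration (the option names here are a sketch and should be double-checked against `taosBenchmark --help` for your version), a smaller test run might limit the number of child tables and rows per table:

```shell
# assumed flags: -d target database, -t number of child tables, -n rows per child table
taosBenchmark -d test -t 100 -n 1000
```
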
- **Enter the command line.**

## Run Queries

```bash
$ root@c452519b0f9b:~/TDengine-server-2.4.0.4# taos

Welcome to the TDengine shell from Linux, Client Version:2.4.0.4
Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.

taos>
```

After inserting data with taosBenchmark as above, you can enter query commands in the TDengine CLI to experience the query speed. You can run the CLI directly on the host or after entering the container.

Query the total number of records under the supertable:

- **Show the databases.**

```bash
$ taos> show databases;
  name        |   created_time    |   ntables  |   vgroups  |    ···
  test        | 2021-08-18 06:01:11.021 |  10000     |     6      |    ···
  log         | 2021-08-18 05:51:51.065 |      4     |     1      |    ···

```

- **Show the supertables.**

```bash
$ taos> use test;
Database changed.

$ taos> show stables;
              name              |      created_time       | columns |  tags  |   tables    |
============================================================================================
 meters                         | 2021-08-18 06:01:11.116 |       4 |      2 |       10000 |
Query OK, 1 row(s) in set (0.003259s)

```

- **Query a table, limiting the output to ten rows.**

```bash
$ taos> select * from test.t0 limit 10;

DB error: Table does not exist (0.002857s)
taos> select * from test.d0 limit 10;
           ts            |       current        |   voltage   |        phase         |
======================================================================================
 2017-07-14 10:40:00.000 |             10.12072 |         223 |              0.34167 |
 2017-07-14 10:40:00.001 |             10.16103 |         224 |              0.34445 |
 2017-07-14 10:40:00.002 |             10.00204 |         220 |              0.33334 |
 2017-07-14 10:40:00.003 |             10.00030 |         220 |              0.33333 |
 2017-07-14 10:40:00.004 |              9.84029 |         216 |              0.32222 |
 2017-07-14 10:40:00.005 |              9.88028 |         217 |              0.32500 |
 2017-07-14 10:40:00.006 |              9.88110 |         217 |              0.32500 |
 2017-07-14 10:40:00.007 |             10.08137 |         222 |              0.33889 |
 2017-07-14 10:40:00.008 |             10.12063 |         223 |              0.34167 |
 2017-07-14 10:40:00.009 |             10.16086 |         224 |              0.34445 |
Query OK, 10 row(s) in set (0.016791s)

```

- **Show the tag values of table d0.**

```bash
$ taos> select groupid, location from test.d0;
 groupid |     location     |
=================================
       0 | California.SanDiego |
Query OK, 1 row(s) in set (0.003490s)
```

### Application example: write data into TDengine with a data-collection agent

taosAdapter supports several data-collection agents (such as Telegraf, StatsD, and collectd). Here we only simulate StatsD writing data; run the following command on the host:

```
echo "foo:1|c" | nc -u -w0 127.0.0.1 6044
```

```sql
taos> select count(*) from test.meters;
```

Then you can use the taos shell to query the contents of the statsd database and the supertable foo that taosAdapter created automatically:
Query the average, maximum, and minimum values of the 100 million records:

```
taos> show databases;
              name              |      created_time       |   ntables   |   vgroups   | replica | quorum |  days  |           keep           |  cache(MB)  |   blocks    |   minrows   |   maxrows   | wallevel |    fsync    | comp | cachelast | precision | update |   status   |
====================================================================================================================================================================================================================================================================================
 log                            | 2021-12-28 09:18:55.765 |          12 |           1 |       1 |      1 |     10 | 30                       |           1 |           3 |         100 |        4096 |        1 |        3000 |    2 |         0 | us        |      0 | ready      |
 statsd                         | 2021-12-28 09:21:48.841 |           1 |           1 |       1 |      1 |     10 | 3650                     |          16 |           6 |         100 |        4096 |        1 |        3000 |    2 |         0 | ns        |      2 | ready      |
Query OK, 2 row(s) in set (0.002112s)

taos> use statsd;
Database changed.

taos> show stables;
              name              |      created_time       | columns |  tags  |   tables    |
============================================================================================
 foo                            | 2021-12-28 09:21:48.894 |       2 |      1 |           1 |
Query OK, 1 row(s) in set (0.001160s)

taos> select * from foo;
              ts               | value |         metric_type          |
=======================================================================================
 2021-12-28 09:21:48.840820836 |     1 | counter                      |
Query OK, 1 row(s) in set (0.001639s)

taos>
```

```sql
taos> select avg(current), max(voltage), min(phase) from test.meters;
```

You can see that the simulated data has been written into TDengine.
Query the total number of records with location="California.SanFrancisco":

## Stop the TDengine service running in Docker

```bash
docker stop tdengine
```

```sql
taos> select count(*) from test.meters where location="California.SanFrancisco";
```

- **docker stop**: stop the specified running Docker container with docker stop.

Query the average, maximum, and minimum values of all records with groupId=10:

```sql
taos> select avg(current), max(voltage), min(phase) from test.meters where groupId=10;
```

Compute the average, maximum, and minimum for table d10 aggregated over 10-second windows:

```sql
taos> select avg(current), max(voltage), min(phase) from test.d10 interval(10s);
```

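The same queries can also be issued from the host without entering the interactive shell, assuming the taos client is installed on the host and port 6030 is mapped; the `-s` option runs a statement and exits:

```shell
taos -h 127.0.0.1 -s "SELECT COUNT(*) FROM test.meters;"
```
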
@@ -3,6 +3,7 @@ title: 立即开始
description: 'Quickly set up a TDengine environment and experience its efficient writes and queries'
---

The full TDengine package includes the server (taosd), taosAdapter for integrating with third-party systems and providing the RESTful interface, the application driver (taosc), the command-line program (CLI, taos), and several tools. Besides connectors for multiple programming languages, TDengine also provides a [RESTful interface](/reference/rest-api) through [taosAdapter](/reference/taosadapter).

This chapter mainly describes how to quickly set up a TDengine environment with Docker or an installation package and experience its efficient writes and queries.

@ -6,53 +6,85 @@ description: "创建、删除数据库,查看、修改数据库参数"
|
|||
|
||||
## 创建数据库
|
||||
|
||||
```
|
||||
CREATE DATABASE [IF NOT EXISTS] db_name [KEEP keep] [DAYS days] [UPDATE 1];
|
||||
```sql
|
||||
CREATE DATABASE [IF NOT EXISTS] db_name [database_options]
|
||||
|
||||
database_options:
|
||||
database_option ...
|
||||
|
||||
database_option: {
|
||||
BUFFER value
|
||||
| CACHEMODEL {'none' | 'last_row' | 'last_value' | 'both'}
|
||||
| CACHESIZE value
|
||||
| COMP {0 | 1 | 2}
|
||||
| DURATION value
|
||||
| FSYNC value
|
||||
| MAXROWS value
|
||||
| MINROWS value
|
||||
| KEEP value
|
||||
| PAGES value
|
||||
| PAGESIZE value
|
||||
| PRECISION {'ms' | 'us' | 'ns'}
|
||||
| REPLICA value
|
||||
| RETENTIONS ingestion_duration:keep_duration ...
|
||||
| STRICT {'off' | 'on'}
|
||||
| WAL {1 | 2}
|
||||
| VGROUPS value
|
||||
| SINGLE_STABLE {0 | 1}
|
||||
| WAL_RETENTION_PERIOD value
|
||||
| WAL_ROLL_PERIOD value
|
||||
| WAL_RETENTION_SIZE value
|
||||
| WAL_SEGMENT_SIZE value
|
||||
}
|
||||
```
|
||||
|
||||
:::info
|
||||
1. KEEP 是该数据库的数据保留多长天数,缺省是 3650 天(10 年),数据库会自动删除超过时限的数据;<!-- REPLACE_OPEN_TO_ENTERPRISE__KEEP_PARAM_DESCRIPTION -->
|
||||
2. UPDATE 标志数据库支持更新相同时间戳数据;(从 2.1.7.0 版本开始此参数支持设为 2,表示允许部分列更新,也即更新数据行时未被设置的列会保留原值。)(从 2.0.8.0 版本开始支持此参数。注意此参数不能通过 `ALTER DATABASE` 指令进行修改。)
|
||||
1. UPDATE 设为 0 时,表示不允许更新数据,后发送的相同时间戳的数据会被直接丢弃;
|
||||
2. UPDATE 设为 1 时,表示更新全部列数据,即如果更新一个数据行,其中某些列没有提供取值,那么这些列会被设为 NULL;
|
||||
3. UPDATE 设为 2 时,表示支持更新部分列数据,即如果更新一个数据行,其中某些列没有提供取值,那么这些列会保持原有数据行中的对应值;
|
||||
4. 更多关于 UPDATE 参数的用法,请参考[FAQ](/train-faq/faq)。
|
||||
3. 数据库名最大长度为 33;
|
||||
4. 一条 SQL 语句的最大长度为 65480 个字符;
|
||||
5. 创建数据库时可用的参数有:
|
||||
- cache: [详细说明](/reference/config/#cache)
|
||||
- blocks: [详细说明](/reference/config/#blocks)
|
||||
- days: [详细说明](/reference/config/#days)
|
||||
- keep: [详细说明](/reference/config/#keep)
|
||||
- minRows: [详细说明](/reference/config/#minrows)
|
||||
- maxRows: [详细说明](/reference/config/#maxrows)
|
||||
- wal: [详细说明](/reference/config/#wallevel)
|
||||
- fsync: [详细说明](/reference/config/#fsync)
|
||||
- update: [详细说明](/reference/config/#update)
|
||||
- cacheLast: [详细说明](/reference/config/#cachelast)
|
||||
- replica: [详细说明](/reference/config/#replica)
|
||||
- quorum: [详细说明](/reference/config/#quorum)
|
||||
- comp: [详细说明](/reference/config/#comp)
|
||||
- precision: [详细说明](/reference/config/#precision)
|
||||
6. 请注意上面列出的所有参数都可以配置在配置文件 `taosd.cfg` 中作为创建数据库时使用的默认配置, `create database` 的参数中明确指定的会覆盖配置文件中的设置。
|
||||
|
||||
:::
|
||||
### 参数说明
|
||||
- buffer: 一个 VNODE 写入内存池大小,单位为MB,默认为96,最小为3,最大为16384。
|
||||
- CACHEMODEL:表示是否在内存中缓存子表的最近数据。默认为none。
|
||||
- none:表示不缓存。
|
||||
- last_row:表示缓存子表最近一行数据。这将显著改善 LAST_ROW 函数的性能表现。
|
||||
- last_value:表示缓存子表每一列的最近的非 NULL 值。这将显著改善无特殊影响(WHERE、ORDER BY、GROUP BY、INTERVAL)下的 LAST 函数的性能表现。
|
||||
- both:表示同时打开缓存最近行和列功能。
|
||||
- CACHESIZE:表示缓存子表最近数据的内存大小。默认为 1 ,范围是[1, 65536],单位是 MB。
|
||||
- COMP:表示数据库文件压缩标志位,缺省值为 2,取值范围为 [0, 2]。
|
||||
- 0:表示不压缩。
|
||||
- 1:表示一阶段压缩。
|
||||
- 2:表示两阶段压缩。
|
||||
- DURATION:数据文件存储数据的时间跨度。可以使用加单位的表示形式,如 DURATION 100h、DURATION 10d等,支持 m(分钟)、h(小时)和 d(天)三个单位。不加时间单位时默认单位为天,如 DURATION 50 表示 50 天。
|
||||
- FSYNC:当 WAL 参数设置为2时,落盘的周期。默认为3000,单位毫秒。最小为0,表示每次写入立即落盘;最大为180000,即三分钟。
|
||||
- MAXROWS:文件块中记录的最大条数,默认为4096条。
|
||||
- MINROWS:文件块中记录的最小条数,默认为100条。
|
||||
- KEEP:表示数据文件保存的天数,缺省值为 3650,取值范围 [1, 365000],且必须大于或等于 DURATION 参数值。数据库会自动删除保存时间超过KEEP值的数据。KEEP 可以使用加单位的表示形式,如 KEEP 100h、KEEP 10d 等,支持m(分钟)、h(小时)和 d(天)三个单位。也可以不写单位,如 KEEP 50,此时默认单位为天。
|
||||
- PAGES:一个 VNODE 中元数据存储引擎的缓存页个数,默认为256,最小64。一个 VNODE 元数据存储占用 PAGESIZE * PAGES,默认情况下为1MB内存。
|
||||
- PAGESIZE:一个 VNODE 中元数据存储引擎的页大小,单位为KB,默认为4 KB。范围为1到16384,即1 KB到16 MB。
|
||||
- PRECISION:数据库的时间戳精度。ms表示毫秒,us表示微秒,ns表示纳秒,默认ms毫秒。
|
||||
- REPLICA:表示数据库副本数,取值为1或3,默认为1。在集群中使用,副本数必须小于或等于 DNODE 的数目。
|
||||
- RETENTIONS:表示数据的聚合周期和保存时长,如RETENTIONS 15s:7d,1m:21d,15m:50d表示数据原始采集周期为15秒,原始数据保存7天;按1分钟聚合的数据保存21天;按15分钟聚合的数据保存50天。目前支持且只支持三级存储周期。
|
||||
- STRICT:表示数据同步的一致性要求,默认为off。
|
||||
- on 表示强一致,即运行标准的 raft 协议,半数提交返回成功。
|
||||
- off表示弱一致,本地提交即返回成功。
|
||||
- WAL:WAL级别,默认为1。
|
||||
- 1:写WAL,但不执行fsync。
|
||||
- 2:写WAL,而且执行fsync。
|
||||
- VGROUPS:数据库中初始vgroup的数目。
|
||||
- SINGLE_STABLE:表示此数据库中是否只可以创建一个超级表,用于超级表列非常多的情况。
|
||||
- 0:表示可以创建多张超级表。
|
||||
- 1:表示只可以创建一张超级表。
|
||||
- WAL_RETENTION_PERIOD:wal文件的额外保留策略,用于数据订阅。wal的保存时长,单位为s。默认为0,即落盘后立即删除。-1表示不删除。
|
||||
- WAL_RETENTION_SIZE:wal文件的额外保留策略,用于数据订阅。wal的保存的最大上限,单位为KB。默认为0,即落盘后立即删除。-1表示不删除。
|
||||
- WAL_ROLL_PERIOD:wal文件切换时长,单位为s。当wal文件创建并写入后,经过该时间,会自动创建一个新的wal文件。默认为0,即仅在落盘时创建新文件。
|
||||
- WAL_SEGMENT_SIZE:wal单个文件大小,单位为KB。当前写入文件大小超过上限后会自动创建一个新的wal文件。默认为0,即仅在落盘时创建新文件。
|
||||
|
||||
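A brief sketch of how the WAL-related options above might be combined when a database is used for data subscription (the database name tq_db is hypothetical):

```sql
CREATE DATABASE IF NOT EXISTS tq_db VGROUPS 4 WAL_RETENTION_PERIOD 259200 WAL_ROLL_PERIOD 3600;
```
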
### 创建数据库示例
|
||||
|
||||
创建时间精度为纳秒的数据库, 保留 1 年数据:
|
||||
|
||||
```sql
|
||||
CREATE DATABASE test PRECISION 'ns' KEEP 365;
|
||||
```
|
||||
|
||||
## 显示系统当前参数
|
||||
create database if not exists db vgroups 10 buffer 10
|
||||
|
||||
```
|
||||
SHOW VARIABLES;
|
||||
```
|
||||
|
||||
## 使用数据库
|
||||
以上示例创建了一个名为 db、有 10 个 vgroup 的数据库,其中每个 vnode 分配了 10MB 的写入缓存。
|
||||
|
||||
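As one more illustration (a sketch only; the database name power is made up), several of the options described above can be combined in a single statement:

```sql
CREATE DATABASE IF NOT EXISTS power KEEP 3650 DURATION 10 CACHEMODEL 'last_row' VGROUPS 10;
```
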
### 使用数据库
|
||||
|
||||
```
|
||||
USE db_name;
|
||||
|
@ -63,61 +95,42 @@ USE db_name;
|
|||
## 删除数据库
|
||||
|
||||
```
|
||||
DROP DATABASE [IF EXISTS] db_name;
|
||||
DROP DATABASE [IF EXISTS] db_name
|
||||
```
|
||||
|
||||
删除数据库。指定 Database 所包含的全部数据表将被删除,谨慎使用!
|
||||
删除数据库。指定 Database 所包含的全部数据表将被删除,该数据库的所有 vgroups 也会被全部销毁,请谨慎使用!
|
||||
|
||||
## 修改数据库参数
|
||||
|
||||
```
|
||||
ALTER DATABASE db_name COMP 2;
|
||||
```sql
|
||||
ALTER DATABASE db_name [alter_database_options]
|
||||
|
||||
alter_database_options:
|
||||
alter_database_option ...
|
||||
|
||||
alter_database_option: {
|
||||
CACHEMODEL {'none' | 'last_row' | 'last_value' | 'both'}
|
||||
| CACHESIZE value
|
||||
| FSYNC value
|
||||
| KEEP value
|
||||
| WAL value
|
||||
}
|
||||
```
|
||||
|
||||
COMP 参数是指修改数据库文件压缩标志位,缺省值为 2,取值范围为 [0, 2]。0 表示不压缩,1 表示一阶段压缩,2 表示两阶段压缩。
|
||||
:::note
|
||||
其它参数在3.0.0.0中暂不支持修改
|
||||
|
||||
```
|
||||
ALTER DATABASE db_name REPLICA 2;
|
||||
```
|
||||
|
||||
REPLICA 参数是指修改数据库副本数,取值范围 [1, 3]。在集群中使用,副本数必须小于或等于 DNODE 的数目。
|
||||
|
||||
```
|
||||
ALTER DATABASE db_name KEEP 365;
|
||||
```
|
||||
|
||||
KEEP 参数是指修改数据文件保存的天数,缺省值为 3650,取值范围 [days, 365000],必须大于或等于 days 参数值。
|
||||
|
||||
```
|
||||
ALTER DATABASE db_name QUORUM 2;
|
||||
```
|
||||
|
||||
QUORUM 参数是指数据写入成功所需要的确认数,取值范围 [1, 2]。对于异步复制,quorum 设为 1,具有 master 角色的虚拟节点自己确认即可。对于同步复制,quorum 设为 2。原则上,Quorum >= 1 并且 Quorum <= replica(副本数),这个参数在启动一个同步模块实例时需要提供。
|
||||
|
||||
```
|
||||
ALTER DATABASE db_name BLOCKS 100;
|
||||
```
|
||||
|
||||
BLOCKS 参数是每个 VNODE (TSDB) 中有多少 cache 大小的内存块,因此一个 VNODE 的用的内存大小粗略为(cache \* blocks)。取值范围 [3, 1000]。
|
||||
|
||||
```
|
||||
ALTER DATABASE db_name CACHELAST 0;
|
||||
```
|
||||
|
||||
CACHELAST 参数控制是否在内存中缓存子表的最近数据。缺省值为 0,取值范围 [0, 1, 2, 3]。其中 0 表示不缓存,1 表示缓存子表最近一行数据,2 表示缓存子表每一列的最近的非 NULL 值,3 表示同时打开缓存最近行和列功能。(从 2.0.11.0 版本开始支持参数值 [0, 1],从 2.1.2.0 版本开始支持参数值 [0, 1, 2, 3]。)
|
||||
说明:缓存最近行,将显著改善 LAST_ROW 函数的性能表现;缓存每列的最近非 NULL 值,将显著改善无特殊影响(WHERE、ORDER BY、GROUP BY、INTERVAL)下的 LAST 函数的性能表现。
|
||||
|
||||
:::tip
|
||||
以上所有参数修改后都可以用 show databases 来确认是否修改成功。另外,从 2.1.3.0 版本开始,修改这些参数后无需重启服务器即可生效。
|
||||
:::
|
||||
|
||||
## 显示系统所有数据库
|
||||
## 查看数据库
|
||||
|
||||
### 查看系统中的所有数据库
|
||||
|
||||
```
|
||||
SHOW DATABASES;
|
||||
```
|
||||
|
||||
## 显示一个数据库的创建语句
|
||||
### 显示一个数据库的创建语句
|
||||
|
||||
```
|
||||
SHOW CREATE DATABASE db_name;
|
||||
|
@ -125,3 +138,4 @@ SHOW CREATE DATABASE db_name;
|
|||
|
||||
常用于数据库迁移。对一个已经存在的数据库,返回其创建语句;在另一个集群中执行该语句,就能得到一个设置完全相同的 Database。
|
||||
|
||||
### 查看数据库参数
|
||||
|
|
|
@ -2,13 +2,45 @@
|
|||
title: 表管理
|
||||
---
|
||||
|
||||
## 创建数据表
|
||||
## 创建表
|
||||
|
||||
`CREATE TABLE` 语句用于创建普通表和以超级表为模板创建子表。
|
||||
|
||||
```sql
|
||||
CREATE TABLE [IF NOT EXISTS] [db_name.]tb_name (create_definition [, create_definition] ...) [table_options]
|
||||
|
||||
CREATE TABLE create_subtable_clause
|
||||
|
||||
CREATE TABLE [IF NOT EXISTS] [db_name.]tb_name (create_definition [, create_definition] ...)
|
||||
[TAGS (create_definition [, create_definition] ...)]
|
||||
[table_options]
|
||||
|
||||
create_subtable_clause: {
|
||||
create_subtable_clause [create_subtable_clause] ...
|
||||
| [IF NOT EXISTS] [db_name.]tb_name USING [db_name.]stb_name [(tag_name [, tag_name] ...)] TAGS (tag_value [, tag_value] ...)
|
||||
}
|
||||
|
||||
create_definition:
|
||||
col_name column_definition
|
||||
|
||||
column_definition:
|
||||
type_name [comment 'string_value']
|
||||
|
||||
table_options:
|
||||
table_option ...
|
||||
|
||||
table_option: {
|
||||
COMMENT 'string_value'
|
||||
| WATERMARK duration[,duration]
|
||||
| MAX_DELAY duration[,duration]
|
||||
| ROLLUP(func_name [, func_name] ...)
|
||||
| SMA(col_name [, col_name] ...)
|
||||
| TTL value
|
||||
}
|
||||
|
||||
```
|
||||
CREATE TABLE [IF NOT EXISTS] tb_name (timestamp_field_name TIMESTAMP, field1_name data_type1 [, field2_name data_type2 ...]);
|
||||
```
|
||||
|
||||
:::info 说明
|
||||
**使用说明**
|
||||
|
||||
1. 表的第一个字段必须是 TIMESTAMP,并且系统自动将其设为主键;
|
||||
2. 表名最大长度为 192;
|
||||
|
@ -18,101 +50,112 @@ CREATE TABLE [IF NOT EXISTS] tb_name (timestamp_field_name TIMESTAMP, field1_nam
|
|||
6. 为了兼容支持更多形式的表名,TDengine 引入新的转义符 "\`",可以让表名与关键词不冲突,同时不受限于上述表名称合法性约束检查。但是同样具有长度限制要求。使用转义字符以后,不再对转义字符中的内容进行大小写统一。
|
||||
例如:\`aBc\` 和 \`abc\` 是不同的表名,但是 abc 和 aBc 是相同的表名。
|
||||
需要注意的是转义字符中的内容必须是可打印字符。
|
||||
上述的操作逻辑和约束要求与 MySQL 数据的操作一致。
|
||||
从 2.3.0.0 版本开始支持这种方式。
|
||||
|
||||
:::
|
||||
**参数说明**
|
||||
1. COMMENT:表注释。可用于超级表、子表和普通表。
|
||||
2. WATERMARK:指定窗口的关闭时间,默认值为 5 秒,最小单位毫秒,范围为0到15分钟,多个以逗号分隔。只可用于超级表,且只有当数据库使用了RETENTIONS参数时,才可以使用此表参数。
|
||||
3. MAX_DELAY:用于控制推送计算结果的最大延迟,默认值为 interval 的值(但不能超过最大值),最小单位毫秒,范围为1毫秒到15分钟,多个以逗号分隔。注:不建议 MAX_DELAY 设置太小,否则会过于频繁的推送结果,影响存储和查询性能,如无特殊需求,取默认值即可。只可用于超级表,且只有当数据库使用了RETENTIONS参数时,才可以使用此表参数。
|
||||
4. ROLLUP:Rollup 指定的聚合函数,提供基于多层级的降采样聚合结果。只可用于超级表。只有当数据库使用了RETENTIONS参数时,才可以使用此表参数。作用于超级表除TS列外的其它所有列,但是只能定义一个聚合函数。 聚合函数支持 avg, sum, min, max, last, first。
|
||||
5. SMA:Small Materialized Aggregates,提供基于数据块的自定义预计算功能。预计算类型包括MAX、MIN和SUM。可用于超级表/普通表。
|
||||
6. TTL:Time to Live,是用户用来指定表的生命周期的参数。如果在持续的TTL时间内,都没有数据写入该表,则TDengine系统会自动删除该表。这个TTL的时间只是一个大概时间,我们系统不保证到了时间一定会将其删除,而只保证存在这样一个机制。TTL单位是天,默认为0,表示不限制。用户需要注意,TTL优先级高于KEEP,即TTL时间满足删除机制时,即使当前数据的存在时间小于KEEP,此表也会被删除。只可用于子表和普通表。
|
||||
|
||||
### 以超级表为模板创建数据表
|
||||
## 创建子表
|
||||
|
||||
```
|
||||
### 创建子表
|
||||
|
||||
```sql
|
||||
CREATE TABLE [IF NOT EXISTS] tb_name USING stb_name TAGS (tag_value1, ...);
|
||||
```
|
||||
|
||||
以指定的超级表为模板,指定 TAGS 的值来创建数据表。
|
||||
### 创建子表并指定标签的值
|
||||
|
||||
### 以超级表为模板创建数据表,并指定具体的 TAGS 列
|
||||
|
||||
```
|
||||
```sql
|
||||
CREATE TABLE [IF NOT EXISTS] tb_name USING stb_name (tag_name1, ...) TAGS (tag_value1, ...);
|
||||
```
|
||||
|
||||
以指定的超级表为模板,指定一部分 TAGS 列的值来创建数据表(没被指定的 TAGS 列会设为空值)。
|
||||
说明:从 2.0.17.0 版本开始支持这种方式。在之前的版本中,不允许指定 TAGS 列,而必须显式给出所有 TAGS 列的取值。
|
||||
以指定的超级表为模板,也可以指定一部分 TAGS 列的值来创建数据表(没被指定的 TAGS 列会设为空值)。
|
||||
|
||||
### 批量创建数据表
|
||||
### 批量创建子表
|
||||
|
||||
```
|
||||
```sql
|
||||
CREATE TABLE [IF NOT EXISTS] tb_name1 USING stb_name TAGS (tag_value1, ...) [IF NOT EXISTS] tb_name2 USING stb_name TAGS (tag_value2, ...) ...;
|
||||
```
|
||||
|
||||
以更快的速度批量创建大量数据表(服务器端 2.0.14 及以上版本)。
|
||||
批量建表方式要求数据表必须以超级表为模板。 在不超出 SQL 语句长度限制的前提下,单条语句中的建表数量建议控制在 1000 ~ 3000 之间,将会获得比较理想的建表速度。
|
||||
|
||||
:::info
|
||||
## 修改普通表
|
||||
|
||||
1.批量建表方式要求数据表必须以超级表为模板。 2.在不超出 SQL 语句长度限制的前提下,单条语句中的建表数量建议控制在 1000 ~ 3000 之间,将会获得比较理想的建表速度。
|
||||
|
||||
:::
|
||||
|
||||
## 删除数据表
|
||||
```sql
|
||||
ALTER TABLE [db_name.]tb_name alter_table_clause
|
||||
|
||||
alter_table_clause: {
|
||||
alter_table_options
|
||||
| ADD COLUMN col_name column_type
|
||||
| DROP COLUMN col_name
|
||||
| MODIFY COLUMN col_name column_type
|
||||
| RENAME COLUMN old_col_name new_col_name
|
||||
}
|
||||
|
||||
alter_table_options:
|
||||
alter_table_option ...
|
||||
|
||||
alter_table_option: {
|
||||
TTL value
|
||||
| COMMENT 'string_value'
|
||||
}
|
||||
|
||||
```
|
||||
DROP TABLE [IF EXISTS] tb_name;
|
||||
```
|
||||
|
||||
## 显示当前数据库下的所有数据表信息
|
||||
**使用说明**
|
||||
对普通表可以进行如下修改操作
|
||||
1. ADD COLUMN:添加列。
|
||||
2. DROP COLUMN:删除列。
|
||||
3. MODIFY COLUMN:修改列定义,如果数据列的类型是可变长类型,那么可以使用此指令修改其宽度,只能改大,不能改小。
|
||||
4. RENAME COLUMN:修改列名称。
|
||||
|
||||
```
|
||||
SHOW TABLES [LIKE tb_name_wildchar];
|
||||
```
|
||||
### 增加列
|
||||
|
||||
显示当前数据库下的所有数据表信息。
|
||||
|
||||
## 显示一个数据表的创建语句
|
||||
|
||||
```
|
||||
SHOW CREATE TABLE tb_name;
|
||||
```
|
||||
|
||||
常用于数据库迁移。对一个已经存在的数据表,返回其创建语句;在另一个集群中执行该语句,就能得到一个结构完全相同的数据表。
|
||||
|
||||
## 获取表的结构信息
|
||||
|
||||
```
|
||||
DESCRIBE tb_name;
|
||||
```
|
||||
|
||||
## 修改表定义
|
||||
|
||||
### 表增加列
|
||||
|
||||
```
|
||||
```sql
|
||||
ALTER TABLE tb_name ADD COLUMN field_name data_type;
|
||||
```
|
||||
|
||||
:::info
|
||||
### 删除列
|
||||
|
||||
1. 列的最大个数为 1024,最小个数为 2;(从 2.1.7.0 版本开始,改为最多允许 4096 列)
|
||||
2. 列名最大长度为 64。
|
||||
|
||||
:::
|
||||
|
||||
### 表删除列
|
||||
|
||||
```
|
||||
```sql
|
||||
ALTER TABLE tb_name DROP COLUMN field_name;
|
||||
```
|
||||
|
||||
如果表是通过超级表创建,更改表结构的操作只能对超级表进行。同时针对超级表的结构更改对所有通过该结构创建的表生效。对于不是通过超级表创建的表,可以直接修改表结构。
|
||||
### 修改列宽
|
||||
|
||||
### 表修改列宽
|
||||
|
||||
```
|
||||
```sql
|
||||
ALTER TABLE tb_name MODIFY COLUMN field_name data_type(length);
|
||||
```
|
||||
|
||||
如果数据列的类型是可变长格式(BINARY 或 NCHAR),那么可以使用此指令修改其宽度(只能改大,不能改小)。(2.1.3.0 版本新增)
|
||||
如果表是通过超级表创建,更改表结构的操作只能对超级表进行。同时针对超级表的结构更改对所有通过该结构创建的表生效。对于不是通过超级表创建的表,可以直接修改表结构。
|
||||
### 修改列名
|
||||
|
||||
```sql
|
||||
ALTER TABLE tb_name RENAME COLUMN old_col_name new_col_name
|
||||
```
|
||||
|
||||
## 修改子表
|
||||
|
||||
ALTER TABLE [db_name.]tb_name alter_table_clause
|
||||
|
||||
alter_table_clause: {
|
||||
alter_table_options
|
||||
| SET TAG tag_name = new_tag_value
|
||||
}
|
||||
|
||||
alter_table_options:
|
||||
alter_table_option ...
|
||||
|
||||
alter_table_option: {
|
||||
TTL value
|
||||
| COMMENT 'string_value'
|
||||
}
|
||||
|
||||
**使用说明**
|
||||
1. 对子表的列和标签的修改,除了更改标签值以外,都要通过超级表才能进行。
|
||||
|
||||
### 修改子表标签值
|
||||
|
||||
|
@ -120,4 +163,34 @@ ALTER TABLE tb_name MODIFY COLUMN field_name data_type(length);
|
|||
ALTER TABLE tb_name SET TAG tag_name=new_tag_value;
|
||||
```
|
||||
|
||||
如果表是通过超级表创建,可以使用此指令修改其标签值
|
||||
## 删除表
|
||||
|
||||
可以在一条SQL语句中删除一个或多个普通表或子表。
|
||||
|
||||
```sql
|
||||
DROP TABLE [IF EXISTS] [db_name.]tb_name [, [IF EXISTS] [db_name.]tb_name] ...
|
||||
```
|
||||
|
||||
## 查看表的信息
|
||||
|
||||
### 显示所有表
|
||||
|
||||
如下SQL语句可以列出当前数据库中的所有表名。
|
||||
|
||||
```sql
|
||||
SHOW TABLES [LIKE tb_name_wildchar];
|
||||
```
|
||||
|
||||
### 显示表创建语句
|
||||
|
||||
```
|
||||
SHOW CREATE TABLE tb_name;
|
||||
```
|
||||
|
||||
常用于数据库迁移。对一个已经存在的数据表,返回其创建语句;在另一个集群中执行该语句,就能得到一个结构完全相同的数据表。
|
||||
|
||||
### 获取表结构信息
|
||||
|
||||
```
|
||||
DESCRIBE tb_name;
|
||||
```
|
|
@ -3,38 +3,31 @@ sidebar_label: 超级表管理
|
|||
title: 超级表 STable 管理
|
||||
---
|
||||
|
||||
:::note
|
||||
|
||||
在 2.0.15.0 及以后的版本中开始支持 STABLE 保留字。也即,在本节后文的指令说明中,CREATE、DROP、ALTER 三个指令在 2.0.15.0 之前的版本中 STABLE 保留字需写作 TABLE。
|
||||
|
||||
:::
|
||||
|
||||
## 创建超级表
|
||||
|
||||
```
|
||||
CREATE STABLE [IF NOT EXISTS] stb_name (timestamp_field_name TIMESTAMP, field1_name data_type1 [, field2_name data_type2 ...]) TAGS (tag1_name tag_type1, tag2_name tag_type2 [, tag3_name tag_type3]);
|
||||
```sql
|
||||
CREATE STABLE [IF NOT EXISTS] stb_name (create_definition [, create_definition] ...) TAGS (create_definition [, create_definition] ...) [table_options]
|
||||
|
||||
create_definition:
|
||||
col_name column_definition
|
||||
|
||||
column_definition:
|
||||
type_name [COMMENT 'string_value']
|
||||
```
|
||||
|
||||
创建 STable,与创建表的 SQL 语法相似,但需要指定 TAGS 字段的名称和类型。
|
||||
**使用说明**
|
||||
- 超级表中列的最大个数为 4096,需要注意,这里的 4096 是包含 TAG 列在内的,最小个数为 3,包含一个时间戳主键、一个 TAG 列和一个数据列。
|
||||
- 建表时可以给列或标签附加注释。
|
||||
- TAGS语法指定超级表的标签列,标签列需要遵循以下约定:
|
||||
- TAGS 中的 TIMESTAMP 列写入数据时需要提供给定值,而暂不支持四则运算,例如 NOW + 10s 这类表达式。
|
||||
- TAGS 列名不能与其他列名相同。
|
||||
- TAGS 列名不能为预留关键字。
|
||||
- TAGS 最多允许 128 个,至少 1 个,总长度不超过 16 KB。
|
||||
- 关于表参数的详细说明,参见 CREATE TABLE 中的介绍。
|
||||
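For illustration, a minimal supertable that satisfies the constraints above might be declared as follows (the names follow the meters example used elsewhere in this documentation):

```sql
CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT);
```
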
|
||||
:::info
|
||||
## 查看超级表
|
||||
|
||||
1. TAGS 列的数据类型不能是 timestamp 类型;(从 2.1.3.0 版本开始,TAGS 列中支持使用 timestamp 类型,但需注意在 TAGS 中的 timestamp 列写入数据时需要提供给定值,而暂不支持四则运算,例如 `NOW + 10s` 这类表达式)
|
||||
2. TAGS 列名不能与其他列名相同;
|
||||
3. TAGS 列名不能为预留关键字(参见:[参数限制与保留关键字](/taos-sql/keywords/) 章节);
|
||||
4. TAGS 最多允许 128 个,至少 1 个,总长度不超过 16 KB。
|
||||
|
||||
:::
|
||||
|
||||
## 删除超级表
|
||||
|
||||
```
|
||||
DROP STABLE [IF EXISTS] stb_name;
|
||||
```
|
||||
|
||||
删除 STable 会自动删除通过 STable 创建的子表。
|
||||
|
||||
## 显示当前数据库下的所有超级表信息
|
||||
### 显示当前数据库下的所有超级表信息
|
||||
|
||||
```
|
||||
SHOW STABLES [LIKE tb_name_wildcard];
|
||||
|
@ -42,7 +35,7 @@ SHOW STABLES [LIKE tb_name_wildcard];
|
|||
|
||||
查看数据库内全部 STable,及其相关信息,包括 STable 的名称、创建时间、列数量、标签(TAG)数量、通过该 STable 建表的数量。
|
||||
|
||||
## 显示一个超级表的创建语句
|
||||
### 显示一个超级表的创建语句
|
||||
|
||||
```
|
||||
SHOW CREATE STABLE stb_name;
|
||||
|
@ -50,40 +43,81 @@ SHOW CREATE STABLE stb_name;
|
|||
|
||||
常用于数据库迁移。对一个已经存在的超级表,返回其创建语句;在另一个集群中执行该语句,就能得到一个结构完全相同的超级表。
|
||||
|
||||
## 获取超级表的结构信息
|
||||
### 获取超级表的结构信息
|
||||
|
||||
```
|
||||
DESCRIBE stb_name;
|
||||
```
|
||||
|
||||
## 修改超级表普通列
|
||||
|
||||
### 超级表增加列
|
||||
## 删除超级表
|
||||
|
||||
```
|
||||
ALTER STABLE stb_name ADD COLUMN field_name data_type;
|
||||
DROP STABLE [IF EXISTS] [db_name.]stb_name
|
||||
```
|
||||
|
||||
### 超级表删除列
|
||||
删除 STable 会自动删除通过 STable 创建的子表以及子表中的所有数据。
|
||||
|
||||
## 修改超级表
|
||||
|
||||
```sql
|
||||
ALTER STABLE [db_name.]tb_name alter_table_clause
|
||||
|
||||
alter_table_clause: {
|
||||
alter_table_options
|
||||
| ADD COLUMN col_name column_type
|
||||
| DROP COLUMN col_name
|
||||
| MODIFY COLUMN col_name column_type
|
||||
| ADD TAG tag_name tag_type
|
||||
| DROP TAG tag_name
|
||||
| MODIFY TAG tag_name tag_type
|
||||
| RENAME TAG old_tag_name new_tag_name
|
||||
}
|
||||
|
||||
alter_table_options:
|
||||
alter_table_option ...
|
||||
|
||||
alter_table_option: {
|
||||
COMMENT 'string_value'
|
||||
}
|
||||
|
||||
```
|
||||
ALTER STABLE stb_name DROP COLUMN field_name;
|
||||
```
|
||||
|
||||
### 超级表修改列宽
|
||||
**使用说明**
|
||||
|
||||
修改超级表的结构会对其下的所有子表生效。无法针对某个特定子表修改表结构。标签结构的修改需要对超级表下发,TDengine 会自动作用于此超级表的所有子表。
|
||||
|
||||
- ADD COLUMN:添加列。
|
||||
- DROP COLUMN:删除列。
|
||||
- MODIFY COLUMN:修改列定义,如果数据列的类型是可变长类型,那么可以使用此指令修改其宽度,只能改大,不能改小。
|
||||
- ADD TAG:给超级表添加一个标签。
|
||||
- DROP TAG:删除超级表的一个标签。从超级表删除某个标签后,该超级表下的所有子表也会自动删除该标签。
|
||||
- MODIFY TAG:修改超级表的一个标签的定义。如果标签的类型是可变长类型,那么可以使用此指令修改其宽度,只能改大,不能改小。
|
||||
- RENAME TAG:修改超级表的一个标签的名称。从超级表修改某个标签名后,该超级表下的所有子表也会自动更新该标签名。
|
||||
|
||||
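A short sketch of the tag operations listed above (the tag names district/area and the meters supertable are illustrative):

```sql
ALTER STABLE meters ADD TAG district NCHAR(32);
ALTER STABLE meters RENAME TAG district area;
```
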
### 增加列
|
||||
|
||||
```
|
||||
ALTER STABLE stb_name MODIFY COLUMN field_name data_type(length);
|
||||
ALTER STABLE stb_name ADD COLUMN col_name column_type;
|
||||
```
|
||||
|
||||
如果数据列的类型是可变长格式(BINARY 或 NCHAR),那么可以使用此指令修改其宽度(只能改大,不能改小)。(2.1.3.0 版本新增)
|
||||
### 删除列
|
||||
|
||||
## 修改超级表标签列
|
||||
```
|
||||
ALTER STABLE stb_name DROP COLUMN col_name;
|
||||
```
|
||||
|
||||
### 修改列宽
|
||||
|
||||
```
|
||||
ALTER STABLE stb_name MODIFY COLUMN col_name data_type(length);
|
||||
```
|
||||
|
||||
如果数据列的类型是可变长格式(BINARY 或 NCHAR),那么可以使用此指令修改其宽度(只能改大,不能改小)。
|
||||
|
||||
### 添加标签
|
||||
|
||||
```
|
||||
ALTER STABLE stb_name ADD TAG new_tag_name tag_type;
|
||||
ALTER STABLE stb_name ADD TAG tag_name tag_type;
|
||||
```
|
||||
|
||||
为 STable 增加一个新的标签,并指定新标签的类型。标签总数不能超过 128 个,总长度不超过 16KB 。
|
||||
|
@ -99,7 +133,7 @@ ALTER STABLE stb_name DROP TAG tag_name;
|
|||
### 修改标签名
|
||||
|
||||
```
|
||||
ALTER STABLE stb_name CHANGE TAG old_tag_name new_tag_name;
|
||||
ALTER STABLE stb_name RENAME TAG old_tag_name new_tag_name;
|
||||
```
|
||||
|
||||
修改超级表的标签名,从超级表修改某个标签名后,该超级表下的所有子表也会自动更新该标签名。
|
||||
|
|
|
@ -153,11 +153,10 @@ typedef struct SQueryTableDataCond {
|
|||
int32_t order; // desc|asc order to iterate the data block
|
||||
int32_t numOfCols;
|
||||
SColumnInfo* colList;
|
||||
int32_t type; // data block load type:
|
||||
// int32_t numOfTWindows;
|
||||
STimeWindow twindows;
|
||||
int64_t startVersion;
|
||||
int64_t endVersion;
|
||||
int32_t type; // data block load type:
|
||||
STimeWindow twindows;
|
||||
int64_t startVersion;
|
||||
int64_t endVersion;
|
||||
} SQueryTableDataCond;
|
||||
|
||||
int32_t tEncodeDataBlock(void** buf, const SSDataBlock* pBlock);
|
||||
|
|
|
@ -152,7 +152,10 @@ void taosCfgDynamicOptions(const char *option, const char *value);
|
|||
void taosAddDataDir(int32_t index, char *v1, int32_t level, int32_t primary);
|
||||
|
||||
struct SConfig *taosGetCfg();
|
||||
int32_t taosSetCfg(SConfig *pCfg, char* name);
|
||||
|
||||
void taosSetAllDebugFlag(int32_t flag);
|
||||
void taosSetDebugFlag(int32_t *pFlagPtr, const char *flagName, int32_t flagVal);
|
||||
int32_t taosSetCfg(SConfig *pCfg, char *name);
|
||||
|
||||
#ifdef __cplusplus
|
||||
}
|
||||
|
|
|
@ -748,6 +748,10 @@ typedef struct {
|
|||
int8_t ignoreExist;
|
||||
int32_t numOfRetensions;
|
||||
SArray* pRetensions; // SRetention
|
||||
int32_t walRetentionPeriod;
|
||||
int32_t walRetentionSize;
|
||||
int32_t walRollPeriod;
|
||||
int32_t walSegmentSize;
|
||||
} SCreateDbReq;
|
||||
|
||||
int32_t tSerializeSCreateDbReq(void* buf, int32_t bufLen, SCreateDbReq* pReq);
|
||||
|
@ -1150,6 +1154,10 @@ typedef struct {
|
|||
int32_t numOfRetensions;
|
||||
SArray* pRetensions; // SRetention
|
||||
void* pTsma;
|
||||
int32_t walRetentionPeriod;
|
||||
int64_t walRetentionSize;
|
||||
int32_t walRollPeriod;
|
||||
int64_t walSegmentSize;
|
||||
} SCreateVnodeReq;
|
||||
|
||||
int32_t tSerializeSCreateVnodeReq(void* buf, int32_t bufLen, SCreateVnodeReq* pReq);
|
||||
|
@ -1977,7 +1985,7 @@ typedef struct SVCreateTbReq {
|
|||
union {
|
||||
struct {
|
||||
char* name; // super table name
|
||||
uint8_t tagNum;
|
||||
uint8_t tagNum;
|
||||
tb_uid_t suid;
|
||||
SArray* tagName;
|
||||
uint8_t* pTag;
|
||||
|
|
|
@ -16,261 +16,265 @@
|
|||
#ifndef _TD_COMMON_TOKEN_H_
|
||||
#define _TD_COMMON_TOKEN_H_
|
||||
|
||||
#define TK_OR 1
|
||||
#define TK_AND 2
|
||||
#define TK_UNION 3
|
||||
#define TK_ALL 4
|
||||
#define TK_MINUS 5
|
||||
#define TK_EXCEPT 6
|
||||
#define TK_INTERSECT 7
|
||||
#define TK_NK_BITAND 8
|
||||
#define TK_NK_BITOR 9
|
||||
#define TK_NK_LSHIFT 10
|
||||
#define TK_NK_RSHIFT 11
|
||||
#define TK_NK_PLUS 12
|
||||
#define TK_NK_MINUS 13
|
||||
#define TK_NK_STAR 14
|
||||
#define TK_NK_SLASH 15
|
||||
#define TK_NK_REM 16
|
||||
#define TK_NK_CONCAT 17
|
||||
#define TK_CREATE 18
|
||||
#define TK_ACCOUNT 19
|
||||
#define TK_NK_ID 20
|
||||
#define TK_PASS 21
|
||||
#define TK_NK_STRING 22
|
||||
#define TK_ALTER 23
|
||||
#define TK_PPS 24
|
||||
#define TK_TSERIES 25
|
||||
#define TK_STORAGE 26
|
||||
#define TK_STREAMS 27
|
||||
#define TK_QTIME 28
|
||||
#define TK_DBS 29
|
||||
#define TK_USERS 30
|
||||
#define TK_CONNS 31
|
||||
#define TK_STATE 32
|
||||
#define TK_USER 33
|
||||
#define TK_ENABLE 34
|
||||
#define TK_NK_INTEGER 35
|
||||
#define TK_SYSINFO 36
|
||||
#define TK_DROP 37
|
||||
#define TK_GRANT 38
|
||||
#define TK_ON 39
|
||||
#define TK_TO 40
|
||||
#define TK_REVOKE 41
|
||||
#define TK_FROM 42
|
||||
#define TK_NK_COMMA 43
|
||||
#define TK_READ 44
|
||||
#define TK_WRITE 45
|
||||
#define TK_NK_DOT 46
|
||||
#define TK_DNODE 47
|
||||
#define TK_PORT 48
|
||||
#define TK_DNODES 49
|
||||
#define TK_NK_IPTOKEN 50
|
||||
#define TK_LOCAL 51
|
||||
#define TK_QNODE 52
|
||||
#define TK_BNODE 53
|
||||
#define TK_SNODE 54
|
||||
#define TK_MNODE 55
|
||||
#define TK_DATABASE 56
|
||||
#define TK_USE 57
|
||||
#define TK_FLUSH 58
|
||||
#define TK_TRIM 59
|
||||
#define TK_IF 60
|
||||
#define TK_NOT 61
|
||||
#define TK_EXISTS 62
|
||||
#define TK_BUFFER 63
|
||||
#define TK_CACHEMODEL 64
|
||||
#define TK_CACHESIZE 65
|
||||
#define TK_COMP 66
|
||||
#define TK_DURATION 67
|
||||
#define TK_NK_VARIABLE 68
|
||||
#define TK_FSYNC 69
|
||||
#define TK_MAXROWS 70
|
||||
#define TK_MINROWS 71
|
||||
#define TK_KEEP 72
|
||||
#define TK_PAGES 73
|
||||
#define TK_PAGESIZE 74
|
||||
#define TK_PRECISION 75
|
||||
#define TK_REPLICA 76
|
||||
#define TK_STRICT 77
|
||||
#define TK_WAL 78
|
||||
#define TK_VGROUPS 79
|
||||
#define TK_SINGLE_STABLE 80
|
||||
#define TK_RETENTIONS 81
|
||||
#define TK_SCHEMALESS 82
|
||||
#define TK_NK_COLON 83
|
||||
#define TK_TABLE 84
|
||||
#define TK_NK_LP 85
|
||||
#define TK_NK_RP 86
|
||||
#define TK_STABLE 87
|
||||
#define TK_ADD 88
|
||||
#define TK_COLUMN 89
|
||||
#define TK_MODIFY 90
|
||||
#define TK_RENAME 91
|
||||
#define TK_TAG 92
|
||||
#define TK_SET 93
|
||||
#define TK_NK_EQ 94
|
||||
#define TK_USING 95
|
||||
#define TK_TAGS 96
|
||||
#define TK_COMMENT 97
|
||||
#define TK_BOOL 98
|
||||
#define TK_TINYINT 99
|
||||
#define TK_SMALLINT 100
|
||||
#define TK_INT 101
|
||||
#define TK_INTEGER 102
|
||||
#define TK_BIGINT 103
|
||||
#define TK_FLOAT 104
|
||||
#define TK_DOUBLE 105
|
||||
#define TK_BINARY 106
|
||||
#define TK_TIMESTAMP 107
|
||||
#define TK_NCHAR 108
|
||||
#define TK_UNSIGNED 109
|
||||
#define TK_JSON 110
|
||||
#define TK_VARCHAR 111
|
||||
#define TK_MEDIUMBLOB 112
|
||||
#define TK_BLOB 113
|
||||
#define TK_VARBINARY 114
|
||||
#define TK_DECIMAL 115
|
||||
#define TK_MAX_DELAY 116
|
||||
#define TK_WATERMARK 117
|
||||
#define TK_ROLLUP 118
|
||||
#define TK_TTL 119
|
||||
#define TK_SMA 120
|
||||
#define TK_FIRST 121
|
||||
#define TK_LAST 122
|
||||
#define TK_SHOW 123
|
||||
#define TK_DATABASES 124
|
||||
#define TK_TABLES 125
|
||||
#define TK_STABLES 126
|
||||
#define TK_MNODES 127
|
||||
#define TK_MODULES 128
|
||||
#define TK_QNODES 129
|
||||
#define TK_FUNCTIONS 130
|
||||
#define TK_INDEXES 131
|
||||
#define TK_ACCOUNTS 132
|
||||
#define TK_APPS 133
|
||||
#define TK_CONNECTIONS 134
|
||||
#define TK_LICENCE 135
|
||||
#define TK_GRANTS 136
|
||||
#define TK_QUERIES 137
|
||||
#define TK_SCORES 138
|
||||
#define TK_TOPICS 139
|
||||
#define TK_VARIABLES 140
|
||||
#define TK_BNODES 141
|
||||
#define TK_SNODES 142
|
||||
#define TK_CLUSTER 143
|
||||
#define TK_TRANSACTIONS 144
|
||||
#define TK_DISTRIBUTED 145
|
||||
#define TK_CONSUMERS 146
|
||||
#define TK_SUBSCRIPTIONS 147
|
||||
#define TK_LIKE 148
|
||||
#define TK_INDEX 149
|
||||
#define TK_FUNCTION 150
|
||||
#define TK_INTERVAL 151
|
||||
#define TK_TOPIC 152
|
||||
#define TK_AS 153
|
||||
#define TK_WITH 154
|
||||
#define TK_META 155
|
||||
#define TK_CONSUMER 156
|
||||
#define TK_GROUP 157
|
||||
#define TK_DESC 158
|
||||
#define TK_DESCRIBE 159
|
||||
#define TK_RESET 160
|
||||
#define TK_QUERY 161
|
||||
#define TK_CACHE 162
|
||||
#define TK_EXPLAIN 163
|
||||
#define TK_ANALYZE 164
|
||||
#define TK_VERBOSE 165
|
||||
#define TK_NK_BOOL 166
|
||||
#define TK_RATIO 167
|
||||
#define TK_NK_FLOAT 168
|
||||
#define TK_COMPACT 169
|
||||
#define TK_VNODES 170
|
||||
#define TK_IN 171
|
||||
#define TK_OUTPUTTYPE 172
|
||||
#define TK_AGGREGATE 173
|
||||
#define TK_BUFSIZE 174
|
||||
#define TK_STREAM 175
|
||||
#define TK_INTO 176
|
||||
#define TK_TRIGGER 177
|
||||
#define TK_AT_ONCE 178
|
||||
#define TK_WINDOW_CLOSE 179
|
||||
#define TK_IGNORE 180
|
||||
#define TK_EXPIRED 181
|
||||
#define TK_KILL 182
|
||||
#define TK_CONNECTION 183
|
||||
#define TK_TRANSACTION 184
|
||||
#define TK_BALANCE 185
|
||||
#define TK_VGROUP 186
|
||||
#define TK_MERGE 187
|
||||
#define TK_REDISTRIBUTE 188
|
||||
#define TK_SPLIT 189
|
||||
#define TK_SYNCDB 190
|
||||
#define TK_DELETE 191
|
||||
#define TK_INSERT 192
|
||||
#define TK_NULL 193
|
||||
#define TK_NK_QUESTION 194
|
||||
#define TK_NK_ARROW 195
|
||||
#define TK_ROWTS 196
|
||||
#define TK_TBNAME 197
|
||||
#define TK_QSTART 198
|
||||
#define TK_QEND 199
|
||||
#define TK_QDURATION 200
|
||||
#define TK_WSTART 201
|
||||
#define TK_WEND 202
|
||||
#define TK_WDURATION 203
|
||||
#define TK_CAST 204
|
||||
#define TK_NOW 205
|
||||
#define TK_TODAY 206
|
||||
#define TK_TIMEZONE 207
|
||||
#define TK_CLIENT_VERSION 208
|
||||
#define TK_SERVER_VERSION 209
|
||||
#define TK_SERVER_STATUS 210
|
||||
#define TK_CURRENT_USER 211
|
||||
#define TK_COUNT 212
|
||||
#define TK_LAST_ROW 213
|
||||
#define TK_BETWEEN 214
|
||||
#define TK_IS 215
|
||||
#define TK_NK_LT 216
|
||||
#define TK_NK_GT 217
|
||||
#define TK_NK_LE 218
|
||||
#define TK_NK_GE 219
|
||||
#define TK_NK_NE 220
|
||||
#define TK_MATCH 221
|
||||
#define TK_NMATCH 222
|
||||
#define TK_CONTAINS 223
|
||||
#define TK_JOIN 224
|
||||
#define TK_INNER 225
|
||||
#define TK_SELECT 226
|
||||
#define TK_DISTINCT 227
|
||||
#define TK_WHERE 228
|
||||
#define TK_PARTITION 229
|
||||
#define TK_BY 230
|
||||
#define TK_SESSION 231
|
||||
#define TK_STATE_WINDOW 232
|
||||
#define TK_SLIDING 233
|
||||
#define TK_FILL 234
|
||||
#define TK_VALUE 235
|
||||
#define TK_NONE 236
|
||||
#define TK_PREV 237
|
||||
#define TK_LINEAR 238
|
||||
#define TK_NEXT 239
|
||||
#define TK_HAVING 240
|
||||
#define TK_RANGE 241
|
||||
#define TK_EVERY 242
|
||||
#define TK_ORDER 243
|
||||
#define TK_SLIMIT 244
|
||||
#define TK_SOFFSET 245
|
||||
#define TK_LIMIT 246
|
||||
#define TK_OFFSET 247
|
||||
#define TK_ASC 248
|
||||
#define TK_NULLS 249
|
||||
#define TK_ID 250
|
||||
#define TK_NK_BITNOT 251
|
||||
#define TK_VALUES 252
|
||||
#define TK_IMPORT 253
|
||||
#define TK_NK_SEMI 254
|
||||
#define TK_FILE 255
|
||||
#define TK_OR 1
|
||||
#define TK_AND 2
|
||||
#define TK_UNION 3
|
||||
#define TK_ALL 4
|
||||
#define TK_MINUS 5
|
||||
#define TK_EXCEPT 6
|
||||
#define TK_INTERSECT 7
|
||||
#define TK_NK_BITAND 8
|
||||
#define TK_NK_BITOR 9
|
||||
#define TK_NK_LSHIFT 10
|
||||
#define TK_NK_RSHIFT 11
|
||||
#define TK_NK_PLUS 12
|
||||
#define TK_NK_MINUS 13
|
||||
#define TK_NK_STAR 14
|
||||
#define TK_NK_SLASH 15
|
||||
#define TK_NK_REM 16
|
||||
#define TK_NK_CONCAT 17
|
||||
#define TK_CREATE 18
|
||||
#define TK_ACCOUNT 19
|
||||
#define TK_NK_ID 20
|
||||
#define TK_PASS 21
|
||||
#define TK_NK_STRING 22
|
||||
#define TK_ALTER 23
|
||||
#define TK_PPS 24
|
||||
#define TK_TSERIES 25
|
||||
#define TK_STORAGE 26
|
||||
#define TK_STREAMS 27
|
||||
#define TK_QTIME 28
|
||||
#define TK_DBS 29
|
||||
#define TK_USERS 30
|
||||
#define TK_CONNS 31
|
||||
#define TK_STATE 32
|
||||
#define TK_USER 33
|
||||
#define TK_ENABLE 34
|
||||
#define TK_NK_INTEGER 35
|
||||
#define TK_SYSINFO 36
|
||||
#define TK_DROP 37
|
||||
#define TK_GRANT 38
|
||||
#define TK_ON 39
|
||||
#define TK_TO 40
|
||||
#define TK_REVOKE 41
|
||||
#define TK_FROM 42
|
||||
#define TK_NK_COMMA 43
|
||||
#define TK_READ 44
|
||||
#define TK_WRITE 45
|
||||
#define TK_NK_DOT 46
|
||||
#define TK_DNODE 47
|
||||
#define TK_PORT 48
|
||||
#define TK_DNODES 49
|
||||
#define TK_NK_IPTOKEN 50
|
||||
#define TK_LOCAL 51
|
||||
#define TK_QNODE 52
|
||||
#define TK_BNODE 53
|
||||
#define TK_SNODE 54
|
||||
#define TK_MNODE 55
|
||||
#define TK_DATABASE 56
|
||||
#define TK_USE 57
|
||||
#define TK_FLUSH 58
|
||||
#define TK_TRIM 59
|
||||
#define TK_IF 60
|
||||
#define TK_NOT 61
|
||||
#define TK_EXISTS 62
|
||||
#define TK_BUFFER 63
|
||||
#define TK_CACHEMODEL 64
|
||||
#define TK_CACHESIZE 65
|
||||
#define TK_COMP 66
|
||||
#define TK_DURATION 67
|
||||
#define TK_NK_VARIABLE 68
|
||||
#define TK_FSYNC 69
|
||||
#define TK_MAXROWS 70
|
||||
#define TK_MINROWS 71
|
||||
#define TK_KEEP 72
|
||||
#define TK_PAGES 73
|
||||
#define TK_PAGESIZE 74
|
||||
#define TK_PRECISION 75
|
||||
#define TK_REPLICA 76
|
||||
#define TK_STRICT 77
|
||||
#define TK_WAL 78
|
||||
#define TK_VGROUPS 79
|
||||
#define TK_SINGLE_STABLE 80
|
||||
#define TK_RETENTIONS 81
|
||||
#define TK_SCHEMALESS 82
|
||||
#define TK_WAL_RETENTION_PERIOD 83
|
||||
#define TK_WAL_RETENTION_SIZE 84
|
||||
#define TK_WAL_ROLL_PERIOD 85
|
||||
#define TK_WAL_SEGMENT_SIZE 86
|
||||
#define TK_NK_COLON 87
|
||||
#define TK_TABLE 88
|
||||
#define TK_NK_LP 89
|
||||
#define TK_NK_RP 90
|
||||
#define TK_STABLE 91
|
||||
#define TK_ADD 92
|
||||
#define TK_COLUMN 93
|
||||
#define TK_MODIFY 94
|
||||
#define TK_RENAME 95
|
||||
#define TK_TAG 96
|
||||
#define TK_SET 97
|
||||
#define TK_NK_EQ 98
|
||||
#define TK_USING 99
|
||||
#define TK_TAGS 100
|
||||
#define TK_COMMENT 101
|
||||
#define TK_BOOL 102
|
||||
#define TK_TINYINT 103
|
||||
#define TK_SMALLINT 104
|
||||
#define TK_INT 105
|
||||
#define TK_INTEGER 106
|
||||
#define TK_BIGINT 107
|
||||
#define TK_FLOAT 108
|
||||
#define TK_DOUBLE 109
|
||||
#define TK_BINARY 110
|
||||
#define TK_TIMESTAMP 111
|
||||
#define TK_NCHAR 112
|
||||
#define TK_UNSIGNED 113
|
||||
#define TK_JSON 114
|
||||
#define TK_VARCHAR 115
|
||||
#define TK_MEDIUMBLOB 116
|
||||
#define TK_BLOB 117
|
||||
#define TK_VARBINARY 118
|
||||
#define TK_DECIMAL 119
|
||||
#define TK_MAX_DELAY 120
|
||||
#define TK_WATERMARK 121
|
||||
#define TK_ROLLUP 122
|
||||
#define TK_TTL 123
|
||||
#define TK_SMA 124
|
||||
#define TK_FIRST 125
|
||||
#define TK_LAST 126
|
||||
#define TK_SHOW 127
|
||||
#define TK_DATABASES 128
|
||||
#define TK_TABLES 129
|
||||
#define TK_STABLES 130
|
||||
#define TK_MNODES 131
|
||||
#define TK_MODULES 132
|
||||
#define TK_QNODES 133
|
||||
#define TK_FUNCTIONS 134
|
||||
#define TK_INDEXES 135
|
||||
#define TK_ACCOUNTS 136
|
||||
#define TK_APPS 137
|
||||
#define TK_CONNECTIONS 138
|
||||
#define TK_LICENCE 139
|
||||
#define TK_GRANTS 140
|
||||
#define TK_QUERIES 141
|
||||
#define TK_SCORES 142
|
||||
#define TK_TOPICS 143
|
||||
#define TK_VARIABLES 144
|
||||
#define TK_BNODES 145
|
||||
#define TK_SNODES 146
|
||||
#define TK_CLUSTER 147
|
||||
#define TK_TRANSACTIONS 148
|
||||
#define TK_DISTRIBUTED 149
|
||||
#define TK_CONSUMERS 150
|
||||
#define TK_SUBSCRIPTIONS 151
|
||||
#define TK_LIKE 152
|
||||
#define TK_INDEX 153
|
||||
#define TK_FUNCTION 154
|
||||
#define TK_INTERVAL 155
|
||||
#define TK_TOPIC 156
|
||||
#define TK_AS 157
|
||||
#define TK_WITH 158
|
||||
#define TK_META 159
|
||||
#define TK_CONSUMER 160
|
||||
#define TK_GROUP 161
|
||||
#define TK_DESC 162
|
||||
#define TK_DESCRIBE 163
|
||||
#define TK_RESET 164
|
||||
#define TK_QUERY 165
|
||||
#define TK_CACHE 166
|
||||
#define TK_EXPLAIN 167
|
||||
#define TK_ANALYZE 168
|
||||
#define TK_VERBOSE 169
|
||||
#define TK_NK_BOOL 170
|
||||
#define TK_RATIO 171
|
||||
#define TK_NK_FLOAT 172
|
||||
#define TK_COMPACT 173
|
||||
#define TK_VNODES 174
|
||||
#define TK_IN 175
|
||||
#define TK_OUTPUTTYPE 176
|
||||
#define TK_AGGREGATE 177
|
||||
#define TK_BUFSIZE 178
|
||||
#define TK_STREAM 179
|
||||
#define TK_INTO 180
|
||||
#define TK_TRIGGER 181
|
||||
#define TK_AT_ONCE 182
|
||||
#define TK_WINDOW_CLOSE 183
|
||||
#define TK_IGNORE 184
|
||||
#define TK_EXPIRED 185
|
||||
#define TK_KILL 186
|
||||
#define TK_CONNECTION 187
|
||||
#define TK_TRANSACTION 188
|
||||
#define TK_BALANCE 189
|
||||
#define TK_VGROUP 190
|
||||
#define TK_MERGE 191
|
||||
#define TK_REDISTRIBUTE 192
|
||||
#define TK_SPLIT 193
|
||||
#define TK_SYNCDB 194
|
||||
#define TK_DELETE 195
|
||||
#define TK_INSERT 196
|
||||
#define TK_NULL 197
|
||||
#define TK_NK_QUESTION 198
|
||||
#define TK_NK_ARROW 199
|
||||
#define TK_ROWTS 200
|
||||
#define TK_TBNAME 201
|
||||
#define TK_QSTART 202
|
||||
#define TK_QEND 203
|
||||
#define TK_QDURATION 204
|
||||
#define TK_WSTART 205
|
||||
#define TK_WEND 206
|
||||
#define TK_WDURATION 207
|
||||
#define TK_CAST 208
|
||||
#define TK_NOW 209
|
||||
#define TK_TODAY 210
|
||||
#define TK_TIMEZONE 211
|
||||
#define TK_CLIENT_VERSION 212
|
||||
#define TK_SERVER_VERSION 213
|
||||
#define TK_SERVER_STATUS 214
|
||||
#define TK_CURRENT_USER 215
|
||||
#define TK_COUNT 216
|
||||
#define TK_LAST_ROW 217
|
||||
#define TK_BETWEEN 218
|
||||
#define TK_IS 219
|
||||
#define TK_NK_LT 220
|
||||
#define TK_NK_GT 221
|
||||
#define TK_NK_LE 222
|
||||
#define TK_NK_GE 223
|
||||
#define TK_NK_NE 224
|
||||
#define TK_MATCH 225
|
||||
#define TK_NMATCH 226
|
||||
#define TK_CONTAINS 227
|
||||
#define TK_JOIN 228
|
||||
#define TK_INNER 229
|
||||
#define TK_SELECT 230
|
||||
#define TK_DISTINCT 231
|
||||
#define TK_WHERE 232
|
||||
#define TK_PARTITION 233
|
||||
#define TK_BY 234
|
||||
#define TK_SESSION 235
|
||||
#define TK_STATE_WINDOW 236
|
||||
#define TK_SLIDING 237
|
||||
#define TK_FILL 238
|
||||
#define TK_VALUE 239
|
||||
#define TK_NONE 240
|
||||
#define TK_PREV 241
|
||||
#define TK_LINEAR 242
|
||||
#define TK_NEXT 243
|
||||
#define TK_HAVING 244
|
||||
#define TK_RANGE 245
|
||||
#define TK_EVERY 246
|
||||
#define TK_ORDER 247
|
||||
#define TK_SLIMIT 248
|
||||
#define TK_SOFFSET 249
|
||||
#define TK_LIMIT 250
|
||||
#define TK_OFFSET 251
|
||||
#define TK_ASC 252
|
||||
#define TK_NULLS 253
|
||||
#define TK_ID 254
|
||||
#define TK_NK_BITNOT 255
|
||||
#define TK_VALUES 256
|
||||
#define TK_IMPORT 257
|
||||
#define TK_NK_SEMI 258
|
||||
#define TK_FILE 259
|
||||
|
||||
#define TK_NK_SPACE 300
|
||||
#define TK_NK_COMMENT 301
|
||||
|
|
|
@ -143,6 +143,7 @@ typedef struct SqlFunctionCtx {
|
|||
struct SExprInfo *pExpr;
|
||||
struct SDiskbasedBuf *pBuf;
|
||||
struct SSDataBlock *pSrcBlock;
|
||||
struct SSDataBlock *pDstBlock; // used by indifinite rows function to set selectivity
|
||||
int32_t curBufPage;
|
||||
bool increase;
|
||||
|
||||
|
|
|
@ -74,6 +74,10 @@ typedef struct SDatabaseOptions {
|
|||
int8_t singleStable;
|
||||
SNodeList* pRetentions;
|
||||
int8_t schemaless;
|
||||
int32_t walRetentionPeriod;
|
||||
int32_t walRetentionSize;
|
||||
int32_t walRollPeriod;
|
||||
int32_t walSegmentSize;
|
||||
} SDatabaseOptions;
|
||||
|
||||
typedef struct SCreateDatabaseStmt {
|
||||
|
|
|
@ -104,6 +104,7 @@ typedef struct SJoinLogicNode {
|
|||
SNode* pMergeCondition;
|
||||
SNode* pOnConditions;
|
||||
bool isSingleTableJoin;
|
||||
EOrder inputTsOrder;
|
||||
} SJoinLogicNode;
|
||||
|
||||
typedef struct SAggLogicNode {
|
||||
|
@ -201,6 +202,7 @@ typedef struct SWindowLogicNode {
|
|||
int64_t watermark;
|
||||
int8_t igExpired;
|
||||
EWindowAlgorithm windowAlgo;
|
||||
EOrder inputTsOrder;
|
||||
} SWindowLogicNode;
|
||||
|
||||
typedef struct SFillLogicNode {
|
||||
|
@ -356,15 +358,14 @@ typedef struct SInterpFuncPhysiNode {
|
|||
SNode* pTimeSeries; // SColumnNode
|
||||
} SInterpFuncPhysiNode;
|
||||
|
||||
typedef struct SJoinPhysiNode {
|
||||
typedef struct SSortMergeJoinPhysiNode {
|
||||
SPhysiNode node;
|
||||
EJoinType joinType;
|
||||
SNode* pMergeCondition;
|
||||
SNode* pOnConditions;
|
||||
SNodeList* pTargets;
|
||||
} SJoinPhysiNode;
|
||||
|
||||
typedef SJoinPhysiNode SSortMergeJoinPhysiNode;
|
||||
EOrder inputTsOrder;
|
||||
} SSortMergeJoinPhysiNode;
|
||||
|
||||
typedef struct SAggPhysiNode {
|
||||
SPhysiNode node;
|
||||
|
|
|
@@ -255,6 +255,7 @@ typedef struct SSelectStmt {
  int32_t selectFuncNum;
  bool isEmptyResult;
  bool isTimeLineResult;
  bool isSubquery;
  bool hasAggFuncs;
  bool hasRepeatScanFuncs;
  bool hasIndefiniteRowsFunc;
@@ -103,8 +103,8 @@ typedef struct SWal {
  int32_t fsyncSeq;
  // meta
  SWalVer vers;
  TdFilePtr pWriteLogTFile;
  TdFilePtr pWriteIdxTFile;
  TdFilePtr pLogFile;
  TdFilePtr pIdxFile;
  int32_t writeCur;
  SArray *fileInfoSet;  // SArray<SWalFileInfo>
  // status

@@ -114,21 +114,30 @@ typedef struct SWal {
  int64_t refId;
  TdThreadMutex mutex;
  // ref
  SHashObj *pRefHash;  // ref -> SWalRef
  SHashObj *pRefHash;  // refId -> SWalRef
  // path
  char path[WAL_PATH_LEN];
  // reusable write head
  SWalCkHead writeHead;
} SWal;  // WAL HANDLE
} SWal;

typedef struct {
  int64_t refId;
  int64_t refVer;
  int64_t refFile;
  SWal *pWal;
} SWalRef;

typedef struct {
  int8_t scanUncommited;
  int8_t scanNotApplied;
  int8_t scanMeta;
  int8_t enableRef;
} SWalFilterCond;

typedef struct {
  SWal *pWal;
  int64_t readerId;
  TdFilePtr pLogFile;
  TdFilePtr pIdxFile;
  int64_t curFileFirstVer;

@@ -138,7 +147,8 @@ typedef struct {
  int8_t curStopped;
  TdThreadMutex mutex;
  SWalFilterCond cond;
  SWalCkHead *pHead;
  // TODO remove it
  SWalCkHead *pHead;
} SWalReader;

// module initialization
@@ -157,11 +167,7 @@ int32_t walWrite(SWal *, int64_t index, tmsg_t msgType, const void *body, int32_
int32_t walWriteWithSyncInfo(SWal *, int64_t index, tmsg_t msgType, SWalSyncInfo syncMeta, const void *body,
                             int32_t bodyLen);

// This interface assign version automatically and return to caller.
// When using this interface with concurrent writes,
// wal will write all logs atomically,
// but not sure which one will be actually write first,
// and then the unique index of successful writen is returned.
// Assign version automatically and return to caller,
// -1 will be returned for failed writes
int64_t walAppendLog(SWal *, tmsg_t msgType, SWalSyncInfo syncMeta, const void *body, int32_t bodyLen);
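The rewritten comment above states that walAppendLog assigns the version itself and returns -1 on failure. A minimal usage sketch based only on the declaration shown in this hunk; the wrapper name and the already-opened handle are assumptions for illustration:

```c
#include "wal.h"  // declares walAppendLog, SWal and SWalSyncInfo (per the hunk above)

// Append one record and let the WAL pick the version (sketch only).
static int64_t appendOneRecord(SWal *pWal, tmsg_t msgType, const void *body, int32_t bodyLen) {
  SWalSyncInfo syncMeta = {0};  // no sync metadata for this illustration
  int64_t ver = walAppendLog(pWal, msgType, syncMeta, body, bodyLen);
  if (ver < 0) {
    return -1;  // -1 is returned for failed writes, per the comment above
  }
  return ver;  // the version actually assigned to this record
}
```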
@@ -191,17 +197,15 @@ void walSetReaderCapacity(SWalReader *pRead, int32_t capacity);
int32_t walFetchHead(SWalReader *pRead, int64_t ver, SWalCkHead *pHead);
int32_t walFetchBody(SWalReader *pRead, SWalCkHead **ppHead);
int32_t walSkipFetchBody(SWalReader *pRead, const SWalCkHead *pHead);
typedef struct {
  int64_t refId;
  int64_t ver;
} SWalRef;

SWalRef *walRefCommittedVer(SWal *);

SWalRef *walOpenRef(SWal *);
void walCloseRef(SWalRef *);
void walCloseRef(SWal *pWal, int64_t refId);
int32_t walRefVer(SWalRef *, int64_t ver);
int32_t walUnrefVer(SWal *);
void walUnrefVer(SWalRef *);

// help function for raft
// helper function for raft
bool walLogExist(SWal *, int64_t ver);
bool walIsEmpty(SWal *);
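With this change the reference API works on SWalRef objects rather than directly on the SWal handle: walOpenRef or walRefCommittedVer creates a ref, walRefVer pins a version, walUnrefVer and walCloseRef release it. A hedged sketch of how a reader might pin a version, using only the declarations shown above (error handling trimmed, helper names invented here):

```c
#include "wal.h"  // SWal, SWalRef and the ref functions declared in the hunk above

// Pin a specific version so the WAL cannot retire it while we read (sketch).
static SWalRef *pinVersion(SWal *pWal, int64_t ver) {
  SWalRef *pRef = walOpenRef(pWal);
  if (pRef == NULL) return NULL;
  if (walRefVer(pRef, ver) < 0) {    // take a reference on this version
    walCloseRef(pWal, pRef->refId);  // new signature: close by refId
    return NULL;
  }
  return pRef;
}

static void unpinVersion(SWal *pWal, SWalRef *pRef) {
  walUnrefVer(pRef);               // drop the version reference
  walCloseRef(pWal, pRef->refId);  // then close the ref object
}
```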
@@ -358,6 +358,15 @@ typedef enum ELogicConditionType {
#define TSDB_DB_SCHEMALESS_OFF 0
#define TSDB_DEFAULT_DB_SCHEMALESS TSDB_DB_SCHEMALESS_OFF

#define TSDB_DB_MIN_WAL_RETENTION_PERIOD -1
#define TSDB_DEFAULT_DB_WAL_RETENTION_PERIOD 0
#define TSDB_DB_MIN_WAL_RETENTION_SIZE -1
#define TSDB_DEFAULT_DB_WAL_RETENTION_SIZE 0
#define TSDB_DB_MIN_WAL_ROLL_PERIOD 0
#define TSDB_DEFAULT_DB_WAL_ROLL_PERIOD 0
#define TSDB_DB_MIN_WAL_SEGMENT_SIZE 0
#define TSDB_DEFAULT_DB_WAL_SEGMENT_SIZE 0

#define TSDB_MIN_ROLLUP_MAX_DELAY 1  // unit millisecond
#define TSDB_MAX_ROLLUP_MAX_DELAY (15 * 60 * 1000)
#define TSDB_MIN_ROLLUP_WATERMARK 0  // unit millisecond
@@ -67,7 +67,6 @@ extern int32_t idxDebugFlag;
int32_t taosInitLog(const char *logName, int32_t maxFiles);
void taosCloseLog();
void taosResetLog();
void taosSetAllDebugFlag(int32_t flag);
void taosDumpData(uint8_t *msg, int32_t len);

void taosPrintLog(const char *flags, ELogLevel level, int32_t dflag, const char *format, ...)
@@ -2019,7 +2019,7 @@ int32_t transferTableNameList(const char* tbList, int32_t acctId, char* dbName,
    }

    if (('a' <= *(tbList + i) && 'z' >= *(tbList + i)) || ('A' <= *(tbList + i) && 'Z' >= *(tbList + i)) ||
        ('0' <= *(tbList + i) && '9' >= *(tbList + i))) {
        ('0' <= *(tbList + i) && '9' >= *(tbList + i)) || ('_' == *(tbList + i))) {
      if (vLen[vIdx] > 0) {
        goto _return;
      }
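The change above widens the identifier character class in transferTableNameList so that '_' is accepted alongside letters and digits. A standalone sketch of the same predicate, with a hypothetical helper name used only for illustration:

```c
#include <stdbool.h>

// Same character class as the updated condition: letters, digits, or underscore.
static bool isTbNameChar(char c) {
  return ('a' <= c && c <= 'z') || ('A' <= c && c <= 'Z') ||
         ('0' <= c && c <= '9') || c == '_';
}
```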
@@ -973,7 +973,7 @@ int taos_load_table_info(TAOS *taos, const char *tableNameList) {

  conn.mgmtEps = getEpSet_s(&pTscObj->pAppInfo->mgmtEp);

  code = catalogAsyncGetAllMeta(pCtg, &conn, &catalogReq, syncCatalogFn, NULL, NULL);
  code = catalogAsyncGetAllMeta(pCtg, &conn, &catalogReq, syncCatalogFn, pRequest->body.param, NULL);
  if (code) {
    goto _return;
  }
@@ -1763,9 +1763,9 @@ char* dumpBlockData(SSDataBlock* pDataBlock, const char* flag, char** pDataBuf)
  int32_t colNum = taosArrayGetSize(pDataBlock->pDataBlock);
  int32_t rows = pDataBlock->info.rows;
  int32_t len = 0;
  len += snprintf(dumpBuf + len, size - len, "===stream===%s |block type %d |child id %d|group id:%" PRIu64 "| uid:%ld|\n", flag,
  len += snprintf(dumpBuf + len, size - len, "===stream===%s |block type %d|child id %d|group id:%" PRIu64 "|uid:%ld|rows:%d\n", flag,
                  (int32_t)pDataBlock->info.type, pDataBlock->info.childId, pDataBlock->info.groupId,
                  pDataBlock->info.uid);
                  pDataBlock->info.uid, pDataBlock->info.rows);
  if (len >= size - 1) return dumpBuf;

  for (int32_t j = 0; j < rows; j++) {
@@ -1878,7 +1878,7 @@ int32_t buildSubmitReqFromDataBlock(SSubmitReq** pReq, const SArray* pDataBlocks
    msgLen += sizeof(SSubmitBlk);
    int32_t dataLen = 0;
    for (int32_t j = 0; j < rows; ++j) {  // iterate by row
      tdSRowResetBuf(&rb, POINTER_SHIFT(pDataBuf, msgLen));  // set row buf
      tdSRowResetBuf(&rb, POINTER_SHIFT(pDataBuf, msgLen + dataLen));  // set row buf
      bool isStartKey = false;
      int32_t offset = 0;
      for (int32_t k = 0; k < colNum; ++k) {  // iterate by column
@@ -213,10 +213,6 @@ static int32_t taosSetTfsCfg(SConfig *pCfg) {
    memcpy(&tsDiskCfg[index], pCfg, sizeof(SDiskCfg));
    if (pCfg->level == 0 && pCfg->primary == 1) {
      tstrncpy(tsDataDir, pCfg->dir, PATH_MAX);
      if (taosMulMkDir(tsDataDir) != 0) {
        uError("failed to create dataDir:%s since %s", tsDataDir, terrstr());
        return -1;
      }
    }
    if (taosMulMkDir(pCfg->dir) != 0) {
      uError("failed to create tfsDir:%s since %s", tsDataDir, terrstr());

@@ -227,12 +223,13 @@ static int32_t taosSetTfsCfg(SConfig *pCfg) {

  if (tsDataDir[0] == 0) {
    if (pItem->str != NULL) {
      taosAddDataDir(0, pItem->str, 0, 1);
      taosAddDataDir(tsDiskCfgNum, pItem->str, 0, 1);
      tstrncpy(tsDataDir, pItem->str, PATH_MAX);
      if (taosMulMkDir(tsDataDir) != 0) {
        uError("failed to create dataDir:%s since %s", tsDataDir, terrstr());
        uError("failed to create tfsDir:%s since %s", tsDataDir, terrstr());
        return -1;
      }
      tsDiskCfgNum++;
    } else {
      uError("datadir not set");
      return -1;
@@ -1146,6 +1143,10 @@ void taosCfgDynamicOptions(const char *option, const char *value) {
    int32_t monitor = atoi(value);
    uInfo("monitor set from %d to %d", tsEnableMonitor, monitor);
    tsEnableMonitor = monitor;
    SConfigItem *pItem = cfgGetItem(tsCfg, "monitor");
    if (pItem != NULL) {
      pItem->bval = tsEnableMonitor;
    }
    return;
  }

@@ -1169,8 +1170,39 @@ void taosCfgDynamicOptions(const char *option, const char *value) {
      int32_t flag = atoi(value);
      uInfo("%s set from %d to %d", optName, *optionVars[d], flag);
      *optionVars[d] = flag;
      taosSetDebugFlag(optionVars[d], optName, flag);
      return;
    }

  uError("failed to cfg dynamic option:%s value:%s", option, value);
}

void taosSetDebugFlag(int32_t *pFlagPtr, const char *flagName, int32_t flagVal) {
  SConfigItem *pItem = cfgGetItem(tsCfg, flagName);
  if (pItem != NULL) {
    pItem->i32 = flagVal;
  }
  *pFlagPtr = flagVal;
}

void taosSetAllDebugFlag(int32_t flag) {
  if (flag <= 0) return;

  taosSetDebugFlag(&uDebugFlag, "uDebugFlag", flag);
  taosSetDebugFlag(&rpcDebugFlag, "rpcDebugFlag", flag);
  taosSetDebugFlag(&jniDebugFlag, "jniDebugFlag", flag);
  taosSetDebugFlag(&qDebugFlag, "qDebugFlag", flag);
  taosSetDebugFlag(&cDebugFlag, "cDebugFlag", flag);
  taosSetDebugFlag(&dDebugFlag, "dDebugFlag", flag);
  taosSetDebugFlag(&vDebugFlag, "vDebugFlag", flag);
  taosSetDebugFlag(&mDebugFlag, "mDebugFlag", flag);
  taosSetDebugFlag(&wDebugFlag, "wDebugFlag", flag);
  taosSetDebugFlag(&sDebugFlag, "sDebugFlag", flag);
  taosSetDebugFlag(&tsdbDebugFlag, "tsdbDebugFlag", flag);
  taosSetDebugFlag(&tqDebugFlag, "tqDebugFlag", flag);
  taosSetDebugFlag(&fsDebugFlag, "fsDebugFlag", flag);
  taosSetDebugFlag(&udfDebugFlag, "udfDebugFlag", flag);
  taosSetDebugFlag(&smaDebugFlag, "smaDebugFlag", flag);
  taosSetDebugFlag(&idxDebugFlag, "idxDebugFlag", flag);
  uInfo("all debug flag are set to %d", flag);
}
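The new taosSetDebugFlag helper updates both the global flag variable and the matching SConfigItem, so a later read of the config table stays consistent with the runtime value, and taosSetAllDebugFlag now loops over the same helper. A hedged sketch of calling it; the header name, the assumption that the helper is exported, and the flag values (131/143) are illustrative only:

```c
#include "tlog.h"     // assumed to declare the *DebugFlag globals and taosSetAllDebugFlag
#include "tglobal.h"  // assumed to declare taosSetDebugFlag

// Sketch: bump one module's debug flag at runtime, then reset all of them.
static void demoAdjustDebugFlags(void) {
  // Writes the value into the cfg item (pItem->i32) and into the global variable,
  // as defined in the hunk above.
  taosSetDebugFlag(&dDebugFlag, "dDebugFlag", 143);

  // Applies the same helper to every known *DebugFlag listed above.
  taosSetAllDebugFlag(131);
}
```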
@@ -2018,6 +2018,10 @@ int32_t tSerializeSCreateDbReq(void *buf, int32_t bufLen, SCreateDbReq *pReq) {
  if (tEncodeI8(&encoder, pReq->strict) < 0) return -1;
  if (tEncodeI8(&encoder, pReq->cacheLast) < 0) return -1;
  if (tEncodeI8(&encoder, pReq->schemaless) < 0) return -1;
  if (tEncodeI32(&encoder, pReq->walRetentionPeriod) < 0) return -1;
  if (tEncodeI32(&encoder, pReq->walRetentionSize) < 0) return -1;
  if (tEncodeI32(&encoder, pReq->walRollPeriod) < 0) return -1;
  if (tEncodeI32(&encoder, pReq->walSegmentSize) < 0) return -1;
  if (tEncodeI8(&encoder, pReq->ignoreExist) < 0) return -1;
  if (tEncodeI32(&encoder, pReq->numOfRetensions) < 0) return -1;
  for (int32_t i = 0; i < pReq->numOfRetensions; ++i) {

@@ -2060,6 +2064,10 @@ int32_t tDeserializeSCreateDbReq(void *buf, int32_t bufLen, SCreateDbReq *pReq)
  if (tDecodeI8(&decoder, &pReq->strict) < 0) return -1;
  if (tDecodeI8(&decoder, &pReq->cacheLast) < 0) return -1;
  if (tDecodeI8(&decoder, &pReq->schemaless) < 0) return -1;
  if (tDecodeI32(&decoder, &pReq->walRetentionPeriod) < 0) return -1;
  if (tDecodeI32(&decoder, &pReq->walRetentionSize) < 0) return -1;
  if (tDecodeI32(&decoder, &pReq->walRollPeriod) < 0) return -1;
  if (tDecodeI32(&decoder, &pReq->walSegmentSize) < 0) return -1;
  if (tDecodeI8(&decoder, &pReq->ignoreExist) < 0) return -1;
  if (tDecodeI32(&decoder, &pReq->numOfRetensions) < 0) return -1;
  pReq->pRetensions = taosArrayInit(pReq->numOfRetensions, sizeof(SRetention));

@@ -3742,6 +3750,10 @@ int32_t tSerializeSCreateVnodeReq(void *buf, int32_t bufLen, SCreateVnodeReq *pR
    uint32_t tsmaLen = (uint32_t)(htonl(((SMsgHead *)pReq->pTsma)->contLen));
    if (tEncodeBinary(&encoder, (const uint8_t *)pReq->pTsma, tsmaLen) < 0) return -1;
  }
  if (tEncodeI32(&encoder, pReq->walRetentionPeriod) < 0) return -1;
  if (tEncodeI64(&encoder, pReq->walRetentionSize) < 0) return -1;
  if (tEncodeI32(&encoder, pReq->walRollPeriod) < 0) return -1;
  if (tEncodeI64(&encoder, pReq->walSegmentSize) < 0) return -1;

  tEndEncode(&encoder);

@@ -3810,6 +3822,11 @@ int32_t tDeserializeSCreateVnodeReq(void *buf, int32_t bufLen, SCreateVnodeReq *
    if (tDecodeBinary(&decoder, (uint8_t **)&pReq->pTsma, NULL) < 0) return -1;
  }

  if (tDecodeI32(&decoder, &pReq->walRetentionPeriod) < 0) return -1;
  if (tDecodeI64(&decoder, &pReq->walRetentionSize) < 0) return -1;
  if (tDecodeI32(&decoder, &pReq->walRollPeriod) < 0) return -1;
  if (tDecodeI64(&decoder, &pReq->walSegmentSize) < 0) return -1;

  tEndDecode(&decoder);
  tDecoderClear(&decoder);
  return 0;
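The four new WAL fields are appended to the encoder and the decoder in the same order and with the same widths (I32 for the periods, I64 for the sizes in the vnode request), which is what keeps the two sides of the message in sync. A minimal illustration of that symmetry with a hypothetical two-field struct; the struct and function names are invented for this sketch:

```c
#include "tencode.h"  // SEncoder/SDecoder and the tEncode*/tDecode* helpers used above

typedef struct {
  int32_t walRollPeriod;
  int64_t walSegmentSize;
} SDemoCfg;

// Encode and decode must touch the fields in the same order and with the same widths.
static int32_t demoEncode(SEncoder *pCoder, const SDemoCfg *pCfg) {
  if (tEncodeI32(pCoder, pCfg->walRollPeriod) < 0) return -1;
  if (tEncodeI64(pCoder, pCfg->walSegmentSize) < 0) return -1;
  return 0;
}

static int32_t demoDecode(SDecoder *pCoder, SDemoCfg *pCfg) {
  if (tDecodeI32(pCoder, &pCfg->walRollPeriod) < 0) return -1;
  if (tDecodeI64(pCoder, &pCfg->walSegmentSize) < 0) return -1;
  return 0;
}
```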
@@ -160,6 +160,13 @@ static void vmGenerateVnodeCfg(SCreateVnodeReq *pCreate, SVnodeCfg *pCfg) {
  }

  pCfg->walCfg.vgId = pCreate->vgId;
  pCfg->walCfg.fsyncPeriod = pCreate->fsyncPeriod;
  pCfg->walCfg.retentionPeriod = pCreate->walRetentionPeriod;
  pCfg->walCfg.rollPeriod = pCreate->walRollPeriod;
  pCfg->walCfg.retentionSize = pCreate->walRetentionSize;
  pCfg->walCfg.segSize = pCreate->walSegmentSize;
  pCfg->walCfg.level = pCreate->walLevel;

  pCfg->hashBegin = pCreate->hashBegin;
  pCfg->hashEnd = pCreate->hashEnd;
  pCfg->hashMethod = pCreate->hashMethod;
@@ -164,8 +164,8 @@ typedef struct {
  int32_t lastErrorNo;
  tmsg_t lastMsgType;
  SEpSet lastEpset;
  char dbname1[TSDB_DB_FNAME_LEN];
  char dbname2[TSDB_DB_FNAME_LEN];
  char dbname1[TSDB_TABLE_FNAME_LEN];
  char dbname2[TSDB_TABLE_FNAME_LEN];
  int32_t startFunc;
  int32_t stopFunc;
  int32_t paramLen;

@@ -302,9 +302,13 @@ typedef struct {
  int8_t strict;
  int8_t hashMethod;  // default is 1
  int8_t cacheLast;
  int8_t schemaless;
  int32_t numOfRetensions;
  SArray* pRetensions;
  int8_t schemaless;
  int32_t walRetentionPeriod;
  int64_t walRetentionSize;
  int32_t walRollPeriod;
  int64_t walSegmentSize;
} SDbCfg;

typedef struct {
@@ -120,6 +120,10 @@ static SSdbRaw *mndDbActionEncode(SDbObj *pDb) {
    SDB_SET_INT8(pRaw, dataPos, pRetension->keepUnit, _OVER)
  }
  SDB_SET_INT8(pRaw, dataPos, pDb->cfg.schemaless, _OVER)
  SDB_SET_INT32(pRaw, dataPos, pDb->cfg.walRetentionPeriod, _OVER)
  SDB_SET_INT64(pRaw, dataPos, pDb->cfg.walRetentionSize, _OVER)
  SDB_SET_INT32(pRaw, dataPos, pDb->cfg.walRollPeriod, _OVER)
  SDB_SET_INT64(pRaw, dataPos, pDb->cfg.walSegmentSize, _OVER)

  SDB_SET_RESERVE(pRaw, dataPos, DB_RESERVE_SIZE, _OVER)
  SDB_SET_DATALEN(pRaw, dataPos, _OVER)

@@ -199,6 +203,10 @@ static SSdbRow *mndDbActionDecode(SSdbRaw *pRaw) {
    }
  }
  SDB_GET_INT8(pRaw, dataPos, &pDb->cfg.schemaless, _OVER)
  SDB_GET_INT32(pRaw, dataPos, &pDb->cfg.walRetentionPeriod, _OVER)
  SDB_GET_INT64(pRaw, dataPos, &pDb->cfg.walRetentionSize, _OVER)
  SDB_GET_INT32(pRaw, dataPos, &pDb->cfg.walRollPeriod, _OVER)
  SDB_GET_INT64(pRaw, dataPos, &pDb->cfg.walSegmentSize, _OVER)

  SDB_GET_RESERVE(pRaw, dataPos, DB_RESERVE_SIZE, _OVER)
  taosInitRWLatch(&pDb->lock);

@@ -318,6 +326,10 @@ static int32_t mndCheckDbCfg(SMnode *pMnode, SDbCfg *pCfg) {
    terrno = TSDB_CODE_MND_NO_ENOUGH_DNODES;
    return -1;
  }
  if (pCfg->walRetentionPeriod < TSDB_DB_MIN_WAL_RETENTION_PERIOD) return -1;
  if (pCfg->walRetentionSize < TSDB_DB_MIN_WAL_RETENTION_SIZE) return -1;
  if (pCfg->walRollPeriod < TSDB_DB_MIN_WAL_ROLL_PERIOD) return -1;
  if (pCfg->walSegmentSize < TSDB_DB_MIN_WAL_SEGMENT_SIZE) return -1;

  terrno = 0;
  return terrno;

@@ -345,6 +357,12 @@ static void mndSetDefaultDbCfg(SDbCfg *pCfg) {
  if (pCfg->cacheLastSize <= 0) pCfg->cacheLastSize = TSDB_DEFAULT_CACHE_SIZE;
  if (pCfg->numOfRetensions < 0) pCfg->numOfRetensions = 0;
  if (pCfg->schemaless < 0) pCfg->schemaless = TSDB_DB_SCHEMALESS_OFF;
  if (pCfg->walRetentionPeriod < 0 && pCfg->walRetentionPeriod != -1)
    pCfg->walRetentionPeriod = TSDB_DEFAULT_DB_WAL_RETENTION_PERIOD;
  if (pCfg->walRetentionSize < 0 && pCfg->walRetentionSize != -1)
    pCfg->walRetentionSize = TSDB_DEFAULT_DB_WAL_RETENTION_SIZE;
  if (pCfg->walRollPeriod < 0) pCfg->walRollPeriod = TSDB_DEFAULT_DB_WAL_ROLL_PERIOD;
  if (pCfg->walSegmentSize < 0) pCfg->walSegmentSize = TSDB_DEFAULT_DB_WAL_SEGMENT_SIZE;
}

static int32_t mndSetCreateDbRedoLogs(SMnode *pMnode, STrans *pTrans, SDbObj *pDb, SVgObj *pVgroups) {

@@ -457,6 +475,10 @@ static int32_t mndCreateDb(SMnode *pMnode, SRpcMsg *pReq, SCreateDbReq *pCreate,
    .cacheLast = pCreate->cacheLast,
    .hashMethod = 1,
    .schemaless = pCreate->schemaless,
    .walRetentionPeriod = pCreate->walRetentionPeriod,
    .walRetentionSize = pCreate->walRetentionSize,
    .walRollPeriod = pCreate->walRollPeriod,
    .walSegmentSize = pCreate->walSegmentSize,
  };

  dbObj.cfg.numOfRetensions = pCreate->numOfRetensions;
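mndSetDefaultDbCfg treats a negative value as "unset" and replaces it with the default, but keeps an explicit -1 (which presumably denotes unlimited retention), while mndCheckDbCfg only rejects values below the documented minimum. A condensed sketch of that policy for a single field, using the constants from the earlier hunk; the function names here are invented for illustration:

```c
#include "tdef.h"  // assumed to provide the TSDB_*_WAL_RETENTION_PERIOD constants shown above

// Sketch of the default/validate policy for walRetentionPeriod only.
static void demoSetDefaultWalRetention(int32_t *pPeriod) {
  // -1 is preserved as an explicit setting; any other negative value falls
  // back to the default (0).
  if (*pPeriod < 0 && *pPeriod != -1) *pPeriod = TSDB_DEFAULT_DB_WAL_RETENTION_PERIOD;
}

static int32_t demoCheckWalRetention(int32_t period) {
  // TSDB_DB_MIN_WAL_RETENTION_PERIOD is -1, so anything below that is invalid.
  return (period < TSDB_DB_MIN_WAL_RETENTION_PERIOD) ? -1 : 0;
}
```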
@@ -874,7 +874,7 @@ static int32_t mndProcessConfigDnodeReq(SRpcMsg *pReq) {
}

static int32_t mndProcessConfigDnodeRsp(SRpcMsg *pRsp) {
  mInfo("config rsp from dnode, app:%p", pRsp->info.ahandle);
  mInfo("config rsp from dnode");
  return 0;
}
@ -281,7 +281,7 @@ static int32_t mndSetDropOffsetRedoLogs(SMnode *pMnode, STrans *pTrans, SMqOffse
|
|||
}
|
||||
|
||||
int32_t mndDropOffsetByDB(SMnode *pMnode, STrans *pTrans, SDbObj *pDb) {
|
||||
int32_t code = -1;
|
||||
int32_t code = 0;
|
||||
SSdb *pSdb = pMnode->pSdb;
|
||||
|
||||
void *pIter = NULL;
|
||||
|
@ -297,15 +297,15 @@ int32_t mndDropOffsetByDB(SMnode *pMnode, STrans *pTrans, SDbObj *pDb) {
|
|||
|
||||
if (mndSetDropOffsetCommitLogs(pMnode, pTrans, pOffset) < 0) {
|
||||
sdbRelease(pSdb, pOffset);
|
||||
goto END;
|
||||
sdbCancelFetch(pSdb, pIter);
|
||||
code = -1;
|
||||
break;
|
||||
}
|
||||
|
||||
sdbRelease(pSdb, pOffset);
|
||||
}
|
||||
|
||||
code = 0;
|
||||
END:
|
||||
return code;
|
||||
return code;
|
||||
}
|
||||
|
||||
int32_t mndDropOffsetByTopic(SMnode *pMnode, STrans *pTrans, const char *topic) {
|
||||
|
|
|
@@ -312,7 +312,7 @@ static int32_t mndSaveQueryList(SConnObj *pConn, SQueryHbReqBasic *pBasic) {
  pConn->numOfQueries = pBasic->queryDesc ? taosArrayGetSize(pBasic->queryDesc) : 0;
  pBasic->queryDesc = NULL;

  mDebug("queries updated in conn %d, num:%d", pConn->id, pConn->numOfQueries);
  mDebug("queries updated in conn %u, num:%d", pConn->id, pConn->numOfQueries);

  taosWUnLockLatch(&pConn->queryLock);
@ -641,6 +641,7 @@ static int32_t mndSetCreateStbRedoActions(SMnode *pMnode, STrans *pTrans, SDbObj
|
|||
action.contLen = contLen;
|
||||
action.msgType = TDMT_VND_CREATE_STB;
|
||||
action.acceptableCode = TSDB_CODE_TDB_STB_ALREADY_EXIST;
|
||||
action.retryCode = TSDB_CODE_TDB_STB_NOT_EXIST;
|
||||
if (mndTransAppendRedoAction(pTrans, &action) != 0) {
|
||||
taosMemoryFree(pReq);
|
||||
sdbCancelFetch(pSdb, pIter);
|
||||
|
@ -805,7 +806,7 @@ _OVER:
|
|||
}
|
||||
|
||||
int32_t mndAddStbToTrans(SMnode *pMnode, STrans *pTrans, SDbObj *pDb, SStbObj *pStb) {
|
||||
mndTransSetDbName(pTrans, pDb->name, NULL);
|
||||
mndTransSetDbName(pTrans, pDb->name, pStb->name);
|
||||
if (mndSetCreateStbRedoLogs(pMnode, pTrans, pDb, pStb) != 0) return -1;
|
||||
if (mndSetCreateStbUndoLogs(pMnode, pTrans, pDb, pStb) != 0) return -1;
|
||||
if (mndSetCreateStbCommitLogs(pMnode, pTrans, pDb, pStb) != 0) return -1;
|
||||
|
@ -1612,7 +1613,7 @@ static int32_t mndAlterStbImp(SMnode *pMnode, SRpcMsg *pReq, SDbObj *pDb, SStbOb
|
|||
if (pTrans == NULL) goto _OVER;
|
||||
|
||||
mDebug("trans:%d, used to alter stb:%s", pTrans->id, pStb->name);
|
||||
mndTransSetDbName(pTrans, pDb->name, NULL);
|
||||
mndTransSetDbName(pTrans, pDb->name, pStb->name);
|
||||
|
||||
if (needRsp) {
|
||||
void *pCont = NULL;
|
||||
|
@ -1811,7 +1812,7 @@ static int32_t mndDropStb(SMnode *pMnode, SRpcMsg *pReq, SDbObj *pDb, SStbObj *p
|
|||
if (pTrans == NULL) goto _OVER;
|
||||
|
||||
mDebug("trans:%d, used to drop stb:%s", pTrans->id, pStb->name);
|
||||
mndTransSetDbName(pTrans, pDb->name, NULL);
|
||||
mndTransSetDbName(pTrans, pDb->name, pStb->name);
|
||||
|
||||
if (mndSetDropStbRedoLogs(pMnode, pTrans, pStb) != 0) goto _OVER;
|
||||
if (mndSetDropStbCommitLogs(pMnode, pTrans, pStb) != 0) goto _OVER;
|
||||
|
|
|
@ -824,7 +824,7 @@ int32_t mndSetDropSubCommitLogs(SMnode *pMnode, STrans *pTrans, SMqSubscribeObj
|
|||
}
|
||||
|
||||
int32_t mndDropSubByDB(SMnode *pMnode, STrans *pTrans, SDbObj *pDb) {
|
||||
int32_t code = -1;
|
||||
int32_t code = 0;
|
||||
SSdb *pSdb = pMnode->pSdb;
|
||||
|
||||
void *pIter = NULL;
|
||||
|
@ -840,12 +840,14 @@ int32_t mndDropSubByDB(SMnode *pMnode, STrans *pTrans, SDbObj *pDb) {
|
|||
|
||||
if (mndSetDropSubCommitLogs(pMnode, pTrans, pSub) < 0) {
|
||||
sdbRelease(pSdb, pSub);
|
||||
goto END;
|
||||
sdbCancelFetch(pSdb, pIter);
|
||||
code = -1;
|
||||
break;
|
||||
}
|
||||
|
||||
sdbRelease(pSdb, pSub);
|
||||
}
|
||||
|
||||
code = 0;
|
||||
END:
|
||||
return code;
|
||||
}
|
||||
|
||||
|
|
|
@ -833,7 +833,7 @@ static void mndCancelGetNextTopic(SMnode *pMnode, void *pIter) {
|
|||
}
|
||||
|
||||
int32_t mndDropTopicByDB(SMnode *pMnode, STrans *pTrans, SDbObj *pDb) {
|
||||
int32_t code = -1;
|
||||
int32_t code = 0;
|
||||
SSdb *pSdb = pMnode->pSdb;
|
||||
|
||||
void *pIter = NULL;
|
||||
|
@ -848,11 +848,14 @@ int32_t mndDropTopicByDB(SMnode *pMnode, STrans *pTrans, SDbObj *pDb) {
|
|||
}
|
||||
|
||||
if (mndSetDropTopicCommitLogs(pMnode, pTrans, pTopic) < 0) {
|
||||
goto END;
|
||||
sdbRelease(pSdb, pTopic);
|
||||
sdbCancelFetch(pSdb, pIter);
|
||||
code = -1;
|
||||
break;
|
||||
}
|
||||
|
||||
sdbRelease(pSdb, pTopic);
|
||||
}
|
||||
|
||||
code = 0;
|
||||
END:
|
||||
return code;
|
||||
}
|
||||
|
|
|
@ -127,8 +127,8 @@ static SSdbRaw *mndTransActionEncode(STrans *pTrans) {
|
|||
SDB_SET_INT8(pRaw, dataPos, 0, _OVER)
|
||||
SDB_SET_INT8(pRaw, dataPos, 0, _OVER)
|
||||
SDB_SET_INT64(pRaw, dataPos, pTrans->createdTime, _OVER)
|
||||
SDB_SET_BINARY(pRaw, dataPos, pTrans->dbname1, TSDB_DB_FNAME_LEN, _OVER)
|
||||
SDB_SET_BINARY(pRaw, dataPos, pTrans->dbname2, TSDB_DB_FNAME_LEN, _OVER)
|
||||
SDB_SET_BINARY(pRaw, dataPos, pTrans->dbname1, TSDB_TABLE_FNAME_LEN, _OVER)
|
||||
SDB_SET_BINARY(pRaw, dataPos, pTrans->dbname2, TSDB_TABLE_FNAME_LEN, _OVER)
|
||||
SDB_SET_INT32(pRaw, dataPos, pTrans->redoActionPos, _OVER)
|
||||
|
||||
int32_t redoActionNum = taosArrayGetSize(pTrans->redoActions);
|
||||
|
@ -290,8 +290,8 @@ static SSdbRow *mndTransActionDecode(SSdbRaw *pRaw) {
|
|||
pTrans->exec = exec;
|
||||
pTrans->oper = oper;
|
||||
SDB_GET_INT64(pRaw, dataPos, &pTrans->createdTime, _OVER)
|
||||
SDB_GET_BINARY(pRaw, dataPos, pTrans->dbname1, TSDB_DB_FNAME_LEN, _OVER)
|
||||
SDB_GET_BINARY(pRaw, dataPos, pTrans->dbname2, TSDB_DB_FNAME_LEN, _OVER)
|
||||
SDB_GET_BINARY(pRaw, dataPos, pTrans->dbname1, TSDB_TABLE_FNAME_LEN, _OVER)
|
||||
SDB_GET_BINARY(pRaw, dataPos, pTrans->dbname2, TSDB_TABLE_FNAME_LEN, _OVER)
|
||||
SDB_GET_INT32(pRaw, dataPos, &pTrans->redoActionPos, _OVER)
|
||||
SDB_GET_INT32(pRaw, dataPos, &redoActionNum, _OVER)
|
||||
SDB_GET_INT32(pRaw, dataPos, &undoActionNum, _OVER)
|
||||
|
@ -727,10 +727,10 @@ int32_t mndSetRpcInfoForDbTrans(SMnode *pMnode, SRpcMsg *pMsg, EOperType oper, c
|
|||
|
||||
void mndTransSetDbName(STrans *pTrans, const char *dbname1, const char *dbname2) {
|
||||
if (dbname1 != NULL) {
|
||||
memcpy(pTrans->dbname1, dbname1, TSDB_DB_FNAME_LEN);
|
||||
tstrncpy(pTrans->dbname1, dbname1, TSDB_TABLE_FNAME_LEN);
|
||||
}
|
||||
if (dbname2 != NULL) {
|
||||
memcpy(pTrans->dbname2, dbname2, TSDB_DB_FNAME_LEN);
|
||||
tstrncpy(pTrans->dbname2, dbname2, TSDB_TABLE_FNAME_LEN);
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -1289,6 +1289,19 @@ static bool mndTransPerformRedoActionStage(SMnode *pMnode, STrans *pTrans) {
|
|||
} else {
|
||||
pTrans->code = terrno;
|
||||
if (pTrans->policy == TRN_POLICY_ROLLBACK) {
|
||||
if (pTrans->lastAction != 0) {
|
||||
STransAction *pAction = taosArrayGet(pTrans->redoActions, pTrans->lastAction);
|
||||
if (pAction->retryCode != 0 && pAction->retryCode != pAction->errCode) {
|
||||
if (pTrans->failedTimes < 6) {
|
||||
mError("trans:%d, stage keep on redoAction since action:%d code:0x%x not 0x%x, failedTimes:%d", pTrans->id,
|
||||
pTrans->lastAction, pTrans->code, pAction->retryCode, pTrans->failedTimes);
|
||||
taosMsleep(1000);
|
||||
continueExec = true;
|
||||
return true;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
pTrans->stage = TRN_STAGE_ROLLBACK;
|
||||
mError("trans:%d, stage from redoAction to rollback since %s", pTrans->id, terrstr());
|
||||
continueExec = true;
|
||||
|
|
|
@ -230,6 +230,10 @@ void *mndBuildCreateVnodeReq(SMnode *pMnode, SDnodeObj *pDnode, SDbObj *pDb, SVg
|
|||
createReq.standby = standby;
|
||||
createReq.isTsma = pVgroup->isTsma;
|
||||
createReq.pTsma = pVgroup->pTsma;
|
||||
createReq.walRetentionPeriod = pDb->cfg.walRetentionPeriod;
|
||||
createReq.walRetentionSize = pDb->cfg.walRetentionSize;
|
||||
createReq.walRollPeriod = pDb->cfg.walRollPeriod;
|
||||
createReq.walSegmentSize = pDb->cfg.walSegmentSize;
|
||||
|
||||
for (int32_t v = 0; v < pVgroup->replica; ++v) {
|
||||
SReplica *pReplica = &createReq.replicas[v];
|
||||
|
|
|
@ -118,9 +118,8 @@ int32_t metaTbCursorNext(SMTbCursor *pTbCur);
|
|||
// typedef struct STsdb STsdb;
|
||||
typedef struct STsdbReader STsdbReader;
|
||||
|
||||
#define BLOCK_LOAD_OFFSET_ORDER 1
|
||||
#define BLOCK_LOAD_TABLESEQ_ORDER 2
|
||||
#define BLOCK_LOAD_EXTERN_ORDER 3
|
||||
#define TIMEWINDOW_RANGE_CONTAINED 1
|
||||
#define TIMEWINDOW_RANGE_EXTERNAL 2
|
||||
|
||||
#define LASTROW_RETRIEVE_TYPE_ALL 0x1
|
||||
#define LASTROW_RETRIEVE_TYPE_SINGLE 0x2
|
||||
|
|
|
@ -104,6 +104,8 @@ typedef struct {
|
|||
// TODO remove
|
||||
SWalReader* pWalReader;
|
||||
|
||||
SWalRef* pRef;
|
||||
|
||||
// push
|
||||
STqPushHandle pushHandle;
|
||||
|
||||
|
|
|
@ -268,6 +268,7 @@ struct SVnode {
|
|||
tsem_t canCommit;
|
||||
int64_t sync;
|
||||
int32_t blockCount;
|
||||
bool restored;
|
||||
tsem_t syncSem;
|
||||
SQHandle* pQuery;
|
||||
};
|
||||
|
|
|
@ -180,11 +180,41 @@ int metaClose(SMeta *pMeta) {
|
|||
return 0;
|
||||
}
|
||||
|
||||
int32_t metaRLock(SMeta *pMeta) { return taosThreadRwlockRdlock(&pMeta->lock); }
|
||||
int32_t metaRLock(SMeta *pMeta) {
|
||||
int32_t ret = 0;
|
||||
|
||||
int32_t metaWLock(SMeta *pMeta) { return taosThreadRwlockWrlock(&pMeta->lock); }
|
||||
metaDebug("meta rlock %p B", &pMeta->lock);
|
||||
|
||||
int32_t metaULock(SMeta *pMeta) { return taosThreadRwlockUnlock(&pMeta->lock); }
|
||||
ret = taosThreadRwlockRdlock(&pMeta->lock);
|
||||
|
||||
metaDebug("meta rlock %p E", &pMeta->lock);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
int32_t metaWLock(SMeta *pMeta) {
|
||||
int32_t ret = 0;
|
||||
|
||||
metaDebug("meta wlock %p B", &pMeta->lock);
|
||||
|
||||
ret = taosThreadRwlockWrlock(&pMeta->lock);
|
||||
|
||||
metaDebug("meta wlock %p E", &pMeta->lock);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
int32_t metaULock(SMeta *pMeta) {
|
||||
int32_t ret = 0;
|
||||
|
||||
metaDebug("meta ulock %p B", &pMeta->lock);
|
||||
|
||||
ret = taosThreadRwlockUnlock(&pMeta->lock);
|
||||
|
||||
metaDebug("meta ulock %p E", &pMeta->lock);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int tbDbKeyCmpr(const void *pKey1, int kLen1, const void *pKey2, int kLen2) {
|
||||
STbDbKey *pTbDbKey1 = (STbDbKey *)pKey1;
|
||||
|
@ -259,7 +289,7 @@ static int ctbIdxKeyCmpr(const void *pKey1, int kLen1, const void *pKey2, int kL
|
|||
static int tagIdxKeyCmpr(const void *pKey1, int kLen1, const void *pKey2, int kLen2) {
|
||||
STagIdxKey *pTagIdxKey1 = (STagIdxKey *)pKey1;
|
||||
STagIdxKey *pTagIdxKey2 = (STagIdxKey *)pKey2;
|
||||
tb_uid_t uid1, uid2;
|
||||
tb_uid_t uid1 = 0, uid2 = 0;
|
||||
int c;
|
||||
|
||||
// compare suid
|
||||
|
@ -287,14 +317,15 @@ static int tagIdxKeyCmpr(const void *pKey1, int kLen1, const void *pKey2, int kL
|
|||
// all not NULL, compr tag vals
|
||||
c = doCompare(pTagIdxKey1->data, pTagIdxKey2->data, pTagIdxKey1->type, 0);
|
||||
if (c) return c;
|
||||
}
|
||||
|
||||
if (IS_VAR_DATA_TYPE(pTagIdxKey1->type)) {
|
||||
uid1 = *(tb_uid_t *)(pTagIdxKey1->data + varDataTLen(pTagIdxKey1->data));
|
||||
uid2 = *(tb_uid_t *)(pTagIdxKey2->data + varDataTLen(pTagIdxKey2->data));
|
||||
} else {
|
||||
uid1 = *(tb_uid_t *)(pTagIdxKey1->data + tDataTypes[pTagIdxKey1->type].bytes);
|
||||
uid2 = *(tb_uid_t *)(pTagIdxKey2->data + tDataTypes[pTagIdxKey2->type].bytes);
|
||||
}
|
||||
// both null or tag values are equal, then continue to compare uids
|
||||
if (IS_VAR_DATA_TYPE(pTagIdxKey1->type)) {
|
||||
uid1 = *(tb_uid_t *)(pTagIdxKey1->data + varDataTLen(pTagIdxKey1->data));
|
||||
uid2 = *(tb_uid_t *)(pTagIdxKey2->data + varDataTLen(pTagIdxKey2->data));
|
||||
} else {
|
||||
uid1 = *(tb_uid_t *)(pTagIdxKey1->data + tDataTypes[pTagIdxKey1->type].bytes);
|
||||
uid2 = *(tb_uid_t *)(pTagIdxKey2->data + tDataTypes[pTagIdxKey2->type].bytes);
|
||||
}
|
||||
|
||||
// compare uid
|
||||
|
|
|
@ -178,7 +178,7 @@ int metaCreateSTable(SMeta *pMeta, int64_t version, SVCreateStbReq *pReq) {
|
|||
if (metaGetTableEntryByName(&mr, pReq->name) == 0) {
|
||||
// TODO: just for pass case
|
||||
#if 0
|
||||
terrno = TSDB_CODE_TDB_TABLE_ALREADY_EXIST;
|
||||
terrno = TSDB_CODE_TDB_STB_ALREADY_EXIST;
|
||||
metaReaderClear(&mr);
|
||||
return -1;
|
||||
#else
|
||||
|
@ -223,7 +223,7 @@ int metaDropSTable(SMeta *pMeta, int64_t verison, SVDropStbReq *pReq, SArray *tb
|
|||
// check if super table exists
|
||||
rc = tdbTbGet(pMeta->pNameIdx, pReq->name, strlen(pReq->name) + 1, &pData, &nData);
|
||||
if (rc < 0 || *(tb_uid_t *)pData != pReq->suid) {
|
||||
terrno = TSDB_CODE_VND_TABLE_NOT_EXIST;
|
||||
terrno = TSDB_CODE_TDB_STB_NOT_EXIST;
|
||||
return -1;
|
||||
}
|
||||
|
||||
|
|
|
@ -212,6 +212,15 @@ int32_t tqProcessOffsetCommitReq(STQ* pTq, char* msg, int32_t msgLen) {
|
|||
ASSERT(0);
|
||||
return -1;
|
||||
}
|
||||
|
||||
if (offset.val.type == TMQ_OFFSET__LOG) {
|
||||
STqHandle* pHandle = taosHashGet(pTq->handles, offset.subKey, strlen(offset.subKey));
|
||||
if (walRefVer(pHandle->pRef, offset.val.version) < 0) {
|
||||
ASSERT(0);
|
||||
return -1;
|
||||
}
|
||||
}
|
||||
|
||||
/*}*/
|
||||
/*}*/
|
||||
|
||||
|
@ -376,8 +385,8 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg) {
|
|||
}
|
||||
|
||||
if (pHandle->execHandle.subType != TOPIC_SUB_TYPE__COLUMN) {
|
||||
int64_t fetchVer = fetchOffsetNew.version + 1;
|
||||
SWalCkHead* pCkHead = taosMemoryMalloc(sizeof(SWalCkHead) + 2048);
|
||||
int64_t fetchVer = fetchOffsetNew.version + 1;
|
||||
pCkHead = taosMemoryMalloc(sizeof(SWalCkHead) + 2048);
|
||||
if (pCkHead == NULL) {
|
||||
code = -1;
|
||||
goto OVER;
|
||||
|
@ -534,11 +543,14 @@ int32_t tqProcessVgChangeReq(STQ* pTq, char* msg, int32_t msgLen) {
|
|||
|
||||
pHandle->execHandle.subType = req.subType;
|
||||
pHandle->fetchMeta = req.withMeta;
|
||||
// TODO version should be assigned and refed during preprocess
|
||||
SWalRef* pRef = walRefCommittedVer(pTq->pVnode->pWal);
|
||||
if (pRef == NULL) {
|
||||
ASSERT(0);
|
||||
}
|
||||
int64_t ver = pRef->refVer;
|
||||
pHandle->pRef = pRef;
|
||||
|
||||
pHandle->pWalReader = walOpenReader(pTq->pVnode->pWal, NULL);
|
||||
|
||||
// TODO version should be assigned in preprocess
|
||||
int64_t ver = walGetCommittedVer(pTq->pVnode->pWal);
|
||||
if (pHandle->execHandle.subType == TOPIC_SUB_TYPE__COLUMN) {
|
||||
pHandle->execHandle.execCol.qmsg = req.qmsg;
|
||||
pHandle->snapshotVer = ver;
|
||||
|
@ -560,14 +572,18 @@ int32_t tqProcessVgChangeReq(STQ* pTq, char* msg, int32_t msgLen) {
|
|||
pHandle->execHandle.pExecReader = qExtractReaderFromStreamScanner(scanner);
|
||||
ASSERT(pHandle->execHandle.pExecReader);
|
||||
} else if (pHandle->execHandle.subType == TOPIC_SUB_TYPE__DB) {
|
||||
pHandle->pWalReader = walOpenReader(pTq->pVnode->pWal, NULL);
|
||||
|
||||
pHandle->execHandle.pExecReader = tqOpenReader(pTq->pVnode);
|
||||
pHandle->execHandle.execDb.pFilterOutTbUid =
|
||||
taosHashInit(64, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), false, HASH_NO_LOCK);
|
||||
} else if (pHandle->execHandle.subType == TOPIC_SUB_TYPE__TABLE) {
|
||||
pHandle->pWalReader = walOpenReader(pTq->pVnode->pWal, NULL);
|
||||
|
||||
pHandle->execHandle.execTb.suid = req.suid;
|
||||
SArray* tbUidList = taosArrayInit(0, sizeof(int64_t));
|
||||
vnodeGetCtbIdList(pTq->pVnode, req.suid, tbUidList);
|
||||
tqDebug("vgId:%d, tq try get suid:%" PRId64, pTq->pVnode->config.vgId, req.suid);
|
||||
tqDebug("vgId:%d, tq try to get all ctb, suid:%" PRId64, pTq->pVnode->config.vgId, req.suid);
|
||||
for (int32_t i = 0; i < taosArrayGetSize(tbUidList); i++) {
|
||||
int64_t tbUid = *(int64_t*)taosArrayGet(tbUidList, i);
|
||||
tqDebug("vgId:%d, idx %d, uid:%" PRId64, TD_VID(pTq->pVnode), i, tbUid);
|
||||
|
|
|
@ -52,7 +52,7 @@ int32_t tqMetaOpen(STQ* pTq) {
|
|||
ASSERT(0);
|
||||
}
|
||||
|
||||
TXN txn;
|
||||
TXN txn = {0};
|
||||
|
||||
if (tdbTxnOpen(&txn, 0, tdbDefaultMalloc, tdbDefaultFree, NULL, 0) < 0) {
|
||||
ASSERT(0);
|
||||
|
@ -75,7 +75,13 @@ int32_t tqMetaOpen(STQ* pTq) {
|
|||
STqHandle handle;
|
||||
tDecoderInit(&decoder, (uint8_t*)pVal, vLen);
|
||||
tDecodeSTqHandle(&decoder, &handle);
|
||||
handle.pWalReader = walOpenReader(pTq->pVnode->pWal, NULL);
|
||||
|
||||
handle.pRef = walOpenRef(pTq->pVnode->pWal);
|
||||
if (handle.pRef == NULL) {
|
||||
ASSERT(0);
|
||||
}
|
||||
walRefVer(handle.pRef, handle.snapshotVer);
|
||||
|
||||
if (handle.execHandle.subType == TOPIC_SUB_TYPE__COLUMN) {
|
||||
SReadHandle reader = {
|
||||
.meta = pTq->pVnode->pMeta,
|
||||
|
@ -94,6 +100,7 @@ int32_t tqMetaOpen(STQ* pTq) {
|
|||
handle.execHandle.pExecReader = qExtractReaderFromStreamScanner(scanner);
|
||||
ASSERT(handle.execHandle.pExecReader);
|
||||
} else {
|
||||
handle.pWalReader = walOpenReader(pTq->pVnode->pWal, NULL);
|
||||
handle.execHandle.execDb.pFilterOutTbUid =
|
||||
taosHashInit(64, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), false, HASH_NO_LOCK);
|
||||
}
|
||||
|
|
File diff suppressed because it is too large
|
@ -1395,10 +1395,26 @@ void tsdbCalcColDataSMA(SColData *pColData, SColumnDataAgg *pColAgg) {
|
|||
break;
|
||||
case TSDB_DATA_TYPE_BOOL:
|
||||
break;
|
||||
case TSDB_DATA_TYPE_TINYINT:
|
||||
case TSDB_DATA_TYPE_TINYINT:{
|
||||
pColAgg->sum += colVal.value.i8;
|
||||
if (pColAgg->min > colVal.value.i8) {
|
||||
pColAgg->min = colVal.value.i8;
|
||||
}
|
||||
if (pColAgg->max < colVal.value.i8) {
|
||||
pColAgg->max = colVal.value.i8;
|
||||
}
|
||||
break;
|
||||
case TSDB_DATA_TYPE_SMALLINT:
|
||||
}
|
||||
case TSDB_DATA_TYPE_SMALLINT:{
|
||||
pColAgg->sum += colVal.value.i16;
|
||||
if (pColAgg->min > colVal.value.i16) {
|
||||
pColAgg->min = colVal.value.i16;
|
||||
}
|
||||
if (pColAgg->max < colVal.value.i16) {
|
||||
pColAgg->max = colVal.value.i16;
|
||||
}
|
||||
break;
|
||||
}
|
||||
case TSDB_DATA_TYPE_INT: {
|
||||
pColAgg->sum += colVal.value.i32;
|
||||
if (pColAgg->min > colVal.value.i32) {
|
||||
|
@ -1419,24 +1435,79 @@ void tsdbCalcColDataSMA(SColData *pColData, SColumnDataAgg *pColAgg) {
|
|||
}
|
||||
break;
|
||||
}
|
||||
case TSDB_DATA_TYPE_FLOAT:
|
||||
case TSDB_DATA_TYPE_FLOAT:{
|
||||
pColAgg->sum += colVal.value.f;
|
||||
if (pColAgg->min > colVal.value.f) {
|
||||
pColAgg->min = colVal.value.f;
|
||||
}
|
||||
if (pColAgg->max < colVal.value.f) {
|
||||
pColAgg->max = colVal.value.f;
|
||||
}
|
||||
break;
|
||||
case TSDB_DATA_TYPE_DOUBLE:
|
||||
}
|
||||
case TSDB_DATA_TYPE_DOUBLE:{
|
||||
pColAgg->sum += colVal.value.d;
|
||||
if (pColAgg->min > colVal.value.d) {
|
||||
pColAgg->min = colVal.value.d;
|
||||
}
|
||||
if (pColAgg->max < colVal.value.d) {
|
||||
pColAgg->max = colVal.value.d;
|
||||
}
|
||||
break;
|
||||
}
|
||||
case TSDB_DATA_TYPE_VARCHAR:
|
||||
break;
|
||||
case TSDB_DATA_TYPE_TIMESTAMP:
|
||||
case TSDB_DATA_TYPE_TIMESTAMP:{
|
||||
if (pColAgg->min > colVal.value.i64) {
|
||||
pColAgg->min = colVal.value.i64;
|
||||
}
|
||||
if (pColAgg->max < colVal.value.i64) {
|
||||
pColAgg->max = colVal.value.i64;
|
||||
}
|
||||
break;
|
||||
}
|
||||
case TSDB_DATA_TYPE_NCHAR:
|
||||
break;
|
||||
case TSDB_DATA_TYPE_UTINYINT:
|
||||
case TSDB_DATA_TYPE_UTINYINT:{
|
||||
pColAgg->sum += colVal.value.u8;
|
||||
if (pColAgg->min > colVal.value.u8) {
|
||||
pColAgg->min = colVal.value.u8;
|
||||
}
|
||||
if (pColAgg->max < colVal.value.u8) {
|
||||
pColAgg->max = colVal.value.u8;
|
||||
}
|
||||
break;
|
||||
case TSDB_DATA_TYPE_USMALLINT:
|
||||
}
|
||||
case TSDB_DATA_TYPE_USMALLINT:{
|
||||
pColAgg->sum += colVal.value.u16;
|
||||
if (pColAgg->min > colVal.value.u16) {
|
||||
pColAgg->min = colVal.value.u16;
|
||||
}
|
||||
if (pColAgg->max < colVal.value.u16) {
|
||||
pColAgg->max = colVal.value.u16;
|
||||
}
|
||||
break;
|
||||
case TSDB_DATA_TYPE_UINT:
|
||||
}
|
||||
case TSDB_DATA_TYPE_UINT:{
|
||||
pColAgg->sum += colVal.value.u32;
|
||||
if (pColAgg->min > colVal.value.u32) {
|
||||
pColAgg->min = colVal.value.u32;
|
||||
}
|
||||
if (pColAgg->max < colVal.value.u32) {
|
||||
pColAgg->max = colVal.value.u32;
|
||||
}
|
||||
break;
|
||||
case TSDB_DATA_TYPE_UBIGINT:
|
||||
}
|
||||
case TSDB_DATA_TYPE_UBIGINT:{
|
||||
pColAgg->sum += colVal.value.u64;
|
||||
if (pColAgg->min > colVal.value.u64) {
|
||||
pColAgg->min = colVal.value.u64;
|
||||
}
|
||||
if (pColAgg->max < colVal.value.u64) {
|
||||
pColAgg->max = colVal.value.u64;
|
||||
}
|
||||
break;
|
||||
}
|
||||
case TSDB_DATA_TYPE_JSON:
|
||||
break;
|
||||
case TSDB_DATA_TYPE_VARBINARY:
|
||||
|
|
|
@@ -40,8 +40,8 @@ const SVnodeCfg vnodeCfgDefault = {.vgId = -1,
        .vgId = -1,
        .fsyncPeriod = 0,
        .retentionPeriod = -1,
        .rollPeriod = -1,
        .segSize = -1,
        .rollPeriod = 0,
        .segSize = 0,
        .retentionSize = -1,
        .level = TAOS_WAL_WRITE,
    },
@ -16,23 +16,28 @@
|
|||
#define _DEFAULT_SOURCE
|
||||
#include "vnd.h"
|
||||
|
||||
#define BATCH_DISABLE 1
|
||||
|
||||
static inline bool vnodeIsMsgBlock(tmsg_t type) {
|
||||
return (type == TDMT_VND_CREATE_TABLE) || (type == TDMT_VND_CREATE_TABLE) || (type == TDMT_VND_CREATE_TABLE) ||
|
||||
(type == TDMT_VND_ALTER_TABLE) || (type == TDMT_VND_DROP_TABLE) || (type == TDMT_VND_UPDATE_TAG_VAL);
|
||||
(type == TDMT_VND_ALTER_TABLE) || (type == TDMT_VND_DROP_TABLE) || (type == TDMT_VND_UPDATE_TAG_VAL) ||
|
||||
(type == TDMT_VND_ALTER_REPLICA);
|
||||
}
|
||||
|
||||
static inline bool vnodeIsMsgWeak(tmsg_t type) { return false; }
|
||||
|
||||
static inline void vnodeWaitBlockMsg(SVnode *pVnode, const SRpcMsg *pMsg) {
|
||||
if (vnodeIsMsgBlock(pMsg->msgType)) {
|
||||
vTrace("vgId:%d, msg:%p wait block, type:%s", pVnode->config.vgId, pMsg, TMSG_INFO(pMsg->msgType));
|
||||
const STraceId *trace = &pMsg->info.traceId;
|
||||
vGTrace("vgId:%d, msg:%p wait block, type:%s", pVnode->config.vgId, pMsg, TMSG_INFO(pMsg->msgType));
|
||||
tsem_wait(&pVnode->syncSem);
|
||||
}
|
||||
}
|
||||
|
||||
static inline void vnodePostBlockMsg(SVnode *pVnode, const SRpcMsg *pMsg) {
|
||||
if (vnodeIsMsgBlock(pMsg->msgType)) {
|
||||
vTrace("vgId:%d, msg:%p post block, type:%s", pVnode->config.vgId, pMsg, TMSG_INFO(pMsg->msgType));
|
||||
const STraceId *trace = &pMsg->info.traceId;
|
||||
vGTrace("vgId:%d, msg:%p post block, type:%s", pVnode->config.vgId, pMsg, TMSG_INFO(pMsg->msgType));
|
||||
tsem_post(&pVnode->syncSem);
|
||||
}
|
||||
}
|
||||
|
@ -124,60 +129,147 @@ void vnodeRedirectRpcMsg(SVnode *pVnode, SRpcMsg *pMsg) {
|
|||
tmsgSendRedirectRsp(&rsp, &newEpSet);
|
||||
}
|
||||
|
||||
void vnodeProposeWriteMsg(SQueueInfo *pInfo, STaosQall *qall, int32_t numOfMsgs) {
|
||||
SVnode *pVnode = pInfo->ahandle;
|
||||
int32_t vgId = pVnode->config.vgId;
|
||||
int32_t code = 0;
|
||||
SRpcMsg *pMsg = NULL;
|
||||
|
||||
vTrace("vgId:%d, get %d msgs from vnode-write queue", vgId, numOfMsgs);
|
||||
|
||||
for (int32_t m = 0; m < numOfMsgs; m++) {
|
||||
if (taosGetQitem(qall, (void **)&pMsg) == 0) continue;
|
||||
static void inline vnodeHandleWriteMsg(SVnode *pVnode, SRpcMsg *pMsg) {
|
||||
SRpcMsg rsp = {.code = pMsg->code, .info = pMsg->info};
|
||||
if (vnodeProcessWriteMsg(pVnode, pMsg, pMsg->info.conn.applyIndex, &rsp) < 0) {
|
||||
rsp.code = terrno;
|
||||
const STraceId *trace = &pMsg->info.traceId;
|
||||
vGTrace("vgId:%d, msg:%p get from vnode-write queue handle:%p", vgId, pMsg, pMsg->info.handle);
|
||||
vGError("vgId:%d, msg:%p failed to apply right now since %s", pVnode->config.vgId, pMsg, terrstr());
|
||||
}
|
||||
if (rsp.info.handle != NULL) {
|
||||
tmsgSendRsp(&rsp);
|
||||
}
|
||||
}
|
||||
|
||||
code = vnodePreProcessWriteMsg(pVnode, pMsg);
|
||||
if (code != 0) {
|
||||
vError("vgId:%d, msg:%p failed to pre-process since %s", vgId, pMsg, terrstr());
|
||||
} else {
|
||||
if (pMsg->msgType == TDMT_VND_ALTER_REPLICA) {
|
||||
code = vnodeProcessAlterReplicaReq(pVnode, pMsg);
|
||||
} else {
|
||||
code = syncPropose(pVnode->sync, pMsg, vnodeIsMsgWeak(pMsg->msgType));
|
||||
if (code > 0) {
|
||||
SRpcMsg rsp = {.code = pMsg->code, .info = pMsg->info};
|
||||
if (vnodeProcessWriteMsg(pVnode, pMsg, pMsg->info.conn.applyIndex, &rsp) < 0) {
|
||||
rsp.code = terrno;
|
||||
vError("vgId:%d, msg:%p failed to apply right now since %s", vgId, pMsg, terrstr());
|
||||
}
|
||||
if (rsp.info.handle != NULL) {
|
||||
tmsgSendRsp(&rsp);
|
||||
}
|
||||
} else if (code == 0) {
|
||||
vnodeWaitBlockMsg(pVnode, pMsg);
|
||||
} else {
|
||||
}
|
||||
}
|
||||
static void vnodeHandleProposeError(SVnode *pVnode, SRpcMsg *pMsg, int32_t code) {
|
||||
if (code == TSDB_CODE_SYN_NOT_LEADER) {
|
||||
vnodeRedirectRpcMsg(pVnode, pMsg);
|
||||
} else {
|
||||
const STraceId *trace = &pMsg->info.traceId;
|
||||
vGError("vgId:%d, msg:%p failed to propose since %s, code:0x%x", pVnode->config.vgId, pMsg, tstrerror(code), code);
|
||||
SRpcMsg rsp = {.code = code, .info = pMsg->info};
|
||||
if (rsp.info.handle != NULL) {
|
||||
tmsgSendRsp(&rsp);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if (code < 0) {
|
||||
if (terrno == TSDB_CODE_SYN_NOT_LEADER) {
|
||||
vnodeRedirectRpcMsg(pVnode, pMsg);
|
||||
} else {
|
||||
if (terrno != 0) code = terrno;
|
||||
vError("vgId:%d, msg:%p failed to propose since %s, code:0x%x", vgId, pMsg, tstrerror(code), code);
|
||||
SRpcMsg rsp = {.code = code, .info = pMsg->info};
|
||||
if (rsp.info.handle != NULL) {
|
||||
tmsgSendRsp(&rsp);
|
||||
}
|
||||
}
|
||||
static void vnodeHandleAlterReplicaReq(SVnode *pVnode, SRpcMsg *pMsg) {
|
||||
int32_t code = vnodeProcessAlterReplicaReq(pVnode, pMsg);
|
||||
|
||||
if (code > 0) {
|
||||
ASSERT(0);
|
||||
} else if (code == 0) {
|
||||
vnodeWaitBlockMsg(pVnode, pMsg);
|
||||
} else {
|
||||
if (terrno != 0) code = terrno;
|
||||
vnodeHandleProposeError(pVnode, pMsg, code);
|
||||
}
|
||||
|
||||
const STraceId *trace = &pMsg->info.traceId;
|
||||
vGTrace("vgId:%d, msg:%p is freed, code:0x%x", pVnode->config.vgId, pMsg, code);
|
||||
rpcFreeCont(pMsg->pCont);
|
||||
taosFreeQitem(pMsg);
|
||||
}
|
||||
|
||||
static void inline vnodeProposeBatchMsg(SVnode *pVnode, SRpcMsg **pMsgArr, bool *pIsWeakArr, int32_t *arrSize) {
|
||||
if (*arrSize <= 0) return;
|
||||
|
||||
#if BATCH_DISABLE
|
||||
int32_t code = syncPropose(pVnode->sync, pMsgArr[0], pIsWeakArr[0]);
|
||||
#else
|
||||
int32_t code = syncProposeBatch(pVnode->sync, pMsgArr, pIsWeakArr, *arrSize);
|
||||
#endif
|
||||
|
||||
if (code > 0) {
|
||||
for (int32_t i = 0; i < *arrSize; ++i) {
|
||||
vnodeHandleWriteMsg(pVnode, pMsgArr[i]);
|
||||
}
|
||||
} else if (code == 0) {
|
||||
vnodeWaitBlockMsg(pVnode, pMsgArr[*arrSize - 1]);
|
||||
} else {
|
||||
if (terrno != 0) code = terrno;
|
||||
for (int32_t i = 0; i < *arrSize; ++i) {
|
||||
vnodeHandleProposeError(pVnode, pMsgArr[i], code);
|
||||
}
|
||||
}
|
||||
|
||||
vGTrace("vgId:%d, msg:%p is freed, code:0x%x", vgId, pMsg, code);
|
||||
for (int32_t i = 0; i < *arrSize; ++i) {
|
||||
SRpcMsg *pMsg = pMsgArr[i];
|
||||
const STraceId *trace = &pMsg->info.traceId;
|
||||
vGTrace("vgId:%d, msg:%p is freed, code:0x%x", pVnode->config.vgId, pMsg, code);
|
||||
rpcFreeCont(pMsg->pCont);
|
||||
taosFreeQitem(pMsg);
|
||||
}
|
||||
|
||||
*arrSize = 0;
|
||||
}
|
||||
|
||||
void vnodeProposeWriteMsg(SQueueInfo *pInfo, STaosQall *qall, int32_t numOfMsgs) {
|
||||
SVnode *pVnode = pInfo->ahandle;
|
||||
int32_t vgId = pVnode->config.vgId;
|
||||
int32_t code = 0;
|
||||
SRpcMsg *pMsg = NULL;
|
||||
int32_t arrayPos = 0;
|
||||
SRpcMsg **pMsgArr = taosMemoryCalloc(numOfMsgs, sizeof(SRpcMsg *));
|
||||
bool *pIsWeakArr = taosMemoryCalloc(numOfMsgs, sizeof(bool));
|
||||
vTrace("vgId:%d, get %d msgs from vnode-write queue", vgId, numOfMsgs);
|
||||
|
||||
for (int32_t msg = 0; msg < numOfMsgs; msg++) {
|
||||
if (taosGetQitem(qall, (void **)&pMsg) == 0) continue;
|
||||
bool isWeak = vnodeIsMsgWeak(pMsg->msgType);
|
||||
bool isBlock = vnodeIsMsgBlock(pMsg->msgType);
|
||||
|
||||
const STraceId *trace = &pMsg->info.traceId;
|
||||
vGTrace("vgId:%d, msg:%p get from vnode-write queue, weak:%d block:%d msg:%d:%d pos:%d, handle:%p", vgId, pMsg,
|
||||
isWeak, isBlock, msg, numOfMsgs, arrayPos, pMsg->info.handle);
|
||||
|
||||
if (!pVnode->restored) {
|
||||
vGError("vgId:%d, msg:%p failed to process since not leader", vgId, pMsg);
|
||||
terrno = TSDB_CODE_APP_NOT_READY;
|
||||
vnodeHandleProposeError(pVnode, pMsg, TSDB_CODE_APP_NOT_READY);
|
||||
rpcFreeCont(pMsg->pCont);
|
||||
taosFreeQitem(pMsg);
|
||||
continue;
|
||||
}
|
||||
|
||||
if (pMsgArr == NULL || pIsWeakArr == NULL) {
|
||||
vGError("vgId:%d, msg:%p failed to process since out of memory", vgId, pMsg);
|
||||
terrno = TSDB_CODE_OUT_OF_MEMORY;
|
||||
vnodeHandleProposeError(pVnode, pMsg, terrno);
|
||||
rpcFreeCont(pMsg->pCont);
|
||||
taosFreeQitem(pMsg);
|
||||
continue;
|
||||
}
|
||||
|
||||
code = vnodePreProcessWriteMsg(pVnode, pMsg);
|
||||
if (code != 0) {
|
||||
vGError("vgId:%d, msg:%p failed to pre-process since %s", vgId, pMsg, terrstr());
|
||||
rpcFreeCont(pMsg->pCont);
|
||||
taosFreeQitem(pMsg);
|
||||
continue;
|
||||
}
|
||||
|
||||
if (pMsg->msgType == TDMT_VND_ALTER_REPLICA) {
|
||||
vnodeHandleAlterReplicaReq(pVnode, pMsg);
|
||||
continue;
|
||||
}
|
||||
|
||||
if (isBlock || BATCH_DISABLE) {
|
||||
vnodeProposeBatchMsg(pVnode, pMsgArr, pIsWeakArr, &arrayPos);
|
||||
}
|
||||
|
||||
pMsgArr[arrayPos] = pMsg;
|
||||
pIsWeakArr[arrayPos] = isWeak;
|
||||
arrayPos++;
|
||||
|
||||
if (isBlock || msg == numOfMsgs - 1 || BATCH_DISABLE) {
|
||||
vnodeProposeBatchMsg(pVnode, pMsgArr, pIsWeakArr, &arrayPos);
|
||||
}
|
||||
}
|
||||
|
||||
taosMemoryFree(pMsgArr);
|
||||
taosMemoryFree(pIsWeakArr);
|
||||
}
|
||||
|
||||
void vnodeApplyWriteMsg(SQueueInfo *pInfo, STaosQall *qall, int32_t numOfMsgs) {
|
||||
|
@ -414,7 +506,7 @@ static void vnodeSyncCommitMsg(SSyncFSM *pFsm, const SRpcMsg *pMsg, SFsmCbMeta c
|
|||
syncGetVgId(pVnode->sync), pFsm, cbMeta.index, cbMeta.isWeak, cbMeta.code, cbMeta.state,
|
||||
syncUtilState2String(cbMeta.state), pMsg->msgType, TMSG_INFO(pMsg->msgType));
|
||||
|
||||
if (cbMeta.code == 0) {
|
||||
if (cbMeta.code == 0 && cbMeta.isWeak == 0) {
|
||||
SRpcMsg rpcMsg = {.msgType = pMsg->msgType, .contLen = pMsg->contLen};
|
||||
rpcMsg.pCont = rpcMallocCont(rpcMsg.contLen);
|
||||
memcpy(rpcMsg.pCont, pMsg->pCont, pMsg->contLen);
|
||||
|
@ -437,6 +529,23 @@ static void vnodeSyncPreCommitMsg(SSyncFSM *pFsm, const SRpcMsg *pMsg, SFsmCbMet
|
|||
vTrace("vgId:%d, pre-commit-cb is excuted, fsm:%p, index:%" PRId64 ", isWeak:%d, code:%d, state:%d %s, msgtype:%d %s",
|
||||
syncGetVgId(pVnode->sync), pFsm, cbMeta.index, cbMeta.isWeak, cbMeta.code, cbMeta.state,
|
||||
syncUtilState2String(cbMeta.state), pMsg->msgType, TMSG_INFO(pMsg->msgType));
|
||||
|
||||
if (cbMeta.code == 0 && cbMeta.isWeak == 1) {
|
||||
SRpcMsg rpcMsg = {.msgType = pMsg->msgType, .contLen = pMsg->contLen};
|
||||
rpcMsg.pCont = rpcMallocCont(rpcMsg.contLen);
|
||||
memcpy(rpcMsg.pCont, pMsg->pCont, pMsg->contLen);
|
||||
syncGetAndDelRespRpc(pVnode->sync, cbMeta.seqNum, &rpcMsg.info);
|
||||
rpcMsg.info.conn.applyIndex = cbMeta.index;
|
||||
rpcMsg.info.conn.applyTerm = cbMeta.term;
|
||||
tmsgPutToQueue(&pVnode->msgCb, APPLY_QUEUE, &rpcMsg);
|
||||
} else {
|
||||
SRpcMsg rsp = {.code = cbMeta.code, .info = pMsg->info};
|
||||
vError("vgId:%d, sync pre-commit error, msgtype:%d,%s, error:0x%X, errmsg:%s", syncGetVgId(pVnode->sync),
|
||||
pMsg->msgType, TMSG_INFO(pMsg->msgType), cbMeta.code, tstrerror(cbMeta.code));
|
||||
if (rsp.info.handle != NULL) {
|
||||
tmsgSendRsp(&rsp);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
static void vnodeSyncRollBackMsg(SSyncFSM *pFsm, const SRpcMsg *pMsg, SFsmCbMeta cbMeta) {
|
||||
|
@ -527,6 +636,12 @@ static void vnodeLeaderTransfer(struct SSyncFSM *pFsm, const SRpcMsg *pMsg, SFsm
|
|||
SVnode *pVnode = pFsm->data;
|
||||
}
|
||||
|
||||
static void vnodeRestoreFinish(struct SSyncFSM *pFsm) {
|
||||
SVnode *pVnode = pFsm->data;
|
||||
pVnode->restored = true;
|
||||
vDebug("vgId:%d, sync restore finished", pVnode->config.vgId);
|
||||
}
|
||||
|
||||
static SSyncFSM *vnodeSyncMakeFsm(SVnode *pVnode) {
|
||||
SSyncFSM *pFsm = taosMemoryCalloc(1, sizeof(SSyncFSM));
|
||||
pFsm->data = pVnode;
|
||||
|
@ -534,7 +649,7 @@ static SSyncFSM *vnodeSyncMakeFsm(SVnode *pVnode) {
|
|||
pFsm->FpPreCommitCb = vnodeSyncPreCommitMsg;
|
||||
pFsm->FpRollBackCb = vnodeSyncRollBackMsg;
|
||||
pFsm->FpGetSnapshotInfo = vnodeSyncGetSnapshot;
|
||||
pFsm->FpRestoreFinishCb = NULL;
|
||||
pFsm->FpRestoreFinishCb = vnodeRestoreFinish;
|
||||
pFsm->FpLeaderTransferCb = vnodeLeaderTransfer;
|
||||
pFsm->FpReConfigCb = vnodeSyncReconfig;
|
||||
pFsm->FpSnapshotStartRead = vnodeSnapshotStartRead;
|
||||
|
@ -588,11 +703,10 @@ bool vnodeIsLeader(SVnode *pVnode) {
|
|||
return false;
|
||||
}
|
||||
|
||||
// todo
|
||||
// if (!pVnode->restored) {
|
||||
// terrno = TSDB_CODE_APP_NOT_READY;
|
||||
// return false;
|
||||
// }
|
||||
if (!pVnode->restored) {
|
||||
terrno = TSDB_CODE_APP_NOT_READY;
|
||||
return false;
|
||||
}
|
||||
|
||||
return true;
|
||||
}
|
|
@ -135,7 +135,7 @@ int32_t qExplainGenerateResChildren(SPhysiNode *pNode, SExplainGroup *group, SNo
|
|||
break;
|
||||
}
|
||||
case QUERY_NODE_PHYSICAL_PLAN_MERGE_JOIN: {
|
||||
SJoinPhysiNode *pJoinNode = (SJoinPhysiNode *)pNode;
|
||||
SSortMergeJoinPhysiNode *pJoinNode = (SSortMergeJoinPhysiNode *)pNode;
|
||||
pPhysiChildren = pJoinNode->node.pChildren;
|
||||
break;
|
||||
}
|
||||
|
@ -434,7 +434,8 @@ int32_t qExplainResNodeToRowsImpl(SExplainResNode *pResNode, SExplainCtx *ctx, i
|
|||
case QUERY_NODE_PHYSICAL_PLAN_TABLE_SCAN: {
|
||||
STableScanPhysiNode *pTblScanNode = (STableScanPhysiNode *)pNode;
|
||||
EXPLAIN_ROW_NEW(level,
|
||||
QUERY_NODE_PHYSICAL_PLAN_TABLE_MERGE_SCAN == pNode->type ? EXPLAIN_TBL_MERGE_SCAN_FORMAT : EXPLAIN_TBL_SCAN_FORMAT,
|
||||
QUERY_NODE_PHYSICAL_PLAN_TABLE_MERGE_SCAN == pNode->type ? EXPLAIN_TBL_MERGE_SCAN_FORMAT
|
||||
: EXPLAIN_TBL_SCAN_FORMAT,
|
||||
pTblScanNode->scan.tableName.tname);
|
||||
EXPLAIN_ROW_APPEND(EXPLAIN_LEFT_PARENTHESIS_FORMAT);
|
||||
if (pResNode->pExecInfo) {
|
||||
|
@ -551,7 +552,7 @@ int32_t qExplainResNodeToRowsImpl(SExplainResNode *pResNode, SExplainCtx *ctx, i
|
|||
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
|
||||
if (pSTblScanNode->scan.pScanPseudoCols) {
|
||||
EXPLAIN_ROW_APPEND(EXPLAIN_PSEUDO_COLUMNS_FORMAT, pSTblScanNode->scan.pScanPseudoCols->length);
|
||||
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
|
||||
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
|
||||
}
|
||||
EXPLAIN_ROW_APPEND(EXPLAIN_WIDTH_FORMAT, pSTblScanNode->scan.node.pOutputDataBlockDesc->totalRowSize);
|
||||
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
|
||||
|
@ -613,7 +614,7 @@ int32_t qExplainResNodeToRowsImpl(SExplainResNode *pResNode, SExplainCtx *ctx, i
|
|||
break;
|
||||
}
|
||||
case QUERY_NODE_PHYSICAL_PLAN_MERGE_JOIN: {
|
||||
SJoinPhysiNode *pJoinNode = (SJoinPhysiNode *)pNode;
|
||||
SSortMergeJoinPhysiNode *pJoinNode = (SSortMergeJoinPhysiNode *)pNode;
|
||||
EXPLAIN_ROW_NEW(level, EXPLAIN_JOIN_FORMAT, EXPLAIN_JOIN_STRING(pJoinNode->joinType));
|
||||
EXPLAIN_ROW_APPEND(EXPLAIN_LEFT_PARENTHESIS_FORMAT);
|
||||
if (pResNode->pExecInfo) {
|
||||
|
@ -1180,7 +1181,7 @@ int32_t qExplainResNodeToRowsImpl(SExplainResNode *pResNode, SExplainCtx *ctx, i
|
|||
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
|
||||
if (pDistScanNode->pScanPseudoCols) {
|
||||
EXPLAIN_ROW_APPEND(EXPLAIN_PSEUDO_COLUMNS_FORMAT, pDistScanNode->pScanPseudoCols->length);
|
||||
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
|
||||
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
|
||||
}
|
||||
EXPLAIN_ROW_APPEND(EXPLAIN_WIDTH_FORMAT, pDistScanNode->node.pOutputDataBlockDesc->totalRowSize);
|
||||
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
|
||||
|
@ -1367,7 +1368,7 @@ int32_t qExplainResNodeToRowsImpl(SExplainResNode *pResNode, SExplainCtx *ctx, i
|
|||
EXPLAIN_ROW_APPEND(EXPLAIN_FUNCTIONS_FORMAT, pInterpNode->pFuncs->length);
|
||||
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
|
||||
}
|
||||
|
||||
|
||||
EXPLAIN_ROW_APPEND(EXPLAIN_MODE_FORMAT, nodesGetFillModeString(pInterpNode->fillMode));
|
||||
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
|
||||
|
||||
|
@ -1419,7 +1420,7 @@ int32_t qExplainResNodeToRowsImpl(SExplainResNode *pResNode, SExplainCtx *ctx, i
|
|||
}
|
||||
}
|
||||
break;
|
||||
}
|
||||
}
|
||||
default:
|
||||
qError("not supported physical node type %d", pNode->type);
|
||||
return TSDB_CODE_QRY_APP_ERROR;
|
||||
|
|
|
@ -82,8 +82,6 @@ size_t getResultRowSize(struct SqlFunctionCtx* pCtx, int32_t numOfOutput);
|
|||
void initResultRowInfo(SResultRowInfo* pResultRowInfo);
|
||||
void cleanupResultRowInfo(SResultRowInfo* pResultRowInfo);
|
||||
|
||||
void closeAllResultRows(SResultRowInfo* pResultRowInfo);
|
||||
|
||||
void initResultRow(SResultRow* pResultRow);
|
||||
void closeResultRow(SResultRow* pResultRow);
|
||||
bool isResultRowClosed(SResultRow* pResultRow);
|
||||
|
@ -96,6 +94,11 @@ static FORCE_INLINE SResultRow* getResultRowByPos(SDiskbasedBuf* pBuf, SResultRo
|
|||
return pRow;
|
||||
}
|
||||
|
||||
static FORCE_INLINE void setResultBufPageDirty(SDiskbasedBuf* pBuf, SResultRowPosition* pos) {
|
||||
void* pPage = getBufPage(pBuf, pos->pageId);
|
||||
setBufPageDirty(pPage, true);
|
||||
}
|
||||
|
||||
void initGroupedResultInfo(SGroupResInfo* pGroupResInfo, SHashObj* pHashmap, int32_t order);
|
||||
void cleanupGroupResInfo(SGroupResInfo* pGroupResInfo);
|
||||
|
||||
|
|
|
@ -108,7 +108,6 @@ typedef struct STaskCostInfo {
|
|||
SFileBlockLoadRecorder* pRecoder;
|
||||
uint64_t elapsedTime;
|
||||
|
||||
uint64_t firstStageMergeTime;
|
||||
uint64_t winInfoSize;
|
||||
uint64_t tableInfoSize;
|
||||
uint64_t hashSize;
|
||||
|
@ -321,6 +320,49 @@ typedef struct STableScanInfo {
|
|||
int8_t noTable;
|
||||
} STableScanInfo;
|
||||
|
||||
typedef struct STableMergeScanInfo {
|
||||
STableListInfo* tableListInfo;
|
||||
int32_t tableStartIndex;
|
||||
int32_t tableEndIndex;
|
||||
bool hasGroupId;
|
||||
uint64_t groupId;
|
||||
SArray* dataReaders; // array of tsdbReaderT*
|
||||
SReadHandle readHandle;
|
||||
int32_t bufPageSize;
|
||||
uint32_t sortBufSize; // max buffer size for in-memory sort
|
||||
SArray* pSortInfo;
|
||||
SSortHandle* pSortHandle;
|
||||
|
||||
SSDataBlock* pSortInputBlock;
|
||||
int64_t startTs; // sort start time
|
||||
SArray* sortSourceParams;
|
||||
|
||||
SFileBlockLoadRecorder readRecorder;
|
||||
int64_t numOfRows;
|
||||
SScanInfo scanInfo;
|
||||
int32_t scanTimes;
|
||||
SNode* pFilterNode; // filter info, which is push down by optimizer
|
||||
SqlFunctionCtx* pCtx; // which belongs to the direct upstream operator operator query context
|
||||
SResultRowInfo* pResultRowInfo;
|
||||
int32_t* rowEntryInfoOffset;
|
||||
SExprInfo* pExpr;
|
||||
SSDataBlock* pResBlock;
|
||||
SArray* pColMatchInfo;
|
||||
int32_t numOfOutput;
|
||||
|
||||
SExprSupp pseudoSup;
|
||||
|
||||
SQueryTableDataCond cond;
|
||||
int32_t scanFlag; // table scan flag to denote if it is a repeat/reverse/main scan
|
||||
int32_t dataBlockLoadFlag;
|
||||
// if the upstream is an interval operator, the interval info is also kept here to get the time
|
||||
// window to check if current data block needs to be loaded.
|
||||
SInterval interval;
|
||||
SSampleExecInfo sample; // sample execution info
|
||||
|
||||
SSortExecInfo sortExecInfo;
|
||||
} STableMergeScanInfo;
|
||||
|
||||
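STableMergeScanInfo keeps one reader per table plus a sort handle and a sort buffer, which suggests the operator merges several already-sorted per-table streams into one timestamp-ordered output. As a rough illustration of that idea only (not the actual tsdbReader or SSortHandle API), here is a minimal two-way merge of sorted timestamp arrays:

```c
#include <stdio.h>

// Merge two ascending timestamp streams into one ascending output,
// the same shape of work a sorted table-merge scan performs per block.
static void mergeSorted(const long long *a, int na, const long long *b, int nb, long long *out) {
  int i = 0, j = 0, k = 0;
  while (i < na && j < nb) out[k++] = (a[i] <= b[j]) ? a[i++] : b[j++];
  while (i < na) out[k++] = a[i++];
  while (j < nb) out[k++] = b[j++];
}

int main(void) {
  long long t1[] = {1000, 1300, 1700};
  long long t2[] = {1100, 1200, 1800, 1900};
  long long out[7];
  mergeSorted(t1, 3, t2, 4, out);
  for (int i = 0; i < 7; ++i) printf("%lld ", out[i]);
  printf("\n");  // 1000 1100 1200 1300 1700 1800 1900
  return 0;
}
```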
typedef struct STagScanInfo {
|
||||
SColumnInfo *pCols;
|
||||
SSDataBlock *pRes;
|
||||
|
@@ -352,6 +394,11 @@ typedef enum EStreamScanMode {
|
|||
STREAM_SCAN_FROM_DATAREADER_RANGE,
|
||||
} EStreamScanMode;
|
||||
|
||||
enum {
|
||||
PROJECT_RETRIEVE_CONTINUE = 0x1,
|
||||
PROJECT_RETRIEVE_DONE = 0x2,
|
||||
};
|
||||
|
||||
typedef struct SCatchSupporter {
|
||||
SHashObj* pWindowHashTable; // quick locate the window object for each window
|
||||
SDiskbasedBuf* pDataBuf; // buffer based on blocked-wised disk file
|
||||
|
@@ -549,6 +596,7 @@ typedef struct SProjectOperatorInfo {
|
|||
SLimitInfo limitInfo;
|
||||
bool mergeDataBlocks;
|
||||
SSDataBlock* pFinalRes;
|
||||
SNode* pCondition;
|
||||
} SProjectOperatorInfo;
|
||||
|
||||
typedef struct SIndefOperatorInfo {
|
||||
|
@@ -881,7 +929,7 @@ SOperatorInfo* createPartitionOperatorInfo(SOperatorInfo* downstream, SPartition

SOperatorInfo* createTimeSliceOperatorInfo(SOperatorInfo* downstream, SPhysiNode* pNode, SExecTaskInfo* pTaskInfo);

SOperatorInfo* createMergeJoinOperatorInfo(SOperatorInfo** pDownstream, int32_t numOfDownstream, SJoinPhysiNode* pJoinNode,
SOperatorInfo* createMergeJoinOperatorInfo(SOperatorInfo** pDownstream, int32_t numOfDownstream, SSortMergeJoinPhysiNode* pJoinNode,
SExecTaskInfo* pTaskInfo);

SOperatorInfo* createStreamSessionAggOperatorInfo(SOperatorInfo* downstream,
@@ -954,6 +1002,7 @@ int32_t updateSessionWindowInfo(SResultWindowInfo* pWinInfo, TSKEY* pStartTs,
|
|||
bool functionNeedToExecute(SqlFunctionCtx* pCtx);
|
||||
bool isCloseWindow(STimeWindow* pWin, STimeWindowAggSupp* pSup);
|
||||
void appendOneRow(SSDataBlock* pBlock, TSKEY* pStartTs, TSKEY* pEndTs, uint64_t* pUid);
|
||||
void printDataBlock(SSDataBlock* pBlock, const char* flag);
|
||||
|
||||
int32_t finalizeResultRowIntoResultDataBlock(SDiskbasedBuf* pBuf, SResultRowPosition* resultRowPosition,
|
||||
SqlFunctionCtx* pCtx, SExprInfo* pExprInfo, int32_t numOfExprs, const int32_t* rowCellOffset,
|
||||
|
|
|
@@ -182,7 +182,8 @@ static int32_t getDataBlock(SDataSinkHandle* pHandle, SOutputData* pOutput) {
return TSDB_CODE_SUCCESS;
}
SDataCacheEntry* pEntry = (SDataCacheEntry*)(pDeleter->nextOutput.pData);
memcpy(pOutput->pData, pEntry->data, pEntry->dataLen);
memcpy(pOutput->pData, pEntry->data, pEntry->dataLen);
pDeleter->pParam->pUidList = NULL;
pOutput->numOfRows = pEntry->numOfRows;
pOutput->numOfCols = pEntry->numOfCols;
pOutput->compressed = pEntry->compressed;
@@ -205,6 +206,8 @@ static int32_t destroyDataSinker(SDataSinkHandle* pHandle) {
SDataDeleterHandle* pDeleter = (SDataDeleterHandle*)pHandle;
atomic_sub_fetch_64(&gDataSinkStat.cachedSize, pDeleter->cachedSize);
taosMemoryFreeClear(pDeleter->nextOutput.pData);
taosArrayDestroy(pDeleter->pParam->pUidList);
taosMemoryFree(pDeleter->pParam);
while (!taosQueueEmpty(pDeleter->pDataBlocks)) {
SDataDeleterBuf* pBuf = NULL;
taosReadQitem(pDeleter->pDataBlocks, (void**)&pBuf);
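The added `pDeleter->pParam->pUidList = NULL;` together with the `taosArrayDestroy(pDeleter->pParam->pUidList)` in destroyDataSinker reads like an ownership handoff: once the uid list has been passed along with the output, the sink must forget its pointer so the destroy path cannot free it a second time. A generic sketch of that pattern, with hypothetical names:

```c
#include <stdlib.h>

typedef struct Sink {
  int *uidList;  // owned by the sink until it is handed to the consumer
} Sink;

// Consumer takes ownership of the list; the sink clears its pointer so a
// later destroySink() cannot free the same memory again (double free).
static int *takeUidList(Sink *s) {
  int *list = s->uidList;
  s->uidList = NULL;
  return list;
}

static void destroySink(Sink *s) {
  free(s->uidList);  // safe: free(NULL) is a no-op when ownership was taken
}

int main(void) {
  Sink s = { .uidList = malloc(4 * sizeof(int)) };
  int *mine = takeUidList(&s);
  destroySink(&s);  // does not touch `mine`
  free(mine);
  return 0;
}
```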
@@ -43,10 +43,6 @@ void cleanupResultRowInfo(SResultRowInfo* pResultRowInfo) {
|
|||
}
|
||||
}
|
||||
|
||||
void closeAllResultRows(SResultRowInfo* pResultRowInfo) {
|
||||
// do nothing
|
||||
}
|
||||
|
||||
bool isResultRowClosed(SResultRow* pRow) { return (pRow->closed == true); }
|
||||
|
||||
void closeResultRow(SResultRow* pResultRow) { pResultRow->closed = true; }
|
||||
|
@@ -160,11 +156,13 @@ int32_t getNumOfTotalRes(SGroupResInfo* pGroupResInfo) {
|
|||
|
||||
SArray* createSortInfo(SNodeList* pNodeList) {
|
||||
size_t numOfCols = 0;
|
||||
|
||||
if (pNodeList != NULL) {
|
||||
numOfCols = LIST_LENGTH(pNodeList);
|
||||
} else {
|
||||
numOfCols = 0;
|
||||
}
|
||||
|
||||
SArray* pList = taosArrayInit(numOfCols, sizeof(SBlockOrderInfo));
|
||||
if (pList == NULL) {
|
||||
terrno = TSDB_CODE_OUT_OF_MEMORY;
|
||||
|
@@ -196,10 +194,6 @@ SSDataBlock* createResDataBlock(SDataBlockDescNode* pNode) {
|
|||
|
||||
for (int32_t i = 0; i < numOfCols; ++i) {
|
||||
SSlotDescNode* pDescNode = (SSlotDescNode*)nodesListGetNode(pNode->pSlots, i);
|
||||
/*if (!pDescNode->output) { // todo disable it temporarily*/
|
||||
/*continue;*/
|
||||
/*}*/
|
||||
|
||||
SColumnInfoData idata =
|
||||
createColumnInfoData(pDescNode->dataType.type, pDescNode->dataType.bytes, pDescNode->slotId);
|
||||
idata.info.scale = pDescNode->dataType.scale;
|
||||
|
@@ -701,9 +695,6 @@ static int32_t setSelectValueColumnInfo(SqlFunctionCtx* pCtx, int32_t numOfOutpu
|
|||
}
|
||||
}
|
||||
|
||||
#ifdef BUF_PAGE_DEBUG
|
||||
qDebug("page_setSelect num:%d", num);
|
||||
#endif
|
||||
if (p != NULL) {
|
||||
p->subsidiaries.pCtx = pValCtx;
|
||||
p->subsidiaries.num = num;
|
||||
|
@@ -852,7 +843,7 @@ int32_t initQueryTableDataCond(SQueryTableDataCond* pCond, const STableScanPhysi
// TODO: get it from stable scan node
pCond->twindows = pTableScanNode->scanRange;
pCond->suid = pTableScanNode->scan.suid;
pCond->type = BLOCK_LOAD_OFFSET_ORDER;
pCond->type = TIMEWINDOW_RANGE_CONTAINED;
pCond->startVersion = -1;
pCond->endVersion = -1;
// pCond->type = pTableScanNode->scanFlag;
@@ -947,6 +938,7 @@ STimeWindow getFirstQualifiedTimeWindow(int64_t ts, STimeWindow* pWindow, SInter
|
|||
}
|
||||
|
||||
// get the correct time window according to the handled timestamp
|
||||
// todo refactor
|
||||
STimeWindow getActiveTimeWindow(SDiskbasedBuf* pBuf, SResultRowInfo* pResultRowInfo, int64_t ts, SInterval* pInterval,
|
||||
int32_t order) {
|
||||
STimeWindow w = {0};
|
||||
|
|
|
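getActiveTimeWindow has to map an arriving timestamp onto the interval window that contains it. For the simple case where the sliding step equals the interval, the arithmetic is just flooring the timestamp to a multiple of the interval; the sketch below shows that calculation only and ignores the calendar units and offsets that the real SInterval supports.

```c
#include <stdio.h>

typedef struct { long long skey, ekey; } Window;

// Align a non-negative ts to the window of width `interval` (same time unit as ts).
static Window alignToWindow(long long ts, long long interval) {
  long long start = ts - (ts % interval);
  Window w = { .skey = start, .ekey = start + interval - 1 };
  return w;
}

int main(void) {
  // ts = 1650000012345 ms with 10s windows -> [1650000010000, 1650000019999]
  Window w = alignToWindow(1650000012345LL, 10000LL);
  printf("[%lld, %lld]\n", w.skey, w.ekey);
  return 0;
}
```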
@@ -416,6 +416,7 @@ int32_t qExecTask(qTaskInfo_t tinfo, SSDataBlock** pRes, uint64_t* useconds) {
|
|||
}
|
||||
|
||||
if (isTaskKilled(pTaskInfo)) {
|
||||
atomic_store_64(&pTaskInfo->owner, 0);
|
||||
qDebug("%s already killed, abort", GET_TASKID(pTaskInfo));
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
|
|
@@ -42,11 +42,6 @@
|
|||
|
||||
#define GET_FORWARD_DIRECTION_FACTOR(ord) (((ord) == TSDB_ORDER_ASC) ? QUERY_ASC_FORWARD_STEP : QUERY_DESC_FORWARD_STEP)
|
||||
|
||||
enum {
|
||||
PROJECT_RETRIEVE_CONTINUE = 0x1,
|
||||
PROJECT_RETRIEVE_DONE = 0x2,
|
||||
};
|
||||
|
||||
#if 0
|
||||
static UNUSED_FUNC void *u_malloc (size_t __size) {
|
||||
uint32_t v = taosRand();
|
||||
|
@@ -575,6 +570,26 @@ static void setPseudoOutputColInfo(SSDataBlock* pResult, SqlFunctionCtx* pCtx, S
|
|||
int32_t projectApplyFunctions(SExprInfo* pExpr, SSDataBlock* pResult, SSDataBlock* pSrcBlock, SqlFunctionCtx* pCtx,
|
||||
int32_t numOfOutput, SArray* pPseudoList) {
|
||||
setPseudoOutputColInfo(pResult, pCtx, pPseudoList);
|
||||
|
||||
if (pSrcBlock == NULL) {
|
||||
for (int32_t k = 0; k < numOfOutput; ++k) {
|
||||
int32_t outputSlotId = pExpr[k].base.resSchema.slotId;
|
||||
|
||||
ASSERT(pExpr[k].pExpr->nodeType == QUERY_NODE_VALUE);
|
||||
SColumnInfoData* pColInfoData = taosArrayGet(pResult->pDataBlock, outputSlotId);
|
||||
|
||||
int32_t type = pExpr[k].base.pParam[0].param.nType;
|
||||
if (TSDB_DATA_TYPE_NULL == type) {
|
||||
colDataAppendNNULL(pColInfoData, 0, 1);
|
||||
} else {
|
||||
colDataAppend(pColInfoData, 0, taosVariantGet(&pExpr[k].base.pParam[0].param, type), false);
|
||||
}
|
||||
}
|
||||
|
||||
pResult->info.rows = 1;
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
pResult->info.groupId = pSrcBlock->info.groupId;
|
||||
|
||||
// if the source equals to the destination, it is to create a new column as the result of scalar
|
||||
|
@@ -651,6 +666,11 @@ int32_t projectApplyFunctions(SExprInfo* pExpr, SSDataBlock* pResult, SSDataBloc
|
|||
pfCtx->pTsOutput = (SColumnInfoData*)pCtx[*outputColIndex].pOutput;
|
||||
}
|
||||
|
||||
// link pDstBlock to set selectivity value
|
||||
if (pfCtx->subsidiaries.num > 0) {
|
||||
pfCtx->pDstBlock = pResult;
|
||||
}
|
||||
|
||||
numOfRows = pfCtx->fpSet.process(pfCtx);
|
||||
} else if (fmIsAggFunc(pfCtx->functionId)) {
|
||||
// _group_key function for "partition by tbname" + csum(col_name) query
|
||||
|
@@ -1243,52 +1263,6 @@ void initResultRow(SResultRow* pResultRow) {
|
|||
// pResultRow->pEntryInfo = (struct SResultRowEntryInfo*)((char*)pResultRow + sizeof(SResultRow));
|
||||
}
|
||||
|
||||
/*
|
||||
* The start of each column SResultRowEntryInfo is denote by RowCellInfoOffset.
|
||||
* Note that in case of top/bottom query, the whole multiple rows of result is treated as only one row of results.
|
||||
* +------------+-----------------result column 1------------+------------------result column 2-----------+
|
||||
* | SResultRow | SResultRowEntryInfo | intermediate buffer1 | SResultRowEntryInfo | intermediate buffer 2|
|
||||
* +------------+--------------------------------------------+--------------------------------------------+
|
||||
* offset[0] offset[1] offset[2]
|
||||
*/
|
||||
// TODO refactor: some function move away
|
||||
void setFunctionResultOutput(SOperatorInfo* pOperator, SOptrBasicInfo* pInfo, SAggSupporter* pSup, int32_t stage,
|
||||
int32_t numOfExprs) {
|
||||
SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
|
||||
SqlFunctionCtx* pCtx = pOperator->exprSupp.pCtx;
|
||||
int32_t* rowEntryInfoOffset = pOperator->exprSupp.rowEntryInfoOffset;
|
||||
|
||||
SResultRowInfo* pResultRowInfo = &pInfo->resultRowInfo;
|
||||
initResultRowInfo(pResultRowInfo);
|
||||
|
||||
int64_t tid = 0;
|
||||
int64_t groupId = 0;
|
||||
SResultRow* pRow = doSetResultOutBufByKey(pSup->pResultBuf, pResultRowInfo, (char*)&tid, sizeof(tid), true, groupId,
|
||||
pTaskInfo, false, pSup);
|
||||
|
||||
for (int32_t i = 0; i < numOfExprs; ++i) {
|
||||
struct SResultRowEntryInfo* pEntry = getResultEntryInfo(pRow, i, rowEntryInfoOffset);
|
||||
cleanupResultRowEntry(pEntry);
|
||||
|
||||
pCtx[i].resultInfo = pEntry;
|
||||
pCtx[i].scanFlag = stage;
|
||||
}
|
||||
|
||||
initCtxOutputBuffer(pCtx, numOfExprs);
|
||||
}
|
||||
|
||||
void initCtxOutputBuffer(SqlFunctionCtx* pCtx, int32_t size) {
|
||||
for (int32_t j = 0; j < size; ++j) {
|
||||
struct SResultRowEntryInfo* pResInfo = GET_RES_INFO(&pCtx[j]);
|
||||
if (isRowEntryInitialized(pResInfo) || fmIsPseudoColumnFunc(pCtx[j].functionId) || pCtx[j].functionId == -1 ||
|
||||
fmIsScalarFunc(pCtx[j].functionId)) {
|
||||
continue;
|
||||
}
|
||||
|
||||
pCtx[j].fpSet.init(&pCtx[j], pCtx[j].resultInfo);
|
||||
}
|
||||
}
|
||||
|
||||
void setTaskStatus(SExecTaskInfo* pTaskInfo, int8_t status) {
|
||||
if (status == TASK_NOT_COMPLETED) {
|
||||
pTaskInfo->status = status;
|
||||
|
@@ -1356,7 +1330,7 @@ void doFilter(const SNode* pFilterNode, SSDataBlock* pBlock, const SArray* pColM
extractQualifiedTupleByFilterResult(pBlock, rowRes, keep);

if (pColMatchInfo != NULL) {
for(int32_t i = 0; i < taosArrayGetSize(pColMatchInfo); ++i) {
for (int32_t i = 0; i < taosArrayGetSize(pColMatchInfo); ++i) {
SColMatchInfo* pInfo = taosArrayGet(pColMatchInfo, i);
if (pInfo->colId == PRIMARYKEY_TIMESTAMP_COL_ID) {
SColumnInfoData* pColData = taosArrayGet(pBlock->pDataBlock, pInfo->targetSlotId);
@@ -1665,9 +1639,6 @@ void queryCostStatis(SExecTaskInfo* pTaskInfo) {
|
|||
// hashSize += taosHashGetMemSize(pRuntimeEnv->tableqinfoGroupInfo.map);
|
||||
// pSummary->hashSize = hashSize;
|
||||
|
||||
// add the merge time
|
||||
pSummary->elapsedTime += pSummary->firstStageMergeTime;
|
||||
|
||||
// SResultRowPool* p = pTaskInfo->pool;
|
||||
// if (p != NULL) {
|
||||
// pSummary->winInfoSize = getResultRowPoolMemSize(p);
|
||||
|
@@ -1676,17 +1647,16 @@ void queryCostStatis(SExecTaskInfo* pTaskInfo) {
|
|||
// pSummary->winInfoSize = 0;
|
||||
// pSummary->numOfTimeWindows = 0;
|
||||
// }
|
||||
//
|
||||
// calculateOperatorProfResults(pQInfo);
|
||||
|
||||
SFileBlockLoadRecorder* pRecorder = pSummary->pRecoder;
|
||||
if (pSummary->pRecoder != NULL) {
|
||||
qDebug("%s :cost summary: elapsed time:%" PRId64 " us, first merge:%" PRId64
|
||||
" us, total blocks:%d, "
|
||||
"load block statis:%d, load data block:%d, total rows:%" PRId64 ", check rows:%" PRId64,
|
||||
GET_TASKID(pTaskInfo), pSummary->elapsedTime, pSummary->firstStageMergeTime, pRecorder->totalBlocks,
|
||||
pRecorder->loadBlockStatis, pRecorder->loadBlocks, pRecorder->totalRows, pRecorder->totalCheckedRows);
|
||||
qDebug(
|
||||
"%s :cost summary: elapsed time:%.2f ms, total blocks:%d, load block SMA:%d, load data block:%d, total "
|
||||
"rows:%" PRId64 ", check rows:%" PRId64,
|
||||
GET_TASKID(pTaskInfo), pSummary->elapsedTime / 1000.0, pRecorder->totalBlocks, pRecorder->loadBlockStatis,
|
||||
pRecorder->loadBlocks, pRecorder->totalRows, pRecorder->totalCheckedRows);
|
||||
}
|
||||
|
||||
// qDebug("QInfo:0x%"PRIx64" :cost summary: winResPool size:%.2f Kb, numOfWin:%"PRId64", tableInfoSize:%.2f Kb,
|
||||
// hashTable:%.2f Kb", pQInfo->qId, pSummary->winInfoSize/1024.0,
|
||||
// pSummary->numOfTimeWindows, pSummary->tableInfoSize/1024.0, pSummary->hashSize/1024.0);
|
||||
|
@@ -2809,73 +2779,6 @@ static int32_t initGroupCol(SExprInfo* pExprInfo, int32_t numOfCols, SArray* pGr
|
|||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
SOperatorInfo* createSortedMergeOperatorInfo(SOperatorInfo** downstream, int32_t numOfDownstream, SExprInfo* pExprInfo,
|
||||
int32_t num, SArray* pSortInfo, SArray* pGroupInfo,
|
||||
SExecTaskInfo* pTaskInfo) {
|
||||
SSortedMergeOperatorInfo* pInfo = taosMemoryCalloc(1, sizeof(SSortedMergeOperatorInfo));
|
||||
SOperatorInfo* pOperator = taosMemoryCalloc(1, sizeof(SOperatorInfo));
|
||||
if (pInfo == NULL || pOperator == NULL) {
|
||||
goto _error;
|
||||
}
|
||||
|
||||
int32_t code = initExprSupp(&pOperator->exprSupp, pExprInfo, num);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
goto _error;
|
||||
}
|
||||
|
||||
initResultRowInfo(&pInfo->binfo.resultRowInfo);
|
||||
|
||||
if (pOperator->exprSupp.pCtx == NULL || pInfo->binfo.pRes == NULL) {
|
||||
goto _error;
|
||||
}
|
||||
|
||||
size_t keyBufSize = sizeof(int64_t) + sizeof(int64_t) + POINTER_BYTES;
|
||||
code = doInitAggInfoSup(&pInfo->aggSup, pOperator->exprSupp.pCtx, num, keyBufSize, pTaskInfo->id.str);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
goto _error;
|
||||
}
|
||||
|
||||
setFunctionResultOutput(pOperator, &pInfo->binfo, &pInfo->aggSup, MAIN_SCAN, num);
|
||||
code = initGroupCol(pExprInfo, num, pGroupInfo, pInfo);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
goto _error;
|
||||
}
|
||||
|
||||
// pInfo->resultRowFactor = (int32_t)(getRowNumForMultioutput(pRuntimeEnv->pQueryAttr,
|
||||
// pRuntimeEnv->pQueryAttr->topBotQuery, false));
|
||||
pInfo->sortBufSize = 1024 * 16; // 1MB
|
||||
pInfo->bufPageSize = 1024;
|
||||
pInfo->pSortInfo = pSortInfo;
|
||||
|
||||
pOperator->resultInfo.capacity = blockDataGetCapacityInRow(pInfo->binfo.pRes, pInfo->bufPageSize);
|
||||
|
||||
pOperator->name = "SortedMerge";
|
||||
// pOperator->operatorType = OP_SortedMerge;
|
||||
pOperator->blocking = true;
|
||||
pOperator->status = OP_NOT_OPENED;
|
||||
pOperator->info = pInfo;
|
||||
pOperator->pTaskInfo = pTaskInfo;
|
||||
|
||||
pOperator->fpSet = createOperatorFpSet(operatorDummyOpenFn, doSortedMerge, NULL, NULL, destroySortedMergeOperatorInfo,
|
||||
NULL, NULL, NULL);
|
||||
code = appendDownstream(pOperator, downstream, numOfDownstream);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
goto _error;
|
||||
}
|
||||
|
||||
return pOperator;
|
||||
|
||||
_error:
|
||||
if (pInfo != NULL) {
|
||||
destroySortedMergeOperatorInfo(pInfo, num);
|
||||
}
|
||||
|
||||
taosMemoryFreeClear(pInfo);
|
||||
taosMemoryFreeClear(pOperator);
|
||||
terrno = TSDB_CODE_QRY_OUT_OF_MEMORY;
|
||||
return NULL;
|
||||
}
|
||||
|
||||
int32_t getTableScanInfo(SOperatorInfo* pOperator, int32_t* order, int32_t* scanFlag) {
|
||||
// todo add more information about exchange operation
|
||||
int32_t type = pOperator->operatorType;
|
||||
|
@@ -2885,11 +2788,16 @@ int32_t getTableScanInfo(SOperatorInfo* pOperator, int32_t* order, int32_t* scan
*order = TSDB_ORDER_ASC;
*scanFlag = MAIN_SCAN;
return TSDB_CODE_SUCCESS;
} else if (type == QUERY_NODE_PHYSICAL_PLAN_TABLE_SCAN || type == QUERY_NODE_PHYSICAL_PLAN_TABLE_MERGE_SCAN) {
} else if (type == QUERY_NODE_PHYSICAL_PLAN_TABLE_SCAN) {
STableScanInfo* pTableScanInfo = pOperator->info;
*order = pTableScanInfo->cond.order;
*scanFlag = pTableScanInfo->scanFlag;
return TSDB_CODE_SUCCESS;
} else if (type == QUERY_NODE_PHYSICAL_PLAN_TABLE_MERGE_SCAN) {
STableMergeScanInfo* pTableScanInfo = pOperator->info;
*order = pTableScanInfo->cond.order;
*scanFlag = pTableScanInfo->scanFlag;
return TSDB_CODE_SUCCESS;
} else {
if (pOperator->pDownstream == NULL || pOperator->pDownstream[0] == NULL) {
return TSDB_CODE_INVALID_PARA;
@@ -3031,7 +2939,6 @@ static int32_t doOpenAggregateOptr(SOperatorInfo* pOperator) {
|
|||
}
|
||||
}
|
||||
|
||||
closeAllResultRows(&pAggInfo->binfo.resultRowInfo);
|
||||
initGroupedResultInfo(&pAggInfo->groupResInfo, pAggInfo->aggSup.pResultRowHashTable, 0);
|
||||
OPTR_SET_OPENED(pOperator);
|
||||
|
||||
|
@@ -3279,162 +3186,6 @@ int32_t handleLimitOffset(SOperatorInfo* pOperator, SLimitInfo* pLimitInfo, SSDa
|
|||
}
|
||||
}
|
||||
|
||||
static SSDataBlock* doProjectOperation(SOperatorInfo* pOperator) {
|
||||
SProjectOperatorInfo* pProjectInfo = pOperator->info;
|
||||
SOptrBasicInfo* pInfo = &pProjectInfo->binfo;
|
||||
|
||||
SExprSupp* pSup = &pOperator->exprSupp;
|
||||
SSDataBlock* pRes = pInfo->pRes;
|
||||
SSDataBlock* pFinalRes = pProjectInfo->pFinalRes;
|
||||
|
||||
blockDataCleanup(pFinalRes);
|
||||
|
||||
SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
|
||||
if (pOperator->status == OP_EXEC_DONE) {
|
||||
if (pTaskInfo->execModel == OPTR_EXEC_MODEL_QUEUE) {
|
||||
pOperator->status = OP_OPENED;
|
||||
return NULL;
|
||||
}
|
||||
return NULL;
|
||||
}
|
||||
|
||||
int64_t st = 0;
|
||||
int32_t order = 0;
|
||||
int32_t scanFlag = 0;
|
||||
|
||||
if (pOperator->cost.openCost == 0) {
|
||||
st = taosGetTimestampUs();
|
||||
}
|
||||
|
||||
SOperatorInfo* downstream = pOperator->pDownstream[0];
|
||||
SLimitInfo* pLimitInfo = &pProjectInfo->limitInfo;
|
||||
|
||||
while(1) {
|
||||
while (1) {
|
||||
blockDataCleanup(pRes);
|
||||
|
||||
// The downstream exec may change the value of the newgroup, so use a local variable instead.
|
||||
SSDataBlock* pBlock = downstream->fpSet.getNextFn(downstream);
|
||||
if (pBlock == NULL) {
|
||||
doSetOperatorCompleted(pOperator);
|
||||
break;
|
||||
}
|
||||
|
||||
if (pBlock->info.type == STREAM_RETRIEVE) {
|
||||
// for stream interval
|
||||
return pBlock;
|
||||
}
|
||||
|
||||
if (pLimitInfo->remainGroupOffset > 0) {
|
||||
if (pLimitInfo->currentGroupId == 0 || pLimitInfo->currentGroupId == pBlock->info.groupId) { // it is the first group
|
||||
pLimitInfo->currentGroupId = pBlock->info.groupId;
|
||||
continue;
|
||||
} else if (pLimitInfo->currentGroupId != pBlock->info.groupId) {
|
||||
// now it is the data from a new group
|
||||
pLimitInfo->remainGroupOffset -= 1;
|
||||
pLimitInfo->currentGroupId = pBlock->info.groupId;
|
||||
|
||||
// ignore data block in current group
|
||||
if (pLimitInfo->remainGroupOffset > 0) {
|
||||
continue;
|
||||
}
|
||||
}
|
||||
|
||||
// set current group id of the project operator
|
||||
pLimitInfo->currentGroupId = pBlock->info.groupId;
|
||||
}
|
||||
|
||||
// remainGroupOffset == 0
|
||||
// here check for a new group data, we need to handle the data of the previous group.
|
||||
if (pLimitInfo->currentGroupId != 0 && pLimitInfo->currentGroupId != pBlock->info.groupId) {
|
||||
pLimitInfo->numOfOutputGroups += 1;
|
||||
if ((pLimitInfo->slimit.limit > 0) && (pLimitInfo->slimit.limit <= pLimitInfo->numOfOutputGroups)) {
|
||||
doSetOperatorCompleted(pOperator);
|
||||
break;
|
||||
}
|
||||
|
||||
// reset the value for a new group data
|
||||
// existing rows that belongs to previous group.
|
||||
pLimitInfo->numOfOutputRows = 0;
|
||||
pLimitInfo->remainOffset = pLimitInfo->limit.offset;
|
||||
}
|
||||
|
||||
// the pDataBlock are always the same one, no need to call this again
|
||||
int32_t code = getTableScanInfo(pOperator->pDownstream[0], &order, &scanFlag);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
longjmp(pTaskInfo->env, code);
|
||||
}
|
||||
|
||||
setInputDataBlock(pOperator, pSup->pCtx, pBlock, order, scanFlag, false);
|
||||
blockDataEnsureCapacity(pInfo->pRes, pInfo->pRes->info.rows + pBlock->info.rows);
|
||||
|
||||
code = projectApplyFunctions(pSup->pExprInfo, pInfo->pRes, pBlock, pSup->pCtx, pSup->numOfExprs,
|
||||
pProjectInfo->pPseudoColInfo);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
longjmp(pTaskInfo->env, code);
|
||||
}
|
||||
|
||||
// set current group id
|
||||
pLimitInfo->currentGroupId = pBlock->info.groupId;
|
||||
|
||||
if (pLimitInfo->remainOffset >= pInfo->pRes->info.rows) {
|
||||
pLimitInfo->remainOffset -= pInfo->pRes->info.rows;
|
||||
blockDataCleanup(pInfo->pRes);
|
||||
continue;
|
||||
} else if (pLimitInfo->remainOffset < pInfo->pRes->info.rows && pLimitInfo->remainOffset > 0) {
|
||||
blockDataTrimFirstNRows(pInfo->pRes, pLimitInfo->remainOffset);
|
||||
pLimitInfo->remainOffset = 0;
|
||||
}
|
||||
|
||||
// check for the limitation in each group
|
||||
if (pLimitInfo->limit.limit >= 0 &&
|
||||
pLimitInfo->numOfOutputRows + pInfo->pRes->info.rows >= pLimitInfo->limit.limit) {
|
||||
int32_t keepRows = (int32_t)(pLimitInfo->limit.limit - pLimitInfo->numOfOutputRows);
|
||||
blockDataKeepFirstNRows(pInfo->pRes, keepRows);
|
||||
if (pLimitInfo->slimit.limit > 0 && pLimitInfo->slimit.limit <= pLimitInfo->numOfOutputGroups) {
|
||||
pOperator->status = OP_EXEC_DONE;
|
||||
}
|
||||
}
|
||||
|
||||
pLimitInfo->numOfOutputRows += pInfo->pRes->info.rows;
|
||||
break;
|
||||
}
|
||||
|
||||
// no results generated
|
||||
if (pInfo->pRes->info.rows == 0 || (!pProjectInfo->mergeDataBlocks)) {
|
||||
break;
|
||||
}
|
||||
|
||||
if (pProjectInfo->mergeDataBlocks) {
|
||||
pFinalRes->info.groupId = pInfo->pRes->info.groupId;
|
||||
pFinalRes->info.version = pInfo->pRes->info.version;
|
||||
|
||||
// continue merge data, ignore the group id
|
||||
blockDataMerge(pFinalRes, pInfo->pRes);
|
||||
|
||||
if (pFinalRes->info.rows + pInfo->pRes->info.rows <= pOperator->resultInfo.threshold) {
|
||||
continue;
|
||||
}
|
||||
}
|
||||
|
||||
// do apply filter
|
||||
SSDataBlock* p = pProjectInfo->mergeDataBlocks ? pFinalRes : pRes;
|
||||
doFilter(pProjectInfo->pFilterNode, p, NULL);
|
||||
if (p->info.rows > 0) {
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
SSDataBlock* p = pProjectInfo->mergeDataBlocks ? pFinalRes : pRes;
|
||||
pOperator->resultInfo.totalRows += p->info.rows;
|
||||
|
||||
if (pOperator->cost.openCost == 0) {
|
||||
pOperator->cost.openCost = (taosGetTimestampUs() - st) / 1000.0;
|
||||
}
|
||||
|
||||
return (p->info.rows > 0) ? p : NULL;
|
||||
}
|
||||
|
||||
static void doHandleRemainBlockForNewGroupImpl(SFillOperatorInfo* pInfo, SResultInfo* pResultInfo,
|
||||
SExecTaskInfo* pTaskInfo) {
|
||||
pInfo->totalInputRows = pInfo->existNewGroupBlock->info.rows;
|
||||
|
@@ -3815,30 +3566,6 @@ void destroySFillOperatorInfo(void* param, int32_t numOfOutput) {
|
|||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
static void destroyProjectOperatorInfo(void* param, int32_t numOfOutput) {
|
||||
if (NULL == param) {
|
||||
return;
|
||||
}
|
||||
SProjectOperatorInfo* pInfo = (SProjectOperatorInfo*)param;
|
||||
cleanupBasicInfo(&pInfo->binfo);
|
||||
cleanupAggSup(&pInfo->aggSup);
|
||||
taosArrayDestroy(pInfo->pPseudoColInfo);
|
||||
|
||||
blockDataDestroy(pInfo->pFinalRes);
|
||||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
static void destroyIndefinitOperatorInfo(void* param, int32_t numOfOutput) {
|
||||
SIndefOperatorInfo* pInfo = (SIndefOperatorInfo*)param;
|
||||
cleanupBasicInfo(&pInfo->binfo);
|
||||
|
||||
taosArrayDestroy(pInfo->pPseudoColInfo);
|
||||
cleanupAggSup(&pInfo->aggSup);
|
||||
cleanupExprSupp(&pInfo->scalarSup);
|
||||
|
||||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
void destroyExchangeOperatorInfo(void* param, int32_t numOfOutput) {
|
||||
SExchangeInfo* pExInfo = (SExchangeInfo*)param;
|
||||
taosRemoveRef(exchangeObjRefPool, pExInfo->self);
|
||||
|
@@ -3858,260 +3585,6 @@ void doDestroyExchangeOperatorInfo(void* param) {
|
|||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
static SArray* setRowTsColumnOutputInfo(SqlFunctionCtx* pCtx, int32_t numOfCols) {
|
||||
SArray* pList = taosArrayInit(4, sizeof(int32_t));
|
||||
for (int32_t i = 0; i < numOfCols; ++i) {
|
||||
if (fmIsPseudoColumnFunc(pCtx[i].functionId)) {
|
||||
taosArrayPush(pList, &i);
|
||||
}
|
||||
}
|
||||
|
||||
return pList;
|
||||
}
|
||||
|
||||
SOperatorInfo* createProjectOperatorInfo(SOperatorInfo* downstream, SProjectPhysiNode* pProjPhyNode,
|
||||
SExecTaskInfo* pTaskInfo) {
|
||||
SProjectOperatorInfo* pInfo = taosMemoryCalloc(1, sizeof(SProjectOperatorInfo));
|
||||
SOperatorInfo* pOperator = taosMemoryCalloc(1, sizeof(SOperatorInfo));
|
||||
if (pInfo == NULL || pOperator == NULL) {
|
||||
goto _error;
|
||||
}
|
||||
|
||||
int32_t numOfCols = 0;
|
||||
SExprInfo* pExprInfo = createExprInfo(pProjPhyNode->pProjections, NULL, &numOfCols);
|
||||
|
||||
SSDataBlock* pResBlock = createResDataBlock(pProjPhyNode->node.pOutputDataBlockDesc);
|
||||
initLimitInfo(pProjPhyNode->node.pLimit, pProjPhyNode->node.pSlimit, &pInfo->limitInfo);
|
||||
|
||||
pInfo->binfo.pRes = pResBlock;
|
||||
pInfo->pFinalRes = createOneDataBlock(pResBlock, false);
|
||||
|
||||
pInfo->pFilterNode = pProjPhyNode->node.pConditions;
|
||||
pInfo->mergeDataBlocks = pProjPhyNode->mergeDataBlock;
|
||||
|
||||
int32_t numOfRows = 4096;
|
||||
size_t keyBufSize = sizeof(int64_t) + sizeof(int64_t) + POINTER_BYTES;
|
||||
|
||||
// Make sure the size of SSDataBlock will never exceed the size of 2MB.
|
||||
int32_t TWOMB = 2 * 1024 * 1024;
|
||||
if (numOfRows * pResBlock->info.rowSize > TWOMB) {
|
||||
numOfRows = TWOMB / pResBlock->info.rowSize;
|
||||
}
|
||||
initResultSizeInfo(&pOperator->resultInfo, numOfRows);
|
||||
|
||||
initAggInfo(&pOperator->exprSupp, &pInfo->aggSup, pExprInfo, numOfCols, keyBufSize, pTaskInfo->id.str);
|
||||
initBasicInfo(&pInfo->binfo, pResBlock);
|
||||
setFunctionResultOutput(pOperator, &pInfo->binfo, &pInfo->aggSup, MAIN_SCAN, numOfCols);
|
||||
|
||||
pInfo->pPseudoColInfo = setRowTsColumnOutputInfo(pOperator->exprSupp.pCtx, numOfCols);
|
||||
pOperator->name = "ProjectOperator";
|
||||
pOperator->operatorType = QUERY_NODE_PHYSICAL_PLAN_PROJECT;
|
||||
pOperator->blocking = false;
|
||||
pOperator->status = OP_NOT_OPENED;
|
||||
pOperator->info = pInfo;
|
||||
pOperator->pTaskInfo = pTaskInfo;
|
||||
|
||||
pOperator->fpSet = createOperatorFpSet(operatorDummyOpenFn, doProjectOperation, NULL, NULL,
|
||||
destroyProjectOperatorInfo, NULL, NULL, NULL);
|
||||
|
||||
int32_t code = appendDownstream(pOperator, &downstream, 1);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
goto _error;
|
||||
}
|
||||
|
||||
return pOperator;
|
||||
|
||||
_error:
|
||||
pTaskInfo->code = TSDB_CODE_OUT_OF_MEMORY;
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static void doHandleDataBlock(SOperatorInfo* pOperator, SSDataBlock* pBlock, SOperatorInfo* downstream,
|
||||
SExecTaskInfo* pTaskInfo) {
|
||||
int32_t order = 0;
|
||||
int32_t scanFlag = 0;
|
||||
|
||||
SIndefOperatorInfo* pIndefInfo = pOperator->info;
|
||||
SOptrBasicInfo* pInfo = &pIndefInfo->binfo;
|
||||
SExprSupp* pSup = &pOperator->exprSupp;
|
||||
|
||||
// the pDataBlock are always the same one, no need to call this again
|
||||
int32_t code = getTableScanInfo(downstream, &order, &scanFlag);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
longjmp(pTaskInfo->env, code);
|
||||
}
|
||||
|
||||
// there is an scalar expression that needs to be calculated before apply the group aggregation.
|
||||
SExprSupp* pScalarSup = &pIndefInfo->scalarSup;
|
||||
if (pScalarSup->pExprInfo != NULL) {
|
||||
code = projectApplyFunctions(pScalarSup->pExprInfo, pBlock, pBlock, pScalarSup->pCtx, pScalarSup->numOfExprs,
|
||||
pIndefInfo->pPseudoColInfo);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
longjmp(pTaskInfo->env, code);
|
||||
}
|
||||
}
|
||||
|
||||
setInputDataBlock(pOperator, pSup->pCtx, pBlock, order, scanFlag, false);
|
||||
blockDataEnsureCapacity(pInfo->pRes, pInfo->pRes->info.rows + pBlock->info.rows);
|
||||
|
||||
code = projectApplyFunctions(pSup->pExprInfo, pInfo->pRes, pBlock, pSup->pCtx, pSup->numOfExprs,
|
||||
pIndefInfo->pPseudoColInfo);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
longjmp(pTaskInfo->env, code);
|
||||
}
|
||||
}
|
||||
|
||||
static SSDataBlock* doApplyIndefinitFunction(SOperatorInfo* pOperator) {
|
||||
SIndefOperatorInfo* pIndefInfo = pOperator->info;
|
||||
SOptrBasicInfo* pInfo = &pIndefInfo->binfo;
|
||||
SExprSupp* pSup = &pOperator->exprSupp;
|
||||
|
||||
SSDataBlock* pRes = pInfo->pRes;
|
||||
blockDataCleanup(pRes);
|
||||
|
||||
SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
|
||||
if (pOperator->status == OP_EXEC_DONE) {
|
||||
return NULL;
|
||||
}
|
||||
|
||||
int64_t st = 0;
|
||||
|
||||
if (pOperator->cost.openCost == 0) {
|
||||
st = taosGetTimestampUs();
|
||||
}
|
||||
|
||||
SOperatorInfo* downstream = pOperator->pDownstream[0];
|
||||
|
||||
while (1) {
|
||||
// here we need to handle the existsed group results
|
||||
if (pIndefInfo->pNextGroupRes != NULL) { // todo extract method
|
||||
for (int32_t k = 0; k < pSup->numOfExprs; ++k) {
|
||||
SqlFunctionCtx* pCtx = &pSup->pCtx[k];
|
||||
|
||||
SResultRowEntryInfo* pResInfo = GET_RES_INFO(pCtx);
|
||||
pResInfo->initialized = false;
|
||||
pCtx->pOutput = NULL;
|
||||
}
|
||||
|
||||
doHandleDataBlock(pOperator, pIndefInfo->pNextGroupRes, downstream, pTaskInfo);
|
||||
pIndefInfo->pNextGroupRes = NULL;
|
||||
}
|
||||
|
||||
if (pInfo->pRes->info.rows < pOperator->resultInfo.threshold) {
|
||||
while (1) {
|
||||
// The downstream exec may change the value of the newgroup, so use a local variable instead.
|
||||
SSDataBlock* pBlock = downstream->fpSet.getNextFn(downstream);
|
||||
if (pBlock == NULL) {
|
||||
doSetOperatorCompleted(pOperator);
|
||||
break;
|
||||
}
|
||||
|
||||
if (pIndefInfo->groupId == 0 && pBlock->info.groupId != 0) {
|
||||
pIndefInfo->groupId = pBlock->info.groupId; // this is the initial group result
|
||||
} else {
|
||||
if (pIndefInfo->groupId != pBlock->info.groupId) { // reset output buffer and computing status
|
||||
pIndefInfo->groupId = pBlock->info.groupId;
|
||||
pIndefInfo->pNextGroupRes = pBlock;
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
doHandleDataBlock(pOperator, pBlock, downstream, pTaskInfo);
|
||||
if (pInfo->pRes->info.rows >= pOperator->resultInfo.threshold) {
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
doFilter(pIndefInfo->pCondition, pInfo->pRes, NULL);
|
||||
size_t rows = pInfo->pRes->info.rows;
|
||||
if (rows > 0 || pOperator->status == OP_EXEC_DONE) {
|
||||
break;
|
||||
} else {
|
||||
blockDataCleanup(pInfo->pRes);
|
||||
}
|
||||
}
|
||||
|
||||
size_t rows = pInfo->pRes->info.rows;
|
||||
pOperator->resultInfo.totalRows += rows;
|
||||
|
||||
if (pOperator->cost.openCost == 0) {
|
||||
pOperator->cost.openCost = (taosGetTimestampUs() - st) / 1000.0;
|
||||
}
|
||||
|
||||
return (rows > 0) ? pInfo->pRes : NULL;
|
||||
}
|
||||
|
||||
SOperatorInfo* createIndefinitOutputOperatorInfo(SOperatorInfo* downstream, SPhysiNode* pNode,
|
||||
SExecTaskInfo* pTaskInfo) {
|
||||
SIndefOperatorInfo* pInfo = taosMemoryCalloc(1, sizeof(SIndefOperatorInfo));
|
||||
SOperatorInfo* pOperator = taosMemoryCalloc(1, sizeof(SOperatorInfo));
|
||||
if (pInfo == NULL || pOperator == NULL) {
|
||||
goto _error;
|
||||
}
|
||||
|
||||
SExprSupp* pSup = &pOperator->exprSupp;
|
||||
|
||||
SIndefRowsFuncPhysiNode* pPhyNode = (SIndefRowsFuncPhysiNode*)pNode;
|
||||
|
||||
int32_t numOfExpr = 0;
|
||||
SExprInfo* pExprInfo = createExprInfo(pPhyNode->pFuncs, NULL, &numOfExpr);
|
||||
|
||||
if (pPhyNode->pExprs != NULL) {
|
||||
int32_t num = 0;
|
||||
SExprInfo* pSExpr = createExprInfo(pPhyNode->pExprs, NULL, &num);
|
||||
int32_t code = initExprSupp(&pInfo->scalarSup, pSExpr, num);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
goto _error;
|
||||
}
|
||||
}
|
||||
|
||||
SSDataBlock* pResBlock = createResDataBlock(pPhyNode->node.pOutputDataBlockDesc);
|
||||
|
||||
int32_t numOfRows = 4096;
|
||||
size_t keyBufSize = sizeof(int64_t) + sizeof(int64_t) + POINTER_BYTES;
|
||||
|
||||
// Make sure the size of SSDataBlock will never exceed the size of 2MB.
|
||||
int32_t TWOMB = 2 * 1024 * 1024;
|
||||
if (numOfRows * pResBlock->info.rowSize > TWOMB) {
|
||||
numOfRows = TWOMB / pResBlock->info.rowSize;
|
||||
}
|
||||
|
||||
initResultSizeInfo(&pOperator->resultInfo, numOfRows);
|
||||
|
||||
initAggInfo(pSup, &pInfo->aggSup, pExprInfo, numOfExpr, keyBufSize, pTaskInfo->id.str);
|
||||
initBasicInfo(&pInfo->binfo, pResBlock);
|
||||
|
||||
setFunctionResultOutput(pOperator, &pInfo->binfo, &pInfo->aggSup, MAIN_SCAN, numOfExpr);
|
||||
|
||||
pInfo->binfo.pRes = pResBlock;
|
||||
pInfo->pCondition = pPhyNode->node.pConditions;
|
||||
pInfo->pPseudoColInfo = setRowTsColumnOutputInfo(pSup->pCtx, numOfExpr);
|
||||
|
||||
pOperator->name = "IndefinitOperator";
|
||||
pOperator->operatorType = QUERY_NODE_PHYSICAL_PLAN_INDEF_ROWS_FUNC;
|
||||
pOperator->blocking = false;
|
||||
pOperator->status = OP_NOT_OPENED;
|
||||
pOperator->info = pInfo;
|
||||
pOperator->pTaskInfo = pTaskInfo;
|
||||
|
||||
pOperator->fpSet = createOperatorFpSet(operatorDummyOpenFn, doApplyIndefinitFunction, NULL, NULL,
|
||||
destroyIndefinitOperatorInfo, NULL, NULL, NULL);
|
||||
|
||||
int32_t code = appendDownstream(pOperator, &downstream, 1);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
goto _error;
|
||||
}
|
||||
|
||||
return pOperator;
|
||||
|
||||
_error:
|
||||
taosMemoryFree(pInfo);
|
||||
taosMemoryFree(pOperator);
|
||||
pTaskInfo->code = TSDB_CODE_OUT_OF_MEMORY;
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static int32_t initFillInfo(SFillOperatorInfo* pInfo, SExprInfo* pExpr, int32_t numOfCols, SNodeListNode* pValNode,
|
||||
STimeWindow win, int32_t capacity, const char* id, SInterval* pInterval, int32_t fillType) {
|
||||
SFillColInfo* pColInfo = createFillColInfo(pExpr, numOfCols, pValNode);
|
||||
|
@@ -4265,7 +3738,7 @@ SSchemaWrapper* extractQueriedColumnSchema(SScanPhysiNode* pScanNode) {
}

// this the tags and pseudo function columns, we only keep the tag columns
for(int32_t i = 0; i < numOfTags; ++i) {
for (int32_t i = 0; i < numOfTags; ++i) {
STargetNode* pNode = (STargetNode*)nodesListGetNode(pScanNode->pScanPseudoCols, i);

int32_t type = nodeType(pNode->pExpr);
@@ -4381,7 +3854,7 @@ int32_t generateGroupIdMap(STableListInfo* pTableListInfo, SReadHandle* pHandle,
int32_t groupNum = 0;
for (int32_t i = 0; i < taosArrayGetSize(pTableListInfo->pTableList); i++) {
STableKeyInfo* info = taosArrayGet(pTableListInfo->pTableList, i);
int32_t code = getGroupIdFromTagsVal(pHandle->meta, info->uid, group, keyBuf, &info->groupId);
int32_t code = getGroupIdFromTagsVal(pHandle->meta, info->uid, group, keyBuf, &info->groupId);
if (code != TSDB_CODE_SUCCESS) {
return code;
}
@@ -4416,7 +3889,7 @@ static int32_t initTableblockDistQueryCond(uint64_t uid, SQueryTableDataCond* pC

pCond->twindows = (STimeWindow){.skey = INT64_MIN, .ekey = INT64_MAX};
pCond->suid = uid;
pCond->type = BLOCK_LOAD_OFFSET_ORDER;
pCond->type = TIMEWINDOW_RANGE_CONTAINED;
pCond->startVersion = -1;
pCond->endVersion = -1;
@@ -4504,7 +3977,6 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo
|
|||
return createSysTableScanOperatorInfo(pHandle, pSysScanPhyNode, pUser, pTaskInfo);
|
||||
} else if (QUERY_NODE_PHYSICAL_PLAN_TAG_SCAN == type) {
|
||||
STagScanPhysiNode* pScanPhyNode = (STagScanPhysiNode*)pPhyNode;
|
||||
|
||||
int32_t code = getTableList(pHandle->meta, pHandle->vnode, pScanPhyNode, pTagCond, pTagIndexCond, pTableListInfo);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
pTaskInfo->code = terrno;
|
||||
|
@@ -4555,6 +4027,8 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo
|
|||
}
|
||||
|
||||
return createLastrowScanOperator(pScanNode, pHandle, pTaskInfo);
|
||||
} else if (QUERY_NODE_PHYSICAL_PLAN_PROJECT == type) {
|
||||
return createProjectOperatorInfo(NULL, (SProjectPhysiNode*)pPhyNode, pTaskInfo);
|
||||
} else {
|
||||
ASSERT(0);
|
||||
}
|
||||
|
@@ -4701,7 +4175,7 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo
} else if (QUERY_NODE_PHYSICAL_PLAN_STREAM_STATE == type) {
pOptr = createStreamStateAggOperatorInfo(ops[0], pPhyNode, pTaskInfo);
} else if (QUERY_NODE_PHYSICAL_PLAN_MERGE_JOIN == type) {
pOptr = createMergeJoinOperatorInfo(ops, size, (SJoinPhysiNode*)pPhyNode, pTaskInfo);
pOptr = createMergeJoinOperatorInfo(ops, size, (SSortMergeJoinPhysiNode*)pPhyNode, pTaskInfo);
} else if (QUERY_NODE_PHYSICAL_PLAN_FILL == type) {
pOptr = createFillOperatorInfo(ops[0], (SFillPhysiNode*)pPhyNode, pTaskInfo);
} else if (QUERY_NODE_PHYSICAL_PLAN_INDEF_ROWS_FUNC == type) {
@@ -28,30 +28,30 @@ static SSDataBlock* doMergeJoin(struct SOperatorInfo* pOperator);
|
|||
static void destroyMergeJoinOperator(void* param, int32_t numOfOutput);
|
||||
static void extractTimeCondition(SJoinOperatorInfo* Info, SLogicConditionNode* pLogicConditionNode);
|
||||
|
||||
SOperatorInfo* createMergeJoinOperatorInfo(SOperatorInfo** pDownstream, int32_t numOfDownstream, SJoinPhysiNode* pJoinNode,
|
||||
SExecTaskInfo* pTaskInfo) {
|
||||
SOperatorInfo* createMergeJoinOperatorInfo(SOperatorInfo** pDownstream, int32_t numOfDownstream,
|
||||
SSortMergeJoinPhysiNode* pJoinNode, SExecTaskInfo* pTaskInfo) {
|
||||
SJoinOperatorInfo* pInfo = taosMemoryCalloc(1, sizeof(SJoinOperatorInfo));
|
||||
SOperatorInfo* pOperator = taosMemoryCalloc(1, sizeof(SOperatorInfo));
|
||||
if (pOperator == NULL || pInfo == NULL) {
|
||||
goto _error;
|
||||
}
|
||||
|
||||
SSDataBlock* pResBlock = createResDataBlock(pJoinNode->node.pOutputDataBlockDesc);
|
||||
SSDataBlock* pResBlock = createResDataBlock(pJoinNode->node.pOutputDataBlockDesc);
|
||||
|
||||
int32_t numOfCols = 0;
|
||||
int32_t numOfCols = 0;
|
||||
SExprInfo* pExprInfo = createExprInfo(pJoinNode->pTargets, NULL, &numOfCols);
|
||||
|
||||
initResultSizeInfo(&pOperator->resultInfo, 4096);
|
||||
|
||||
pInfo->pRes = pResBlock;
|
||||
pOperator->name = "MergeJoinOperator";
|
||||
pInfo->pRes = pResBlock;
|
||||
pOperator->name = "MergeJoinOperator";
|
||||
pOperator->operatorType = QUERY_NODE_PHYSICAL_PLAN_MERGE_JOIN;
|
||||
pOperator->blocking = false;
|
||||
pOperator->status = OP_NOT_OPENED;
|
||||
pOperator->exprSupp.pExprInfo = pExprInfo;
|
||||
pOperator->exprSupp.numOfExprs = numOfCols;
|
||||
pOperator->info = pInfo;
|
||||
pOperator->pTaskInfo = pTaskInfo;
|
||||
pOperator->blocking = false;
|
||||
pOperator->status = OP_NOT_OPENED;
|
||||
pOperator->exprSupp.pExprInfo = pExprInfo;
|
||||
pOperator->exprSupp.numOfExprs = numOfCols;
|
||||
pOperator->info = pInfo;
|
||||
pOperator->pTaskInfo = pTaskInfo;
|
||||
|
||||
SNode* pMergeCondition = pJoinNode->pMergeCondition;
|
||||
if (nodeType(pMergeCondition) == QUERY_NODE_OPERATOR) {
|
||||
|
@@ -104,7 +104,7 @@ void setJoinColumnInfo(SColumnInfo* pColumn, const SColumnNode* pColumnNode) {
void destroyMergeJoinOperator(void* param, int32_t numOfOutput) {
SJoinOperatorInfo* pJoinOperator = (SJoinOperatorInfo*)param;
nodesDestroyNode(pJoinOperator->pCondAfterMerge);

taosMemoryFreeClear(param);
}
@@ -0,0 +1,604 @@
|
|||
/*
|
||||
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
|
||||
*
|
||||
* This program is free software: you can use, redistribute, and/or modify
|
||||
* it under the terms of the GNU Affero General Public License, version 3
|
||||
* or later ("AGPL"), as published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful, but WITHOUT
|
||||
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
|
||||
* FITNESS FOR A PARTICULAR PURPOSE.
|
||||
*
|
||||
* You should have received a copy of the GNU Affero General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
|
||||
#include "executorimpl.h"
|
||||
#include "functionMgt.h"
|
||||
|
||||
static SSDataBlock* doGenerateSourceData(SOperatorInfo* pOperator);
|
||||
static SSDataBlock* doProjectOperation(SOperatorInfo* pOperator);
|
||||
static SSDataBlock* doApplyIndefinitFunction(SOperatorInfo* pOperator);
|
||||
static SArray* setRowTsColumnOutputInfo(SqlFunctionCtx* pCtx, int32_t numOfCols);
|
||||
static void setFunctionResultOutput(SOperatorInfo* pOperator, SOptrBasicInfo* pInfo, SAggSupporter* pSup, int32_t stage,
|
||||
int32_t numOfExprs);
|
||||
|
||||
static void destroyProjectOperatorInfo(void* param, int32_t numOfOutput) {
|
||||
if (NULL == param) {
|
||||
return;
|
||||
}
|
||||
|
||||
SProjectOperatorInfo* pInfo = (SProjectOperatorInfo*)param;
|
||||
cleanupBasicInfo(&pInfo->binfo);
|
||||
cleanupAggSup(&pInfo->aggSup);
|
||||
taosArrayDestroy(pInfo->pPseudoColInfo);
|
||||
|
||||
blockDataDestroy(pInfo->pFinalRes);
|
||||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
static void destroyIndefinitOperatorInfo(void* param, int32_t numOfOutput) {
|
||||
SIndefOperatorInfo* pInfo = (SIndefOperatorInfo*)param;
|
||||
cleanupBasicInfo(&pInfo->binfo);
|
||||
|
||||
taosArrayDestroy(pInfo->pPseudoColInfo);
|
||||
cleanupAggSup(&pInfo->aggSup);
|
||||
cleanupExprSupp(&pInfo->scalarSup);
|
||||
|
||||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
SOperatorInfo* createProjectOperatorInfo(SOperatorInfo* downstream, SProjectPhysiNode* pProjPhyNode,
|
||||
SExecTaskInfo* pTaskInfo) {
|
||||
SProjectOperatorInfo* pInfo = taosMemoryCalloc(1, sizeof(SProjectOperatorInfo));
|
||||
SOperatorInfo* pOperator = taosMemoryCalloc(1, sizeof(SOperatorInfo));
|
||||
if (pInfo == NULL || pOperator == NULL) {
|
||||
goto _error;
|
||||
}
|
||||
|
||||
int32_t numOfCols = 0;
|
||||
SExprInfo* pExprInfo = createExprInfo(pProjPhyNode->pProjections, NULL, &numOfCols);
|
||||
|
||||
SSDataBlock* pResBlock = createResDataBlock(pProjPhyNode->node.pOutputDataBlockDesc);
|
||||
initLimitInfo(pProjPhyNode->node.pLimit, pProjPhyNode->node.pSlimit, &pInfo->limitInfo);
|
||||
|
||||
pInfo->binfo.pRes = pResBlock;
|
||||
pInfo->pFinalRes = createOneDataBlock(pResBlock, false);
|
||||
pInfo->pFilterNode = pProjPhyNode->node.pConditions;
|
||||
pInfo->mergeDataBlocks = pProjPhyNode->mergeDataBlock;
|
||||
|
||||
// todo remove it soon
|
||||
|
||||
if (pTaskInfo->execModel == OPTR_EXEC_MODEL_STREAM) {
|
||||
pInfo->mergeDataBlocks = false;
|
||||
}
|
||||
|
||||
|
||||
int32_t numOfRows = 4096;
|
||||
size_t keyBufSize = sizeof(int64_t) + sizeof(int64_t) + POINTER_BYTES;
|
||||
|
||||
// Make sure the size of SSDataBlock will never exceed the size of 2MB.
|
||||
int32_t TWOMB = 2 * 1024 * 1024;
|
||||
if (numOfRows * pResBlock->info.rowSize > TWOMB) {
|
||||
numOfRows = TWOMB / pResBlock->info.rowSize;
|
||||
}
|
||||
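The cap above just solves rows * rowSize <= 2MB for rows: with a 512-byte row the default 4096 rows needs exactly 2,097,152 bytes and is kept, while a 1,024-byte row is clamped to 2048 rows. A tiny worked version of that arithmetic:

```c
#include <stdio.h>

// Clamp the per-block row count so rows * rowSize never exceeds 2MB,
// mirroring the TWOMB check in createProjectOperatorInfo.
static int capRows(int rows, int rowSize) {
  const int TWOMB = 2 * 1024 * 1024;
  if (rows * rowSize > TWOMB) rows = TWOMB / rowSize;
  return rows;
}

int main(void) {
  printf("%d\n", capRows(4096, 512));   // 4096 (fits exactly in 2MB)
  printf("%d\n", capRows(4096, 1024));  // 2048 (clamped)
  return 0;
}
```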
initResultSizeInfo(&pOperator->resultInfo, numOfRows);
|
||||
|
||||
initAggInfo(&pOperator->exprSupp, &pInfo->aggSup, pExprInfo, numOfCols, keyBufSize, pTaskInfo->id.str);
|
||||
initBasicInfo(&pInfo->binfo, pResBlock);
|
||||
setFunctionResultOutput(pOperator, &pInfo->binfo, &pInfo->aggSup, MAIN_SCAN, numOfCols);
|
||||
|
||||
pInfo->pPseudoColInfo = setRowTsColumnOutputInfo(pOperator->exprSupp.pCtx, numOfCols);
|
||||
pOperator->name = "ProjectOperator";
|
||||
pOperator->operatorType = QUERY_NODE_PHYSICAL_PLAN_PROJECT;
|
||||
pOperator->blocking = false;
|
||||
pOperator->status = OP_NOT_OPENED;
|
||||
pOperator->info = pInfo;
|
||||
pOperator->pTaskInfo = pTaskInfo;
|
||||
|
||||
pOperator->fpSet = createOperatorFpSet(operatorDummyOpenFn, doProjectOperation, NULL, NULL,
|
||||
destroyProjectOperatorInfo, NULL, NULL, NULL);
|
||||
|
||||
int32_t code = appendDownstream(pOperator, &downstream, 1);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
goto _error;
|
||||
}
|
||||
|
||||
return pOperator;
|
||||
|
||||
_error:
|
||||
pTaskInfo->code = TSDB_CODE_OUT_OF_MEMORY;
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static int32_t discardGroupDataBlock(SSDataBlock* pBlock, SLimitInfo* pLimitInfo) {
|
||||
if (pLimitInfo->remainGroupOffset > 0) {
|
||||
// it is the first group
|
||||
if (pLimitInfo->currentGroupId == 0 || pLimitInfo->currentGroupId == pBlock->info.groupId) {
|
||||
pLimitInfo->currentGroupId = pBlock->info.groupId;
|
||||
return PROJECT_RETRIEVE_CONTINUE;
|
||||
} else if (pLimitInfo->currentGroupId != pBlock->info.groupId) {
|
||||
// now it is the data from a new group
|
||||
pLimitInfo->remainGroupOffset -= 1;
|
||||
pLimitInfo->currentGroupId = pBlock->info.groupId;
|
||||
|
||||
// ignore data block in current group
|
||||
if (pLimitInfo->remainGroupOffset > 0) {
|
||||
return PROJECT_RETRIEVE_CONTINUE;
|
||||
}
|
||||
}
|
||||
|
||||
// set current group id of the project operator
|
||||
pLimitInfo->currentGroupId = pBlock->info.groupId;
|
||||
}
|
||||
|
||||
return PROJECT_RETRIEVE_DONE;
|
||||
}
|
||||
|
||||
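discardGroupDataBlock above implements the group offset of SLIMIT: blocks keep being discarded until remainGroupOffset distinct group ids have gone by. A stripped-down model of that state machine, using plain integers instead of SSDataBlock and SLimitInfo, might look like this:

```c
#include <stdio.h>
#include <stdint.h>

typedef struct {
  int64_t  remainGroupOffset;  // groups still to skip
  uint64_t currentGroupId;     // 0 means "no group seen yet"
} GroupSkip;

// Returns 1 if the block belonging to `groupId` should be discarded.
static int discardBlock(GroupSkip *s, uint64_t groupId) {
  if (s->remainGroupOffset <= 0) return 0;  // nothing left to skip
  if (s->currentGroupId == 0 || s->currentGroupId == groupId) {
    s->currentGroupId = groupId;            // still inside the first skipped group
    return 1;
  }
  // a new group starts: one fewer group to skip
  s->remainGroupOffset -= 1;
  s->currentGroupId = groupId;
  return s->remainGroupOffset > 0;
}

int main(void) {
  GroupSkip s = { .remainGroupOffset = 1, .currentGroupId = 0 };
  uint64_t blocks[] = {7, 7, 9, 9};  // two blocks of group 7, then group 9
  for (int i = 0; i < 4; ++i)
    printf("group %llu -> %s\n", (unsigned long long)blocks[i],
           discardBlock(&s, blocks[i]) ? "skip" : "keep");
  return 0;
}
```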
static int32_t setInfoForNewGroup(SSDataBlock* pBlock, SLimitInfo* pLimitInfo, SOperatorInfo* pOperator) {
|
||||
// remainGroupOffset == 0
|
||||
// here check for a new group data, we need to handle the data of the previous group.
|
||||
ASSERT(pLimitInfo->remainGroupOffset == 0 || pLimitInfo->remainGroupOffset == -1);
|
||||
|
||||
if (pLimitInfo->currentGroupId != 0 && pLimitInfo->currentGroupId != pBlock->info.groupId) {
|
||||
pLimitInfo->numOfOutputGroups += 1;
|
||||
if ((pLimitInfo->slimit.limit > 0) && (pLimitInfo->slimit.limit <= pLimitInfo->numOfOutputGroups)) {
|
||||
doSetOperatorCompleted(pOperator);
|
||||
return PROJECT_RETRIEVE_DONE;
|
||||
}
|
||||
|
||||
// reset the value for a new group data
|
||||
// existing rows that belongs to previous group.
|
||||
pLimitInfo->numOfOutputRows = 0;
|
||||
pLimitInfo->remainOffset = pLimitInfo->limit.offset;
|
||||
}
|
||||
|
||||
return PROJECT_RETRIEVE_DONE;
|
||||
}
|
||||
|
||||
static int32_t doIngroupLimitOffset(SLimitInfo* pLimitInfo, uint64_t groupId, SSDataBlock* pBlock, SOperatorInfo* pOperator) {
|
||||
// set current group id
|
||||
pLimitInfo->currentGroupId = groupId;
|
||||
|
||||
if (pLimitInfo->remainOffset >= pBlock->info.rows) {
|
||||
pLimitInfo->remainOffset -= pBlock->info.rows;
|
||||
blockDataCleanup(pBlock);
|
||||
return PROJECT_RETRIEVE_CONTINUE;
|
||||
} else if (pLimitInfo->remainOffset < pBlock->info.rows && pLimitInfo->remainOffset > 0) {
|
||||
blockDataTrimFirstNRows(pBlock, pLimitInfo->remainOffset);
|
||||
pLimitInfo->remainOffset = 0;
|
||||
}
|
||||
|
||||
// check for the limitation in each group
|
||||
if (pLimitInfo->limit.limit >= 0 &&
|
||||
pLimitInfo->numOfOutputRows + pBlock->info.rows >= pLimitInfo->limit.limit) {
|
||||
int32_t keepRows = (int32_t)(pLimitInfo->limit.limit - pLimitInfo->numOfOutputRows);
|
||||
blockDataKeepFirstNRows(pBlock, keepRows);
|
||||
if (pLimitInfo->slimit.limit > 0 && pLimitInfo->slimit.limit <= pLimitInfo->numOfOutputGroups) {
|
||||
doSetOperatorCompleted(pOperator);
|
||||
}
|
||||
}
|
||||
|
||||
pLimitInfo->numOfOutputRows += pBlock->info.rows;
|
||||
return PROJECT_RETRIEVE_DONE;
|
||||
}
|
||||
|
||||
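doIngroupLimitOffset applies LIMIT/OFFSET inside a single group: whole blocks are dropped while the remaining offset exceeds the block, a partially covered block is trimmed at the front, and once the running output count reaches the limit the tail of the block is cut off. The arithmetic, without SSDataBlock and only as a sketch of the same logic, is:

```c
#include <stdio.h>
#include <stdint.h>

typedef struct { int64_t remainOffset, limit, numOfOutputRows; } LimitState;

// Returns how many of `rows` incoming rows survive OFFSET/LIMIT, and reports
// how many rows to trim from the front of the block first.
static int64_t applyLimitOffset(LimitState *st, int64_t rows, int64_t *trimFront) {
  *trimFront = 0;
  if (st->remainOffset >= rows) {          // drop the whole block
    st->remainOffset -= rows;
    return 0;
  }
  if (st->remainOffset > 0) {              // trim the first remainOffset rows
    *trimFront = st->remainOffset;
    rows -= st->remainOffset;
    st->remainOffset = 0;
  }
  if (st->limit >= 0 && st->numOfOutputRows + rows > st->limit) {
    rows = st->limit - st->numOfOutputRows; // keep only up to the limit
  }
  st->numOfOutputRows += rows;
  return rows;
}

int main(void) {
  LimitState st = { .remainOffset = 5, .limit = 7, .numOfOutputRows = 0 };
  int64_t trim, blocks[] = {4, 4, 4};  // OFFSET 5 LIMIT 7 over 3 blocks of 4 rows
  for (int i = 0; i < 3; ++i) {
    int64_t keep = applyLimitOffset(&st, blocks[i], &trim);
    printf("block %d: trim %lld, keep %lld\n", i, (long long)trim, (long long)keep);
  }
  return 0;
}
```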
void printDataBlock1(SSDataBlock* pBlock, const char* flag) {
|
||||
if (!pBlock || pBlock->info.rows == 0) {
|
||||
qDebug("===stream===printDataBlock: Block is Null or Empty");
|
||||
return;
|
||||
}
|
||||
char* pBuf = NULL;
|
||||
qDebug("%s", dumpBlockData(pBlock, flag, &pBuf));
|
||||
taosMemoryFreeClear(pBuf);
|
||||
}
|
||||
|
||||
SSDataBlock* doProjectOperation(SOperatorInfo* pOperator) {
|
||||
SProjectOperatorInfo* pProjectInfo = pOperator->info;
|
||||
SOptrBasicInfo* pInfo = &pProjectInfo->binfo;
|
||||
|
||||
SExprSupp* pSup = &pOperator->exprSupp;
|
||||
SSDataBlock* pRes = pInfo->pRes;
|
||||
SSDataBlock* pFinalRes = pProjectInfo->pFinalRes;
|
||||
|
||||
blockDataCleanup(pFinalRes);
|
||||
|
||||
SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
|
||||
if (pOperator->status == OP_EXEC_DONE) {
|
||||
if (pTaskInfo->execModel == OPTR_EXEC_MODEL_QUEUE) {
|
||||
pOperator->status = OP_OPENED;
|
||||
return NULL;
|
||||
}
|
||||
|
||||
return NULL;
|
||||
}
|
||||
|
||||
int64_t st = 0;
|
||||
int32_t order = 0;
|
||||
int32_t scanFlag = 0;
|
||||
|
||||
if (pOperator->cost.openCost == 0) {
|
||||
st = taosGetTimestampUs();
|
||||
}
|
||||
|
||||
SOperatorInfo* downstream = pOperator->pDownstream[0];
|
||||
SLimitInfo* pLimitInfo = &pProjectInfo->limitInfo;
|
||||
|
||||
if (downstream == NULL) {
|
||||
return doGenerateSourceData(pOperator);
|
||||
}
|
||||
|
||||
while (1) {
|
||||
while (1) {
|
||||
blockDataCleanup(pRes);
|
||||
|
||||
// The downstream exec may change the value of the newgroup, so use a local variable instead.
|
||||
SSDataBlock* pBlock = downstream->fpSet.getNextFn(downstream);
|
||||
if (pBlock == NULL) {
|
||||
doSetOperatorCompleted(pOperator);
|
||||
break;
|
||||
}
|
||||
|
||||
// for stream interval
|
||||
if (pBlock->info.type == STREAM_RETRIEVE) {
|
||||
// printDataBlock1(pBlock, "project1");
|
||||
return pBlock;
|
||||
}
|
||||
|
||||
int32_t status = discardGroupDataBlock(pBlock, pLimitInfo);
|
||||
if (status == PROJECT_RETRIEVE_CONTINUE) {
|
||||
continue;
|
||||
}
|
||||
|
||||
setInfoForNewGroup(pBlock, pLimitInfo, pOperator);
|
||||
if (pOperator->status == OP_EXEC_DONE) {
|
||||
break;
|
||||
}
|
||||
|
||||
// the pDataBlock are always the same one, no need to call this again
|
||||
int32_t code = getTableScanInfo(downstream, &order, &scanFlag);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
longjmp(pTaskInfo->env, code);
|
||||
}
|
||||
|
||||
setInputDataBlock(pOperator, pSup->pCtx, pBlock, order, scanFlag, false);
|
||||
blockDataEnsureCapacity(pInfo->pRes, pInfo->pRes->info.rows + pBlock->info.rows);
|
||||
|
||||
code = projectApplyFunctions(pSup->pExprInfo, pInfo->pRes, pBlock, pSup->pCtx, pSup->numOfExprs,
|
||||
pProjectInfo->pPseudoColInfo);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
longjmp(pTaskInfo->env, code);
|
||||
}
|
||||
|
||||
status = doIngroupLimitOffset(pLimitInfo, pBlock->info.groupId, pInfo->pRes, pOperator);
|
||||
if (status == PROJECT_RETRIEVE_CONTINUE) {
|
||||
continue;
|
||||
}
|
||||
|
||||
break;
|
||||
}
|
||||
|
||||
if (pProjectInfo->mergeDataBlocks) {
|
||||
if (pRes->info.rows > 0) {
|
||||
pFinalRes->info.groupId = pRes->info.groupId;
|
||||
pFinalRes->info.version = pRes->info.version;
|
||||
|
||||
// continue merge data, ignore the group id
|
||||
blockDataMerge(pFinalRes, pRes);
|
||||
if (pFinalRes->info.rows + pRes->info.rows <= pOperator->resultInfo.threshold) {
|
||||
continue;
|
||||
}
|
||||
}
|
||||
|
||||
// do apply filter
|
||||
doFilter(pProjectInfo->pFilterNode, pFinalRes, NULL);
|
||||
if (pFinalRes->info.rows > 0 || pRes->info.rows == 0) {
|
||||
break;
|
||||
}
|
||||
} else {
|
||||
// do apply filter
|
||||
if (pRes->info.rows > 0) {
|
||||
doFilter(pProjectInfo->pFilterNode, pRes, NULL);
|
||||
if (pRes->info.rows == 0) {
|
||||
continue;
|
||||
}
|
||||
}
|
||||
|
||||
// no results generated
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
SSDataBlock* p = pProjectInfo->mergeDataBlocks ? pFinalRes : pRes;
|
||||
pOperator->resultInfo.totalRows += p->info.rows;
|
||||
|
||||
if (pOperator->cost.openCost == 0) {
|
||||
pOperator->cost.openCost = (taosGetTimestampUs() - st) / 1000.0;
|
||||
}
|
||||
|
||||
// printDataBlock1(p, "project");
|
||||
return (p->info.rows > 0) ? p : NULL;
|
||||
}
|
||||
|
||||
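When mergeDataBlocks is enabled, doProjectOperation keeps folding small result blocks into pFinalRes and only returns once the accumulated row count passes resultInfo.threshold or the input is exhausted. A toy version of that batching loop, with a fake downstream instead of the operator tree:

```c
#include <stdio.h>

#define THRESHOLD 10

// Pretend downstream produces blocks of these sizes, 0 meaning "no more data".
static int nextBlockRows(void) {
  static int blocks[] = {3, 4, 2, 5, 0};
  static int idx = 0;
  return blocks[idx++];
}

int main(void) {
  int accumulated = 0;
  for (;;) {
    int rows = nextBlockRows();
    if (rows == 0) break;          // downstream exhausted
    accumulated += rows;           // merge the small block into the final block
    if (accumulated >= THRESHOLD) {
      printf("emit %d rows\n", accumulated);
      accumulated = 0;
    }
  }
  if (accumulated > 0) printf("emit %d rows\n", accumulated);  // flush the tail
  return 0;
}
```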
SOperatorInfo* createIndefinitOutputOperatorInfo(SOperatorInfo* downstream, SPhysiNode* pNode,
|
||||
SExecTaskInfo* pTaskInfo) {
|
||||
SIndefOperatorInfo* pInfo = taosMemoryCalloc(1, sizeof(SIndefOperatorInfo));
|
||||
SOperatorInfo* pOperator = taosMemoryCalloc(1, sizeof(SOperatorInfo));
|
||||
if (pInfo == NULL || pOperator == NULL) {
|
||||
goto _error;
|
||||
}
|
||||
|
||||
SExprSupp* pSup = &pOperator->exprSupp;
|
||||
|
||||
SIndefRowsFuncPhysiNode* pPhyNode = (SIndefRowsFuncPhysiNode*)pNode;
|
||||
|
||||
int32_t numOfExpr = 0;
|
||||
SExprInfo* pExprInfo = createExprInfo(pPhyNode->pFuncs, NULL, &numOfExpr);
|
||||
|
||||
if (pPhyNode->pExprs != NULL) {
|
||||
int32_t num = 0;
|
||||
SExprInfo* pSExpr = createExprInfo(pPhyNode->pExprs, NULL, &num);
|
||||
int32_t code = initExprSupp(&pInfo->scalarSup, pSExpr, num);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
goto _error;
|
||||
}
|
||||
}
|
||||
|
||||
SSDataBlock* pResBlock = createResDataBlock(pPhyNode->node.pOutputDataBlockDesc);
|
||||
|
||||
int32_t numOfRows = 4096;
|
||||
size_t keyBufSize = sizeof(int64_t) + sizeof(int64_t) + POINTER_BYTES;
|
  // Make sure the size of SSDataBlock will never exceed the size of 2MB.
  int32_t TWOMB = 2 * 1024 * 1024;
  if (numOfRows * pResBlock->info.rowSize > TWOMB) {
    numOfRows = TWOMB / pResBlock->info.rowSize;
  }

  initResultSizeInfo(&pOperator->resultInfo, numOfRows);

  initAggInfo(pSup, &pInfo->aggSup, pExprInfo, numOfExpr, keyBufSize, pTaskInfo->id.str);
  initBasicInfo(&pInfo->binfo, pResBlock);

  setFunctionResultOutput(pOperator, &pInfo->binfo, &pInfo->aggSup, MAIN_SCAN, numOfExpr);

  pInfo->binfo.pRes = pResBlock;
  pInfo->pCondition = pPhyNode->node.pConditions;
  pInfo->pPseudoColInfo = setRowTsColumnOutputInfo(pSup->pCtx, numOfExpr);

  pOperator->name = "IndefinitOperator";
  pOperator->operatorType = QUERY_NODE_PHYSICAL_PLAN_INDEF_ROWS_FUNC;
  pOperator->blocking = false;
  pOperator->status = OP_NOT_OPENED;
  pOperator->info = pInfo;
  pOperator->pTaskInfo = pTaskInfo;

  pOperator->fpSet = createOperatorFpSet(operatorDummyOpenFn, doApplyIndefinitFunction, NULL, NULL,
                                         destroyIndefinitOperatorInfo, NULL, NULL, NULL);

  int32_t code = appendDownstream(pOperator, &downstream, 1);
  if (code != TSDB_CODE_SUCCESS) {
    goto _error;
  }

  return pOperator;

_error:
  taosMemoryFree(pInfo);
  taosMemoryFree(pOperator);
  pTaskInfo->code = TSDB_CODE_OUT_OF_MEMORY;
  return NULL;
}

static void doHandleDataBlock(SOperatorInfo* pOperator, SSDataBlock* pBlock, SOperatorInfo* downstream,
                              SExecTaskInfo* pTaskInfo) {
  int32_t order = 0;
  int32_t scanFlag = 0;

  SIndefOperatorInfo* pIndefInfo = pOperator->info;
  SOptrBasicInfo*     pInfo = &pIndefInfo->binfo;
  SExprSupp*          pSup = &pOperator->exprSupp;

  // the pDataBlock are always the same one, no need to call this again
  int32_t code = getTableScanInfo(downstream, &order, &scanFlag);
  if (code != TSDB_CODE_SUCCESS) {
    longjmp(pTaskInfo->env, code);
  }

  // there is an scalar expression that needs to be calculated before apply the group aggregation.
  SExprSupp* pScalarSup = &pIndefInfo->scalarSup;
  if (pScalarSup->pExprInfo != NULL) {
    code = projectApplyFunctions(pScalarSup->pExprInfo, pBlock, pBlock, pScalarSup->pCtx, pScalarSup->numOfExprs,
                                 pIndefInfo->pPseudoColInfo);
    if (code != TSDB_CODE_SUCCESS) {
      longjmp(pTaskInfo->env, code);
    }
  }

  setInputDataBlock(pOperator, pSup->pCtx, pBlock, order, scanFlag, false);
  blockDataEnsureCapacity(pInfo->pRes, pInfo->pRes->info.rows + pBlock->info.rows);

  code = projectApplyFunctions(pSup->pExprInfo, pInfo->pRes, pBlock, pSup->pCtx, pSup->numOfExprs,
                               pIndefInfo->pPseudoColInfo);
  if (code != TSDB_CODE_SUCCESS) {
    longjmp(pTaskInfo->env, code);
  }
}

SSDataBlock* doApplyIndefinitFunction(SOperatorInfo* pOperator) {
  SIndefOperatorInfo* pIndefInfo = pOperator->info;
  SOptrBasicInfo*     pInfo = &pIndefInfo->binfo;
  SExprSupp*          pSup = &pOperator->exprSupp;

  SSDataBlock* pRes = pInfo->pRes;
  blockDataCleanup(pRes);

  SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
  if (pOperator->status == OP_EXEC_DONE) {
    return NULL;
  }

  int64_t st = 0;

  if (pOperator->cost.openCost == 0) {
    st = taosGetTimestampUs();
  }

  SOperatorInfo* downstream = pOperator->pDownstream[0];

  while (1) {
    // here we need to handle the existsed group results
    if (pIndefInfo->pNextGroupRes != NULL) {  // todo extract method
      for (int32_t k = 0; k < pSup->numOfExprs; ++k) {
        SqlFunctionCtx* pCtx = &pSup->pCtx[k];

        SResultRowEntryInfo* pResInfo = GET_RES_INFO(pCtx);
        pResInfo->initialized = false;
        pCtx->pOutput = NULL;
      }

      doHandleDataBlock(pOperator, pIndefInfo->pNextGroupRes, downstream, pTaskInfo);
      pIndefInfo->pNextGroupRes = NULL;
    }

    if (pInfo->pRes->info.rows < pOperator->resultInfo.threshold) {
      while (1) {
        // The downstream exec may change the value of the newgroup, so use a local variable instead.
        SSDataBlock* pBlock = downstream->fpSet.getNextFn(downstream);
        if (pBlock == NULL) {
          doSetOperatorCompleted(pOperator);
          break;
        }

        if (pIndefInfo->groupId == 0 && pBlock->info.groupId != 0) {
          pIndefInfo->groupId = pBlock->info.groupId;  // this is the initial group result
        } else {
          if (pIndefInfo->groupId != pBlock->info.groupId) {  // reset output buffer and computing status
            pIndefInfo->groupId = pBlock->info.groupId;
            pIndefInfo->pNextGroupRes = pBlock;
            break;
          }
        }

        doHandleDataBlock(pOperator, pBlock, downstream, pTaskInfo);
        if (pInfo->pRes->info.rows >= pOperator->resultInfo.threshold) {
          break;
        }
      }
    }

    doFilter(pIndefInfo->pCondition, pInfo->pRes, NULL);
    size_t rows = pInfo->pRes->info.rows;
    if (rows > 0 || pOperator->status == OP_EXEC_DONE) {
      break;
    } else {
      blockDataCleanup(pInfo->pRes);
    }
  }

  size_t rows = pInfo->pRes->info.rows;
  pOperator->resultInfo.totalRows += rows;

  if (pOperator->cost.openCost == 0) {
    pOperator->cost.openCost = (taosGetTimestampUs() - st) / 1000.0;
  }

  return (rows > 0) ? pInfo->pRes : NULL;
}

void initCtxOutputBuffer(SqlFunctionCtx* pCtx, int32_t size) {
  for (int32_t j = 0; j < size; ++j) {
    struct SResultRowEntryInfo* pResInfo = GET_RES_INFO(&pCtx[j]);
    if (isRowEntryInitialized(pResInfo) || fmIsPseudoColumnFunc(pCtx[j].functionId) || pCtx[j].functionId == -1 ||
        fmIsScalarFunc(pCtx[j].functionId)) {
      continue;
    }

    pCtx[j].fpSet.init(&pCtx[j], pCtx[j].resultInfo);
  }
}

/*
 * The start of each column SResultRowEntryInfo is denote by RowCellInfoOffset.
 * Note that in case of top/bottom query, the whole multiple rows of result is treated as only one row of results.
 * +------------+-----------------result column 1------------+------------------result column 2-----------+
 * | SResultRow | SResultRowEntryInfo | intermediate buffer1 | SResultRowEntryInfo | intermediate buffer 2|
 * +------------+--------------------------------------------+--------------------------------------------+
 *           offset[0]                                  offset[1]                                  offset[2]
 */
// TODO refactor: some function move away
void setFunctionResultOutput(SOperatorInfo* pOperator, SOptrBasicInfo* pInfo, SAggSupporter* pSup, int32_t stage,
                             int32_t numOfExprs) {
  SExecTaskInfo*  pTaskInfo = pOperator->pTaskInfo;
  SqlFunctionCtx* pCtx = pOperator->exprSupp.pCtx;
  int32_t*        rowEntryInfoOffset = pOperator->exprSupp.rowEntryInfoOffset;

  SResultRowInfo* pResultRowInfo = &pInfo->resultRowInfo;
  initResultRowInfo(pResultRowInfo);

  int64_t     tid = 0;
  int64_t     groupId = 0;
  SResultRow* pRow = doSetResultOutBufByKey(pSup->pResultBuf, pResultRowInfo, (char*)&tid, sizeof(tid), true, groupId,
                                            pTaskInfo, false, pSup);

  for (int32_t i = 0; i < numOfExprs; ++i) {
    struct SResultRowEntryInfo* pEntry = getResultEntryInfo(pRow, i, rowEntryInfoOffset);
    cleanupResultRowEntry(pEntry);

    pCtx[i].resultInfo = pEntry;
    pCtx[i].scanFlag = stage;
  }

  initCtxOutputBuffer(pCtx, numOfExprs);
}

SArray* setRowTsColumnOutputInfo(SqlFunctionCtx* pCtx, int32_t numOfCols) {
  SArray* pList = taosArrayInit(4, sizeof(int32_t));
  for (int32_t i = 0; i < numOfCols; ++i) {
    if (fmIsPseudoColumnFunc(pCtx[i].functionId)) {
      taosArrayPush(pList, &i);
    }
  }

  return pList;
}

SSDataBlock* doGenerateSourceData(SOperatorInfo* pOperator) {
  SProjectOperatorInfo* pProjectInfo = pOperator->info;

  SExprSupp*   pSup = &pOperator->exprSupp;
  SSDataBlock* pRes = pProjectInfo->binfo.pRes;

  blockDataEnsureCapacity(pRes, pOperator->resultInfo.capacity);
  SExprInfo* pExpr = pSup->pExprInfo;

  int64_t st = taosGetTimestampUs();

  for (int32_t k = 0; k < pSup->numOfExprs; ++k) {
    int32_t outputSlotId = pExpr[k].base.resSchema.slotId;

    ASSERT(pExpr[k].pExpr->nodeType == QUERY_NODE_VALUE);
    SColumnInfoData* pColInfoData = taosArrayGet(pRes->pDataBlock, outputSlotId);

    int32_t type = pExpr[k].base.pParam[0].param.nType;
    if (TSDB_DATA_TYPE_NULL == type) {
      colDataAppendNNULL(pColInfoData, 0, 1);
    } else {
      colDataAppend(pColInfoData, 0, taosVariantGet(&pExpr[k].base.pParam[0].param, type), false);
    }
  }

  pRes->info.rows = 1;
  doFilter(pProjectInfo->pFilterNode, pRes, NULL);

  /*int32_t status = */ doIngroupLimitOffset(&pProjectInfo->limitInfo, 0, pRes, pOperator);

  pOperator->resultInfo.totalRows += pRes->info.rows;

  doSetOperatorCompleted(pOperator);
  if (pOperator->cost.openCost == 0) {
    pOperator->cost.openCost = (taosGetTimestampUs() - st) / 1000.0;
  }

  return (pRes->info.rows > 0) ? pRes : NULL;
}

@@ -274,7 +274,7 @@ static int32_t loadDataBlock(SOperatorInfo* pOperator, STableScanInfo* pTableSca
    qDebug("%s data block filter out, brange:%" PRId64 "-%" PRId64 ", rows:%d", GET_TASKID(pTaskInfo),
           pBlockInfo->window.skey, pBlockInfo->window.ekey, pBlockInfo->rows);
  } else {
    qDebug("%s data block filter out, elapsed time:%"PRId64, GET_TASKID(pTaskInfo), (et - st));
    qDebug("%s data block filter out, elapsed time:%" PRId64, GET_TASKID(pTaskInfo), (et - st));
  }

  return TSDB_CODE_SUCCESS;

@@ -1838,11 +1838,14 @@ static SSDataBlock* sysTableScanUserTags(SOperatorInfo* pOperator) {
      int8_t tagType = smr.me.stbEntry.schemaTag.pSchema[i].type;
      pColInfoData = taosArrayGet(p->pDataBlock, 4);
      char tagTypeStr[VARSTR_HEADER_SIZE + 32];
      int tagTypeLen = sprintf(varDataVal(tagTypeStr), "%s", tDataTypes[tagType].name);
      int tagTypeLen = sprintf(varDataVal(tagTypeStr), "%s", tDataTypes[tagType].name);
      if (tagType == TSDB_DATA_TYPE_VARCHAR) {
        tagTypeLen += sprintf(varDataVal(tagTypeStr) + tagTypeLen, "(%d)", (int32_t)(smr.me.stbEntry.schemaTag.pSchema[i].bytes - VARSTR_HEADER_SIZE));
        tagTypeLen += sprintf(varDataVal(tagTypeStr) + tagTypeLen, "(%d)",
                              (int32_t)(smr.me.stbEntry.schemaTag.pSchema[i].bytes - VARSTR_HEADER_SIZE));
      } else if (tagType == TSDB_DATA_TYPE_NCHAR) {
        tagTypeLen += sprintf(varDataVal(tagTypeStr) + tagTypeLen, "(%d)", (int32_t)((smr.me.stbEntry.schemaTag.pSchema[i].bytes - VARSTR_HEADER_SIZE) / TSDB_NCHAR_SIZE));
        tagTypeLen +=
            sprintf(varDataVal(tagTypeStr) + tagTypeLen, "(%d)",
                    (int32_t)((smr.me.stbEntry.schemaTag.pSchema[i].bytes - VARSTR_HEADER_SIZE) / TSDB_NCHAR_SIZE));
      }
      varDataSetLen(tagTypeStr, tagTypeLen);
      colDataAppend(pColInfoData, numOfRows, (char*)tagTypeStr, false);

@ -2527,49 +2530,6 @@ _error:
|
|||
return NULL;
|
||||
}
|
||||
|
||||
typedef struct STableMergeScanInfo {
|
||||
STableListInfo* tableListInfo;
|
||||
int32_t tableStartIndex;
|
||||
int32_t tableEndIndex;
|
||||
bool hasGroupId;
|
||||
uint64_t groupId;
|
||||
SArray* dataReaders; // array of tsdbReaderT*
|
||||
SReadHandle readHandle;
|
||||
int32_t bufPageSize;
|
||||
uint32_t sortBufSize; // max buffer size for in-memory sort
|
||||
SArray* pSortInfo;
|
||||
SSortHandle* pSortHandle;
|
||||
|
||||
SSDataBlock* pSortInputBlock;
|
||||
int64_t startTs; // sort start time
|
||||
SArray* sortSourceParams;
|
||||
|
||||
SFileBlockLoadRecorder readRecorder;
|
||||
int64_t numOfRows;
|
||||
SScanInfo scanInfo;
|
||||
int32_t scanTimes;
|
||||
SNode* pFilterNode; // filter info, which is push down by optimizer
|
||||
SqlFunctionCtx* pCtx; // which belongs to the direct upstream operator operator query context
|
||||
SResultRowInfo* pResultRowInfo;
|
||||
int32_t* rowEntryInfoOffset;
|
||||
SExprInfo* pExpr;
|
||||
SSDataBlock* pResBlock;
|
||||
SArray* pColMatchInfo;
|
||||
int32_t numOfOutput;
|
||||
|
||||
SExprInfo* pPseudoExpr;
|
||||
int32_t numOfPseudoExpr;
|
||||
SqlFunctionCtx* pPseudoCtx;
|
||||
|
||||
SQueryTableDataCond cond;
|
||||
int32_t scanFlag; // table scan flag to denote if it is a repeat/reverse/main scan
|
||||
int32_t dataBlockLoadFlag;
|
||||
// if the upstream is an interval operator, the interval info is also kept here to get the time
|
||||
// window to check if current data block needs to be loaded.
|
||||
SInterval interval;
|
||||
SSampleExecInfo sample; // sample execution info
|
||||
} STableMergeScanInfo;
|
||||
|
||||
int32_t createScanTableListInfo(SScanPhysiNode* pScanNode, SNodeList* pGroupTags, bool groupSort, SReadHandle* pHandle,
|
||||
STableListInfo* pTableListInfo, SNode* pTagCond, SNode* pTagIndexCond,
|
||||
const char* idStr) {
|
||||
|
@ -2700,9 +2660,9 @@ static int32_t loadDataBlockFromOneTable(SOperatorInfo* pOperator, STableMergeSc
|
|||
relocateColumnData(pBlock, pTableScanInfo->pColMatchInfo, pCols, true);
|
||||
|
||||
// currently only the tbname pseudo column
|
||||
if (pTableScanInfo->numOfPseudoExpr > 0) {
|
||||
int32_t code = addTagPseudoColumnData(&pTableScanInfo->readHandle, pTableScanInfo->pPseudoExpr,
|
||||
pTableScanInfo->numOfPseudoExpr, pBlock, GET_TASKID(pTaskInfo));
|
||||
if (pTableScanInfo->pseudoSup.numOfExprs > 0) {
|
||||
int32_t code = addTagPseudoColumnData(&pTableScanInfo->readHandle, pTableScanInfo->pseudoSup.pExprInfo,
|
||||
pTableScanInfo->pseudoSup.numOfExprs, pBlock, GET_TASKID(pTaskInfo));
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
longjmp(pTaskInfo->env, code);
|
||||
}
|
||||
|
@ -2869,29 +2829,38 @@ int32_t stopGroupTableMergeScan(SOperatorInfo* pOperator) {
|
|||
STableMergeScanInfo* pInfo = pOperator->info;
|
||||
SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
|
||||
|
||||
tsortDestroySortHandle(pInfo->pSortHandle);
|
||||
size_t numReaders = taosArrayGetSize(pInfo->dataReaders);
|
||||
|
||||
SSortExecInfo sortExecInfo = tsortGetSortExecInfo(pInfo->pSortHandle);
|
||||
pInfo->sortExecInfo.sortMethod = sortExecInfo.sortMethod;
|
||||
pInfo->sortExecInfo.sortBuffer = sortExecInfo.sortBuffer;
|
||||
pInfo->sortExecInfo.loops += sortExecInfo.loops;
|
||||
pInfo->sortExecInfo.readBytes += sortExecInfo.readBytes;
|
||||
pInfo->sortExecInfo.writeBytes += sortExecInfo.writeBytes;
|
||||
|
||||
for (int32_t i = 0; i < numReaders; ++i) {
|
||||
STableMergeScanSortSourceParam* param = taosArrayGet(pInfo->sortSourceParams, i);
|
||||
blockDataDestroy(param->inputBlock);
|
||||
}
|
||||
taosArrayClear(pInfo->sortSourceParams);
|
||||
|
||||
for (int32_t i = 0; i < taosArrayGetSize(pInfo->dataReaders); ++i) {
|
||||
tsortDestroySortHandle(pInfo->pSortHandle);
|
||||
|
||||
for (int32_t i = 0; i < numReaders; ++i) {
|
||||
STsdbReader* reader = taosArrayGetP(pInfo->dataReaders, i);
|
||||
tsdbReaderClose(reader);
|
||||
}
|
||||
|
||||
taosArrayDestroy(pInfo->dataReaders);
|
||||
pInfo->dataReaders = NULL;
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
SSDataBlock* getSortedTableMergeScanBlockData(SSortHandle* pHandle, int32_t capacity, SOperatorInfo* pOperator) {
|
||||
SSDataBlock* getSortedTableMergeScanBlockData(SSortHandle* pHandle, SSDataBlock* pResBlock, int32_t capacity, SOperatorInfo* pOperator) {
|
||||
STableMergeScanInfo* pInfo = pOperator->info;
|
||||
SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
|
||||
|
||||
SSDataBlock* p = tsortGetSortedDataBlock(pHandle);
|
||||
if (p == NULL) {
|
||||
return NULL;
|
||||
}
|
||||
|
||||
blockDataEnsureCapacity(p, capacity);
|
||||
blockDataCleanup(pResBlock);
|
||||
blockDataEnsureCapacity(pResBlock, capacity);
|
||||
|
||||
while (1) {
|
||||
STupleHandle* pTupleHandle = tsortNextTuple(pHandle);
|
||||
|
@ -2899,14 +2868,15 @@ SSDataBlock* getSortedTableMergeScanBlockData(SSortHandle* pHandle, int32_t capa
|
|||
break;
|
||||
}
|
||||
|
||||
appendOneRowToDataBlock(p, pTupleHandle);
|
||||
if (p->info.rows >= capacity) {
|
||||
appendOneRowToDataBlock(pResBlock, pTupleHandle);
|
||||
if (pResBlock->info.rows >= capacity) {
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
qDebug("%s get sorted row blocks, rows:%d", GET_TASKID(pTaskInfo), p->info.rows);
|
||||
return (p->info.rows > 0) ? p : NULL;
|
||||
|
||||
qDebug("%s get sorted row blocks, rows:%d", GET_TASKID(pTaskInfo), pResBlock->info.rows);
|
||||
return (pResBlock->info.rows > 0) ? pResBlock : NULL;
|
||||
}
|
||||
|
||||
SSDataBlock* doTableMergeScan(SOperatorInfo* pOperator) {
|
||||
|
@ -2935,7 +2905,7 @@ SSDataBlock* doTableMergeScan(SOperatorInfo* pOperator) {
|
|||
}
|
||||
SSDataBlock* pBlock = NULL;
|
||||
while (pInfo->tableStartIndex < tableListSize) {
|
||||
pBlock = getSortedTableMergeScanBlockData(pInfo->pSortHandle, pOperator->resultInfo.capacity, pOperator);
|
||||
pBlock = getSortedTableMergeScanBlockData(pInfo->pSortHandle, pInfo->pResBlock, pOperator->resultInfo.capacity, pOperator);
|
||||
if (pBlock != NULL) {
|
||||
pBlock->info.groupId = pInfo->groupId;
|
||||
pOperator->resultInfo.totalRows += pBlock->info.rows;
|
||||
|
@ -2959,6 +2929,7 @@ SSDataBlock* doTableMergeScan(SOperatorInfo* pOperator) {
|
|||
void destroyTableMergeScanOperatorInfo(void* param, int32_t numOfOutput) {
|
||||
STableMergeScanInfo* pTableScanInfo = (STableMergeScanInfo*)param;
|
||||
cleanupQueryTableDataCond(&pTableScanInfo->cond);
|
||||
taosArrayDestroy(pTableScanInfo->sortSourceParams);
|
||||
|
||||
for (int32_t i = 0; i < taosArrayGetSize(pTableScanInfo->dataReaders); ++i) {
|
||||
STsdbReader* reader = taosArrayGetP(pTableScanInfo->dataReaders, i);
|
||||
|
@ -2974,7 +2945,9 @@ void destroyTableMergeScanOperatorInfo(void* param, int32_t numOfOutput) {
|
|||
pTableScanInfo->pSortInputBlock = blockDataDestroy(pTableScanInfo->pSortInputBlock);
|
||||
|
||||
taosArrayDestroy(pTableScanInfo->pSortInfo);
|
||||
cleanupExprSupp(&pTableScanInfo->pseudoSup);
|
||||
|
||||
taosMemoryFreeClear(pTableScanInfo->rowEntryInfoOffset);
|
||||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
|
@ -2989,7 +2962,7 @@ int32_t getTableMergeScanExplainExecInfo(SOperatorInfo* pOptr, void** pOptrExpla
|
|||
STableMergeScanExecInfo* execInfo = taosMemoryCalloc(1, sizeof(STableMergeScanExecInfo));
|
||||
STableMergeScanInfo* pInfo = pOptr->info;
|
||||
execInfo->blockRecorder = pInfo->readRecorder;
|
||||
execInfo->sortExecInfo = tsortGetSortExecInfo(pInfo->pSortHandle);
|
||||
execInfo->sortExecInfo = pInfo->sortExecInfo;
|
||||
|
||||
*pOptrExplain = execInfo;
|
||||
*len = sizeof(STableMergeScanExecInfo);
|
||||
|
@ -3031,8 +3004,9 @@ SOperatorInfo* createTableMergeScanOperatorInfo(STableScanPhysiNode* pTableScanN
|
|||
}
|
||||
|
||||
if (pTableScanNode->scan.pScanPseudoCols != NULL) {
|
||||
pInfo->pPseudoExpr = createExprInfo(pTableScanNode->scan.pScanPseudoCols, NULL, &pInfo->numOfPseudoExpr);
|
||||
pInfo->pPseudoCtx = createSqlFunctionCtx(pInfo->pPseudoExpr, pInfo->numOfPseudoExpr, &pInfo->rowEntryInfoOffset);
|
||||
SExprSupp* pSup = &pInfo->pseudoSup;
|
||||
pSup->pExprInfo = createExprInfo(pTableScanNode->scan.pScanPseudoCols, NULL, &pSup->numOfExprs);
|
||||
pSup->pCtx = createSqlFunctionCtx(pSup->pExprInfo, pSup->numOfExprs, &pSup->rowEntryInfoOffset);
|
||||
}
|
||||
|
||||
pInfo->scanInfo = (SScanInfo){.numOfAsc = pTableScanNode->scanSeq[0], .numOfDesc = pTableScanNode->scanSeq[1]};
|
||||
|
|
|
@ -940,6 +940,7 @@ static void hashIntervalAgg(SOperatorInfo* pOperatorInfo, SResultRowInfo* pResul
|
|||
|
||||
if (pInfo->execModel == OPTR_EXEC_MODEL_STREAM && pInfo->twAggSup.calTrigger == STREAM_TRIGGER_AT_ONCE) {
|
||||
saveResultRow(pResult, tableGroupId, pUpdated);
|
||||
setResultBufPageDirty(pInfo->aggSup.pResultBuf, &pResultRowInfo->cur);
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -996,6 +997,7 @@ static void hashIntervalAgg(SOperatorInfo* pOperatorInfo, SResultRowInfo* pResul
|
|||
|
||||
if (pInfo->execModel == OPTR_EXEC_MODEL_STREAM && pInfo->twAggSup.calTrigger == STREAM_TRIGGER_AT_ONCE) {
|
||||
saveResultRow(pResult, tableGroupId, pUpdated);
|
||||
setResultBufPageDirty(pInfo->aggSup.pResultBuf, &pResultRowInfo->cur);
|
||||
}
|
||||
|
||||
ekey = ascScan ? nextWin.ekey : nextWin.skey;
|
||||
|
@ -1092,7 +1094,6 @@ static int32_t doOpenIntervalAgg(SOperatorInfo* pOperator) {
|
|||
hashIntervalAgg(pOperator, &pInfo->binfo.resultRowInfo, pBlock, scanFlag, NULL);
|
||||
}
|
||||
|
||||
closeAllResultRows(&pInfo->binfo.resultRowInfo);
|
||||
initGroupedResultInfo(&pInfo->groupResInfo, pInfo->aggSup.pResultRowHashTable, pInfo->order);
|
||||
OPTR_SET_OPENED(pOperator);
|
||||
|
||||
|
@ -1248,7 +1249,6 @@ static SSDataBlock* doStateWindowAgg(SOperatorInfo* pOperator) {
|
|||
pOperator->cost.openCost = (taosGetTimestampUs() - st) / 1000.0;
|
||||
|
||||
pOperator->status = OP_RES_TO_RETURN;
|
||||
closeAllResultRows(&pBInfo->resultRowInfo);
|
||||
|
||||
initGroupedResultInfo(&pInfo->groupResInfo, pInfo->aggSup.pResultRowHashTable, TSDB_ORDER_ASC);
|
||||
blockDataEnsureCapacity(pBInfo->pRes, pOperator->resultInfo.capacity);
|
||||
|
@ -2043,7 +2043,6 @@ static SSDataBlock* doSessionWindowAgg(SOperatorInfo* pOperator) {
|
|||
|
||||
// restore the value
|
||||
pOperator->status = OP_RES_TO_RETURN;
|
||||
closeAllResultRows(&pBInfo->resultRowInfo);
|
||||
|
||||
initGroupedResultInfo(&pInfo->groupResInfo, pInfo->aggSup.pResultRowHashTable, TSDB_ORDER_ASC);
|
||||
blockDataEnsureCapacity(pBInfo->pRes, pOperator->resultInfo.capacity);
|
||||
|
@ -2207,8 +2206,6 @@ static SSDataBlock* doTimeslice(SOperatorInfo* pOperator) {
|
|||
SSDataBlock* pResBlock = pSliceInfo->pRes;
|
||||
SExprSupp* pSup = &pOperator->exprSupp;
|
||||
|
||||
blockDataEnsureCapacity(pResBlock, pOperator->resultInfo.capacity);
|
||||
|
||||
// if (pOperator->status == OP_RES_TO_RETURN) {
|
||||
// // doBuildResultDatablock(&pRuntimeEnv->groupResInfo, pRuntimeEnv, pIntervalInfo->pRes);
|
||||
// if (pResBlock->info.rows == 0 || !hasDataInGroupInfo(&pSliceInfo->groupResInfo)) {
|
||||
|
@ -2348,10 +2345,10 @@ SOperatorInfo* createTimeSliceOperatorInfo(SOperatorInfo* downstream, SPhysiNode
|
|||
initResultSizeInfo(&pOperator->resultInfo, 4096);
|
||||
|
||||
pInfo->pFillColInfo = createFillColInfo(pExprInfo, numOfExprs, (SNodeListNode*)pInterpPhyNode->pFillValues);
|
||||
pInfo->pRes = createResDataBlock(pPhyNode->pOutputDataBlockDesc);
|
||||
pInfo->win = pInterpPhyNode->timeRange;
|
||||
pInfo->pRes = createResDataBlock(pPhyNode->pOutputDataBlockDesc);
|
||||
pInfo->win = pInterpPhyNode->timeRange;
|
||||
pInfo->interval.interval = pInterpPhyNode->interval;
|
||||
pInfo->current = pInfo->win.skey;
|
||||
pInfo->current = pInfo->win.skey;
|
||||
|
||||
pOperator->name = "TimeSliceOperator";
|
||||
pOperator->operatorType = QUERY_NODE_PHYSICAL_PLAN_INTERP_FUNC;
|
||||
|
@ -2542,6 +2539,7 @@ static void rebuildIntervalWindow(SStreamFinalIntervalOperatorInfo* pInfo, SExpr
|
|||
}
|
||||
if (find && pUpdated) {
|
||||
saveResultRow(pCurResult, pWinRes->groupId, pUpdated);
|
||||
setResultBufPageDirty(pInfo->aggSup.pResultBuf, &pInfo->binfo.resultRowInfo.cur);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@ -2662,6 +2660,7 @@ static void doHashInterval(SOperatorInfo* pOperatorInfo, SSDataBlock* pSDataBloc
|
|||
}
|
||||
if (pInfo->twAggSup.calTrigger == STREAM_TRIGGER_AT_ONCE && pUpdated) {
|
||||
saveResultRow(pResult, tableGroupId, pUpdated);
|
||||
setResultBufPageDirty(pInfo->aggSup.pResultBuf, &pResultRowInfo->cur);
|
||||
}
|
||||
updateTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &nextWin, true);
|
||||
doApplyFunctions(pTaskInfo, pSup->pCtx, &nextWin, &pInfo->twAggSup.timeWindowData, startPos, forwardRows, tsCols,
|
||||
|
@ -2770,7 +2769,7 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
|
|||
|
||||
SExprSupp* pSup = &pOperator->exprSupp;
|
||||
|
||||
qDebug("interval status %d %s", pOperator->status, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
|
||||
qDebug("interval status %d %s", pOperator->status, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
|
||||
|
||||
if (pOperator->status == OP_EXEC_DONE) {
|
||||
return NULL;
|
||||
|
@ -2779,7 +2778,7 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
|
|||
if (pInfo->pPullDataRes->info.rows != 0) {
|
||||
// process the rest of the data
|
||||
ASSERT(IS_FINAL_OP(pInfo));
|
||||
printDataBlock(pInfo->pPullDataRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
|
||||
printDataBlock(pInfo->pPullDataRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
|
||||
return pInfo->pPullDataRes;
|
||||
}
|
||||
|
||||
|
@ -2794,20 +2793,20 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
|
|||
}
|
||||
return NULL;
|
||||
}
|
||||
printDataBlock(pInfo->binfo.pRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
|
||||
printDataBlock(pInfo->binfo.pRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
|
||||
return pInfo->binfo.pRes;
|
||||
} else {
|
||||
if (!IS_FINAL_OP(pInfo)) {
|
||||
doBuildResultDatablock(pOperator, &pInfo->binfo, &pInfo->groupResInfo, pInfo->aggSup.pResultBuf);
|
||||
if (pInfo->binfo.pRes->info.rows != 0) {
|
||||
printDataBlock(pInfo->binfo.pRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
|
||||
printDataBlock(pInfo->binfo.pRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
|
||||
return pInfo->binfo.pRes;
|
||||
}
|
||||
}
|
||||
if (pInfo->pUpdateRes->info.rows != 0 && pInfo->returnUpdate) {
|
||||
pInfo->returnUpdate = false;
|
||||
ASSERT(!IS_FINAL_OP(pInfo));
|
||||
printDataBlock(pInfo->pUpdateRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
|
||||
printDataBlock(pInfo->pUpdateRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
|
||||
// process the rest of the data
|
||||
return pInfo->pUpdateRes;
|
||||
}
|
||||
|
@ -2815,13 +2814,13 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
|
|||
// if (pInfo->pPullDataRes->info.rows != 0) {
|
||||
// // process the rest of the data
|
||||
// ASSERT(IS_FINAL_OP(pInfo));
|
||||
// printDataBlock(pInfo->pPullDataRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
|
||||
// printDataBlock(pInfo->pPullDataRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
|
||||
// return pInfo->pPullDataRes;
|
||||
// }
|
||||
doBuildDeleteResult(pInfo->pDelWins, &pInfo->delIndex, pInfo->pDelRes);
|
||||
if (pInfo->pDelRes->info.rows != 0) {
|
||||
// process the rest of the data
|
||||
printDataBlock(pInfo->pDelRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
|
||||
printDataBlock(pInfo->pDelRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
|
||||
return pInfo->pDelRes;
|
||||
}
|
||||
}
|
||||
|
@ -2832,10 +2831,10 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
|
|||
clearSpecialDataBlock(pInfo->pUpdateRes);
|
||||
removeDeleteResults(pUpdated, pInfo->pDelWins);
|
||||
pOperator->status = OP_RES_TO_RETURN;
|
||||
qDebug("%s return data", IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
|
||||
qDebug("%s return data", IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
|
||||
break;
|
||||
}
|
||||
printDataBlock(pBlock, IS_FINAL_OP(pInfo) ? "interval Final recv" : "interval Semi recv");
|
||||
printDataBlock(pBlock, IS_FINAL_OP(pInfo) ? "interval final recv" : "interval semi recv");
|
||||
maxTs = TMAX(maxTs, pBlock->info.window.ekey);
|
||||
|
||||
if (pBlock->info.type == STREAM_NORMAL || pBlock->info.type == STREAM_PULL_DATA ||
|
||||
|
@ -2935,20 +2934,20 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
|
|||
if (pInfo->pPullDataRes->info.rows != 0) {
|
||||
// process the rest of the data
|
||||
ASSERT(IS_FINAL_OP(pInfo));
|
||||
printDataBlock(pInfo->pPullDataRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
|
||||
printDataBlock(pInfo->pPullDataRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
|
||||
return pInfo->pPullDataRes;
|
||||
}
|
||||
|
||||
doBuildResultDatablock(pOperator, &pInfo->binfo, &pInfo->groupResInfo, pInfo->aggSup.pResultBuf);
|
||||
if (pInfo->binfo.pRes->info.rows != 0) {
|
||||
printDataBlock(pInfo->binfo.pRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
|
||||
printDataBlock(pInfo->binfo.pRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
|
||||
return pInfo->binfo.pRes;
|
||||
}
|
||||
|
||||
if (pInfo->pUpdateRes->info.rows != 0 && pInfo->returnUpdate) {
|
||||
pInfo->returnUpdate = false;
|
||||
ASSERT(!IS_FINAL_OP(pInfo));
|
||||
printDataBlock(pInfo->pUpdateRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
|
||||
printDataBlock(pInfo->pUpdateRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
|
||||
// process the rest of the data
|
||||
return pInfo->pUpdateRes;
|
||||
}
|
||||
|
@ -2956,7 +2955,7 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
|
|||
doBuildDeleteResult(pInfo->pDelWins, &pInfo->delIndex, pInfo->pDelRes);
|
||||
if (pInfo->pDelRes->info.rows != 0) {
|
||||
// process the rest of the data
|
||||
printDataBlock(pInfo->pDelRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
|
||||
printDataBlock(pInfo->pDelRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
|
||||
return pInfo->pDelRes;
|
||||
}
|
||||
// ASSERT(false);
|
||||
|
@ -3816,14 +3815,14 @@ static SSDataBlock* doStreamSessionAgg(SOperatorInfo* pOperator) {
|
|||
} else if (pOperator->status == OP_RES_TO_RETURN) {
|
||||
doBuildDeleteDataBlock(pInfo->pStDeleted, pInfo->pDelRes, &pInfo->pDelIterator);
|
||||
if (pInfo->pDelRes->info.rows > 0) {
|
||||
printDataBlock(pInfo->pDelRes, IS_FINAL_OP(pInfo) ? "Final Session" : "Single Session");
|
||||
printDataBlock(pInfo->pDelRes, IS_FINAL_OP(pInfo) ? "final session" : "single session");
|
||||
return pInfo->pDelRes;
|
||||
}
|
||||
doBuildResultDatablock(pOperator, pBInfo, &pInfo->groupResInfo, pInfo->streamAggSup.pResultBuf);
|
||||
if (pBInfo->pRes->info.rows == 0 || !hasDataInGroupInfo(&pInfo->groupResInfo)) {
|
||||
doSetOperatorCompleted(pOperator);
|
||||
}
|
||||
printDataBlock(pBInfo->pRes, IS_FINAL_OP(pInfo) ? "Final Session" : "Single Session");
|
||||
printDataBlock(pBInfo->pRes, IS_FINAL_OP(pInfo) ? "final session" : "single session");
|
||||
return pBInfo->pRes->info.rows == 0 ? NULL : pBInfo->pRes;
|
||||
}
|
||||
|
||||
|
@ -3836,7 +3835,7 @@ static SSDataBlock* doStreamSessionAgg(SOperatorInfo* pOperator) {
|
|||
if (pBlock == NULL) {
|
||||
break;
|
||||
}
|
||||
printDataBlock(pBlock, IS_FINAL_OP(pInfo) ? "Final Session Recv" : "Single Session Recv");
|
||||
printDataBlock(pBlock, IS_FINAL_OP(pInfo) ? "final session recv" : "single session recv");
|
||||
|
||||
if (pBlock->info.type == STREAM_CLEAR) {
|
||||
SArray* pWins = taosArrayInit(16, sizeof(SResultWindowInfo));
|
||||
|
@ -3913,11 +3912,11 @@ static SSDataBlock* doStreamSessionAgg(SOperatorInfo* pOperator) {
|
|||
blockDataEnsureCapacity(pInfo->binfo.pRes, pOperator->resultInfo.capacity);
|
||||
doBuildDeleteDataBlock(pInfo->pStDeleted, pInfo->pDelRes, &pInfo->pDelIterator);
|
||||
if (pInfo->pDelRes->info.rows > 0) {
|
||||
printDataBlock(pInfo->pDelRes, IS_FINAL_OP(pInfo) ? "Final Session" : "Single Session");
|
||||
printDataBlock(pInfo->pDelRes, IS_FINAL_OP(pInfo) ? "final session" : "single session");
|
||||
return pInfo->pDelRes;
|
||||
}
|
||||
doBuildResultDatablock(pOperator, &pInfo->binfo, &pInfo->groupResInfo, pInfo->streamAggSup.pResultBuf);
|
||||
printDataBlock(pBInfo->pRes, IS_FINAL_OP(pInfo) ? "Final Session" : "Single Session");
|
||||
printDataBlock(pBInfo->pRes, IS_FINAL_OP(pInfo) ? "final session" : "single session");
|
||||
return pBInfo->pRes->info.rows == 0 ? NULL : pBInfo->pRes;
|
||||
}
|
||||
|
||||
|
@ -3956,21 +3955,21 @@ static SSDataBlock* doStreamSessionSemiAgg(SOperatorInfo* pOperator) {
|
|||
} else if (pOperator->status == OP_RES_TO_RETURN) {
|
||||
doBuildResultDatablock(pOperator, pBInfo, &pInfo->groupResInfo, pInfo->streamAggSup.pResultBuf);
|
||||
if (pBInfo->pRes->info.rows > 0) {
|
||||
printDataBlock(pBInfo->pRes, "Semi Session");
|
||||
printDataBlock(pBInfo->pRes, "sems session");
|
||||
return pBInfo->pRes;
|
||||
}
|
||||
|
||||
// doBuildDeleteDataBlock(pInfo->pStDeleted, pInfo->pDelRes, &pInfo->pDelIterator);
|
||||
if (pInfo->pDelRes->info.rows > 0 && !pInfo->returnDelete) {
|
||||
pInfo->returnDelete = true;
|
||||
printDataBlock(pInfo->pDelRes, "Semi Session");
|
||||
printDataBlock(pInfo->pDelRes, "sems session");
|
||||
return pInfo->pDelRes;
|
||||
}
|
||||
|
||||
if (pInfo->pUpdateRes->info.rows > 0) {
|
||||
// process the rest of the data
|
||||
pOperator->status = OP_OPENED;
|
||||
printDataBlock(pInfo->pUpdateRes, "Semi Session");
|
||||
printDataBlock(pInfo->pUpdateRes, "sems session");
|
||||
return pInfo->pUpdateRes;
|
||||
}
|
||||
// semi interval operator clear disk buffer
|
||||
|
@ -4034,21 +4033,21 @@ static SSDataBlock* doStreamSessionSemiAgg(SOperatorInfo* pOperator) {
|
|||
|
||||
doBuildResultDatablock(pOperator, pBInfo, &pInfo->groupResInfo, pInfo->streamAggSup.pResultBuf);
|
||||
if (pBInfo->pRes->info.rows > 0) {
|
||||
printDataBlock(pBInfo->pRes, "Semi Session");
|
||||
printDataBlock(pBInfo->pRes, "sems session");
|
||||
return pBInfo->pRes;
|
||||
}
|
||||
|
||||
// doBuildDeleteDataBlock(pInfo->pStDeleted, pInfo->pDelRes, &pInfo->pDelIterator);
|
||||
if (pInfo->pDelRes->info.rows > 0 && !pInfo->returnDelete) {
|
||||
pInfo->returnDelete = true;
|
||||
printDataBlock(pInfo->pDelRes, "Semi Session");
|
||||
printDataBlock(pInfo->pDelRes, "sems session");
|
||||
return pInfo->pDelRes;
|
||||
}
|
||||
|
||||
if (pInfo->pUpdateRes->info.rows > 0) {
|
||||
// process the rest of the data
|
||||
pOperator->status = OP_OPENED;
|
||||
printDataBlock(pInfo->pUpdateRes, "Semi Session");
|
||||
printDataBlock(pInfo->pUpdateRes, "sems session");
|
||||
return pInfo->pUpdateRes;
|
||||
}
|
||||
|
||||
|
|
|
@ -557,11 +557,13 @@ static int32_t translateApercentileImpl(SFunctionNode* pFunc, char* pErrBuf, int
|
|||
pFunc->node.resType =
|
||||
(SDataType){.bytes = getApercentileMaxSize() + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_BINARY};
|
||||
} else {
|
||||
if (1 != numOfParams) {
|
||||
// original percent param is reserved
|
||||
if (2 != numOfParams) {
|
||||
return invaildFuncParaNumErrMsg(pErrBuf, len, pFunc->functionName);
|
||||
}
|
||||
uint8_t para1Type = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 0))->resType.type;
|
||||
if (TSDB_DATA_TYPE_BINARY != para1Type) {
|
||||
uint8_t para2Type = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 1))->resType.type;
|
||||
if (TSDB_DATA_TYPE_BINARY != para1Type || !IS_INTEGER_TYPE(para2Type)) {
|
||||
return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
|
||||
}
|
||||
|
||||
|
@ -621,7 +623,7 @@ static int32_t translateTopBot(SFunctionNode* pFunc, char* pErrBuf, int32_t len)
|
|||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
int32_t topBotCreateMergePara(SNodeList* pRawParameters, SNode* pPartialRes, SNodeList** pParameters) {
|
||||
static int32_t reserveFirstMergeParam(SNodeList* pRawParameters, SNode* pPartialRes, SNodeList** pParameters) {
|
||||
int32_t code = nodesListMakeAppend(pParameters, pPartialRes);
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code = nodesListStrictAppend(*pParameters, nodesCloneNode(nodesListGetNode(pRawParameters, 1)));
|
||||
|
@ -629,6 +631,14 @@ int32_t topBotCreateMergePara(SNodeList* pRawParameters, SNode* pPartialRes, SNo
|
|||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
int32_t topBotCreateMergeParam(SNodeList* pRawParameters, SNode* pPartialRes, SNodeList** pParameters) {
|
||||
return reserveFirstMergeParam(pRawParameters, pPartialRes, pParameters);
|
||||
}
|
||||
|
||||
int32_t apercentileCreateMergeParam(SNodeList* pRawParameters, SNode* pPartialRes, SNodeList** pParameters) {
|
||||
return reserveFirstMergeParam(pRawParameters, pPartialRes, pParameters);
|
||||
}
|
||||
|
||||
static int32_t translateSpread(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
|
||||
if (1 != LIST_LENGTH(pFunc->pParameterList)) {
|
||||
return invaildFuncParaNumErrMsg(pErrBuf, len, pFunc->functionName);
|
||||
|
@ -1532,7 +1542,7 @@ static int32_t translateDiff(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
|
|||
}
|
||||
|
||||
uint8_t resType;
|
||||
if (IS_SIGNED_NUMERIC_TYPE(colType) || TSDB_DATA_TYPE_BOOL == colType) {
|
||||
if (IS_SIGNED_NUMERIC_TYPE(colType) || TSDB_DATA_TYPE_BOOL == colType || TSDB_DATA_TYPE_TIMESTAMP == colType) {
|
||||
resType = TSDB_DATA_TYPE_BIGINT;
|
||||
} else {
|
||||
resType = TSDB_DATA_TYPE_DOUBLE;
|
||||
|
@ -2068,7 +2078,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
|
|||
.invertFunc = NULL,
|
||||
.combineFunc = apercentileCombine,
|
||||
.pPartialFunc = "_apercentile_partial",
|
||||
.pMergeFunc = "_apercentile_merge"
|
||||
.pMergeFunc = "_apercentile_merge",
|
||||
.createMergeParaFuc = apercentileCreateMergeParam
|
||||
},
|
||||
{
|
||||
.name = "_apercentile_partial",
|
||||
|
@ -2107,7 +2118,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
|
|||
.combineFunc = topCombine,
|
||||
.pPartialFunc = "top",
|
||||
.pMergeFunc = "top",
|
||||
.createMergeParaFuc = topBotCreateMergePara
|
||||
.createMergeParaFuc = topBotCreateMergeParam
|
||||
},
|
||||
{
|
||||
.name = "bottom",
|
||||
|
@ -2122,7 +2133,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
|
|||
.combineFunc = bottomCombine,
|
||||
.pPartialFunc = "bottom",
|
||||
.pMergeFunc = "bottom",
|
||||
.createMergeParaFuc = topBotCreateMergePara
|
||||
.createMergeParaFuc = topBotCreateMergeParam
|
||||
},
|
||||
{
|
||||
.name = "spread",
|
||||
|
@ -2220,7 +2231,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
|
|||
{
|
||||
.name = "derivative",
|
||||
.type = FUNCTION_TYPE_DERIVATIVE,
|
||||
.classification = FUNC_MGT_INDEFINITE_ROWS_FUNC | FUNC_MGT_TIMELINE_FUNC | FUNC_MGT_IMPLICIT_TS_FUNC,
|
||||
.classification = FUNC_MGT_INDEFINITE_ROWS_FUNC | FUNC_MGT_SELECT_FUNC | FUNC_MGT_TIMELINE_FUNC | FUNC_MGT_IMPLICIT_TS_FUNC,
|
||||
.translateFunc = translateDerivative,
|
||||
.getEnvFunc = getDerivativeFuncEnv,
|
||||
.initFunc = derivativeFuncSetup,
|
||||
|
@ -2425,7 +2436,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
|
|||
{
|
||||
.name = "diff",
|
||||
.type = FUNCTION_TYPE_DIFF,
|
||||
.classification = FUNC_MGT_INDEFINITE_ROWS_FUNC | FUNC_MGT_TIMELINE_FUNC | FUNC_MGT_FORBID_STREAM_FUNC,
|
||||
.classification = FUNC_MGT_INDEFINITE_ROWS_FUNC | FUNC_MGT_SELECT_FUNC | FUNC_MGT_TIMELINE_FUNC | FUNC_MGT_FORBID_STREAM_FUNC,
|
||||
.translateFunc = translateDiff,
|
||||
.getEnvFunc = getDiffFuncEnv,
|
||||
.initFunc = diffFunctionSetup,
|
||||
|
|
|
@ -1624,6 +1624,10 @@ int32_t minmaxFunctionFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
|
|||
}
|
||||
|
||||
void setNullSelectivityValue(SqlFunctionCtx* pCtx, SSDataBlock* pBlock, int32_t rowIndex) {
|
||||
if (pCtx->subsidiaries.num <= 0) {
|
||||
return;
|
||||
}
|
||||
|
||||
for (int32_t j = 0; j < pCtx->subsidiaries.num; ++j) {
|
||||
SqlFunctionCtx* pc = pCtx->subsidiaries.pCtx[j];
|
||||
int32_t dstSlotId = pc->pExpr->base.resSchema.slotId;
|
||||
|
@ -1655,8 +1659,6 @@ void setSelectivityValue(SqlFunctionCtx* pCtx, SSDataBlock* pBlock, const STuple
|
|||
SFunctParam* pFuncParam = &pc->pExpr->base.pParam[0];
|
||||
int32_t dstSlotId = pc->pExpr->base.resSchema.slotId;
|
||||
|
||||
int32_t ps = 0;
|
||||
|
||||
SColumnInfoData* pDstCol = taosArrayGet(pBlock->pDataBlock, dstSlotId);
|
||||
ASSERT(pc->pExpr->base.resSchema.bytes == pDstCol->info.bytes);
|
||||
if (nullList[j]) {
|
||||
|
@ -1678,6 +1680,39 @@ void releaseSource(STuplePos* pPos) {
|
|||
// Todo(liuyao) relase row
|
||||
}
|
||||
|
||||
// This function append the selectivity to subsidiaries function context directly, without fetching data
|
||||
// from intermediate disk based buf page
|
||||
void appendSelectivityValue(SqlFunctionCtx* pCtx, int32_t rowIndex, int32_t pos) {
|
||||
if (pCtx->subsidiaries.num <= 0) {
|
||||
return;
|
||||
}
|
||||
|
||||
for (int32_t j = 0; j < pCtx->subsidiaries.num; ++j) {
|
||||
SqlFunctionCtx* pc = pCtx->subsidiaries.pCtx[j];
|
||||
|
||||
// get data from source col
|
||||
SFunctParam* pFuncParam = &pc->pExpr->base.pParam[0];
|
||||
int32_t srcSlotId = pFuncParam->pCol->slotId;
|
||||
|
||||
SColumnInfoData* pSrcCol = taosArrayGet(pCtx->pSrcBlock->pDataBlock, srcSlotId);
|
||||
|
||||
char* pData = colDataGetData(pSrcCol, rowIndex);
|
||||
|
||||
// append to dest col
|
||||
int32_t dstSlotId = pc->pExpr->base.resSchema.slotId;
|
||||
|
||||
SColumnInfoData* pDstCol = taosArrayGet(pCtx->pDstBlock->pDataBlock, dstSlotId);
|
||||
ASSERT(pc->pExpr->base.resSchema.bytes == pDstCol->info.bytes);
|
||||
|
||||
if (colDataIsNull_s(pSrcCol, rowIndex) == true) {
|
||||
colDataAppendNULL(pDstCol, pos);
|
||||
} else {
|
||||
colDataAppend(pDstCol, pos, pData, false);
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
void replaceTupleData(STuplePos* pDestPos, STuplePos* pSourcePos) {
|
||||
releaseSource(pDestPos);
|
||||
*pDestPos = *pSourcePos;
|
||||
|
@ -2218,6 +2253,7 @@ int32_t leastSQRFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
|
|||
int32_t currentRow = pBlock->info.rows;
|
||||
|
||||
if (0 == pInfo->num) {
|
||||
colDataAppendNULL(pCol, currentRow);
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -3154,6 +3190,7 @@ static void doHandleDiff(SDiffInfo* pDiffInfo, int32_t type, const char* pv, SCo
|
|||
colDataAppendInt64(pOutput, pos, &delta);
|
||||
}
|
||||
pDiffInfo->prev.i64 = v;
|
||||
|
||||
break;
|
||||
}
|
||||
case TSDB_DATA_TYPE_BOOL:
|
||||
|
@ -3247,6 +3284,10 @@ int32_t diffFunction(SqlFunctionCtx* pCtx) {
|
|||
|
||||
if (pDiffInfo->hasPrev) {
|
||||
doHandleDiff(pDiffInfo, pInputCol->info.type, pv, pOutput, pos, pCtx->order);
|
||||
// handle selectivity
|
||||
if (pCtx->subsidiaries.num > 0) {
|
||||
appendSelectivityValue(pCtx, i, pos);
|
||||
}
|
||||
|
||||
numOfElems++;
|
||||
} else {
|
||||
|
@ -3273,6 +3314,10 @@ int32_t diffFunction(SqlFunctionCtx* pCtx) {
|
|||
// there is a row of previous data block to be handled in the first place.
|
||||
if (pDiffInfo->hasPrev) {
|
||||
doHandleDiff(pDiffInfo, pInputCol->info.type, pv, pOutput, pos, pCtx->order);
|
||||
// handle selectivity
|
||||
if (pCtx->subsidiaries.num > 0) {
|
||||
appendSelectivityValue(pCtx, i, pos);
|
||||
}
|
||||
|
||||
numOfElems++;
|
||||
} else {
|
||||
|
@ -5723,6 +5768,12 @@ int32_t derivativeFunction(SqlFunctionCtx* pCtx) {
|
|||
if (pTsOutput != NULL) {
|
||||
colDataAppendInt64(pTsOutput, pos, &tsList[i]);
|
||||
}
|
||||
|
||||
// handle selectivity
|
||||
if (pCtx->subsidiaries.num > 0) {
|
||||
appendSelectivityValue(pCtx, i, pos);
|
||||
}
|
||||
|
||||
numOfElems++;
|
||||
}
|
||||
}
|
||||
|
@ -5755,6 +5806,12 @@ int32_t derivativeFunction(SqlFunctionCtx* pCtx) {
|
|||
if (pTsOutput != NULL) {
|
||||
colDataAppendInt64(pTsOutput, pos, &pDerivInfo->prevTs);
|
||||
}
|
||||
|
||||
// handle selectivity
|
||||
if (pCtx->subsidiaries.num > 0) {
|
||||
appendSelectivityValue(pCtx, i, pos);
|
||||
}
|
||||
|
||||
numOfElems++;
|
||||
}
|
||||
}
|
||||
|
|
|
@ -877,7 +877,7 @@ void udfcUvHandleError(SClientUvConn *conn);
|
|||
void onUdfcPipeRead(uv_stream_t *client, ssize_t nread, const uv_buf_t *buf);
|
||||
void onUdfcPipeWrite(uv_write_t *write, int status);
|
||||
void onUdfcPipeConnect(uv_connect_t *connect, int status);
|
||||
int32_t udfcCreateUvTask(SClientUdfTask *task, int8_t uvTaskType, SClientUvTaskNode **pUvTask);
|
||||
int32_t udfcInitializeUvTask(SClientUdfTask *task, int8_t uvTaskType, SClientUvTaskNode *uvTask);
|
||||
int32_t udfcQueueUvTask(SClientUvTaskNode *uvTask);
|
||||
int32_t udfcStartUvTask(SClientUvTaskNode *uvTask);
|
||||
void udfcAsyncTaskCb(uv_async_t *async);
|
||||
|
@ -1376,8 +1376,7 @@ void onUdfcPipeConnect(uv_connect_t *connect, int status) {
|
|||
uv_sem_post(&uvTask->taskSem);
|
||||
}
|
||||
|
||||
int32_t udfcCreateUvTask(SClientUdfTask *task, int8_t uvTaskType, SClientUvTaskNode **pUvTask) {
|
||||
SClientUvTaskNode *uvTask = taosMemoryCalloc(1, sizeof(SClientUvTaskNode));
|
||||
int32_t udfcInitializeUvTask(SClientUdfTask *task, int8_t uvTaskType, SClientUvTaskNode *uvTask) {
|
||||
uvTask->type = uvTaskType;
|
||||
uvTask->udfc = task->session->udfc;
|
||||
|
||||
|
@ -1412,7 +1411,6 @@ int32_t udfcCreateUvTask(SClientUdfTask *task, int8_t uvTaskType, SClientUvTaskN
|
|||
}
|
||||
uv_sem_init(&uvTask->taskSem, 0);
|
||||
|
||||
*pUvTask = uvTask;
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -1615,10 +1613,10 @@ int32_t udfcClose() {
|
|||
}
|
||||
|
||||
int32_t udfcRunUdfUvTask(SClientUdfTask *task, int8_t uvTaskType) {
|
||||
SClientUvTaskNode *uvTask = NULL;
|
||||
|
||||
udfcCreateUvTask(task, uvTaskType, &uvTask);
|
||||
SClientUvTaskNode *uvTask = taosMemoryCalloc(1, sizeof(SClientUvTaskNode));
|
||||
fnDebug("udfc client task: %p created uvTask: %p. pipe: %p", task, uvTask, task->session->udfUvPipe);
|
||||
|
||||
udfcInitializeUvTask(task, uvTaskType, uvTask);
|
||||
udfcQueueUvTask(uvTask);
|
||||
udfcGetUdfTaskResultFromUvTask(task, uvTask);
|
||||
if (uvTaskType == UV_TASK_CONNECT) {
|
||||
|
@ -1629,6 +1627,8 @@ int32_t udfcRunUdfUvTask(SClientUdfTask *task, int8_t uvTaskType) {
|
|||
taosMemoryFree(uvTask->reqBuf.base);
|
||||
uvTask->reqBuf.base = NULL;
|
||||
taosMemoryFree(uvTask);
|
||||
fnDebug("udfc freed uvTask: %p", task);
|
||||
|
||||
uvTask = NULL;
|
||||
return task->errCode;
|
||||
}
|
||||
|
|
|
@@ -1,7 +1,12 @@
#include <string.h>
#include <stdlib.h>
#include <stdio.h>

#ifdef LINUX
#include <unistd.h>
#endif
#ifdef WINDOWS
#include <windows.h>
#endif
#include "taosudf.h"

@@ -35,6 +40,12 @@ DLL_EXPORT int32_t udf1(SUdfDataBlock* block, SUdfColumn *resultCol) {
      udfColDataSet(resultCol, i, (char *)&luckyNum, false);
    }
  }

  //to simulate actual processing delay by udf
#ifdef LINUX
  usleep(1 * 1000); // usleep takes sleep time in us (1 millionth of a second)
#endif
#ifdef WINDOWS
  Sleep(1);
#endif
  return 0;
}

@ -375,6 +375,7 @@ static int32_t logicJoinCopy(const SJoinLogicNode* pSrc, SJoinLogicNode* pDst) {
|
|||
CLONE_NODE_FIELD(pMergeCondition);
|
||||
CLONE_NODE_FIELD(pOnConditions);
|
||||
COPY_SCALAR_FIELD(isSingleTableJoin);
|
||||
COPY_SCALAR_FIELD(inputTsOrder);
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
|
@ -440,6 +441,7 @@ static int32_t logicWindowCopy(const SWindowLogicNode* pSrc, SWindowLogicNode* p
|
|||
COPY_SCALAR_FIELD(watermark);
|
||||
COPY_SCALAR_FIELD(igExpired);
|
||||
COPY_SCALAR_FIELD(windowAlgo);
|
||||
COPY_SCALAR_FIELD(inputTsOrder);
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
|
|
|
@ -1717,7 +1717,7 @@ static const char* jkJoinPhysiPlanOnConditions = "OnConditions";
|
|||
static const char* jkJoinPhysiPlanTargets = "Targets";
|
||||
|
||||
static int32_t physiJoinNodeToJson(const void* pObj, SJson* pJson) {
|
||||
const SJoinPhysiNode* pNode = (const SJoinPhysiNode*)pObj;
|
||||
const SSortMergeJoinPhysiNode* pNode = (const SSortMergeJoinPhysiNode*)pObj;
|
||||
|
||||
int32_t code = physicPlanNodeToJson(pObj, pJson);
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
|
@ -1737,7 +1737,7 @@ static int32_t physiJoinNodeToJson(const void* pObj, SJson* pJson) {
|
|||
}
|
||||
|
||||
static int32_t jsonToPhysiJoinNode(const SJson* pJson, void* pObj) {
|
||||
SJoinPhysiNode* pNode = (SJoinPhysiNode*)pObj;
|
||||
SSortMergeJoinPhysiNode* pNode = (SSortMergeJoinPhysiNode*)pObj;
|
||||
|
||||
int32_t code = jsonToPhysicPlanNode(pJson, pObj);
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
|
|
|
@ -468,7 +468,7 @@ static EDealRes dispatchPhysiPlan(SNode* pNode, ETraversalOrder order, FNodeWalk
|
|||
break;
|
||||
}
|
||||
case QUERY_NODE_PHYSICAL_PLAN_MERGE_JOIN: {
|
||||
SJoinPhysiNode* pJoin = (SJoinPhysiNode*)pNode;
|
||||
SSortMergeJoinPhysiNode* pJoin = (SSortMergeJoinPhysiNode*)pNode;
|
||||
res = walkPhysiNode((SPhysiNode*)pNode, order, walker, pContext);
|
||||
if (DEAL_RES_ERROR != res && DEAL_RES_END != res) {
|
||||
res = walkPhysiPlan(pJoin->pMergeCondition, order, walker, pContext);
|
||||
|
|
|
@ -287,7 +287,7 @@ SNode* nodesMakeNode(ENodeType type) {
|
|||
case QUERY_NODE_PHYSICAL_PLAN_PROJECT:
|
||||
return makeNode(type, sizeof(SProjectPhysiNode));
|
||||
case QUERY_NODE_PHYSICAL_PLAN_MERGE_JOIN:
|
||||
return makeNode(type, sizeof(SJoinPhysiNode));
|
||||
return makeNode(type, sizeof(SSortMergeJoinPhysiNode));
|
||||
case QUERY_NODE_PHYSICAL_PLAN_HASH_AGG:
|
||||
return makeNode(type, sizeof(SAggPhysiNode));
|
||||
case QUERY_NODE_PHYSICAL_PLAN_EXCHANGE:
|
||||
|
@ -883,7 +883,7 @@ void nodesDestroyNode(SNode* pNode) {
|
|||
break;
|
||||
}
|
||||
case QUERY_NODE_PHYSICAL_PLAN_MERGE_JOIN: {
|
||||
SJoinPhysiNode* pPhyNode = (SJoinPhysiNode*)pNode;
|
||||
SSortMergeJoinPhysiNode* pPhyNode = (SSortMergeJoinPhysiNode*)pNode;
|
||||
destroyPhysiNode((SPhysiNode*)pPhyNode);
|
||||
nodesDestroyNode(pPhyNode->pMergeCondition);
|
||||
nodesDestroyNode(pPhyNode->pOnConditions);
|
||||
|
|
|
@ -55,7 +55,11 @@ typedef enum EDatabaseOptionType {
|
|||
DB_OPTION_VGROUPS,
|
||||
DB_OPTION_SINGLE_STABLE,
|
||||
DB_OPTION_RETENTIONS,
|
||||
DB_OPTION_SCHEMALESS
|
||||
DB_OPTION_SCHEMALESS,
|
||||
DB_OPTION_WAL_RETENTION_PERIOD,
|
||||
DB_OPTION_WAL_RETENTION_SIZE,
|
||||
DB_OPTION_WAL_ROLL_PERIOD,
|
||||
DB_OPTION_WAL_SEGMENT_SIZE
|
||||
} EDatabaseOptionType;
|
||||
|
||||
typedef enum ETableOptionType {
|
||||
|
@ -90,7 +94,7 @@ SNode* createValueNode(SAstCreateContext* pCxt, int32_t dataType, const SToken*
|
|||
SNode* createDurationValueNode(SAstCreateContext* pCxt, const SToken* pLiteral);
|
||||
SNode* createDefaultDatabaseCondValue(SAstCreateContext* pCxt);
|
||||
SNode* createPlaceholderValueNode(SAstCreateContext* pCxt, const SToken* pLiteral);
|
||||
SNode* setProjectionAlias(SAstCreateContext* pCxt, SNode* pNode, const SToken* pAlias);
|
||||
SNode* setProjectionAlias(SAstCreateContext* pCxt, SNode* pNode, SToken* pAlias);
|
||||
SNode* createLogicConditionNode(SAstCreateContext* pCxt, ELogicConditionType type, SNode* pParam1, SNode* pParam2);
|
||||
SNode* createOperatorNode(SAstCreateContext* pCxt, EOperatorType type, SNode* pLeft, SNode* pRight);
|
||||
SNode* createBetweenAnd(SAstCreateContext* pCxt, SNode* pExpr, SNode* pLeft, SNode* pRight);
|
||||
|
|
|
@ -191,6 +191,20 @@ db_options(A) ::= db_options(B) VGROUPS NK_INTEGER(C).
|
|||
db_options(A) ::= db_options(B) SINGLE_STABLE NK_INTEGER(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_SINGLE_STABLE, &C); }
|
||||
db_options(A) ::= db_options(B) RETENTIONS retention_list(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_RETENTIONS, C); }
|
||||
db_options(A) ::= db_options(B) SCHEMALESS NK_INTEGER(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_SCHEMALESS, &C); }
|
||||
db_options(A) ::= db_options(B) WAL_RETENTION_PERIOD NK_INTEGER(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_WAL_RETENTION_PERIOD, &C); }
|
||||
db_options(A) ::= db_options(B) WAL_RETENTION_PERIOD NK_MINUS(D) NK_INTEGER(C). {
|
||||
SToken t = D;
|
||||
t.n = (C.z + C.n) - D.z;
|
||||
A = setDatabaseOption(pCxt, B, DB_OPTION_WAL_RETENTION_PERIOD, &t);
|
||||
}
|
||||
db_options(A) ::= db_options(B) WAL_RETENTION_SIZE NK_INTEGER(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_WAL_RETENTION_SIZE, &C); }
|
||||
db_options(A) ::= db_options(B) WAL_RETENTION_SIZE NK_MINUS(D) NK_INTEGER(C). {
|
||||
SToken t = D;
|
||||
t.n = (C.z + C.n) - D.z;
|
||||
A = setDatabaseOption(pCxt, B, DB_OPTION_WAL_RETENTION_SIZE, &t);
|
||||
}
|
||||
db_options(A) ::= db_options(B) WAL_ROLL_PERIOD NK_INTEGER(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_WAL_ROLL_PERIOD, &C); }
|
||||
db_options(A) ::= db_options(B) WAL_SEGMENT_SIZE NK_INTEGER(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_WAL_SEGMENT_SIZE, &C); }
|
||||
|
||||
alter_db_options(A) ::= alter_db_option(B). { A = createAlterDatabaseOptions(pCxt); A = setAlterDatabaseOption(pCxt, A, &B); }
|
||||
alter_db_options(A) ::= alter_db_options(B) alter_db_option(C). { A = setAlterDatabaseOption(pCxt, B, &C); }
|
||||
|
|
|
@ -527,6 +527,7 @@ SNode* createTempTableNode(SAstCreateContext* pCxt, SNode* pSubquery, const STok
|
|||
}
|
||||
if (QUERY_NODE_SELECT_STMT == nodeType(pSubquery)) {
|
||||
strcpy(((SSelectStmt*)pSubquery)->stmtName, tempTable->table.tableAlias);
|
||||
((SSelectStmt*)pSubquery)->isSubquery = true;
|
||||
} else if (QUERY_NODE_SET_OPERATOR == nodeType(pSubquery)) {
|
||||
strcpy(((SSetOperator*)pSubquery)->stmtName, tempTable->table.tableAlias);
|
||||
}
|
||||
|
@ -637,8 +638,9 @@ SNode* createInterpTimeRange(SAstCreateContext* pCxt, SNode* pStart, SNode* pEnd
|
|||
return createBetweenAnd(pCxt, createPrimaryKeyCol(pCxt), pStart, pEnd);
|
||||
}
|
||||
|
||||
SNode* setProjectionAlias(SAstCreateContext* pCxt, SNode* pNode, const SToken* pAlias) {
|
||||
SNode* setProjectionAlias(SAstCreateContext* pCxt, SNode* pNode, SToken* pAlias) {
|
||||
CHECK_PARSER_STATUS(pCxt);
|
||||
trimEscape(pAlias);
|
||||
int32_t len = TMIN(sizeof(((SExprNode*)pNode)->aliasName) - 1, pAlias->n);
|
||||
strncpy(((SExprNode*)pNode)->aliasName, pAlias->z, len);
|
||||
((SExprNode*)pNode)->aliasName[len] = '\0';
|
||||
|
@ -892,6 +894,18 @@ SNode* setDatabaseOption(SAstCreateContext* pCxt, SNode* pOptions, EDatabaseOpti
|
|||
case DB_OPTION_RETENTIONS:
|
||||
((SDatabaseOptions*)pOptions)->pRetentions = pVal;
|
||||
break;
|
||||
case DB_OPTION_WAL_RETENTION_PERIOD:
|
||||
((SDatabaseOptions*)pOptions)->walRetentionPeriod = taosStr2Int32(((SToken*)pVal)->z, NULL, 10);
|
||||
break;
|
||||
case DB_OPTION_WAL_RETENTION_SIZE:
|
||||
((SDatabaseOptions*)pOptions)->walRetentionSize = taosStr2Int32(((SToken*)pVal)->z, NULL, 10);
|
||||
break;
|
||||
case DB_OPTION_WAL_ROLL_PERIOD:
|
||||
((SDatabaseOptions*)pOptions)->walRollPeriod = taosStr2Int32(((SToken*)pVal)->z, NULL, 10);
|
||||
break;
|
||||
case DB_OPTION_WAL_SEGMENT_SIZE:
|
||||
((SDatabaseOptions*)pOptions)->walSegmentSize = taosStr2Int32(((SToken*)pVal)->z, NULL, 10);
|
||||
break;
|
||||
default:
|
||||
break;
|
||||
}
|
||||
|
|
|
@ -739,12 +739,13 @@ static int32_t parseBoundColumns(SInsertParseContext* pCxt, SParsedDataColInfo*
|
|||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
static void buildCreateTbReq(SVCreateTbReq* pTbReq, const char* tname, STag* pTag, int64_t suid, const char* sname, SArray* tagName, uint8_t tagNum) {
|
||||
static void buildCreateTbReq(SVCreateTbReq* pTbReq, const char* tname, STag* pTag, int64_t suid, const char* sname,
|
||||
SArray* tagName, uint8_t tagNum) {
|
||||
pTbReq->type = TD_CHILD_TABLE;
|
||||
pTbReq->name = strdup(tname);
|
||||
pTbReq->ctb.suid = suid;
|
||||
pTbReq->ctb.tagNum = tagNum;
|
||||
if(sname) pTbReq->ctb.name = strdup(sname);
|
||||
if (sname) pTbReq->ctb.name = strdup(sname);
|
||||
pTbReq->ctb.pTag = (uint8_t*)pTag;
|
||||
pTbReq->ctb.tagName = taosArrayDup(tagName);
|
||||
pTbReq->commentLen = -1;
|
||||
|
@ -969,7 +970,7 @@ static int32_t parseTagsClause(SInsertParseContext* pCxt, SSchema* pSchema, uint
|
|||
}
|
||||
|
||||
SSchema* pTagSchema = &pSchema[pCxt->tags.boundColumns[i]];
|
||||
char tmpTokenBuf[TSDB_MAX_BYTES_PER_ROW] = {0}; // todo this can be optimize with parse column
|
||||
char tmpTokenBuf[TSDB_MAX_BYTES_PER_ROW] = {0}; // todo this can be optimize with parse column
|
||||
code = checkAndTrimValue(&sToken, tmpTokenBuf, &pCxt->msg);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
goto end;
|
||||
|
@ -1012,7 +1013,8 @@ static int32_t parseTagsClause(SInsertParseContext* pCxt, SSchema* pSchema, uint
|
|||
goto end;
|
||||
}
|
||||
|
||||
buildCreateTbReq(&pCxt->createTblReq, tName, pTag, pCxt->pTableMeta->suid, pCxt->sTableName, tagName, pCxt->pTableMeta->tableInfo.numOfTags);
|
||||
buildCreateTbReq(&pCxt->createTblReq, tName, pTag, pCxt->pTableMeta->suid, pCxt->sTableName, tagName,
|
||||
pCxt->pTableMeta->tableInfo.numOfTags);
|
||||
|
||||
end:
|
||||
for (int i = 0; i < taosArrayGetSize(pTagVals); ++i) {
|
||||
|
@ -1650,7 +1652,6 @@ static int32_t skipUsingClause(SInsertParseSyntaxCxt* pCxt) {
|
|||
static int32_t collectTableMetaKey(SInsertParseSyntaxCxt* pCxt, SToken* pTbToken) {
|
||||
SName name;
|
||||
CHECK_CODE(createSName(&name, pTbToken, pCxt->pComCxt->acctId, pCxt->pComCxt->db, &pCxt->msg));
|
||||
CHECK_CODE(reserveDbCfgInCache(pCxt->pComCxt->acctId, name.dbname, pCxt->pMetaCache));
|
||||
CHECK_CODE(reserveUserAuthInCacheExt(pCxt->pComCxt->pUser, &name, AUTH_TYPE_WRITE, pCxt->pMetaCache));
|
||||
CHECK_CODE(reserveTableMetaInCacheExt(&name, pCxt->pMetaCache));
|
||||
CHECK_CODE(reserveTableVgroupInCacheExt(&name, pCxt->pMetaCache));
|
||||
|
@ -2332,7 +2333,8 @@ int32_t smlBindData(void* handle, SArray* tags, SArray* colsSchema, SArray* cols
|
|||
return ret;
|
||||
}
|
||||
|
||||
buildCreateTbReq(&smlHandle->tableExecHandle.createTblReq, tableName, pTag, pTableMeta->suid, NULL, tagName, pTableMeta->tableInfo.numOfTags);
|
||||
buildCreateTbReq(&smlHandle->tableExecHandle.createTblReq, tableName, pTag, pTableMeta->suid, NULL, tagName,
|
||||
pTableMeta->tableInfo.numOfTags);
|
||||
taosArrayDestroy(tagName);
|
||||
|
||||
smlHandle->tableExecHandle.createTblReq.ctb.name = taosMemoryMalloc(sTableNameLen + 1);
|
||||
|
|
|
@ -234,6 +234,10 @@ static SKeyword keywordTable[] = {
|
|||
{"VGROUPS", TK_VGROUPS},
|
||||
{"VNODES", TK_VNODES},
|
||||
{"WAL", TK_WAL},
|
||||
{"WAL_RETENTION_PERIOD", TK_WAL_RETENTION_PERIOD},
|
||||
{"WAL_RETENTION_SIZE", TK_WAL_RETENTION_SIZE},
|
||||
{"WAL_ROLL_PERIOD", TK_WAL_ROLL_PERIOD},
|
||||
{"WAL_SEGMENT_SIZE", TK_WAL_SEGMENT_SIZE},
|
||||
{"WATERMARK", TK_WATERMARK},
|
||||
{"WHERE", TK_WHERE},
|
||||
{"WINDOW_CLOSE", TK_WINDOW_CLOSE},
|
||||
|
|
|
@ -2984,6 +2984,10 @@ static int32_t buildCreateDbReq(STranslateContext* pCxt, SCreateDatabaseStmt* pS
|
|||
pReq->cacheLast = pStmt->pOptions->cacheModel;
|
||||
pReq->cacheLastSize = pStmt->pOptions->cacheLastSize;
|
||||
pReq->schemaless = pStmt->pOptions->schemaless;
|
||||
pReq->walRetentionPeriod = pStmt->pOptions->walRetentionPeriod;
|
||||
pReq->walRetentionSize = pStmt->pOptions->walRetentionSize;
|
||||
pReq->walRollPeriod = pStmt->pOptions->walRollPeriod;
|
||||
pReq->walSegmentSize = pStmt->pOptions->walSegmentSize;
|
||||
pReq->ignoreExist = pStmt->ignoreExists;
|
||||
return buildCreateDbRetentions(pStmt->pOptions->pRetentions, pReq);
|
||||
}
|
||||
|
@ -3252,6 +3256,21 @@ static int32_t checkDatabaseOptions(STranslateContext* pCxt, const char* pDbName
|
|||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code = checkDbEnumOption(pCxt, "schemaless", pOptions->schemaless, TSDB_DB_SCHEMALESS_ON, TSDB_DB_SCHEMALESS_OFF);
|
||||
}
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code = checkDbRangeOption(pCxt, "walRetentionPeriod", pOptions->walRetentionPeriod,
|
||||
TSDB_DB_MIN_WAL_RETENTION_PERIOD, INT32_MAX);
|
||||
}
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code = checkDbRangeOption(pCxt, "walRetentionSize", pOptions->walRetentionSize, TSDB_DB_MIN_WAL_RETENTION_SIZE,
|
||||
INT32_MAX);
|
||||
}
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code = checkDbRangeOption(pCxt, "walRollPeriod", pOptions->walRollPeriod, TSDB_DB_MIN_WAL_ROLL_PERIOD, INT32_MAX);
|
||||
}
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code =
|
||||
checkDbRangeOption(pCxt, "walSegmentSize", pOptions->walSegmentSize, TSDB_DB_MIN_WAL_SEGMENT_SIZE, INT32_MAX);
|
||||
}
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code = checkOptionsDependency(pCxt, pDbName, pOptions);
|
||||
}
|
||||
|
|
|
@@ -92,7 +92,7 @@ static char* getSyntaxErrFormat(int32_t errCode) {
    case TSDB_CODE_PAR_INTER_SLIDING_TOO_BIG:
      return "sliding value no larger than the interval value";
    case TSDB_CODE_PAR_INTER_SLIDING_TOO_SMALL:
      return "sliding value can not less than 1% of interval value";
      return "sliding value can not less than 1%% of interval value";
    case TSDB_CODE_PAR_ONLY_ONE_JSON_TAG:
      return "Only one tag if there is a json tag";
    case TSDB_CODE_PAR_INCORRECT_NUM_OF_COL:
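The `%%` escape above is presumably needed because getSyntaxErrFormat() returns a string that is later fed to a printf-style formatter, where a bare `%` would start a conversion specifier. A minimal standalone sketch of the difference (illustrative only, not TDengine code; buffer size and wording are arbitrary):

```c
#include <stdio.h>

int main(void) {
  // The fixed message escapes '%' so a later formatting pass prints it literally.
  const char *fixed = "sliding value can not less than 1%% of interval value";
  char        buf[128];

  // When the message itself is used as the *format* argument, "%%" collapses to
  // a single '%'. An unescaped '%' would instead be parsed as the start of a
  // conversion and try to consume an argument that was never passed.
  snprintf(buf, sizeof(buf), fixed);
  printf("%s\n", buf);  // sliding value can not less than 1% of interval value
  return 0;
}
```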
File diff suppressed because it is too large
@@ -77,6 +77,10 @@ TEST_F(ParserInitialCTest, createBnode) {
 * | WAL value
 * | VGROUPS value
 * | SINGLE_STABLE {0 | 1}
 * | WAL_RETENTION_PERIOD value
 * | WAL_ROLL_PERIOD value
 * | WAL_RETENTION_SIZE value
 * | WAL_SEGMENT_SIZE value
 * }
 */
TEST_F(ParserInitialCTest, createDatabase) {

@@ -149,6 +153,10 @@ TEST_F(ParserInitialCTest, createDatabase) {
    ++expect.numOfRetensions;
  };
  auto setDbSchemalessFunc = [&](int8_t schemaless) { expect.schemaless = schemaless; };
  auto setDbWalRetentionPeriod = [&](int32_t walRetentionPeriod) { expect.walRetentionPeriod = walRetentionPeriod; };
  auto setDbWalRetentionSize = [&](int32_t walRetentionSize) { expect.walRetentionSize = walRetentionSize; };
  auto setDbWalRollPeriod = [&](int32_t walRollPeriod) { expect.walRollPeriod = walRollPeriod; };
  auto setDbWalSegmentSize = [&](int32_t walSegmentSize) { expect.walSegmentSize = walSegmentSize; };

  setCheckDdlFunc([&](const SQuery* pQuery, ParserStage stage) {
    ASSERT_EQ(nodeType(pQuery->pRoot), QUERY_NODE_CREATE_DATABASE_STMT);

@@ -175,6 +183,10 @@ TEST_F(ParserInitialCTest, createDatabase) {
    ASSERT_EQ(req.strict, expect.strict);
    ASSERT_EQ(req.cacheLast, expect.cacheLast);
    ASSERT_EQ(req.cacheLastSize, expect.cacheLastSize);
    ASSERT_EQ(req.walRetentionPeriod, expect.walRetentionPeriod);
    ASSERT_EQ(req.walRetentionSize, expect.walRetentionSize);
    ASSERT_EQ(req.walRollPeriod, expect.walRollPeriod);
    ASSERT_EQ(req.walSegmentSize, expect.walSegmentSize);
    // ASSERT_EQ(req.schemaless, expect.schemaless);
    ASSERT_EQ(req.ignoreExist, expect.ignoreExist);
    ASSERT_EQ(req.numOfRetensions, expect.numOfRetensions);

@@ -219,6 +231,10 @@ TEST_F(ParserInitialCTest, createDatabase) {
  setDbVgroupsFunc(100);
  setDbSingleStableFunc(1);
  setDbSchemalessFunc(1);
  setDbWalRetentionPeriod(-1);
  setDbWalRetentionSize(-1);
  setDbWalRollPeriod(10);
  setDbWalSegmentSize(20);
  run("CREATE DATABASE IF NOT EXISTS wxy_db "
      "BUFFER 64 "
      "CACHEMODEL 'last_value' "

@@ -238,7 +254,11 @@ TEST_F(ParserInitialCTest, createDatabase) {
      "WAL 2 "
      "VGROUPS 100 "
      "SINGLE_STABLE 1 "
      "SCHEMALESS 1");
      "SCHEMALESS 1 "
      "WAL_RETENTION_PERIOD -1 "
      "WAL_RETENTION_SIZE -1 "
      "WAL_ROLL_PERIOD 10 "
      "WAL_SEGMENT_SIZE 20");
  clearCreateDbReq();

  setCreateDbReqFunc("wxy_db", 1);
@@ -144,9 +144,9 @@ TEST_F(ParserSelectTest, IndefiniteRowsFunc) {
TEST_F(ParserSelectTest, IndefiniteRowsFuncSemanticCheck) {
  useDb("root", "test");

  run("SELECT DIFF(c1), c2 FROM t1", TSDB_CODE_PAR_NOT_SINGLE_GROUP);
  run("SELECT DIFF(c1), c2 FROM t1");

  run("SELECT DIFF(c1), tbname FROM t1", TSDB_CODE_PAR_NOT_SINGLE_GROUP);
  run("SELECT DIFF(c1), tbname FROM t1");

  run("SELECT DIFF(c1), count(*) FROM t1", TSDB_CODE_PAR_NOT_ALLOWED_FUNC);
@@ -339,6 +339,7 @@ static int32_t createJoinLogicNode(SLogicPlanContext* pCxt, SSelectStmt* pSelect

  pJoin->joinType = pJoinTable->joinType;
  pJoin->isSingleTableJoin = pJoinTable->table.singleTable;
  pJoin->inputTsOrder = ORDER_ASC;
  pJoin->node.groupAction = GROUP_ACTION_CLEAR;
  pJoin->node.requireDataOrder = DATA_ORDER_LEVEL_GLOBAL;
  pJoin->node.requireDataOrder = DATA_ORDER_LEVEL_GLOBAL;

@@ -625,14 +626,14 @@ static int32_t createInterpFuncLogicNode(SLogicPlanContext* pCxt, SSelectStmt* p

static int32_t createWindowLogicNodeFinalize(SLogicPlanContext* pCxt, SSelectStmt* pSelect, SWindowLogicNode* pWindow,
                                             SLogicNode** pLogicNode) {
  int32_t code = nodesCollectFuncs(pSelect, SQL_CLAUSE_WINDOW, fmIsWindowClauseFunc, &pWindow->pFuncs);

  if (pCxt->pPlanCxt->streamQuery) {
    pWindow->triggerType = pCxt->pPlanCxt->triggerType;
    pWindow->watermark = pCxt->pPlanCxt->watermark;
    pWindow->igExpired = pCxt->pPlanCxt->igExpired;
  }
  pWindow->inputTsOrder = ORDER_ASC;

  int32_t code = nodesCollectFuncs(pSelect, SQL_CLAUSE_WINDOW, fmIsWindowClauseFunc, &pWindow->pFuncs);
  if (TSDB_CODE_SUCCESS == code) {
    code = rewriteExprsForSelect(pWindow->pFuncs, pSelect, SQL_CLAUSE_WINDOW);
  }

@@ -861,7 +862,8 @@ static int32_t createProjectLogicNode(SLogicPlanContext* pCxt, SSelectStmt* pSel

  TSWAP(pProject->node.pLimit, pSelect->pLimit);
  TSWAP(pProject->node.pSlimit, pSelect->pSlimit);
  pProject->node.groupAction = GROUP_ACTION_CLEAR;
  pProject->node.groupAction =
      (!pSelect->isSubquery && pCxt->pPlanCxt->streamQuery) ? GROUP_ACTION_KEEP : GROUP_ACTION_CLEAR;
  pProject->node.requireDataOrder = DATA_ORDER_LEVEL_NONE;
  pProject->node.resultDataOrder = DATA_ORDER_LEVEL_NONE;
@ -993,25 +993,28 @@ static bool sortPriKeyOptMayBeOptimized(SLogicNode* pNode) {
|
|||
}
|
||||
|
||||
static int32_t sortPriKeyOptGetScanNodesImpl(SLogicNode* pNode, bool* pNotOptimize, SNodeList** pScanNodes) {
|
||||
int32_t code = TSDB_CODE_SUCCESS;
|
||||
|
||||
switch (nodeType(pNode)) {
|
||||
case QUERY_NODE_LOGIC_PLAN_SCAN:
|
||||
if (TSDB_SUPER_TABLE != ((SScanLogicNode*)pNode)->tableType) {
|
||||
return nodesListMakeAppend(pScanNodes, (SNode*)pNode);
|
||||
case QUERY_NODE_LOGIC_PLAN_SCAN: {
|
||||
SScanLogicNode* pScan = (SScanLogicNode*)pNode;
|
||||
if (NULL != pScan->pGroupTags) {
|
||||
*pNotOptimize = true;
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
break;
|
||||
case QUERY_NODE_LOGIC_PLAN_JOIN:
|
||||
code =
|
||||
return nodesListMakeAppend(pScanNodes, (SNode*)pNode);
|
||||
}
|
||||
case QUERY_NODE_LOGIC_PLAN_JOIN: {
|
||||
int32_t code =
|
||||
sortPriKeyOptGetScanNodesImpl((SLogicNode*)nodesListGetNode(pNode->pChildren, 0), pNotOptimize, pScanNodes);
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code =
|
||||
sortPriKeyOptGetScanNodesImpl((SLogicNode*)nodesListGetNode(pNode->pChildren, 1), pNotOptimize, pScanNodes);
|
||||
}
|
||||
return code;
|
||||
}
|
||||
case QUERY_NODE_LOGIC_PLAN_AGG:
|
||||
case QUERY_NODE_LOGIC_PLAN_PARTITION:
|
||||
*pNotOptimize = true;
|
||||
return code;
|
||||
return TSDB_CODE_SUCCESS;
|
||||
default:
|
||||
break;
|
||||
}
|
||||
|
@ -1037,17 +1040,33 @@ static EOrder sortPriKeyOptGetPriKeyOrder(SSortLogicNode* pSort) {
|
|||
return ((SOrderByExprNode*)nodesListGetNode(pSort->pSortKeys, 0))->order;
|
||||
}
|
||||
|
||||
static void sortPriKeyOptSetParentOrder(SLogicNode* pNode, EOrder order) {
|
||||
if (NULL == pNode) {
|
||||
return;
|
||||
}
|
||||
if (QUERY_NODE_LOGIC_PLAN_WINDOW == nodeType(pNode)) {
|
||||
((SWindowLogicNode*)pNode)->inputTsOrder = order;
|
||||
} else if (QUERY_NODE_LOGIC_PLAN_JOIN == nodeType(pNode)) {
|
||||
((SJoinLogicNode*)pNode)->inputTsOrder = order;
|
||||
}
|
||||
sortPriKeyOptSetParentOrder(pNode->pParent, order);
|
||||
}
|
||||
|
||||
static int32_t sortPriKeyOptApply(SOptimizeContext* pCxt, SLogicSubplan* pLogicSubplan, SSortLogicNode* pSort,
|
||||
SNodeList* pScanNodes) {
|
||||
EOrder order = sortPriKeyOptGetPriKeyOrder(pSort);
|
||||
if (ORDER_DESC == order) {
|
||||
SNode* pScanNode = NULL;
|
||||
FOREACH(pScanNode, pScanNodes) {
|
||||
SScanLogicNode* pScan = (SScanLogicNode*)pScanNode;
|
||||
if (pScan->scanSeq[0] > 0) {
|
||||
TSWAP(pScan->scanSeq[0], pScan->scanSeq[1]);
|
||||
}
|
||||
SNode* pScanNode = NULL;
|
||||
FOREACH(pScanNode, pScanNodes) {
|
||||
SScanLogicNode* pScan = (SScanLogicNode*)pScanNode;
|
||||
if (ORDER_DESC == order && pScan->scanSeq[0] > 0) {
|
||||
TSWAP(pScan->scanSeq[0], pScan->scanSeq[1]);
|
||||
}
|
||||
if (TSDB_SUPER_TABLE == pScan->tableType) {
|
||||
pScan->scanType = SCAN_TYPE_TABLE_MERGE;
|
||||
pScan->node.resultDataOrder = DATA_ORDER_LEVEL_GLOBAL;
|
||||
pScan->node.requireDataOrder = DATA_ORDER_LEVEL_GLOBAL;
|
||||
}
|
||||
sortPriKeyOptSetParentOrder(pScan->node.pParent, order);
|
||||
}
|
||||
|
||||
SLogicNode* pChild = (SLogicNode*)nodesListGetNode(pSort->node.pChildren, 0);
|
||||
|
@ -1613,10 +1632,10 @@ static void alignProjectionWithTarget(SLogicNode* pNode) {
|
|||
}
|
||||
|
||||
SProjectLogicNode* pProjectNode = (SProjectLogicNode*)pNode;
|
||||
SNode* pProjection = NULL;
|
||||
SNode* pProjection = NULL;
|
||||
FOREACH(pProjection, pProjectNode->pProjections) {
|
||||
SNode* pTarget = NULL;
|
||||
bool keep = false;
|
||||
bool keep = false;
|
||||
FOREACH(pTarget, pNode->pTargets) {
|
||||
if (0 == strcmp(((SColumnNode*)pProjection)->node.aliasName, ((SColumnNode*)pTarget)->colName)) {
|
||||
keep = true;
|
||||
|
@ -2214,7 +2233,7 @@ static bool tagScanMayBeOptimized(SLogicNode* pNode) {
|
|||
!planOptNodeListHasTbname(pAgg->pGroupKeys)) {
|
||||
return false;
|
||||
}
|
||||
|
||||
|
||||
SNode* pGroupKey = NULL;
|
||||
FOREACH(pGroupKey, pAgg->pGroupKeys) {
|
||||
SNode* pGroup = NULL;
|
||||
|
|
|
@ -415,7 +415,6 @@ static int32_t createScanPhysiNodeFinalize(SPhysiPlanContext* pCxt, SSubplan* pS
|
|||
SScanPhysiNode* pScanPhysiNode, SPhysiNode** pPhyNode) {
|
||||
int32_t code = createScanCols(pCxt, pScanPhysiNode, pScanLogicNode->pScanCols);
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
// Data block describe also needs to be set without scanning column, such as SELECT COUNT(*) FROM t
|
||||
code = addDataBlockSlots(pCxt, pScanPhysiNode->pScanCols, pScanPhysiNode->node.pOutputDataBlockDesc);
|
||||
}
|
||||
|
||||
|
@ -622,8 +621,8 @@ static int32_t createScanPhysiNode(SPhysiPlanContext* pCxt, SSubplan* pSubplan,
|
|||
|
||||
static int32_t createJoinPhysiNode(SPhysiPlanContext* pCxt, SNodeList* pChildren, SJoinLogicNode* pJoinLogicNode,
|
||||
SPhysiNode** pPhyNode) {
|
||||
SJoinPhysiNode* pJoin =
|
||||
(SJoinPhysiNode*)makePhysiNode(pCxt, (SLogicNode*)pJoinLogicNode, QUERY_NODE_PHYSICAL_PLAN_MERGE_JOIN);
|
||||
SSortMergeJoinPhysiNode* pJoin =
|
||||
(SSortMergeJoinPhysiNode*)makePhysiNode(pCxt, (SLogicNode*)pJoinLogicNode, QUERY_NODE_PHYSICAL_PLAN_MERGE_JOIN);
|
||||
if (NULL == pJoin) {
|
||||
return TSDB_CODE_OUT_OF_MEMORY;
|
||||
}
|
||||
|
@ -975,6 +974,9 @@ static int32_t createInterpFuncPhysiNode(SPhysiPlanContext* pCxt, SNodeList* pCh
|
|||
}
|
||||
|
||||
static bool projectCanMergeDataBlock(SProjectLogicNode* pProject) {
|
||||
if (GROUP_ACTION_KEEP == pProject->node.groupAction) {
|
||||
return false;
|
||||
}
|
||||
if (DATA_ORDER_LEVEL_NONE == pProject->node.resultDataOrder) {
|
||||
return true;
|
||||
}
|
||||
|
|
|
@ -469,7 +469,7 @@ static int32_t stbSplCreateExchangeNode(SSplitContext* pCxt, SLogicNode* pParent
|
|||
return code;
|
||||
}
|
||||
|
||||
static int32_t stbSplCreateMergeKeysByPrimaryKey(SNode* pPrimaryKey, SNodeList** pMergeKeys) {
|
||||
static int32_t stbSplCreateMergeKeysByPrimaryKey(SNode* pPrimaryKey, EOrder order, SNodeList** pMergeKeys) {
|
||||
SOrderByExprNode* pMergeKey = (SOrderByExprNode*)nodesMakeNode(QUERY_NODE_ORDER_BY_EXPR);
|
||||
if (NULL == pMergeKey) {
|
||||
return TSDB_CODE_OUT_OF_MEMORY;
|
||||
|
@ -479,7 +479,7 @@ static int32_t stbSplCreateMergeKeysByPrimaryKey(SNode* pPrimaryKey, SNodeList**
|
|||
nodesDestroyNode((SNode*)pMergeKey);
|
||||
return TSDB_CODE_OUT_OF_MEMORY;
|
||||
}
|
||||
pMergeKey->order = ORDER_ASC;
|
||||
pMergeKey->order = order;
|
||||
pMergeKey->nullOrder = NULL_ORDER_FIRST;
|
||||
return nodesListMakeStrictAppend(pMergeKeys, (SNode*)pMergeKey);
|
||||
}
|
||||
|
@ -491,7 +491,8 @@ static int32_t stbSplSplitIntervalForBatch(SSplitContext* pCxt, SStableSplitInfo
|
|||
((SWindowLogicNode*)pPartWindow)->windowAlgo = INTERVAL_ALGO_HASH;
|
||||
((SWindowLogicNode*)pInfo->pSplitNode)->windowAlgo = INTERVAL_ALGO_MERGE;
|
||||
SNodeList* pMergeKeys = NULL;
|
||||
code = stbSplCreateMergeKeysByPrimaryKey(((SWindowLogicNode*)pInfo->pSplitNode)->pTspk, &pMergeKeys);
|
||||
code = stbSplCreateMergeKeysByPrimaryKey(((SWindowLogicNode*)pInfo->pSplitNode)->pTspk,
|
||||
((SWindowLogicNode*)pInfo->pSplitNode)->inputTsOrder, &pMergeKeys);
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code = stbSplCreateMergeNode(pCxt, NULL, pInfo->pSplitNode, pMergeKeys, pPartWindow, true);
|
||||
}
|
||||
|
@ -579,7 +580,8 @@ static int32_t stbSplSplitSessionOrStateForBatch(SSplitContext* pCxt, SStableSpl
|
|||
SLogicNode* pChild = (SLogicNode*)nodesListGetNode(pWindow->pChildren, 0);
|
||||
|
||||
SNodeList* pMergeKeys = NULL;
|
||||
int32_t code = stbSplCreateMergeKeysByPrimaryKey(((SWindowLogicNode*)pWindow)->pTspk, &pMergeKeys);
|
||||
int32_t code = stbSplCreateMergeKeysByPrimaryKey(((SWindowLogicNode*)pWindow)->pTspk,
|
||||
((SWindowLogicNode*)pWindow)->inputTsOrder, &pMergeKeys);
|
||||
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code = stbSplCreateMergeNode(pCxt, pInfo->pSubplan, pChild, pMergeKeys, (SLogicNode*)pChild, true);
|
||||
|
@ -913,27 +915,70 @@ static int32_t stbSplSplitScanNodeWithPartTags(SSplitContext* pCxt, SStableSplit
|
|||
}
|
||||
|
||||
static SNode* stbSplFindPrimaryKeyFromScan(SScanLogicNode* pScan) {
|
||||
bool find = false;
|
||||
SNode* pCol = NULL;
|
||||
FOREACH(pCol, pScan->pScanCols) {
|
||||
if (PRIMARYKEY_TIMESTAMP_COL_ID == ((SColumnNode*)pCol)->colId) {
|
||||
find = true;
|
||||
break;
|
||||
}
|
||||
}
|
||||
if (!find) {
|
||||
return NULL;
|
||||
}
|
||||
SNode* pTarget = NULL;
|
||||
FOREACH(pTarget, pScan->node.pTargets) {
|
||||
if (nodesEqualNode(pTarget, pCol)) {
|
||||
return pCol;
|
||||
}
|
||||
}
|
||||
return NULL;
|
||||
nodesListStrictAppend(pScan->node.pTargets, nodesCloneNode(pCol));
|
||||
return pCol;
|
||||
}
|
||||
|
||||
static int32_t stbSplCreateMergeScanNode(SScanLogicNode* pScan, SLogicNode** pOutputMergeScan,
|
||||
SNodeList** pOutputMergeKeys) {
|
||||
SNodeList* pChildren = pScan->node.pChildren;
|
||||
pScan->node.pChildren = NULL;
|
||||
|
||||
int32_t code = TSDB_CODE_SUCCESS;
|
||||
SScanLogicNode* pMergeScan = (SScanLogicNode*)nodesCloneNode((SNode*)pScan);
|
||||
if (NULL == pMergeScan) {
|
||||
code = TSDB_CODE_OUT_OF_MEMORY;
|
||||
}
|
||||
|
||||
SNodeList* pMergeKeys = NULL;
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
pMergeScan->scanType = SCAN_TYPE_TABLE_MERGE;
|
||||
pMergeScan->node.pChildren = pChildren;
|
||||
splSetParent((SLogicNode*)pMergeScan);
|
||||
code = stbSplCreateMergeKeysByPrimaryKey(stbSplFindPrimaryKeyFromScan(pMergeScan),
|
||||
pMergeScan->scanSeq[0] > 0 ? ORDER_ASC : ORDER_DESC, &pMergeKeys);
|
||||
}
|
||||
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
*pOutputMergeScan = (SLogicNode*)pMergeScan;
|
||||
*pOutputMergeKeys = pMergeKeys;
|
||||
} else {
|
||||
nodesDestroyNode((SNode*)pMergeScan);
|
||||
nodesDestroyList(pMergeKeys);
|
||||
}
|
||||
|
||||
return code;
|
||||
}
|
||||
|
||||
static int32_t stbSplSplitMergeScanNode(SSplitContext* pCxt, SLogicSubplan* pSubplan, SScanLogicNode* pScan,
|
||||
bool groupSort) {
|
||||
SNodeList* pMergeKeys = NULL;
|
||||
int32_t code = stbSplCreateMergeKeysByPrimaryKey(stbSplFindPrimaryKeyFromScan(pScan), &pMergeKeys);
|
||||
SLogicNode* pMergeScan = NULL;
|
||||
SNodeList* pMergeKeys = NULL;
|
||||
int32_t code = stbSplCreateMergeScanNode(pScan, &pMergeScan, &pMergeKeys);
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code = stbSplCreateMergeNode(pCxt, pSubplan, (SLogicNode*)pScan, pMergeKeys, (SLogicNode*)pScan, groupSort);
|
||||
code = stbSplCreateMergeNode(pCxt, pSubplan, (SLogicNode*)pScan, pMergeKeys, pMergeScan, groupSort);
|
||||
}
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code = nodesListMakeStrictAppend(&pSubplan->pChildren,
|
||||
(SNode*)splCreateScanSubplan(pCxt, (SLogicNode*)pScan, SPLIT_FLAG_STABLE_SPLIT));
|
||||
(SNode*)splCreateScanSubplan(pCxt, pMergeScan, SPLIT_FLAG_STABLE_SPLIT));
|
||||
}
|
||||
pScan->scanType = SCAN_TYPE_TABLE_MERGE;
|
||||
++(pCxt->groupId);
|
||||
return code;
|
||||
}
|
||||
|
@ -978,14 +1023,14 @@ static int32_t stbSplSplitJoinNode(SSplitContext* pCxt, SStableSplitInfo* pInfo)
|
|||
}
|
||||
|
||||
static int32_t stbSplCreateMergeKeysForPartitionNode(SLogicNode* pPart, SNodeList** pMergeKeys) {
|
||||
SNode* pPrimaryKey =
|
||||
nodesCloneNode(stbSplFindPrimaryKeyFromScan((SScanLogicNode*)nodesListGetNode(pPart->pChildren, 0)));
|
||||
SScanLogicNode* pScan = (SScanLogicNode*)nodesListGetNode(pPart->pChildren, 0);
|
||||
SNode* pPrimaryKey = nodesCloneNode(stbSplFindPrimaryKeyFromScan(pScan));
|
||||
if (NULL == pPrimaryKey) {
|
||||
return TSDB_CODE_OUT_OF_MEMORY;
|
||||
}
|
||||
int32_t code = nodesListAppend(pPart->pTargets, pPrimaryKey);
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code = stbSplCreateMergeKeysByPrimaryKey(pPrimaryKey, pMergeKeys);
|
||||
code = stbSplCreateMergeKeysByPrimaryKey(pPrimaryKey, pScan->scanSeq[0] > 0 ? ORDER_ASC : ORDER_DESC, pMergeKeys);
|
||||
}
|
||||
return code;
|
||||
}
|
||||
|
|
|
@@ -124,7 +124,8 @@ int32_t replaceLogicNode(SLogicSubplan* pSubplan, SLogicNode* pOld, SLogicNode*
}

static int32_t adjustScanDataRequirement(SScanLogicNode* pScan, EDataOrderLevel requirement) {
  if (SCAN_TYPE_TABLE != pScan->scanType && SCAN_TYPE_TABLE_MERGE != pScan->scanType) {
  if ((SCAN_TYPE_TABLE != pScan->scanType && SCAN_TYPE_TABLE_MERGE != pScan->scanType) ||
      DATA_ORDER_LEVEL_GLOBAL == pScan->node.requireDataOrder) {
    return TSDB_CODE_SUCCESS;
  }
  // The lowest sort level of scan output data is DATA_ORDER_LEVEL_IN_BLOCK
@@ -24,9 +24,10 @@ TEST_F(PlanBasicTest, selectClause) {
  useDb("root", "test");

  run("SELECT * FROM t1");
  run("SELECT 1 FROM t1");
  run("SELECT * FROM st1");
  run("SELECT 1 FROM st1");

  run("SELECT MAX(c1) c2, c2 FROM t1");

  run("SELECT MAX(c1) c2, c2 FROM st1");
}

TEST_F(PlanBasicTest, whereClause) {
@@ -53,6 +53,8 @@ TEST_F(PlanOptimizeTest, sortPrimaryKey) {

  run("SELECT c1 FROM t1 ORDER BY ts");

  run("SELECT c1 FROM st1 ORDER BY ts");

  run("SELECT c1 FROM t1 ORDER BY ts DESC");

  run("SELECT COUNT(*) FROM t1 INTERVAL(10S) ORDER BY _WSTART DESC");
@ -66,7 +66,7 @@ int32_t qwHandleTaskComplete(QW_FPARAMS_DEF, SQWTaskCtx *ctx) {
|
|||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
int32_t qwExecTask(QW_FPARAMS_DEF, SQWTaskCtx *ctx, bool *queryEnd) {
|
||||
int32_t qwExecTask(QW_FPARAMS_DEF, SQWTaskCtx *ctx, bool *queryStop) {
|
||||
int32_t code = 0;
|
||||
bool qcontinue = true;
|
||||
SSDataBlock *pRes = NULL;
|
||||
|
@ -104,8 +104,8 @@ int32_t qwExecTask(QW_FPARAMS_DEF, SQWTaskCtx *ctx, bool *queryEnd) {
|
|||
|
||||
QW_ERR_RET(qwHandleTaskComplete(QW_FPARAMS(), ctx));
|
||||
|
||||
if (queryEnd) {
|
||||
*queryEnd = true;
|
||||
if (queryStop) {
|
||||
*queryStop = true;
|
||||
}
|
||||
|
||||
break;
|
||||
|
@ -125,6 +125,10 @@ int32_t qwExecTask(QW_FPARAMS_DEF, SQWTaskCtx *ctx, bool *queryEnd) {
|
|||
QW_TASK_DLOG("data put into sink, rows:%d, continueExecTask:%d", rows, qcontinue);
|
||||
|
||||
if (!qcontinue) {
|
||||
if (queryStop) {
|
||||
*queryStop = true;
|
||||
}
|
||||
|
||||
break;
|
||||
}
|
||||
|
||||
|
@ -279,6 +283,8 @@ int32_t qwGetDeleteResFromSink(QW_FPARAMS_DEF, SQWTaskCtx *ctx, SDeleteRes *pRes
|
|||
pRes->skey = pDelRes->skey;
|
||||
pRes->ekey = pDelRes->ekey;
|
||||
pRes->affectedRows = pDelRes->affectedRows;
|
||||
|
||||
taosMemoryFree(output.pData);
|
||||
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
@ -566,7 +572,7 @@ int32_t qwProcessCQuery(QW_FPARAMS_DEF, SQWMsg *qwMsg) {
|
|||
SQWPhaseInput input = {0};
|
||||
void *rsp = NULL;
|
||||
int32_t dataLen = 0;
|
||||
bool queryEnd = false;
|
||||
bool queryStop = false;
|
||||
|
||||
do {
|
||||
QW_ERR_JRET(qwHandlePrePhaseEvents(QW_FPARAMS(), QW_PHASE_PRE_CQUERY, &input, NULL));
|
||||
|
@ -576,7 +582,7 @@ int32_t qwProcessCQuery(QW_FPARAMS_DEF, SQWMsg *qwMsg) {
|
|||
atomic_store_8((int8_t *)&ctx->queryInQueue, 0);
|
||||
atomic_store_8((int8_t *)&ctx->queryContinue, 0);
|
||||
|
||||
QW_ERR_JRET(qwExecTask(QW_FPARAMS(), ctx, &queryEnd));
|
||||
QW_ERR_JRET(qwExecTask(QW_FPARAMS(), ctx, &queryStop));
|
||||
|
||||
if (QW_EVENT_RECEIVED(ctx, QW_EVENT_FETCH)) {
|
||||
SOutputData sOutput = {0};
|
||||
|
@ -627,7 +633,7 @@ int32_t qwProcessCQuery(QW_FPARAMS_DEF, SQWMsg *qwMsg) {
|
|||
}
|
||||
|
||||
QW_LOCK(QW_WRITE, &ctx->lock);
|
||||
if (queryEnd || code || 0 == atomic_load_8((int8_t *)&ctx->queryContinue)) {
|
||||
if (queryStop || code || 0 == atomic_load_8((int8_t *)&ctx->queryContinue)) {
|
||||
// Note: query is not running anymore
|
||||
QW_SET_PHASE(ctx, 0);
|
||||
QW_UNLOCK(QW_WRITE, &ctx->lock);
|
||||
|
|
|
@ -26,7 +26,7 @@ static int32_t streamTaskExecImpl(SStreamTask* pTask, void* data, SArray* pRes)
|
|||
} else if (pItem->type == STREAM_INPUT__DATA_SUBMIT) {
|
||||
ASSERT(pTask->isDataScan);
|
||||
SStreamDataSubmit* pSubmit = (SStreamDataSubmit*)data;
|
||||
qDebug("task %d %p set submit input %p %p %d", pTask->taskId, pTask, pSubmit, pSubmit->data, *pSubmit->dataRef);
|
||||
qDebug("task %d %p set submit input %p %p %d 1", pTask->taskId, pTask, pSubmit, pSubmit->data, *pSubmit->dataRef);
|
||||
qSetStreamInput(exec, pSubmit->data, STREAM_INPUT__DATA_SUBMIT, false);
|
||||
} else if (pItem->type == STREAM_INPUT__DATA_BLOCK || pItem->type == STREAM_INPUT__DATA_RETRIEVE) {
|
||||
SStreamDataBlock* pBlock = (SStreamDataBlock*)data;
|
||||
|
@ -72,6 +72,8 @@ static int32_t streamTaskExecImpl(SStreamTask* pTask, void* data, SArray* pRes)
|
|||
continue;
|
||||
}
|
||||
|
||||
qDebug("task %d(child %d) executed and get block");
|
||||
|
||||
SSDataBlock block = {0};
|
||||
assignOneDataBlock(&block, output);
|
||||
block.info.childId = pTask->selfChildId;
|
||||
|
@ -188,7 +190,7 @@ static SArray* streamExecForQall(SStreamTask* pTask, SArray* pRes) {
|
|||
if (pTask->execType == TASK_EXEC__NONE) {
|
||||
ASSERT(((SStreamQueueItem*)data)->type == STREAM_INPUT__DATA_BLOCK);
|
||||
streamTaskOutput(pTask, data);
|
||||
return pRes;
|
||||
continue;
|
||||
}
|
||||
|
||||
qDebug("stream task %d exec begin, msg batch: %d", pTask->taskId, cnt);
|
||||
|
|
|
@@ -238,6 +238,7 @@ int32_t syncNodeGetPreIndexTerm(SSyncNode* pSyncNode, SyncIndex index, SyncInd

bool syncNodeIsOptimizedOneReplica(SSyncNode* ths, SRpcMsg* pMsg);
int32_t syncNodeCommit(SSyncNode* ths, SyncIndex beginIndex, SyncIndex endIndex, uint64_t flag);
int32_t syncNodePreCommit(SSyncNode* ths, SSyncRaftEntry* pEntry, int32_t code);

int32_t syncNodeUpdateNewConfigIndex(SSyncNode* ths, SSyncCfg* pNewCfg);
@ -26,6 +26,7 @@ extern "C" {
|
|||
#include "syncInt.h"
|
||||
#include "syncMessage.h"
|
||||
#include "taosdef.h"
|
||||
#include "tref.h"
|
||||
#include "tskiplist.h"
|
||||
|
||||
typedef struct SSyncRaftEntry {
|
||||
|
@ -89,6 +90,7 @@ typedef struct SRaftEntryCache {
|
|||
SSkipList* pSkipList;
|
||||
int32_t maxCount;
|
||||
int32_t currentCount;
|
||||
int32_t refMgr;
|
||||
TdThreadMutex mutex;
|
||||
SSyncNode* pSyncNode;
|
||||
} SRaftEntryCache;
|
||||
|
|
|
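The new `refMgr` field in SRaftEntryCache ties into a change visible further down in this diff: cached raft entries are handed out through the tref.h reference-count helpers (taosOpenRef, taosAddRef, taosAcquireRef, taosReleaseRef, taosRemoveRef, taosCloseRef) instead of being destroyed directly while a reader may still hold them. A minimal sketch of that lifecycle with a throwaway struct (illustrative only; it assumes the tref.h API as exercised by the test code later in this diff and a build inside the TDengine tree, not the actual cache logic):

```c
#include <stdio.h>
#include <stdlib.h>
#include "tref.h"  // TDengine ref-count helpers; assumes building inside the source tree

typedef struct { int value; } DemoObj;  // hypothetical payload, stands in for SSyncRaftEntry

// Destructor invoked by the ref manager once the last reference is gone.
static void demoFree(void *param) {
  DemoObj *pObj = param;
  printf("freeing obj %d\n", pObj->value);
  free(pObj);
}

int main(void) {
  int32_t refMgr = taosOpenRef(10, demoFree);   // like pCache->refMgr in the diff

  DemoObj *pObj = malloc(sizeof(DemoObj));
  pObj->value = 42;
  int64_t rid = taosAddRef(refMgr, pObj);       // the cache owns one reference

  DemoObj *pUse = taosAcquireRef(refMgr, rid);  // a reader takes its own reference
  printf("using obj %d\n", pUse->value);
  taosReleaseRef(refMgr, rid);                  // reader done

  taosRemoveRef(refMgr, rid);                   // cache eviction drops the owning ref;
                                                // demoFree runs once the count reaches zero
  taosCloseRef(refMgr);
  return 0;
}
```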
@ -244,22 +244,7 @@ int32_t syncNodeOnAppendEntriesCb(SSyncNode* ths, SyncAppendEntries* pMsg) {
|
|||
ths->pLogStore->appendEntry(ths->pLogStore, pAppendEntry);
|
||||
|
||||
// pre commit
|
||||
SRpcMsg rpcMsg;
|
||||
syncEntry2OriginalRpc(pAppendEntry, &rpcMsg);
|
||||
if (ths->pFsm != NULL) {
|
||||
// if (ths->pFsm->FpPreCommitCb != NULL && pAppendEntry->originalRpcType != TDMT_SYNC_NOOP) {
|
||||
if (ths->pFsm->FpPreCommitCb != NULL && syncUtilUserPreCommit(pAppendEntry->originalRpcType)) {
|
||||
SFsmCbMeta cbMeta = {0};
|
||||
cbMeta.index = pAppendEntry->index;
|
||||
cbMeta.lastConfigIndex = syncNodeGetSnapshotConfigIndex(ths, cbMeta.index);
|
||||
cbMeta.isWeak = pAppendEntry->isWeak;
|
||||
cbMeta.code = 2;
|
||||
cbMeta.state = ths->state;
|
||||
cbMeta.seqNum = pAppendEntry->seqNum;
|
||||
ths->pFsm->FpPreCommitCb(ths->pFsm, &rpcMsg, cbMeta);
|
||||
}
|
||||
}
|
||||
rpcFreeCont(rpcMsg.pCont);
|
||||
syncNodePreCommit(ths, pAppendEntry, 0);
|
||||
}
|
||||
|
||||
// free memory
|
||||
|
@ -280,22 +265,7 @@ int32_t syncNodeOnAppendEntriesCb(SSyncNode* ths, SyncAppendEntries* pMsg) {
|
|||
ths->pLogStore->appendEntry(ths->pLogStore, pAppendEntry);
|
||||
|
||||
// pre commit
|
||||
SRpcMsg rpcMsg;
|
||||
syncEntry2OriginalRpc(pAppendEntry, &rpcMsg);
|
||||
if (ths->pFsm != NULL) {
|
||||
// if (ths->pFsm->FpPreCommitCb != NULL && pAppendEntry->originalRpcType != TDMT_SYNC_NOOP) {
|
||||
if (ths->pFsm->FpPreCommitCb != NULL && syncUtilUserPreCommit(pAppendEntry->originalRpcType)) {
|
||||
SFsmCbMeta cbMeta = {0};
|
||||
cbMeta.index = pAppendEntry->index;
|
||||
cbMeta.lastConfigIndex = syncNodeGetSnapshotConfigIndex(ths, cbMeta.index);
|
||||
cbMeta.isWeak = pAppendEntry->isWeak;
|
||||
cbMeta.code = 3;
|
||||
cbMeta.state = ths->state;
|
||||
cbMeta.seqNum = pAppendEntry->seqNum;
|
||||
ths->pFsm->FpPreCommitCb(ths->pFsm, &rpcMsg, cbMeta);
|
||||
}
|
||||
}
|
||||
rpcFreeCont(rpcMsg.pCont);
|
||||
syncNodePreCommit(ths, pAppendEntry, 0);
|
||||
|
||||
// free memory
|
||||
syncEntryDestory(pAppendEntry);
|
||||
|
@ -440,7 +410,7 @@ static int32_t syncNodeDoMakeLogSame(SSyncNode* ths, SyncIndex FromIndex) {
|
|||
return code;
|
||||
}
|
||||
|
||||
static int32_t syncNodePreCommit(SSyncNode* ths, SSyncRaftEntry* pEntry) {
|
||||
int32_t syncNodePreCommit(SSyncNode* ths, SSyncRaftEntry* pEntry, int32_t code) {
|
||||
SRpcMsg rpcMsg;
|
||||
syncEntry2OriginalRpc(pEntry, &rpcMsg);
|
||||
|
||||
|
@ -456,7 +426,7 @@ static int32_t syncNodePreCommit(SSyncNode* ths, SSyncRaftEntry* pEntry) {
|
|||
cbMeta.index = pEntry->index;
|
||||
cbMeta.lastConfigIndex = syncNodeGetSnapshotConfigIndex(ths, cbMeta.index);
|
||||
cbMeta.isWeak = pEntry->isWeak;
|
||||
cbMeta.code = 2;
|
||||
cbMeta.code = code;
|
||||
cbMeta.state = ths->state;
|
||||
cbMeta.seqNum = pEntry->seqNum;
|
||||
ths->pFsm->FpPreCommitCb(ths->pFsm, &rpcMsg, cbMeta);
|
||||
|
@ -594,7 +564,7 @@ int32_t syncNodeOnAppendEntriesSnapshot2Cb(SSyncNode* ths, SyncAppendEntriesBatc
|
|||
return -1;
|
||||
}
|
||||
|
||||
code = syncNodePreCommit(ths, pAppendEntry);
|
||||
code = syncNodePreCommit(ths, pAppendEntry, 0);
|
||||
ASSERT(code == 0);
|
||||
|
||||
// syncEntryDestory(pAppendEntry);
|
||||
|
@ -715,7 +685,7 @@ int32_t syncNodeOnAppendEntriesSnapshot2Cb(SSyncNode* ths, SyncAppendEntriesBatc
|
|||
return -1;
|
||||
}
|
||||
|
||||
code = syncNodePreCommit(ths, pAppendEntry);
|
||||
code = syncNodePreCommit(ths, pAppendEntry, 0);
|
||||
ASSERT(code == 0);
|
||||
|
||||
// syncEntryDestory(pAppendEntry);
|
||||
|
@ -919,7 +889,7 @@ int32_t syncNodeOnAppendEntriesSnapshotCb(SSyncNode* ths, SyncAppendEntries* pMs
|
|||
}
|
||||
|
||||
// pre commit
|
||||
code = syncNodePreCommit(ths, pAppendEntry);
|
||||
code = syncNodePreCommit(ths, pAppendEntry, 0);
|
||||
ASSERT(code == 0);
|
||||
|
||||
// update match index
|
||||
|
@ -1032,7 +1002,7 @@ int32_t syncNodeOnAppendEntriesSnapshotCb(SSyncNode* ths, SyncAppendEntries* pMs
|
|||
}
|
||||
|
||||
// pre commit
|
||||
code = syncNodePreCommit(ths, pAppendEntry);
|
||||
code = syncNodePreCommit(ths, pAppendEntry, 0);
|
||||
ASSERT(code == 0);
|
||||
|
||||
syncEntryDestory(pAppendEntry);
|
||||
|
|
|
@ -67,11 +67,6 @@ void syncMaybeAdvanceCommitIndex(SSyncNode* pSyncNode) {
|
|||
for (SyncIndex index = syncNodeGetLastIndex(pSyncNode); index > pSyncNode->commitIndex; --index) {
|
||||
bool agree = syncAgree(pSyncNode, index);
|
||||
|
||||
if (gRaftDetailLog) {
|
||||
sTrace("syncMaybeAdvanceCommitIndex syncAgree:%d, index:%" PRId64 ", pSyncNode->commitIndex:%" PRId64, agree,
|
||||
index, pSyncNode->commitIndex);
|
||||
}
|
||||
|
||||
if (agree) {
|
||||
// term
|
||||
SSyncRaftEntry* pEntry = pSyncNode->pLogStore->getEntry(pSyncNode->pLogStore, index);
|
||||
|
@ -82,20 +77,15 @@ void syncMaybeAdvanceCommitIndex(SSyncNode* pSyncNode) {
|
|||
// update commit index
|
||||
newCommitIndex = index;
|
||||
|
||||
if (gRaftDetailLog) {
|
||||
sTrace("syncMaybeAdvanceCommitIndex maybe to update, newCommitIndex:%" PRId64
|
||||
" commit, pSyncNode->commitIndex:%" PRId64,
|
||||
newCommitIndex, pSyncNode->commitIndex);
|
||||
}
|
||||
|
||||
syncEntryDestory(pEntry);
|
||||
break;
|
||||
} else {
|
||||
if (gRaftDetailLog) {
|
||||
sTrace("syncMaybeAdvanceCommitIndex can not commit due to term not equal, pEntry->term:%" PRIu64
|
||||
", pSyncNode->pRaftStore->currentTerm:%" PRIu64,
|
||||
pEntry->term, pSyncNode->pRaftStore->currentTerm);
|
||||
}
|
||||
do {
|
||||
char logBuf[128];
|
||||
snprintf(logBuf, sizeof(logBuf), "can not commit due to term not equal, index:%ld, term:%lu", pEntry->index,
|
||||
pEntry->term);
|
||||
syncNodeEventLog(pSyncNode, logBuf);
|
||||
} while (0);
|
||||
}
|
||||
|
||||
syncEntryDestory(pEntry);
|
||||
|
@ -107,10 +97,6 @@ void syncMaybeAdvanceCommitIndex(SSyncNode* pSyncNode) {
|
|||
SyncIndex beginIndex = pSyncNode->commitIndex + 1;
|
||||
SyncIndex endIndex = newCommitIndex;
|
||||
|
||||
if (gRaftDetailLog) {
|
||||
sTrace("syncMaybeAdvanceCommitIndex sync commit %" PRId64, newCommitIndex);
|
||||
}
|
||||
|
||||
// update commit index
|
||||
pSyncNode->commitIndex = newCommitIndex;
|
||||
|
||||
|
|
|
@@ -242,9 +242,9 @@ static int32_t syncIOStopInternal(SSyncIO *io) {
}

static void *syncIOConsumerFunc(void *param) {
  SSyncIO *io = param;
  SSyncIO * io = param;
  STaosQall *qall = taosAllocateQall();
  SRpcMsg *pRpcMsg, rpcMsg;
  SRpcMsg * pRpcMsg, rpcMsg;
  SQueueInfo qinfo = {0};

  while (1) {
@@ -125,7 +125,7 @@ cJSON *syncIndexMgr2Json(SSyncIndexMgr *pSyncIndexMgr) {

char *syncIndexMgr2Str(SSyncIndexMgr *pSyncIndexMgr) {
  cJSON *pJson = syncIndexMgr2Json(pSyncIndexMgr);
  char *serialized = cJSON_Print(pJson);
  char * serialized = cJSON_Print(pJson);
  cJSON_Delete(pJson);
  return serialized;
}
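The `*2Str` helpers touched in this and the neighbouring hunks all follow the same cJSON ownership pattern: build a tree, serialize it with cJSON_Print, delete the tree, and return the heap-allocated string for the caller to release. A minimal standalone sketch (plain cJSON with hypothetical field names, not TDengine code):

```c
#include <stdio.h>
#include <stdlib.h>
#include "cJSON.h"  // assumes the cJSON library is available on the include path

// Same shape as syncIndexMgr2Str(): the returned buffer is owned by the caller.
static char *demo2Str(void) {
  cJSON *pJson = cJSON_CreateObject();
  cJSON_AddStringToObject(pJson, "module", "sync");   // illustrative fields only
  cJSON_AddNumberToObject(pJson, "replicaNum", 3);

  char *serialized = cJSON_Print(pJson);  // heap-allocated by cJSON
  cJSON_Delete(pJson);                    // the tree itself is no longer needed
  return serialized;                      // caller must release it
}

int main(void) {
  char *s = demo2Str();
  printf("%s\n", s);
  free(s);  // cJSON_Print allocates with malloc by default, so free() is the usual pairing
  return 0;
}
```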
@ -2504,22 +2504,7 @@ int32_t syncNodeOnClientRequestCb(SSyncNode* ths, SyncClientRequest* pMsg, SyncI
|
|||
}
|
||||
|
||||
// pre commit
|
||||
SRpcMsg rpcMsg;
|
||||
syncEntry2OriginalRpc(pEntry, &rpcMsg);
|
||||
|
||||
if (ths->pFsm != NULL) {
|
||||
if (ths->pFsm->FpPreCommitCb != NULL && syncUtilUserPreCommit(pEntry->originalRpcType)) {
|
||||
SFsmCbMeta cbMeta = {0};
|
||||
cbMeta.index = pEntry->index;
|
||||
cbMeta.lastConfigIndex = syncNodeGetSnapshotConfigIndex(ths, cbMeta.index);
|
||||
cbMeta.isWeak = pEntry->isWeak;
|
||||
cbMeta.code = 0;
|
||||
cbMeta.state = ths->state;
|
||||
cbMeta.seqNum = pEntry->seqNum;
|
||||
ths->pFsm->FpPreCommitCb(ths->pFsm, &rpcMsg, cbMeta);
|
||||
}
|
||||
}
|
||||
rpcFreeCont(rpcMsg.pCont);
|
||||
syncNodePreCommit(ths, pEntry, 0);
|
||||
|
||||
// if only myself, maybe commit right now
|
||||
if (ths->replicaNum == 1) {
|
||||
|
@ -2528,22 +2513,7 @@ int32_t syncNodeOnClientRequestCb(SSyncNode* ths, SyncClientRequest* pMsg, SyncI
|
|||
|
||||
} else {
|
||||
// pre commit
|
||||
SRpcMsg rpcMsg;
|
||||
syncEntry2OriginalRpc(pEntry, &rpcMsg);
|
||||
|
||||
if (ths->pFsm != NULL) {
|
||||
if (ths->pFsm->FpPreCommitCb != NULL && syncUtilUserPreCommit(pEntry->originalRpcType)) {
|
||||
SFsmCbMeta cbMeta = {0};
|
||||
cbMeta.index = pEntry->index;
|
||||
cbMeta.lastConfigIndex = syncNodeGetSnapshotConfigIndex(ths, cbMeta.index);
|
||||
cbMeta.isWeak = pEntry->isWeak;
|
||||
cbMeta.code = 1;
|
||||
cbMeta.state = ths->state;
|
||||
cbMeta.seqNum = pEntry->seqNum;
|
||||
ths->pFsm->FpPreCommitCb(ths->pFsm, &rpcMsg, cbMeta);
|
||||
}
|
||||
}
|
||||
rpcFreeCont(rpcMsg.pCont);
|
||||
syncNodePreCommit(ths, pEntry, 0);
|
||||
}
|
||||
|
||||
if (pRetIndex != NULL) {
|
||||
|
|
|
@ -101,7 +101,7 @@ cJSON *syncCfg2Json(SSyncCfg *pSyncCfg) {
|
|||
|
||||
char *syncCfg2Str(SSyncCfg *pSyncCfg) {
|
||||
cJSON *pJson = syncCfg2Json(pSyncCfg);
|
||||
char *serialized = cJSON_Print(pJson);
|
||||
char * serialized = cJSON_Print(pJson);
|
||||
cJSON_Delete(pJson);
|
||||
return serialized;
|
||||
}
|
||||
|
@ -109,7 +109,7 @@ char *syncCfg2Str(SSyncCfg *pSyncCfg) {
|
|||
char *syncCfg2SimpleStr(SSyncCfg *pSyncCfg) {
|
||||
if (pSyncCfg != NULL) {
|
||||
int32_t len = 512;
|
||||
char *s = taosMemoryMalloc(len);
|
||||
char * s = taosMemoryMalloc(len);
|
||||
memset(s, 0, len);
|
||||
|
||||
snprintf(s, len, "{r-num:%d, my:%d, ", pSyncCfg->replicaNum, pSyncCfg->myIndex);
|
||||
|
@ -206,7 +206,7 @@ cJSON *raftCfg2Json(SRaftCfg *pRaftCfg) {
|
|||
|
||||
char *raftCfg2Str(SRaftCfg *pRaftCfg) {
|
||||
cJSON *pJson = raftCfg2Json(pRaftCfg);
|
||||
char *serialized = cJSON_Print(pJson);
|
||||
char * serialized = cJSON_Print(pJson);
|
||||
cJSON_Delete(pJson);
|
||||
return serialized;
|
||||
}
|
||||
|
@ -285,7 +285,7 @@ int32_t raftCfgFromJson(const cJSON *pRoot, SRaftCfg *pRaftCfg) {
|
|||
(pRaftCfg->configIndexArr)[i] = atoll(pIndex->valuestring);
|
||||
}
|
||||
|
||||
cJSON *pJsonSyncCfg = cJSON_GetObjectItem(pJson, "SSyncCfg");
|
||||
cJSON * pJsonSyncCfg = cJSON_GetObjectItem(pJson, "SSyncCfg");
|
||||
int32_t code = syncCfgFromJson(pJsonSyncCfg, &(pRaftCfg->cfg));
|
||||
ASSERT(code == 0);
|
||||
|
||||
|
|
|
@ -23,6 +23,7 @@ SSyncRaftEntry* syncEntryBuild(uint32_t dataLen) {
|
|||
memset(pEntry, 0, bytes);
|
||||
pEntry->bytes = bytes;
|
||||
pEntry->dataLen = dataLen;
|
||||
pEntry->rid = -1;
|
||||
return pEntry;
|
||||
}
|
||||
|
||||
|
@ -451,6 +452,11 @@ static char* keyFn(const void* pData) {
|
|||
|
||||
static int cmpFn(const void* p1, const void* p2) { return memcmp(p1, p2, sizeof(SyncIndex)); }
|
||||
|
||||
static void freeRaftEntry(void* param) {
|
||||
SSyncRaftEntry* pEntry = (SSyncRaftEntry*)param;
|
||||
syncEntryDestory(pEntry);
|
||||
}
|
||||
|
||||
SRaftEntryCache* raftEntryCacheCreate(SSyncNode* pSyncNode, int32_t maxCount) {
|
||||
SRaftEntryCache* pCache = taosMemoryMalloc(sizeof(SRaftEntryCache));
|
||||
if (pCache == NULL) {
|
||||
|
@ -466,6 +472,7 @@ SRaftEntryCache* raftEntryCacheCreate(SSyncNode* pSyncNode, int32_t maxCount) {
|
|||
}
|
||||
|
||||
taosThreadMutexInit(&(pCache->mutex), NULL);
|
||||
pCache->refMgr = taosOpenRef(10, freeRaftEntry);
|
||||
pCache->maxCount = maxCount;
|
||||
pCache->currentCount = 0;
|
||||
pCache->pSyncNode = pSyncNode;
|
||||
|
@ -477,6 +484,10 @@ void raftEntryCacheDestroy(SRaftEntryCache* pCache) {
|
|||
if (pCache != NULL) {
|
||||
taosThreadMutexLock(&(pCache->mutex));
|
||||
tSkipListDestroy(pCache->pSkipList);
|
||||
if (pCache->refMgr != -1) {
|
||||
taosCloseRef(pCache->refMgr);
|
||||
pCache->refMgr = -1;
|
||||
}
|
||||
taosThreadMutexUnlock(&(pCache->mutex));
|
||||
taosThreadMutexDestroy(&(pCache->mutex));
|
||||
taosMemoryFree(pCache);
|
||||
|
@ -498,6 +509,9 @@ int32_t raftEntryCachePutEntry(struct SRaftEntryCache* pCache, SSyncRaftEntry* p
|
|||
ASSERT(pSkipListNode != NULL);
|
||||
++(pCache->currentCount);
|
||||
|
||||
pEntry->rid = taosAddRef(pCache->refMgr, pEntry);
|
||||
ASSERT(pEntry->rid >= 0);
|
||||
|
||||
do {
|
||||
char eventLog[128];
|
||||
snprintf(eventLog, sizeof(eventLog), "raft cache add, type:%s,%d, type2:%s,%d, index:%" PRId64 ", bytes:%d",
|
||||
|
@ -520,6 +534,7 @@ int32_t raftEntryCacheGetEntry(struct SRaftEntryCache* pCache, SyncIndex index,
|
|||
if (code == 1) {
|
||||
*ppEntry = taosMemoryMalloc(pEntry->bytes);
|
||||
memcpy(*ppEntry, pEntry, pEntry->bytes);
|
||||
(*ppEntry)->rid = -1;
|
||||
} else {
|
||||
*ppEntry = NULL;
|
||||
}
|
||||
|
@ -541,6 +556,7 @@ int32_t raftEntryCacheGetEntryP(struct SRaftEntryCache* pCache, SyncIndex index,
|
|||
SSkipListNode** ppNode = (SSkipListNode**)taosArrayGet(entryPArray, 0);
|
||||
ASSERT(*ppNode != NULL);
|
||||
*ppEntry = (SSyncRaftEntry*)SL_GET_NODE_DATA(*ppNode);
|
||||
taosAcquireRef(pCache->refMgr, (*ppEntry)->rid);
|
||||
code = 1;
|
||||
|
||||
} else if (arraySize == 0) {
|
||||
|
@ -600,7 +616,9 @@ int32_t raftEntryCacheClear(struct SRaftEntryCache* pCache, int32_t count) {
|
|||
taosArrayPush(delNodeArray, &pNode);
|
||||
++returnCnt;
|
||||
SSyncRaftEntry* pEntry = (SSyncRaftEntry*)SL_GET_NODE_DATA(pNode);
|
||||
syncEntryDestory(pEntry);
|
||||
|
||||
// syncEntryDestory(pEntry);
|
||||
taosRemoveRef(pCache->refMgr, pEntry->rid);
|
||||
}
|
||||
tSkipListDestroyIter(pIter);
|
||||
|
||||
|
|
|
@@ -216,7 +216,7 @@ cJSON *raftStore2Json(SRaftStore *pRaftStore) {

char *raftStore2Str(SRaftStore *pRaftStore) {
  cJSON *pJson = raftStore2Json(pRaftStore);
  char *serialized = cJSON_Print(pJson);
  char * serialized = cJSON_Print(pJson);
  cJSON_Delete(pJson);
  return serialized;
}
@@ -129,7 +129,7 @@ void syncRespCleanByTTL(SSyncRespMgr *pObj, int64_t ttl) {

  while (pStub) {
    size_t len;
    void *key = taosHashGetKey(pStub, &len);
    void * key = taosHashGetKey(pStub, &len);
    uint64_t *pSeqNum = (uint64_t *)key;
    sum++;
@ -374,14 +374,14 @@ cJSON *snapshotSender2Json(SSyncSnapshotSender *pSender) {
|
|||
|
||||
char *snapshotSender2Str(SSyncSnapshotSender *pSender) {
|
||||
cJSON *pJson = snapshotSender2Json(pSender);
|
||||
char *serialized = cJSON_Print(pJson);
|
||||
char * serialized = cJSON_Print(pJson);
|
||||
cJSON_Delete(pJson);
|
||||
return serialized;
|
||||
}
|
||||
|
||||
char *snapshotSender2SimpleStr(SSyncSnapshotSender *pSender, char *event) {
|
||||
int32_t len = 256;
|
||||
char *s = taosMemoryMalloc(len);
|
||||
char * s = taosMemoryMalloc(len);
|
||||
|
||||
SRaftId destId = pSender->pSyncNode->replicasId[pSender->replicaIndex];
|
||||
char host[64];
|
||||
|
@ -653,7 +653,7 @@ cJSON *snapshotReceiver2Json(SSyncSnapshotReceiver *pReceiver) {
|
|||
cJSON_AddStringToObject(pFromId, "addr", u64buf);
|
||||
{
|
||||
uint64_t u64 = pReceiver->fromId.addr;
|
||||
cJSON *pTmp = pFromId;
|
||||
cJSON * pTmp = pFromId;
|
||||
char host[128] = {0};
|
||||
uint16_t port;
|
||||
syncUtilU642Addr(u64, host, sizeof(host), &port);
|
||||
|
@ -686,14 +686,14 @@ cJSON *snapshotReceiver2Json(SSyncSnapshotReceiver *pReceiver) {
|
|||
|
||||
char *snapshotReceiver2Str(SSyncSnapshotReceiver *pReceiver) {
|
||||
cJSON *pJson = snapshotReceiver2Json(pReceiver);
|
||||
char *serialized = cJSON_Print(pJson);
|
||||
char * serialized = cJSON_Print(pJson);
|
||||
cJSON_Delete(pJson);
|
||||
return serialized;
|
||||
}
|
||||
|
||||
char *snapshotReceiver2SimpleStr(SSyncSnapshotReceiver *pReceiver, char *event) {
|
||||
int32_t len = 256;
|
||||
char *s = taosMemoryMalloc(len);
|
||||
char * s = taosMemoryMalloc(len);
|
||||
|
||||
SRaftId fromId = pReceiver->fromId;
|
||||
char host[128];
|
||||
|
|
|
@ -125,7 +125,7 @@ int32_t SnapshotStartWrite(struct SSyncFSM* pFsm, void* pParam, void** ppWriter)
|
|||
return 0;
|
||||
}
|
||||
|
||||
int32_t SnapshotStopWrite(struct SSyncFSM* pFsm, void* pWriter, bool isApply, SSnapshot *pSnapshot) {
|
||||
int32_t SnapshotStopWrite(struct SSyncFSM* pFsm, void* pWriter, bool isApply, SSnapshot* pSnapshot) {
|
||||
char logBuf[256] = {0};
|
||||
snprintf(logBuf, sizeof(logBuf), "==callback== ==SnapshotStopWrite== pFsm:%p, pWriter:%p, isApply:%d", pFsm, pWriter,
|
||||
isApply);
|
||||
|
|
|
@ -5,8 +5,8 @@
|
|||
#include "syncRaftLog.h"
|
||||
#include "syncRaftStore.h"
|
||||
#include "syncUtil.h"
|
||||
#include "tskiplist.h"
|
||||
#include "tref.h"
|
||||
#include "tskiplist.h"
|
||||
|
||||
void logTest() {
|
||||
sTrace("--- sync log test: trace");
|
||||
|
@ -51,7 +51,7 @@ SRaftEntryCache* createCache(int maxCount) {
|
|||
}
|
||||
|
||||
void test1() {
|
||||
int32_t code = 0;
|
||||
int32_t code = 0;
|
||||
SRaftEntryCache* pCache = createCache(5);
|
||||
for (int i = 0; i < 10; ++i) {
|
||||
SSyncRaftEntry* pEntry = createEntry(i);
|
||||
|
@ -68,7 +68,7 @@ void test1() {
|
|||
}
|
||||
|
||||
void test2() {
|
||||
int32_t code = 0;
|
||||
int32_t code = 0;
|
||||
SRaftEntryCache* pCache = createCache(5);
|
||||
for (int i = 0; i < 10; ++i) {
|
||||
SSyncRaftEntry* pEntry = createEntry(i);
|
||||
|
@ -77,7 +77,7 @@ void test2() {
|
|||
}
|
||||
raftEntryCacheLog2((char*)"==test1 write 5 entries==", pCache);
|
||||
|
||||
SyncIndex index = 2;
|
||||
SyncIndex index = 2;
|
||||
SSyncRaftEntry* pEntry = NULL;
|
||||
|
||||
code = raftEntryCacheGetEntryP(pCache, index, &pEntry);
|
||||
|
@ -107,7 +107,7 @@ void test2() {
|
|||
}
|
||||
|
||||
void test3() {
|
||||
int32_t code = 0;
|
||||
int32_t code = 0;
|
||||
SRaftEntryCache* pCache = createCache(20);
|
||||
for (int i = 0; i <= 4; ++i) {
|
||||
SSyncRaftEntry* pEntry = createEntry(i);
|
||||
|
@ -122,8 +122,6 @@ void test3() {
|
|||
raftEntryCacheLog2((char*)"==test3 write 10 entries==", pCache);
|
||||
}
|
||||
|
||||
|
||||
|
||||
static void freeObj(void* param) {
|
||||
SSyncRaftEntry* pEntry = (SSyncRaftEntry*)param;
|
||||
syncEntryLog2((char*)"freeObj: ", pEntry);
|
||||
|
@ -138,19 +136,41 @@ void test4() {
|
|||
|
||||
int64_t rid = taosAddRef(testRefId, pEntry);
|
||||
sTrace("rid: %ld", rid);
|
||||
|
||||
|
||||
do {
|
||||
SSyncRaftEntry* pAcquireEntry = (SSyncRaftEntry*)taosAcquireRef(testRefId, rid);
|
||||
syncEntryLog2((char*)"acquire: ", pAcquireEntry);
|
||||
|
||||
taosAcquireRef(testRefId, rid);
|
||||
taosAcquireRef(testRefId, rid);
|
||||
taosAcquireRef(testRefId, rid);
|
||||
|
||||
taosReleaseRef(testRefId, rid);
|
||||
//taosReleaseRef(testRefId, rid);
|
||||
// taosReleaseRef(testRefId, rid);
|
||||
// taosReleaseRef(testRefId, rid);
|
||||
} while (0);
|
||||
|
||||
taosRemoveRef(testRefId, rid);
|
||||
|
||||
for (int i = 0; i < 10; ++i) {
|
||||
sTrace("taosReleaseRef, %d", i);
|
||||
taosReleaseRef(testRefId, rid);
|
||||
}
|
||||
}
|
||||
|
||||
void test5() {
|
||||
int32_t testRefId = taosOpenRef(5, freeObj);
|
||||
for (int i = 0; i < 100; i++) {
|
||||
SSyncRaftEntry* pEntry = createEntry(i);
|
||||
ASSERT(pEntry != NULL);
|
||||
|
||||
int64_t rid = taosAddRef(testRefId, pEntry);
|
||||
sTrace("rid: %ld", rid);
|
||||
}
|
||||
|
||||
for (int64_t rid = 2; rid < 101; rid++) {
|
||||
SSyncRaftEntry* pAcquireEntry = (SSyncRaftEntry*)taosAcquireRef(testRefId, rid);
|
||||
syncEntryLog2((char*)"taosAcquireRef: ", pAcquireEntry);
|
||||
}
|
||||
}
|
||||
|
||||
int main(int argc, char** argv) {
|
||||
|
@ -158,11 +178,13 @@ int main(int argc, char** argv) {
|
|||
tsAsyncLog = 0;
|
||||
sDebugFlag = DEBUG_TRACE + DEBUG_SCREEN + DEBUG_FILE + DEBUG_DEBUG;
|
||||
|
||||
test1();
|
||||
test2();
|
||||
test3();
|
||||
|
||||
//test4();
|
||||
/*
|
||||
test1();
|
||||
test2();
|
||||
test3();
|
||||
*/
|
||||
test4();
|
||||
// test5();
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
|
|
@ -30,7 +30,7 @@ int32_t SnapshotStopRead(struct SSyncFSM* pFsm, void* pReader) { return 0; }
|
|||
int32_t SnapshotDoRead(struct SSyncFSM* pFsm, void* pReader, void** ppBuf, int32_t* len) { return 0; }
|
||||
|
||||
int32_t SnapshotStartWrite(struct SSyncFSM* pFsm, void* pParam, void** ppWriter) { return 0; }
|
||||
int32_t SnapshotStopWrite(struct SSyncFSM* pFsm, void* pWriter, bool isApply, SSnapshot *pSnapshot) { return 0; }
|
||||
int32_t SnapshotStopWrite(struct SSyncFSM* pFsm, void* pWriter, bool isApply, SSnapshot* pSnapshot) { return 0; }
|
||||
int32_t SnapshotDoWrite(struct SSyncFSM* pFsm, void* pWriter, void* pBuf, int32_t len) { return 0; }
|
||||
|
||||
SSyncSnapshotReceiver* createReceiver() {
|
||||
|
|
|
@ -126,7 +126,7 @@ int32_t SnapshotStartWrite(struct SSyncFSM* pFsm, void* pParam, void** ppWriter)
|
|||
return 0;
|
||||
}
|
||||
|
||||
int32_t SnapshotStopWrite(struct SSyncFSM* pFsm, void* pWriter, bool isApply, SSnapshot *pSnapshot) {
|
||||
int32_t SnapshotStopWrite(struct SSyncFSM* pFsm, void* pWriter, bool isApply, SSnapshot* pSnapshot) {
|
||||
if (isApply) {
|
||||
gSnapshotLastApplyIndex = gFinishLastApplyIndex;
|
||||
gSnapshotLastApplyTerm = gFinishLastApplyTerm;
|
||||
|
|
|
@ -93,7 +93,7 @@ SWal *walOpen(const char *path, SWalCfg *pCfg) {
|
|||
}
|
||||
|
||||
// init ref
|
||||
pWal->pRefHash = taosHashInit(64, taosGetDefaultHashFunction(TSDB_DATA_TYPE_UBIGINT), true, HASH_ENTRY_LOCK);
|
||||
pWal->pRefHash = taosHashInit(64, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, HASH_ENTRY_LOCK);
|
||||
if (pWal->pRefHash == NULL) {
|
||||
taosMemoryFree(pWal);
|
||||
return NULL;
|
||||
|
@ -101,8 +101,8 @@ SWal *walOpen(const char *path, SWalCfg *pCfg) {
|
|||
|
||||
// open meta
|
||||
walResetVer(&pWal->vers);
|
||||
pWal->pWriteLogTFile = NULL;
|
||||
pWal->pWriteIdxTFile = NULL;
|
||||
pWal->pLogFile = NULL;
|
||||
pWal->pIdxFile = NULL;
|
||||
pWal->writeCur = -1;
|
||||
pWal->fileInfoSet = taosArrayInit(8, sizeof(SWalFileInfo));
|
||||
if (pWal->fileInfoSet == NULL) {
|
||||
|
@ -179,10 +179,10 @@ int32_t walAlter(SWal *pWal, SWalCfg *pCfg) {
|
|||
|
||||
void walClose(SWal *pWal) {
|
||||
taosThreadMutexLock(&pWal->mutex);
|
||||
taosCloseFile(&pWal->pWriteLogTFile);
|
||||
pWal->pWriteLogTFile = NULL;
|
||||
taosCloseFile(&pWal->pWriteIdxTFile);
|
||||
pWal->pWriteIdxTFile = NULL;
|
||||
taosCloseFile(&pWal->pLogFile);
|
||||
pWal->pLogFile = NULL;
|
||||
taosCloseFile(&pWal->pIdxFile);
|
||||
pWal->pIdxFile = NULL;
|
||||
walSaveMeta(pWal);
|
||||
taosArrayDestroy(pWal->fileInfoSet);
|
||||
pWal->fileInfoSet = NULL;
|
||||
|
@ -223,7 +223,7 @@ static void walFsyncAll() {
|
|||
if (walNeedFsync(pWal)) {
|
||||
wTrace("vgId:%d, do fsync, level:%d seq:%d rseq:%d", pWal->cfg.vgId, pWal->cfg.level, pWal->fsyncSeq,
|
||||
atomic_load_32(&tsWal.seq));
|
||||
int32_t code = taosFsyncFile(pWal->pWriteLogTFile);
|
||||
int32_t code = taosFsyncFile(pWal->pLogFile);
|
||||
if (code != 0) {
|
||||
wError("vgId:%d, file:%" PRId64 ".log, failed to fsync since %s", pWal->cfg.vgId, walGetLastFileFirstVer(pWal),
|
||||
strerror(code));
|
||||
|
|
|
@ -21,107 +21,112 @@ static int32_t walFetchBodyNew(SWalReader *pRead);
|
|||
static int32_t walSkipFetchBodyNew(SWalReader *pRead);
|
||||
|
||||
SWalReader *walOpenReader(SWal *pWal, SWalFilterCond *cond) {
|
||||
SWalReader *pRead = taosMemoryCalloc(1, sizeof(SWalReader));
|
||||
if (pRead == NULL) {
|
||||
SWalReader *pReader = taosMemoryCalloc(1, sizeof(SWalReader));
|
||||
if (pReader == NULL) {
|
||||
terrno = TSDB_CODE_OUT_OF_MEMORY;
|
||||
return NULL;
|
||||
}
|
||||
|
||||
pRead->pWal = pWal;
|
||||
pRead->pIdxFile = NULL;
|
||||
pRead->pLogFile = NULL;
|
||||
pRead->curVersion = -1;
|
||||
pRead->curFileFirstVer = -1;
|
||||
pRead->curInvalid = 1;
|
||||
pRead->capacity = 0;
|
||||
pReader->pWal = pWal;
|
||||
pReader->readerId = tGenIdPI64();
|
||||
pReader->pIdxFile = NULL;
|
||||
pReader->pLogFile = NULL;
|
||||
pReader->curVersion = -1;
|
||||
pReader->curFileFirstVer = -1;
|
||||
pReader->curInvalid = 1;
|
||||
pReader->capacity = 0;
|
||||
if (cond) {
|
||||
pRead->cond = *cond;
|
||||
pReader->cond = *cond;
|
||||
} else {
|
||||
pRead->cond.scanMeta = 0;
|
||||
pRead->cond.scanUncommited = 0;
|
||||
pRead->cond.enableRef = 0;
|
||||
pReader->cond.scanUncommited = 0;
|
||||
pReader->cond.scanNotApplied = 0;
|
||||
pReader->cond.scanMeta = 0;
|
||||
pReader->cond.enableRef = 0;
|
||||
}
|
||||
|
||||
taosThreadMutexInit(&pRead->mutex, NULL);
|
||||
taosThreadMutexInit(&pReader->mutex, NULL);
|
||||
|
||||
/*if (pRead->cond.enableRef) {*/
|
||||
/*walOpenRef(pWal);*/
|
||||
/*}*/
|
||||
|
||||
pRead->pHead = taosMemoryMalloc(sizeof(SWalCkHead));
|
||||
if (pRead->pHead == NULL) {
|
||||
pReader->pHead = taosMemoryMalloc(sizeof(SWalCkHead));
|
||||
if (pReader->pHead == NULL) {
|
||||
terrno = TSDB_CODE_WAL_OUT_OF_MEMORY;
|
||||
taosMemoryFree(pRead);
|
||||
taosMemoryFree(pReader);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
return pRead;
|
||||
/*if (pReader->cond.enableRef) {*/
|
||||
/* taosHashPut(pWal->pRefHash, &pReader->readerId, sizeof(int64_t), &pReader, sizeof(void *));*/
|
||||
/*}*/
|
||||
|
||||
return pReader;
|
||||
}
|
||||
|
||||
void walCloseReader(SWalReader *pRead) {
|
||||
taosCloseFile(&pRead->pIdxFile);
|
||||
taosCloseFile(&pRead->pLogFile);
|
||||
taosMemoryFreeClear(pRead->pHead);
|
||||
taosMemoryFree(pRead);
|
||||
void walCloseReader(SWalReader *pReader) {
|
||||
taosCloseFile(&pReader->pIdxFile);
|
||||
taosCloseFile(&pReader->pLogFile);
|
||||
/*if (pReader->cond.enableRef) {*/
|
||||
/*taosHashRemove(pReader->pWal->pRefHash, &pReader->readerId, sizeof(int64_t));*/
|
||||
/*}*/
|
||||
taosMemoryFreeClear(pReader->pHead);
|
||||
taosMemoryFree(pReader);
|
||||
}
|
||||
|
||||
int32_t walNextValidMsg(SWalReader *pRead) {
|
||||
int64_t fetchVer = pRead->curVersion;
|
||||
int64_t lastVer = walGetLastVer(pRead->pWal);
|
||||
int64_t committedVer = walGetCommittedVer(pRead->pWal);
|
||||
int64_t appliedVer = walGetAppliedVer(pRead->pWal);
|
||||
int64_t endVer = pRead->cond.scanUncommited ? lastVer : committedVer;
|
||||
int32_t walNextValidMsg(SWalReader *pReader) {
|
||||
int64_t fetchVer = pReader->curVersion;
|
||||
int64_t lastVer = walGetLastVer(pReader->pWal);
|
||||
int64_t committedVer = walGetCommittedVer(pReader->pWal);
|
||||
int64_t appliedVer = walGetAppliedVer(pReader->pWal);
|
||||
int64_t endVer = pReader->cond.scanUncommited ? lastVer : committedVer;
|
||||
endVer = TMIN(appliedVer, endVer);
|
||||
|
||||
wDebug("vgId:%d wal start to fetch, ver %ld, last ver %ld commit ver %ld, applied ver %ld, end ver %ld",
|
||||
pRead->pWal->cfg.vgId, fetchVer, lastVer, committedVer, appliedVer, endVer);
|
||||
pRead->curStopped = 0;
|
||||
pReader->pWal->cfg.vgId, fetchVer, lastVer, committedVer, appliedVer, endVer);
|
||||
pReader->curStopped = 0;
|
||||
while (fetchVer <= endVer) {
|
||||
if (walFetchHeadNew(pRead, fetchVer) < 0) {
|
||||
if (walFetchHeadNew(pReader, fetchVer) < 0) {
|
||||
return -1;
|
||||
}
|
||||
if (pRead->pHead->head.msgType == TDMT_VND_SUBMIT ||
|
||||
(IS_META_MSG(pRead->pHead->head.msgType) && pRead->cond.scanMeta)) {
|
||||
if (walFetchBodyNew(pRead) < 0) {
|
||||
if (pReader->pHead->head.msgType == TDMT_VND_SUBMIT ||
|
||||
(IS_META_MSG(pReader->pHead->head.msgType) && pReader->cond.scanMeta)) {
|
||||
if (walFetchBodyNew(pReader) < 0) {
|
||||
return -1;
|
||||
}
|
||||
return 0;
|
||||
} else {
|
||||
if (walSkipFetchBodyNew(pRead) < 0) {
|
||||
if (walSkipFetchBodyNew(pReader) < 0) {
|
||||
return -1;
|
||||
}
|
||||
fetchVer++;
|
||||
ASSERT(fetchVer == pRead->curVersion);
|
||||
ASSERT(fetchVer == pReader->curVersion);
|
||||
}
|
||||
}
|
||||
pRead->curStopped = 1;
|
||||
pReader->curStopped = 1;
|
||||
return -1;
|
||||
}
|
||||
|
||||
static int64_t walReadSeekFilePos(SWalReader *pRead, int64_t fileFirstVer, int64_t ver) {
|
||||
static int64_t walReadSeekFilePos(SWalReader *pReader, int64_t fileFirstVer, int64_t ver) {
|
||||
int64_t ret = 0;
|
||||
|
||||
TdFilePtr pIdxTFile = pRead->pIdxFile;
|
||||
TdFilePtr pLogTFile = pRead->pLogFile;
|
||||
TdFilePtr pIdxTFile = pReader->pIdxFile;
|
||||
TdFilePtr pLogTFile = pReader->pLogFile;
|
||||
|
||||
// seek position
|
||||
int64_t offset = (ver - fileFirstVer) * sizeof(SWalIdxEntry);
|
||||
ret = taosLSeekFile(pIdxTFile, offset, SEEK_SET);
|
||||
if (ret < 0) {
|
||||
terrno = TAOS_SYSTEM_ERROR(errno);
|
||||
wError("vgId:%d, failed to seek idx file, index:%" PRId64 ", pos:%" PRId64 ", since %s", pRead->pWal->cfg.vgId, ver,
|
||||
offset, terrstr());
|
||||
wError("vgId:%d, failed to seek idx file, index:%" PRId64 ", pos:%" PRId64 ", since %s", pReader->pWal->cfg.vgId,
|
||||
ver, offset, terrstr());
|
||||
return -1;
|
||||
}
|
||||
SWalIdxEntry entry = {0};
|
||||
if ((ret = taosReadFile(pIdxTFile, &entry, sizeof(SWalIdxEntry))) != sizeof(SWalIdxEntry)) {
|
||||
if (ret < 0) {
|
||||
terrno = TAOS_SYSTEM_ERROR(errno);
|
||||
wError("vgId:%d, failed to read idx file, since %s", pRead->pWal->cfg.vgId, terrstr());
|
||||
wError("vgId:%d, failed to read idx file, since %s", pReader->pWal->cfg.vgId, terrstr());
|
||||
} else {
|
||||
terrno = TSDB_CODE_WAL_FILE_CORRUPTED;
|
||||
wError("vgId:%d, read idx file incompletely, read bytes %" PRId64 ", bytes should be %" PRIu64,
|
||||
pRead->pWal->cfg.vgId, ret, sizeof(SWalIdxEntry));
|
||||
pReader->pWal->cfg.vgId, ret, sizeof(SWalIdxEntry));
|
||||
}
|
||||
return -1;
|
||||
}
|
||||
|
@ -130,79 +135,79 @@ static int64_t walReadSeekFilePos(SWalReader *pRead, int64_t fileFirstVer, int64
|
|||
ret = taosLSeekFile(pLogTFile, entry.offset, SEEK_SET);
|
||||
if (ret < 0) {
|
||||
terrno = TAOS_SYSTEM_ERROR(errno);
|
||||
wError("vgId:%d, failed to seek log file, index:%" PRId64 ", pos:%" PRId64 ", since %s", pRead->pWal->cfg.vgId, ver,
|
||||
entry.offset, terrstr());
|
||||
wError("vgId:%d, failed to seek log file, index:%" PRId64 ", pos:%" PRId64 ", since %s", pReader->pWal->cfg.vgId,
|
||||
ver, entry.offset, terrstr());
|
||||
return -1;
|
||||
}
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int32_t walReadChangeFile(SWalReader *pRead, int64_t fileFirstVer) {
|
||||
static int32_t walReadChangeFile(SWalReader *pReader, int64_t fileFirstVer) {
|
||||
char fnameStr[WAL_FILE_LEN];
|
||||
|
||||
taosCloseFile(&pRead->pIdxFile);
|
||||
taosCloseFile(&pRead->pLogFile);
|
||||
taosCloseFile(&pReader->pIdxFile);
|
||||
taosCloseFile(&pReader->pLogFile);
|
||||
|
||||
walBuildLogName(pRead->pWal, fileFirstVer, fnameStr);
|
||||
TdFilePtr pLogTFile = taosOpenFile(fnameStr, TD_FILE_READ);
|
||||
if (pLogTFile == NULL) {
|
||||
walBuildLogName(pReader->pWal, fileFirstVer, fnameStr);
|
||||
TdFilePtr pLogFile = taosOpenFile(fnameStr, TD_FILE_READ);
|
||||
if (pLogFile == NULL) {
|
||||
terrno = TAOS_SYSTEM_ERROR(errno);
|
||||
wError("vgId:%d, cannot open file %s, since %s", pRead->pWal->cfg.vgId, fnameStr, terrstr());
|
||||
wError("vgId:%d, cannot open file %s, since %s", pReader->pWal->cfg.vgId, fnameStr, terrstr());
|
||||
return -1;
|
||||
}
|
||||
|
||||
pRead->pLogFile = pLogTFile;
|
||||
pReader->pLogFile = pLogFile;
|
||||
|
||||
walBuildIdxName(pRead->pWal, fileFirstVer, fnameStr);
|
||||
TdFilePtr pIdxTFile = taosOpenFile(fnameStr, TD_FILE_READ);
|
||||
if (pIdxTFile == NULL) {
|
||||
walBuildIdxName(pReader->pWal, fileFirstVer, fnameStr);
|
||||
TdFilePtr pIdxFile = taosOpenFile(fnameStr, TD_FILE_READ);
|
||||
if (pIdxFile == NULL) {
|
||||
terrno = TAOS_SYSTEM_ERROR(errno);
|
||||
wError("vgId:%d, cannot open file %s, since %s", pRead->pWal->cfg.vgId, fnameStr, terrstr());
|
||||
wError("vgId:%d, cannot open file %s, since %s", pReader->pWal->cfg.vgId, fnameStr, terrstr());
|
||||
return -1;
|
||||
}
|
||||
|
||||
pRead->pIdxFile = pIdxTFile;
|
||||
pReader->pIdxFile = pIdxFile;
|
||||
return 0;
|
||||
}
|
||||
|
||||
int32_t walReadSeekVerImpl(SWalReader *pRead, int64_t ver) {
|
||||
SWal *pWal = pRead->pWal;
|
||||
int32_t walReadSeekVerImpl(SWalReader *pReader, int64_t ver) {
|
||||
SWal *pWal = pReader->pWal;
|
||||
|
||||
// bsearch in fileSet
|
||||
SWalFileInfo tmpInfo;
|
||||
tmpInfo.firstVer = ver;
|
||||
// bsearch in fileSet
|
||||
SWalFileInfo *pRet = taosArraySearch(pWal->fileInfoSet, &tmpInfo, compareWalFileInfo, TD_LE);
|
||||
ASSERT(pRet != NULL);
|
||||
if (pRead->curFileFirstVer != pRet->firstVer) {
|
||||
if (pReader->curFileFirstVer != pRet->firstVer) {
|
||||
// error code was set inner
|
||||
if (walReadChangeFile(pRead, pRet->firstVer) < 0) {
|
||||
if (walReadChangeFile(pReader, pRet->firstVer) < 0) {
|
||||
return -1;
|
||||
}
|
||||
}
|
||||
|
||||
// error code was set inner
|
||||
if (walReadSeekFilePos(pRead, pRet->firstVer, ver) < 0) {
|
||||
if (walReadSeekFilePos(pReader, pRet->firstVer, ver) < 0) {
|
||||
return -1;
|
||||
}
|
||||
|
||||
wDebug("wal version reset from %ld(invalid: %d) to %ld", pRead->curVersion, pRead->curInvalid, ver);
|
||||
wDebug("wal version reset from %ld(invalid: %d) to %ld", pReader->curVersion, pReader->curInvalid, ver);
|
||||
|
||||
pRead->curVersion = ver;
|
||||
pReader->curVersion = ver;
|
||||
return 0;
|
||||
}
|
||||
|
||||
int32_t walReadSeekVer(SWalReader *pRead, int64_t ver) {
|
||||
SWal *pWal = pRead->pWal;
|
||||
if (!pRead->curInvalid && ver == pRead->curVersion) {
|
||||
int32_t walReadSeekVer(SWalReader *pReader, int64_t ver) {
|
||||
SWal *pWal = pReader->pWal;
|
||||
if (!pReader->curInvalid && ver == pReader->curVersion) {
|
||||
wDebug("wal version %ld match, no need to reset", ver);
|
||||
return 0;
|
||||
}
|
||||
|
||||
pRead->curInvalid = 1;
|
||||
pRead->curVersion = ver;
|
||||
pReader->curInvalid = 1;
|
||||
pReader->curVersion = ver;
|
||||
|
||||
if (ver > pWal->vers.lastVer || ver < pWal->vers.firstVer) {
|
||||
wDebug("vgId:%d, invalid index:%" PRId64 ", first index:%" PRId64 ", last index:%" PRId64, pRead->pWal->cfg.vgId,
|
||||
wDebug("vgId:%d, invalid index:%" PRId64 ", first index:%" PRId64 ", last index:%" PRId64, pReader->pWal->cfg.vgId,
|
||||
ver, pWal->vers.firstVer, pWal->vers.lastVer);
|
||||
terrno = TSDB_CODE_WAL_LOG_NOT_EXIST;
|
||||
return -1;
|
||||
|
@ -210,7 +215,7 @@ int32_t walReadSeekVer(SWalReader *pRead, int64_t ver) {
|
|||
if (ver < pWal->vers.snapshotVer) {
|
||||
}
|
||||
|
||||
if (walReadSeekVerImpl(pRead, ver) < 0) {
|
||||
if (walReadSeekVerImpl(pReader, ver) < 0) {
|
||||
return -1;
|
||||
}
|
||||
|
||||
|
|
Some files were not shown because too many files have changed in this diff.