Merge branch '3.0' into feature/stream
commit dd18eeb6bd
@@ -3,11 +3,31 @@ sidebar_label: Docker
title: Get Started with TDengine in Docker
---

-Although deploying TDengine with Docker is not recommended for production environments, Docker hides the differences between underlying operating systems well, which makes it a convenient toolset for installing and running TDengine during development, testing, or a first try. In particular, Docker makes it fairly easy to try TDengine on macOS and Windows without installing a virtual machine or renting a Linux server. In addition, starting from version 2.0.14.0, the TDengine images support the X86-64, X86, arm64, and arm32 platforms, so TDengine can also be tried, following this document, on less common machines that can run Docker, such as NAS devices, Raspberry Pi boards, and embedded development boards.
-
-The following walks through, step by step, how to quickly set up a single-node TDengine environment with Docker to support development and testing.
-
-## Download Docker
+This section first describes how to get started with TDengine in Docker, and then how to try out TDengine's data writing and querying capabilities in a Docker environment.
+
+## Start TDengine
+
+If Docker is already installed, just run the following command.
+
+```shell
+docker run -d -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
+```
+
+Confirm that the container is up and running normally.
+
+```shell
+docker ps
+```
+
+Enter the container and start bash.
+
+```shell
+docker exec -it <container name> bash
+```
+
+You can then run Linux commands in the container and access TDengine.
+
+:::info

To download Docker itself, see the [official Docker documentation](https://docs.docker.com/get-docker/).
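
If the default ports are already taken on the host (for example, a local TDengine server is running), the container's ports can be mapped to a different free range instead. A minimal sketch — the host port range 16030-16049 is an arbitrary example:

```shell
# Map container ports 6030-6049 (TCP and UDP) to free host ports 16030-16049.
docker run -d -p 16030-16049:6030-6049 -p 16030-16049:6030-6049/udp tdengine/tdengine
```

Clients on the host would then connect to the remapped port, e.g. `taos -P 16030`.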
@@ -18,95 +38,49 @@ $ docker -v
Docker version 20.10.3, build 48d30b5
```

-## Run TDengine in a Docker container
-
-### Run the TDengine server in a Docker container
-
-```bash
-$ docker run -d -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
-526aa188da767ae94b244226a2b2eec2b5f17dd8eff592893d9ec0cd0f3a1ccd
-```
+:::
+
+## Run the TDengine CLI
+
+There are two ways to access TDengine with the TDengine CLI (taos) in a Docker environment; a sketch of both follows this list:
+
+- Enter the container and run `taos`.
+- On the host, connect through the ports mapped from the container: `taos -h <hostname> -P <port>`.
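
A quick sketch of the two options, assuming the container was named `tdengine` and the default ports were mapped:

```shell
# Option 1: run the CLI inside the container.
docker exec -it tdengine taos

# Option 2: run a locally installed CLI on the host against the mapped port.
taos -h localhost -P 6030
```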
-This command starts a Docker container running the TDengine server and maps the container's ports 6030 through 6049 to the same ports on the host. If the host is already running a TDengine server that occupies these ports, map the container's ports to a different unused port range instead (for details, see [TDengine 2.0 Port Description](/train-faq/faq#port)). Both the TCP and UDP ports need to be open so that TDengine clients can reach the TDengine server.
-
-- **docker run**: runs a container with Docker.
-- **-d**: runs the container in the background.
-- **-p**: specifies the port mapping. Note: without port mapping you can still use the TDengine service and develop applications inside the container; the container simply cannot serve clients outside it.
-- **tdengine/tdengine**: the official TDengine image to pull.
-- **526aa188da767ae94b244226a2b2eec2b5f17dd8eff592893d9ec0cd0f3a1ccd**: the long string returned is the container ID, which can also be used to refer to the container.
-
-Going further, you can start the TDengine server container with `--name` to name the container `tdengine`, `--hostname` to set its hostname to `tdengine-server`, and `-v` to mount local directories into the container, keeping host and container data in sync so that data is not lost when the container is removed.
-
-```bash
-docker run -d --name tdengine --hostname="tdengine-server" -v ~/work/taos/log:/var/log/taos -v ~/work/taos/data:/var/lib/taos -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
-```
-
-- **--name tdengine**: sets the container name; the container can be addressed by this name.
-- **--hostname=tdengine-server**: sets the hostname of the Linux system inside the container; mapping the hostname to an IP avoids problems when the container IP changes.
-- **-v**: mounts host directories into the container so that data survives container removal.
-
-### Confirm the container is running correctly with docker ps
-
-```bash
-docker ps
-```
-
-Example output:
-
-```
-CONTAINER ID   IMAGE               COMMAND   CREATED          STATUS          ···
-c452519b0f9b   tdengine/tdengine   "taosd"   14 minutes ago   Up 14 minutes   ···
-```
-
-- **docker ps**: lists all containers in the running state.
-- **CONTAINER ID**: the container ID.
-- **IMAGE**: the image in use.
-- **COMMAND**: the command run when the container started.
-- **CREATED**: when the container was created.
-- **STATUS**: the container status; `Up` means running.
-
-### Enter the container with docker exec to develop
-
-```bash
-$ docker exec -it tdengine /bin/bash
-root@tdengine-server:~/TDengine-server-2.4.0.4#
-```
-
-- **docker exec**: enters the container; the container keeps running after you exit.
-- **-i**: interactive mode.
-- **-t**: allocates a terminal.
-- **tdengine**: the container name; adjust it according to the output of docker ps.
-- **/bin/bash**: runs bash for interaction after entering the container.
-
-After entering the container, run the taos shell client program.
-
-```bash
-root@tdengine-server:~/TDengine-server-2.4.0.4# taos
-
-Welcome to the TDengine shell from Linux, Client Version:2.4.0.4
-Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
-
-taos>
-```
-
-The TDengine shell has connected to the server and prints a welcome message and version information; if the connection fails, an error message is printed instead.
-
-In the TDengine shell you can create and drop databases, tables, and supertables with SQL commands, and run insert and query statements. See the [TAOS SQL documentation](/taos-sql/) for details.
-
-### Access the TDengine server in the container from the host
-
-After starting the TDengine Docker container with the correct ports mapped using the -p option, you can access the TDengine server running inside the container with the taos shell on the host.

```
$ taos

-Welcome to the TDengine shell from Linux, Client Version:2.4.0.4
-Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
+Welcome to the TDengine shell from Linux, Client Version:3.0.0.0
+Copyright (c) 2022 by TAOS Data, Inc. All rights reserved.
+Server is Enterprise trial Edition, ver:3.0.0.0 and will expire at 2022-09-24 15:29:46.

taos>
```

-You can also use curl on the host to access the TDengine server inside the Docker container through the RESTful port.
+## Start the REST Service
+
+taosAdapter is the component of TDengine that provides the REST service. The following command starts both the `taosd` and `taosadapter` services in the container.
+
+```bash
+docker run -d --name tdengine-all -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
+```
+
+To start only `taosadapter`:
+
+```bash
+docker run -d --name tdengine-taosa -p 6041-6049:6041-6049 -p 6041-6049:6041-6049/udp -e TAOS_FIRST_EP=tdengine-all tdengine/tdengine:3.0.0.0 taosadapter
+```
+
+To start only `taosd`:
+
+```bash
+docker run -d --name tdengine-taosd -p 6030-6042:6030-6042 -p 6030-6042:6030-6042/udp -e TAOS_DISABLE_ADAPTER=true tdengine/tdengine:3.0.0.0
+```
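
For the adapter-only container, `TAOS_FIRST_EP=tdengine-all` must resolve to a reachable `taosd` instance; one way to make the name resolve is a user-defined Docker network. A sketch, assuming the network name `td-net` and these container names (both illustrative):

```shell
# Put taosd and taosadapter on one network so the container name resolves.
docker network create td-net
docker run -d --name tdengine-taosd --network td-net \
  -e TAOS_DISABLE_ADAPTER=true tdengine/tdengine:3.0.0.0
docker run -d --name tdengine-taosa --network td-net -p 6041:6041 \
  -e TAOS_FIRST_EP=tdengine-taosd tdengine/tdengine:3.0.0.0 taosadapter
```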

+## Access the REST Interface
+
+You can use curl on the host to access the TDengine server inside the Docker container through the RESTful port.

```
curl -L -u root:taosdata -d "show databases" 127.0.0.1:6041/rest/sql
```
@@ -115,217 +89,60 @@ curl -L -u root:taosdata -d "show databases" 127.0.0.1:6041/rest/sql

Example output:

```
-{"status":"succ","head":["name","created_time","ntables","vgroups","replica","quorum","days","keep0,keep1,keep(D)","cache(MB)","blocks","minrows","maxrows","wallevel","fsync","comp","cachelast","precision","update","status"],"column_meta":[["name",8,32],["created_time",9,8],["ntables",4,4],["vgroups",4,4],["replica",3,2],["quorum",3,2],["days",3,2],["keep0,keep1,keep(D)",8,24],["cache(MB)",4,4],["blocks",4,4],["minrows",4,4],["maxrows",4,4],["wallevel",2,1],["fsync",4,4],["comp",2,1],["cachelast",2,1],["precision",8,3],["update",2,1],["status",8,10]],"data":[["test","2021-08-18 06:01:11.021",10000,4,1,1,10,"3650,3650,3650",16,6,100,4096,1,3000,2,0,"ms",0,"ready"],["log","2021-08-18 05:51:51.065",4,1,1,1,10,"30,30,30",1,3,100,4096,1,3000,2,0,"us",0,"ready"]],"rows":2}
+{"code":0,"column_meta":[["name","VARCHAR",64],["create_time","TIMESTAMP",8],["vgroups","SMALLINT",2],["ntables","BIGINT",8],["replica","TINYINT",1],["strict","VARCHAR",4],["duration","VARCHAR",10],["keep","VARCHAR",32],["buffer","INT",4],["pagesize","INT",4],["pages","INT",4],["minrows","INT",4],["maxrows","INT",4],["wal","TINYINT",1],["fsync","INT",4],["comp","TINYINT",1],["cacheModel","VARCHAR",11],["precision","VARCHAR",2],["single_stable","BOOL",1],["status","VARCHAR",10],["retention","VARCHAR",60]],"data":[["information_schema",null,null,14,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,"ready"],["performance_schema",null,null,3,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,"ready"]],"rows":2}
```

-This command accesses the TDengine server through the REST API; the connection goes to port 6041 on the local machine, and the output shows the connection succeeded.
+This command accesses the TDengine server through the REST API; the connection goes to port 6041, which is mapped from the container to the host.

For details on the TDengine REST API, see the [official documentation](/reference/rest-api/).
-### Run the TDengine server and taosAdapter in a Docker container
+## Insert Data

-Starting with TDengine 2.4.0.0, Docker containers provide taosAdapter as an independently running component, replacing the HTTP server that was built into the taosd process in earlier versions of TDengine. taosAdapter supports writing and querying TDengine server data over a RESTful interface, and it provides InfluxDB/OpenTSDB-compatible ingestion interfaces so that InfluxDB/OpenTSDB applications can be ported to TDengine seamlessly. In the new Docker images taosAdapter is enabled by default; it can be disabled by setting TAOS_DISABLE_ADAPTER=true in the docker run command. taosAdapter can also be run on its own in a docker run command, without running taosd.
-
-Note: if taosAdapter runs in the container, additional ports may need to be mapped as required; for the default port configuration and how to change it, see the [taosAdapter documentation](/reference/taosadapter/).
-
-Run the TDengine 2.4.0.4 image with Docker (taosd + taosAdapter):
-
-```bash
-docker run -d --name tdengine-all -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine:2.4.0.4
-```
-
-Run the TDengine 2.4.0.4 image with Docker (taosAdapter only; requires the firstEp configuration item or the TAOS_FIRST_EP environment variable):
-
-```bash
-docker run -d --name tdengine-taosa -p 6041-6049:6041-6049 -p 6041-6049:6041-6049/udp -e TAOS_FIRST_EP=tdengine-all tdengine/tdengine:2.4.0.4 taosadapter
-```
-
-Run the TDengine 2.4.0.4 image with Docker (taosd only):
-
-```bash
-docker run -d --name tdengine-taosd -p 6030-6042:6030-6042 -p 6030-6042:6030-6042/udp -e TAOS_DISABLE_ADAPTER=true tdengine/tdengine:2.4.0.4
-```
-
-Verify that the RESTful interface works with curl:
-
-```bash
-curl -L -H "Authorization: Basic cm9vdDp0YW9zZGF0YQ==" -d "show databases;" 127.0.0.1:6041/rest/sql
-```
-
-Example output:
-
-```
-{"status":"succ","head":["name","created_time","ntables","vgroups","replica","quorum","days","keep","cache(MB)","blocks","minrows","maxrows","wallevel","fsync","comp","cachelast","precision","update","status"],"column_meta":[["name",8,32],["created_time",9,8],["ntables",4,4],["vgroups",4,4],["replica",3,2],["quorum",3,2],["days",3,2],["keep",8,24],["cache(MB)",4,4],["blocks",4,4],["minrows",4,4],["maxrows",4,4],["wallevel",2,1],["fsync",4,4],["comp",2,1],["cachelast",2,1],["precision",8,3],["update",2,1],["status",8,10]],"data":[["log","2021-12-28 09:18:55.765",10,1,1,1,10,"30",1,3,100,4096,1,3000,2,0,"us",0,"ready"]],"rows":1}
-```
-
-### Example: write data into the TDengine server in a Docker container with taosBenchmark on the host
-
-1. Run taosBenchmark (formerly named taosdemo) from the host command line to write data into the TDengine server in the Docker container.
+You can use taosBenchmark, a tool that ships with TDengine, to quickly try out writing data into TDengine.
+
+Assuming that the container's port 6030 was mapped to port 6030 on the host when the container started, you can launch taosBenchmark directly from the host command line, or enter the container and run it there:

```bash
$ taosBenchmark
-
-taosBenchmark is simulating data generated by power equipments monitoring...
-
-host: 127.0.0.1:6030
-user: root
-password: taosdata
-configDir:
-resultFile: ./output.txt
-thread num of insert data: 10
-thread num of create table: 10
-top insert interval: 0
-number of records per req: 30000
-max sql length: 1048576
-database count: 1
-database[0]:
-database[0] name: test
-drop: yes
-replica: 1
-precision: ms
-super table count: 1
-super table[0]:
-stbName: meters
-autoCreateTable: no
-childTblExists: no
-childTblCount: 10000
-childTblPrefix: d
-dataSource: rand
-iface: taosc
-insertRows: 10000
-interlaceRows: 0
-disorderRange: 1000
-disorderRatio: 0
-maxSqlLen: 1048576
-timeStampStep: 1
-startTimestamp: 2017-07-14 10:40:00.000
-sampleFormat:
-sampleFile:
-tagsFile:
-columnCount: 3
-column[0]:FLOAT column[1]:INT column[2]:FLOAT
-tagCount: 2
-tag[0]:INT tag[1]:BINARY(16)
-
-Press enter key to continue or Ctrl-C to stop
```
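
If taosBenchmark is not installed on the host, it can be run inside the container instead; a sketch, assuming the container name `tdengine-all` from the example above:

```shell
# Run taosBenchmark inside the running TDengine container.
docker exec -it tdengine-all taosBenchmark
```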
-After pressing Enter, the command automatically creates a supertable meters in the database test, with 10,000 tables named "d0" through "d9999" under it. Each table has 10,000 records, and each record has four fields (ts, current, voltage, phase), with timestamps ranging from "2017-07-14 10:40:00 000" to "2017-07-14 10:40:09 999". Each table carries the tags location and groupId; groupId is set to 1 through 10, and location is set to "California.SanFrancisco" or "California.SanDiego".
-
-In total, 100 million records are inserted.
-
-2. Enter the TDengine shell and inspect the data generated by taosBenchmark.
-
-- **Enter the command line.**
-
-```bash
-$ root@c452519b0f9b:~/TDengine-server-2.4.0.4# taos
-
-Welcome to the TDengine shell from Linux, Client Version:2.4.0.4
-Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
-
-taos>
-```
-
-- **Show the databases.**
-
-```bash
-$ taos> show databases;
-name | created_time | ntables | vgroups | ···
-test | 2021-08-18 06:01:11.021 | 10000 | 6 | ···
-log | 2021-08-18 05:51:51.065 | 4 | 1 | ···
-```
-
-- **Show the supertables.**
-
-```bash
-$ taos> use test;
-Database changed.
-
-$ taos> show stables;
-name | created_time | columns | tags | tables |
-============================================================================================
-meters | 2021-08-18 06:01:11.116 | 4 | 2 | 10000 |
-Query OK, 1 row(s) in set (0.003259s)
-```
-
-- **Query a table, limiting the output to ten rows.**
-
-```bash
-$ taos> select * from test.t0 limit 10;
-
-DB error: Table does not exist (0.002857s)
-taos> select * from test.d0 limit 10;
-ts | current | voltage | phase |
-======================================================================================
-2017-07-14 10:40:00.000 | 10.12072 | 223 | 0.34167 |
-2017-07-14 10:40:00.001 | 10.16103 | 224 | 0.34445 |
-2017-07-14 10:40:00.002 | 10.00204 | 220 | 0.33334 |
-2017-07-14 10:40:00.003 | 10.00030 | 220 | 0.33333 |
-2017-07-14 10:40:00.004 | 9.84029 | 216 | 0.32222 |
-2017-07-14 10:40:00.005 | 9.88028 | 217 | 0.32500 |
-2017-07-14 10:40:00.006 | 9.88110 | 217 | 0.32500 |
-2017-07-14 10:40:00.007 | 10.08137 | 222 | 0.33889 |
-2017-07-14 10:40:00.008 | 10.12063 | 223 | 0.34167 |
-2017-07-14 10:40:00.009 | 10.16086 | 224 | 0.34445 |
-Query OK, 10 row(s) in set (0.016791s)
-```
-
-- **Show the tag values of table d0.**
-
-```bash
-$ taos> select groupid, location from test.d0;
-groupid | location |
-=================================
-0 | California.SanDiego |
-Query OK, 1 row(s) in set (0.003490s)
-```
-
-### Example: write data into TDengine with data collection agents
-
-taosAdapter supports multiple data collection agents (such as Telegraf, StatsD, and collectd). The following simulates StatsD writing data; run the command on the host:
-
-```
-echo "foo:1|c" | nc -u -w0 127.0.0.1 6044
-```
-
-Then you can use the taos shell to query the contents of the database statsd and the supertable foo that taosAdapter created automatically:
-
-```
-taos> show databases;
-name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status |
-====================================================================================================================================================================================================================================================================================
-log | 2021-12-28 09:18:55.765 | 12 | 1 | 1 | 1 | 10 | 30 | 1 | 3 | 100 | 4096 | 1 | 3000 | 2 | 0 | us | 0 | ready |
-statsd | 2021-12-28 09:21:48.841 | 1 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ns | 2 | ready |
-Query OK, 2 row(s) in set (0.002112s)
-
-taos> use statsd;
-Database changed.
-
-taos> show stables;
-name | created_time | columns | tags | tables |
-============================================================================================
-foo | 2021-12-28 09:21:48.894 | 2 | 1 | 1 |
-Query OK, 1 row(s) in set (0.001160s)
-
-taos> select * from foo;
-ts | value | metric_type |
-=======================================================================================
-2021-12-28 09:21:48.840820836 | 1 | counter |
-Query OK, 1 row(s) in set (0.001639s)
-
-taos>
-```
-
-You can see that the simulated data has been written into TDengine.
-
-## Stop the TDengine service running in Docker
-
-```bash
-docker stop tdengine
-```
-
-- **docker stop**: stops the specified running Docker container.
+This command automatically creates a supertable meters in the database test, with 10,000 tables named "d0" through "d9999" under it. Each table has 10,000 records, and each record has four fields (ts, current, voltage, phase), with timestamps ranging from "2017-07-14 10:40:00 000" to "2017-07-14 10:40:09 999". Each table carries the tags location and groupId; groupId is set to 1 through 10, and location is set to "California.SanFrancisco" or "California.LosAngeles".
+
+This command finishes inserting the 100 million records quickly; the exact time depends on hardware performance.
+
+taosBenchmark itself offers many options to configure the number of tables, the number of records, and more; you can experiment with different settings. Run `taosBenchmark --help` for the full list, and see the [taosBenchmark reference manual](../reference/taosbenchmark) for detailed usage.
+
+## Run Queries
+
+After inserting data with taosBenchmark as above, you can enter queries in the TDengine CLI to experience the query speed. The CLI can be run directly on the host or inside the container.
+
+Count the total number of records under the supertable:
+
+```sql
+taos> select count(*) from test.meters;
+```
+
+Compute the average, maximum, and minimum values over the 100 million records:
+
+```sql
+taos> select avg(current), max(voltage), min(phase) from test.meters;
+```
+
+Count the records with location="California.SanFrancisco":
+
+```sql
+taos> select count(*) from test.meters where location="California.SanFrancisco";
+```
+
+Compute the average, maximum, and minimum values over all records with groupId=10:
+
+```sql
+taos> select avg(current), max(voltage), min(phase) from test.meters where groupId=10;
+```
+
+Aggregate the average, maximum, and minimum values for table d10 in 10-second windows:
+
+```sql
+taos> select avg(current), max(voltage), min(phase) from test.d10 interval(10s);
+```
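
The same queries can also be issued from the host without an interactive CLI, through the REST interface shown earlier; for example (host port 6041 assumed mapped as above):

```shell
curl -L -u root:taosdata -d "select count(*) from test.meters" 127.0.0.1:6041/rest/sql
```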
@@ -3,6 +3,7 @@ title: Get Started
description: 'Quickly set up a TDengine environment and experience its efficient data writing and querying'
---

+The full TDengine software package includes the server (taosd), taosAdapter for interfacing with third-party systems and providing a RESTful interface, the application driver (taosc), the command-line program (CLI, taos), and several tools. Besides connectors for multiple languages, TDengine also provides a [RESTful interface](/reference/rest-api) through [taosAdapter](/reference/taosadapter).
+
This chapter describes how to quickly set up a TDengine environment with Docker or an installation package and experience its efficient data writing and querying.
@@ -152,7 +152,10 @@ void taosCfgDynamicOptions(const char *option, const char *value);
void taosAddDataDir(int32_t index, char *v1, int32_t level, int32_t primary);

struct SConfig *taosGetCfg();
-int32_t taosSetCfg(SConfig *pCfg, char* name);
+
+void taosSetAllDebugFlag(int32_t flag);
+void taosSetDebugFlag(int32_t *pFlagPtr, const char *flagName, int32_t flagVal);
+int32_t taosSetCfg(SConfig *pCfg, char *name);

#ifdef __cplusplus
}
@@ -748,6 +748,10 @@ typedef struct {
  int8_t  ignoreExist;
  int32_t numOfRetensions;
  SArray* pRetensions;  // SRetention
+ int32_t walRetentionPeriod;
+ int32_t walRetentionSize;
+ int32_t walRollPeriod;
+ int32_t walSegmentSize;
} SCreateDbReq;

int32_t tSerializeSCreateDbReq(void* buf, int32_t bufLen, SCreateDbReq* pReq);
@@ -1977,7 +1981,7 @@ typedef struct SVCreateTbReq {
  union {
    struct {
      char*    name;  // super table name
      uint8_t  tagNum;
      tb_uid_t suid;
      SArray*  tagName;
      uint8_t* pTag;
@@ -16,261 +16,265 @@
#ifndef _TD_COMMON_TOKEN_H_
#define _TD_COMMON_TOKEN_H_

#define TK_OR 1
#define TK_AND 2
#define TK_UNION 3
#define TK_ALL 4
#define TK_MINUS 5
#define TK_EXCEPT 6
#define TK_INTERSECT 7
#define TK_NK_BITAND 8
#define TK_NK_BITOR 9
#define TK_NK_LSHIFT 10
#define TK_NK_RSHIFT 11
#define TK_NK_PLUS 12
#define TK_NK_MINUS 13
#define TK_NK_STAR 14
#define TK_NK_SLASH 15
#define TK_NK_REM 16
#define TK_NK_CONCAT 17
#define TK_CREATE 18
#define TK_ACCOUNT 19
#define TK_NK_ID 20
#define TK_PASS 21
#define TK_NK_STRING 22
#define TK_ALTER 23
#define TK_PPS 24
#define TK_TSERIES 25
#define TK_STORAGE 26
#define TK_STREAMS 27
#define TK_QTIME 28
#define TK_DBS 29
#define TK_USERS 30
#define TK_CONNS 31
#define TK_STATE 32
#define TK_USER 33
#define TK_ENABLE 34
#define TK_NK_INTEGER 35
#define TK_SYSINFO 36
#define TK_DROP 37
#define TK_GRANT 38
#define TK_ON 39
#define TK_TO 40
#define TK_REVOKE 41
#define TK_FROM 42
#define TK_NK_COMMA 43
#define TK_READ 44
#define TK_WRITE 45
#define TK_NK_DOT 46
#define TK_DNODE 47
#define TK_PORT 48
#define TK_DNODES 49
#define TK_NK_IPTOKEN 50
#define TK_LOCAL 51
#define TK_QNODE 52
#define TK_BNODE 53
#define TK_SNODE 54
#define TK_MNODE 55
#define TK_DATABASE 56
#define TK_USE 57
#define TK_FLUSH 58
#define TK_TRIM 59
#define TK_IF 60
#define TK_NOT 61
#define TK_EXISTS 62
#define TK_BUFFER 63
#define TK_CACHEMODEL 64
#define TK_CACHESIZE 65
#define TK_COMP 66
#define TK_DURATION 67
#define TK_NK_VARIABLE 68
#define TK_FSYNC 69
#define TK_MAXROWS 70
#define TK_MINROWS 71
#define TK_KEEP 72
#define TK_PAGES 73
#define TK_PAGESIZE 74
#define TK_PRECISION 75
#define TK_REPLICA 76
#define TK_STRICT 77
#define TK_WAL 78
#define TK_VGROUPS 79
#define TK_SINGLE_STABLE 80
#define TK_RETENTIONS 81
#define TK_SCHEMALESS 82
-#define TK_NK_COLON 83
-#define TK_TABLE 84
-#define TK_NK_LP 85
-#define TK_NK_RP 86
-#define TK_STABLE 87
-#define TK_ADD 88
-#define TK_COLUMN 89
-#define TK_MODIFY 90
-#define TK_RENAME 91
-#define TK_TAG 92
-#define TK_SET 93
-#define TK_NK_EQ 94
-#define TK_USING 95
-#define TK_TAGS 96
-#define TK_COMMENT 97
-#define TK_BOOL 98
-#define TK_TINYINT 99
-#define TK_SMALLINT 100
-#define TK_INT 101
-#define TK_INTEGER 102
-#define TK_BIGINT 103
-#define TK_FLOAT 104
-#define TK_DOUBLE 105
-#define TK_BINARY 106
-#define TK_TIMESTAMP 107
-#define TK_NCHAR 108
-#define TK_UNSIGNED 109
-#define TK_JSON 110
-#define TK_VARCHAR 111
-#define TK_MEDIUMBLOB 112
-#define TK_BLOB 113
-#define TK_VARBINARY 114
-#define TK_DECIMAL 115
-#define TK_MAX_DELAY 116
-#define TK_WATERMARK 117
-#define TK_ROLLUP 118
-#define TK_TTL 119
-#define TK_SMA 120
-#define TK_FIRST 121
-#define TK_LAST 122
-#define TK_SHOW 123
-#define TK_DATABASES 124
-#define TK_TABLES 125
-#define TK_STABLES 126
-#define TK_MNODES 127
-#define TK_MODULES 128
-#define TK_QNODES 129
-#define TK_FUNCTIONS 130
-#define TK_INDEXES 131
-#define TK_ACCOUNTS 132
-#define TK_APPS 133
-#define TK_CONNECTIONS 134
-#define TK_LICENCE 135
-#define TK_GRANTS 136
-#define TK_QUERIES 137
-#define TK_SCORES 138
-#define TK_TOPICS 139
-#define TK_VARIABLES 140
-#define TK_BNODES 141
-#define TK_SNODES 142
-#define TK_CLUSTER 143
-#define TK_TRANSACTIONS 144
-#define TK_DISTRIBUTED 145
-#define TK_CONSUMERS 146
-#define TK_SUBSCRIPTIONS 147
-#define TK_LIKE 148
-#define TK_INDEX 149
-#define TK_FUNCTION 150
-#define TK_INTERVAL 151
-#define TK_TOPIC 152
-#define TK_AS 153
-#define TK_WITH 154
-#define TK_META 155
-#define TK_CONSUMER 156
-#define TK_GROUP 157
-#define TK_DESC 158
-#define TK_DESCRIBE 159
-#define TK_RESET 160
-#define TK_QUERY 161
-#define TK_CACHE 162
-#define TK_EXPLAIN 163
-#define TK_ANALYZE 164
-#define TK_VERBOSE 165
-#define TK_NK_BOOL 166
-#define TK_RATIO 167
-#define TK_NK_FLOAT 168
-#define TK_COMPACT 169
-#define TK_VNODES 170
-#define TK_IN 171
-#define TK_OUTPUTTYPE 172
-#define TK_AGGREGATE 173
-#define TK_BUFSIZE 174
-#define TK_STREAM 175
-#define TK_INTO 176
-#define TK_TRIGGER 177
-#define TK_AT_ONCE 178
-#define TK_WINDOW_CLOSE 179
-#define TK_IGNORE 180
-#define TK_EXPIRED 181
-#define TK_KILL 182
-#define TK_CONNECTION 183
-#define TK_TRANSACTION 184
-#define TK_BALANCE 185
-#define TK_VGROUP 186
-#define TK_MERGE 187
-#define TK_REDISTRIBUTE 188
-#define TK_SPLIT 189
-#define TK_SYNCDB 190
-#define TK_DELETE 191
-#define TK_INSERT 192
-#define TK_NULL 193
-#define TK_NK_QUESTION 194
-#define TK_NK_ARROW 195
-#define TK_ROWTS 196
-#define TK_TBNAME 197
-#define TK_QSTART 198
-#define TK_QEND 199
-#define TK_QDURATION 200
-#define TK_WSTART 201
-#define TK_WEND 202
-#define TK_WDURATION 203
-#define TK_CAST 204
-#define TK_NOW 205
-#define TK_TODAY 206
-#define TK_TIMEZONE 207
-#define TK_CLIENT_VERSION 208
-#define TK_SERVER_VERSION 209
-#define TK_SERVER_STATUS 210
-#define TK_CURRENT_USER 211
-#define TK_COUNT 212
-#define TK_LAST_ROW 213
-#define TK_BETWEEN 214
-#define TK_IS 215
-#define TK_NK_LT 216
-#define TK_NK_GT 217
-#define TK_NK_LE 218
-#define TK_NK_GE 219
-#define TK_NK_NE 220
-#define TK_MATCH 221
-#define TK_NMATCH 222
-#define TK_CONTAINS 223
-#define TK_JOIN 224
-#define TK_INNER 225
-#define TK_SELECT 226
-#define TK_DISTINCT 227
-#define TK_WHERE 228
-#define TK_PARTITION 229
-#define TK_BY 230
-#define TK_SESSION 231
-#define TK_STATE_WINDOW 232
-#define TK_SLIDING 233
-#define TK_FILL 234
-#define TK_VALUE 235
-#define TK_NONE 236
-#define TK_PREV 237
-#define TK_LINEAR 238
-#define TK_NEXT 239
-#define TK_HAVING 240
-#define TK_RANGE 241
-#define TK_EVERY 242
-#define TK_ORDER 243
-#define TK_SLIMIT 244
-#define TK_SOFFSET 245
-#define TK_LIMIT 246
-#define TK_OFFSET 247
-#define TK_ASC 248
-#define TK_NULLS 249
-#define TK_ID 250
-#define TK_NK_BITNOT 251
-#define TK_VALUES 252
-#define TK_IMPORT 253
-#define TK_NK_SEMI 254
-#define TK_FILE 255
+#define TK_WAL_RETENTION_PERIOD 83
+#define TK_WAL_RETENTION_SIZE 84
+#define TK_WAL_ROLL_PERIOD 85
+#define TK_WAL_SEGMENT_SIZE 86
+#define TK_NK_COLON 87
+#define TK_TABLE 88
+#define TK_NK_LP 89
+#define TK_NK_RP 90
+#define TK_STABLE 91
+#define TK_ADD 92
+#define TK_COLUMN 93
+#define TK_MODIFY 94
+#define TK_RENAME 95
+#define TK_TAG 96
+#define TK_SET 97
+#define TK_NK_EQ 98
+#define TK_USING 99
+#define TK_TAGS 100
+#define TK_COMMENT 101
+#define TK_BOOL 102
+#define TK_TINYINT 103
+#define TK_SMALLINT 104
+#define TK_INT 105
+#define TK_INTEGER 106
+#define TK_BIGINT 107
+#define TK_FLOAT 108
+#define TK_DOUBLE 109
+#define TK_BINARY 110
+#define TK_TIMESTAMP 111
+#define TK_NCHAR 112
+#define TK_UNSIGNED 113
+#define TK_JSON 114
+#define TK_VARCHAR 115
+#define TK_MEDIUMBLOB 116
+#define TK_BLOB 117
+#define TK_VARBINARY 118
+#define TK_DECIMAL 119
+#define TK_MAX_DELAY 120
+#define TK_WATERMARK 121
+#define TK_ROLLUP 122
+#define TK_TTL 123
+#define TK_SMA 124
+#define TK_FIRST 125
+#define TK_LAST 126
+#define TK_SHOW 127
+#define TK_DATABASES 128
+#define TK_TABLES 129
+#define TK_STABLES 130
+#define TK_MNODES 131
+#define TK_MODULES 132
+#define TK_QNODES 133
+#define TK_FUNCTIONS 134
+#define TK_INDEXES 135
+#define TK_ACCOUNTS 136
+#define TK_APPS 137
+#define TK_CONNECTIONS 138
+#define TK_LICENCE 139
+#define TK_GRANTS 140
+#define TK_QUERIES 141
+#define TK_SCORES 142
+#define TK_TOPICS 143
+#define TK_VARIABLES 144
+#define TK_BNODES 145
+#define TK_SNODES 146
+#define TK_CLUSTER 147
+#define TK_TRANSACTIONS 148
+#define TK_DISTRIBUTED 149
+#define TK_CONSUMERS 150
+#define TK_SUBSCRIPTIONS 151
+#define TK_LIKE 152
+#define TK_INDEX 153
+#define TK_FUNCTION 154
+#define TK_INTERVAL 155
+#define TK_TOPIC 156
+#define TK_AS 157
+#define TK_WITH 158
+#define TK_META 159
+#define TK_CONSUMER 160
+#define TK_GROUP 161
+#define TK_DESC 162
+#define TK_DESCRIBE 163
+#define TK_RESET 164
+#define TK_QUERY 165
+#define TK_CACHE 166
+#define TK_EXPLAIN 167
+#define TK_ANALYZE 168
+#define TK_VERBOSE 169
+#define TK_NK_BOOL 170
+#define TK_RATIO 171
+#define TK_NK_FLOAT 172
+#define TK_COMPACT 173
+#define TK_VNODES 174
+#define TK_IN 175
+#define TK_OUTPUTTYPE 176
+#define TK_AGGREGATE 177
+#define TK_BUFSIZE 178
+#define TK_STREAM 179
+#define TK_INTO 180
+#define TK_TRIGGER 181
+#define TK_AT_ONCE 182
+#define TK_WINDOW_CLOSE 183
+#define TK_IGNORE 184
+#define TK_EXPIRED 185
+#define TK_KILL 186
+#define TK_CONNECTION 187
+#define TK_TRANSACTION 188
+#define TK_BALANCE 189
+#define TK_VGROUP 190
+#define TK_MERGE 191
+#define TK_REDISTRIBUTE 192
+#define TK_SPLIT 193
+#define TK_SYNCDB 194
+#define TK_DELETE 195
+#define TK_INSERT 196
+#define TK_NULL 197
+#define TK_NK_QUESTION 198
+#define TK_NK_ARROW 199
+#define TK_ROWTS 200
+#define TK_TBNAME 201
+#define TK_QSTART 202
+#define TK_QEND 203
+#define TK_QDURATION 204
+#define TK_WSTART 205
+#define TK_WEND 206
+#define TK_WDURATION 207
+#define TK_CAST 208
+#define TK_NOW 209
+#define TK_TODAY 210
+#define TK_TIMEZONE 211
+#define TK_CLIENT_VERSION 212
+#define TK_SERVER_VERSION 213
+#define TK_SERVER_STATUS 214
+#define TK_CURRENT_USER 215
+#define TK_COUNT 216
+#define TK_LAST_ROW 217
+#define TK_BETWEEN 218
+#define TK_IS 219
+#define TK_NK_LT 220
+#define TK_NK_GT 221
+#define TK_NK_LE 222
+#define TK_NK_GE 223
+#define TK_NK_NE 224
+#define TK_MATCH 225
+#define TK_NMATCH 226
+#define TK_CONTAINS 227
+#define TK_JOIN 228
+#define TK_INNER 229
+#define TK_SELECT 230
+#define TK_DISTINCT 231
+#define TK_WHERE 232
+#define TK_PARTITION 233
+#define TK_BY 234
+#define TK_SESSION 235
+#define TK_STATE_WINDOW 236
+#define TK_SLIDING 237
+#define TK_FILL 238
+#define TK_VALUE 239
+#define TK_NONE 240
+#define TK_PREV 241
+#define TK_LINEAR 242
+#define TK_NEXT 243
+#define TK_HAVING 244
+#define TK_RANGE 245
+#define TK_EVERY 246
+#define TK_ORDER 247
+#define TK_SLIMIT 248
+#define TK_SOFFSET 249
+#define TK_LIMIT 250
+#define TK_OFFSET 251
+#define TK_ASC 252
+#define TK_NULLS 253
+#define TK_ID 254
+#define TK_NK_BITNOT 255
+#define TK_VALUES 256
+#define TK_IMPORT 257
+#define TK_NK_SEMI 258
+#define TK_FILE 259

#define TK_NK_SPACE 300
#define TK_NK_COMMENT 301
@@ -74,6 +74,10 @@ typedef struct SDatabaseOptions {
  int8_t     singleStable;
  SNodeList* pRetentions;
  int8_t     schemaless;
+ int32_t    walRetentionPeriod;
+ int32_t    walRetentionSize;
+ int32_t    walRollPeriod;
+ int32_t    walSegmentSize;
} SDatabaseOptions;

typedef struct SCreateDatabaseStmt {
@@ -104,6 +104,7 @@ typedef struct SJoinLogicNode {
  SNode* pMergeCondition;
  SNode* pOnConditions;
  bool   isSingleTableJoin;
+ EOrder inputTsOrder;
} SJoinLogicNode;

typedef struct SAggLogicNode {
@@ -201,6 +202,7 @@ typedef struct SWindowLogicNode {
  int64_t          watermark;
  int8_t           igExpired;
  EWindowAlgorithm windowAlgo;
+ EOrder           inputTsOrder;
} SWindowLogicNode;

typedef struct SFillLogicNode {
@@ -356,15 +358,14 @@ typedef struct SInterpFuncPhysiNode {
  SNode* pTimeSeries;  // SColumnNode
} SInterpFuncPhysiNode;

-typedef struct SJoinPhysiNode {
+typedef struct SSortMergeJoinPhysiNode {
  SPhysiNode node;
  EJoinType  joinType;
  SNode*     pMergeCondition;
  SNode*     pOnConditions;
  SNodeList* pTargets;
-} SJoinPhysiNode;
-
-typedef SJoinPhysiNode SSortMergeJoinPhysiNode;
+  EOrder     inputTsOrder;
+} SSortMergeJoinPhysiNode;

typedef struct SAggPhysiNode {
  SPhysiNode node;
@@ -255,6 +255,7 @@ typedef struct SSelectStmt {
  int32_t selectFuncNum;
  bool    isEmptyResult;
  bool    isTimeLineResult;
+ bool    isSubquery;
  bool    hasAggFuncs;
  bool    hasRepeatScanFuncs;
  bool    hasIndefiniteRowsFunc;
@@ -358,6 +358,15 @@ typedef enum ELogicConditionType {
#define TSDB_DB_SCHEMALESS_OFF 0
#define TSDB_DEFAULT_DB_SCHEMALESS TSDB_DB_SCHEMALESS_OFF

+#define TSDB_DB_MIN_WAL_RETENTION_PERIOD -1
+#define TSDB_DEFAULT_DB_WAL_RETENTION_PERIOD 0
+#define TSDB_DB_MIN_WAL_RETENTION_SIZE -1
+#define TSDB_DEFAULT_DB_WAL_RETENTION_SIZE 0
+#define TSDB_DB_MIN_WAL_ROLL_PERIOD 0
+#define TSDB_DEFAULT_DB_WAL_ROLL_PERIOD 0
+#define TSDB_DB_MIN_WAL_SEGMENT_SIZE 0
+#define TSDB_DEFAULT_DB_WAL_SEGMENT_SIZE 0
+
#define TSDB_MIN_ROLLUP_MAX_DELAY 1  // unit millisecond
#define TSDB_MAX_ROLLUP_MAX_DELAY (15 * 60 * 1000)
#define TSDB_MIN_ROLLUP_WATERMARK 0  // unit millisecond
@@ -67,7 +67,6 @@ extern int32_t idxDebugFlag;
int32_t taosInitLog(const char *logName, int32_t maxFiles);
void    taosCloseLog();
void    taosResetLog();
-void    taosSetAllDebugFlag(int32_t flag);
void    taosDumpData(uint8_t *msg, int32_t len);

void    taosPrintLog(const char *flags, ELogLevel level, int32_t dflag, const char *format, ...)
@@ -2019,7 +2019,7 @@ int32_t transferTableNameList(const char* tbList, int32_t acctId, char* dbName,
    }

    if (('a' <= *(tbList + i) && 'z' >= *(tbList + i)) || ('A' <= *(tbList + i) && 'Z' >= *(tbList + i)) ||
-        ('0' <= *(tbList + i) && '9' >= *(tbList + i))) {
+        ('0' <= *(tbList + i) && '9' >= *(tbList + i)) || ('_' == *(tbList + i))) {
      if (vLen[vIdx] > 0) {
        goto _return;
      }
@@ -973,7 +973,7 @@ int taos_load_table_info(TAOS *taos, const char *tableNameList) {

  conn.mgmtEps = getEpSet_s(&pTscObj->pAppInfo->mgmtEp);

-  code = catalogAsyncGetAllMeta(pCtg, &conn, &catalogReq, syncCatalogFn, NULL, NULL);
+  code = catalogAsyncGetAllMeta(pCtg, &conn, &catalogReq, syncCatalogFn, pRequest->body.param, NULL);
  if (code) {
    goto _return;
  }
@@ -1763,9 +1763,9 @@ char* dumpBlockData(SSDataBlock* pDataBlock, const char* flag, char** pDataBuf)
  int32_t colNum = taosArrayGetSize(pDataBlock->pDataBlock);
  int32_t rows = pDataBlock->info.rows;
  int32_t len = 0;
-  len += snprintf(dumpBuf + len, size - len, "===stream===%s |block type %d |child id %d|group id:%" PRIu64 "| uid:%ld|\n", flag,
+  len += snprintf(dumpBuf + len, size - len, "===stream===%s |block type %d|child id %d|group id:%" PRIu64 "|uid:%ld|rows:%d\n", flag,
                  (int32_t)pDataBlock->info.type, pDataBlock->info.childId, pDataBlock->info.groupId,
-                  pDataBlock->info.uid);
+                  pDataBlock->info.uid, pDataBlock->info.rows);
  if (len >= size - 1) return dumpBuf;

  for (int32_t j = 0; j < rows; j++) {
@@ -1878,7 +1878,7 @@ int32_t buildSubmitReqFromDataBlock(SSubmitReq** pReq, const SArray* pDataBlocks
    msgLen += sizeof(SSubmitBlk);
    int32_t dataLen = 0;
    for (int32_t j = 0; j < rows; ++j) {  // iterate by row
-      tdSRowResetBuf(&rb, POINTER_SHIFT(pDataBuf, msgLen));  // set row buf
+      tdSRowResetBuf(&rb, POINTER_SHIFT(pDataBuf, msgLen + dataLen));  // set row buf
      bool    isStartKey = false;
      int32_t offset = 0;
      for (int32_t k = 0; k < colNum; ++k) {  // iterate by column
@@ -1143,6 +1143,10 @@ void taosCfgDynamicOptions(const char *option, const char *value) {
    int32_t monitor = atoi(value);
    uInfo("monitor set from %d to %d", tsEnableMonitor, monitor);
    tsEnableMonitor = monitor;
+    SConfigItem *pItem = cfgGetItem(tsCfg, "monitor");
+    if (pItem != NULL) {
+      pItem->bval = tsEnableMonitor;
+    }
    return;
  }
@@ -1166,8 +1170,39 @@ void taosCfgDynamicOptions(const char *option, const char *value) {
      int32_t flag = atoi(value);
      uInfo("%s set from %d to %d", optName, *optionVars[d], flag);
-      *optionVars[d] = flag;
+      taosSetDebugFlag(optionVars[d], optName, flag);
      return;
    }

  uError("failed to cfg dynamic option:%s value:%s", option, value);
}

+void taosSetDebugFlag(int32_t *pFlagPtr, const char *flagName, int32_t flagVal) {
+  SConfigItem *pItem = cfgGetItem(tsCfg, flagName);
+  if (pItem != NULL) {
+    pItem->i32 = flagVal;
+  }
+  *pFlagPtr = flagVal;
+}
+
+void taosSetAllDebugFlag(int32_t flag) {
+  if (flag <= 0) return;
+
+  taosSetDebugFlag(&uDebugFlag, "uDebugFlag", flag);
+  taosSetDebugFlag(&rpcDebugFlag, "rpcDebugFlag", flag);
+  taosSetDebugFlag(&jniDebugFlag, "jniDebugFlag", flag);
+  taosSetDebugFlag(&qDebugFlag, "qDebugFlag", flag);
+  taosSetDebugFlag(&cDebugFlag, "cDebugFlag", flag);
+  taosSetDebugFlag(&dDebugFlag, "dDebugFlag", flag);
+  taosSetDebugFlag(&vDebugFlag, "vDebugFlag", flag);
+  taosSetDebugFlag(&mDebugFlag, "mDebugFlag", flag);
+  taosSetDebugFlag(&wDebugFlag, "wDebugFlag", flag);
+  taosSetDebugFlag(&sDebugFlag, "sDebugFlag", flag);
+  taosSetDebugFlag(&tsdbDebugFlag, "tsdbDebugFlag", flag);
+  taosSetDebugFlag(&tqDebugFlag, "tqDebugFlag", flag);
+  taosSetDebugFlag(&fsDebugFlag, "fsDebugFlag", flag);
+  taosSetDebugFlag(&udfDebugFlag, "udfDebugFlag", flag);
+  taosSetDebugFlag(&smaDebugFlag, "smaDebugFlag", flag);
+  taosSetDebugFlag(&idxDebugFlag, "idxDebugFlag", flag);
+  uInfo("all debug flag are set to %d", flag);
+}
@@ -2018,6 +2018,10 @@ int32_t tSerializeSCreateDbReq(void *buf, int32_t bufLen, SCreateDbReq *pReq) {
  if (tEncodeI8(&encoder, pReq->strict) < 0) return -1;
  if (tEncodeI8(&encoder, pReq->cacheLast) < 0) return -1;
  if (tEncodeI8(&encoder, pReq->schemaless) < 0) return -1;
+  if (tEncodeI32(&encoder, pReq->walRetentionPeriod) < 0) return -1;
+  if (tEncodeI32(&encoder, pReq->walRetentionSize) < 0) return -1;
+  if (tEncodeI32(&encoder, pReq->walRollPeriod) < 0) return -1;
+  if (tEncodeI32(&encoder, pReq->walSegmentSize) < 0) return -1;
  if (tEncodeI8(&encoder, pReq->ignoreExist) < 0) return -1;
  if (tEncodeI32(&encoder, pReq->numOfRetensions) < 0) return -1;
  for (int32_t i = 0; i < pReq->numOfRetensions; ++i) {

@@ -2060,6 +2064,10 @@ int32_t tDeserializeSCreateDbReq(void *buf, int32_t bufLen, SCreateDbReq *pReq)
  if (tDecodeI8(&decoder, &pReq->strict) < 0) return -1;
  if (tDecodeI8(&decoder, &pReq->cacheLast) < 0) return -1;
  if (tDecodeI8(&decoder, &pReq->schemaless) < 0) return -1;
+  if (tDecodeI32(&decoder, &pReq->walRetentionPeriod) < 0) return -1;
+  if (tDecodeI32(&decoder, &pReq->walRetentionSize) < 0) return -1;
+  if (tDecodeI32(&decoder, &pReq->walRollPeriod) < 0) return -1;
+  if (tDecodeI32(&decoder, &pReq->walSegmentSize) < 0) return -1;
  if (tDecodeI8(&decoder, &pReq->ignoreExist) < 0) return -1;
  if (tDecodeI32(&decoder, &pReq->numOfRetensions) < 0) return -1;
  pReq->pRetensions = taosArrayInit(pReq->numOfRetensions, sizeof(SRetention));
@@ -164,8 +164,8 @@ typedef struct {
  int32_t lastErrorNo;
  tmsg_t  lastMsgType;
  SEpSet  lastEpset;
-  char    dbname1[TSDB_DB_FNAME_LEN];
-  char    dbname2[TSDB_DB_FNAME_LEN];
+  char    dbname1[TSDB_TABLE_FNAME_LEN];
+  char    dbname2[TSDB_TABLE_FNAME_LEN];
  int32_t startFunc;
  int32_t stopFunc;
  int32_t paramLen;
@@ -874,7 +874,7 @@ static int32_t mndProcessConfigDnodeReq(SRpcMsg *pReq) {
}

static int32_t mndProcessConfigDnodeRsp(SRpcMsg *pRsp) {
-  mInfo("config rsp from dnode, app:%p", pRsp->info.ahandle);
+  mInfo("config rsp from dnode");
  return 0;
}
@@ -281,7 +281,7 @@ static int32_t mndSetDropOffsetRedoLogs(SMnode *pMnode, STrans *pTrans, SMqOffse
}

int32_t mndDropOffsetByDB(SMnode *pMnode, STrans *pTrans, SDbObj *pDb) {
-  int32_t code = -1;
+  int32_t code = 0;
  SSdb   *pSdb = pMnode->pSdb;

  void *pIter = NULL;

@@ -297,15 +297,15 @@ int32_t mndDropOffsetByDB(SMnode *pMnode, STrans *pTrans, SDbObj *pDb) {

    if (mndSetDropOffsetCommitLogs(pMnode, pTrans, pOffset) < 0) {
      sdbRelease(pSdb, pOffset);
-      goto END;
+      sdbCancelFetch(pSdb, pIter);
+      code = -1;
+      break;
    }

    sdbRelease(pSdb, pOffset);
  }

-  code = 0;
-
-END:
  return code;
}

int32_t mndDropOffsetByTopic(SMnode *pMnode, STrans *pTrans, const char *topic) {
@@ -641,6 +641,7 @@ static int32_t mndSetCreateStbRedoActions(SMnode *pMnode, STrans *pTrans, SDbObj
  action.contLen = contLen;
  action.msgType = TDMT_VND_CREATE_STB;
  action.acceptableCode = TSDB_CODE_TDB_STB_ALREADY_EXIST;
+  action.retryCode = TSDB_CODE_TDB_STB_NOT_EXIST;
  if (mndTransAppendRedoAction(pTrans, &action) != 0) {
    taosMemoryFree(pReq);
    sdbCancelFetch(pSdb, pIter);

@@ -805,7 +806,7 @@ _OVER:
}

int32_t mndAddStbToTrans(SMnode *pMnode, STrans *pTrans, SDbObj *pDb, SStbObj *pStb) {
-  mndTransSetDbName(pTrans, pDb->name, NULL);
+  mndTransSetDbName(pTrans, pDb->name, pStb->name);
  if (mndSetCreateStbRedoLogs(pMnode, pTrans, pDb, pStb) != 0) return -1;
  if (mndSetCreateStbUndoLogs(pMnode, pTrans, pDb, pStb) != 0) return -1;
  if (mndSetCreateStbCommitLogs(pMnode, pTrans, pDb, pStb) != 0) return -1;

@@ -1612,7 +1613,7 @@ static int32_t mndAlterStbImp(SMnode *pMnode, SRpcMsg *pReq, SDbObj *pDb, SStbOb
  if (pTrans == NULL) goto _OVER;

  mDebug("trans:%d, used to alter stb:%s", pTrans->id, pStb->name);
-  mndTransSetDbName(pTrans, pDb->name, NULL);
+  mndTransSetDbName(pTrans, pDb->name, pStb->name);

  if (needRsp) {
    void *pCont = NULL;

@@ -1811,7 +1812,7 @@ static int32_t mndDropStb(SMnode *pMnode, SRpcMsg *pReq, SDbObj *pDb, SStbObj *p
  if (pTrans == NULL) goto _OVER;

  mDebug("trans:%d, used to drop stb:%s", pTrans->id, pStb->name);
-  mndTransSetDbName(pTrans, pDb->name, NULL);
+  mndTransSetDbName(pTrans, pDb->name, pStb->name);

  if (mndSetDropStbRedoLogs(pMnode, pTrans, pStb) != 0) goto _OVER;
  if (mndSetDropStbCommitLogs(pMnode, pTrans, pStb) != 0) goto _OVER;
@@ -824,7 +824,7 @@ int32_t mndSetDropSubCommitLogs(SMnode *pMnode, STrans *pTrans, SMqSubscribeObj
}

int32_t mndDropSubByDB(SMnode *pMnode, STrans *pTrans, SDbObj *pDb) {
-  int32_t code = -1;
+  int32_t code = 0;
  SSdb   *pSdb = pMnode->pSdb;

  void *pIter = NULL;

@@ -840,12 +840,14 @@ int32_t mndDropSubByDB(SMnode *pMnode, STrans *pTrans, SDbObj *pDb) {

    if (mndSetDropSubCommitLogs(pMnode, pTrans, pSub) < 0) {
      sdbRelease(pSdb, pSub);
-      goto END;
+      sdbCancelFetch(pSdb, pIter);
+      code = -1;
+      break;
    }
+
+    sdbRelease(pSdb, pSub);
  }

-  code = 0;
-
-END:
  return code;
}
@@ -833,7 +833,7 @@ static void mndCancelGetNextTopic(SMnode *pMnode, void *pIter) {
}

int32_t mndDropTopicByDB(SMnode *pMnode, STrans *pTrans, SDbObj *pDb) {
-  int32_t code = -1;
+  int32_t code = 0;
  SSdb   *pSdb = pMnode->pSdb;

  void *pIter = NULL;

@@ -848,11 +848,14 @@ int32_t mndDropTopicByDB(SMnode *pMnode, STrans *pTrans, SDbObj *pDb) {
    }

    if (mndSetDropTopicCommitLogs(pMnode, pTrans, pTopic) < 0) {
-      goto END;
+      sdbRelease(pSdb, pTopic);
+      sdbCancelFetch(pSdb, pIter);
+      code = -1;
+      break;
    }
+
+    sdbRelease(pSdb, pTopic);
  }

-  code = 0;
-
-END:
  return code;
}
```diff
@@ -127,8 +127,8 @@ static SSdbRaw *mndTransActionEncode(STrans *pTrans) {
   SDB_SET_INT8(pRaw, dataPos, 0, _OVER)
   SDB_SET_INT8(pRaw, dataPos, 0, _OVER)
   SDB_SET_INT64(pRaw, dataPos, pTrans->createdTime, _OVER)
-  SDB_SET_BINARY(pRaw, dataPos, pTrans->dbname1, TSDB_DB_FNAME_LEN, _OVER)
-  SDB_SET_BINARY(pRaw, dataPos, pTrans->dbname2, TSDB_DB_FNAME_LEN, _OVER)
+  SDB_SET_BINARY(pRaw, dataPos, pTrans->dbname1, TSDB_TABLE_FNAME_LEN, _OVER)
+  SDB_SET_BINARY(pRaw, dataPos, pTrans->dbname2, TSDB_TABLE_FNAME_LEN, _OVER)
   SDB_SET_INT32(pRaw, dataPos, pTrans->redoActionPos, _OVER)

   int32_t redoActionNum = taosArrayGetSize(pTrans->redoActions);
@@ -290,8 +290,8 @@ static SSdbRow *mndTransActionDecode(SSdbRaw *pRaw) {
   pTrans->exec = exec;
   pTrans->oper = oper;
   SDB_GET_INT64(pRaw, dataPos, &pTrans->createdTime, _OVER)
-  SDB_GET_BINARY(pRaw, dataPos, pTrans->dbname1, TSDB_DB_FNAME_LEN, _OVER)
-  SDB_GET_BINARY(pRaw, dataPos, pTrans->dbname2, TSDB_DB_FNAME_LEN, _OVER)
+  SDB_GET_BINARY(pRaw, dataPos, pTrans->dbname1, TSDB_TABLE_FNAME_LEN, _OVER)
+  SDB_GET_BINARY(pRaw, dataPos, pTrans->dbname2, TSDB_TABLE_FNAME_LEN, _OVER)
   SDB_GET_INT32(pRaw, dataPos, &pTrans->redoActionPos, _OVER)
   SDB_GET_INT32(pRaw, dataPos, &redoActionNum, _OVER)
   SDB_GET_INT32(pRaw, dataPos, &undoActionNum, _OVER)
@@ -727,10 +727,10 @@ int32_t mndSetRpcInfoForDbTrans(SMnode *pMnode, SRpcMsg *pMsg, EOperType oper, c

 void mndTransSetDbName(STrans *pTrans, const char *dbname1, const char *dbname2) {
   if (dbname1 != NULL) {
-    memcpy(pTrans->dbname1, dbname1, TSDB_DB_FNAME_LEN);
+    tstrncpy(pTrans->dbname1, dbname1, TSDB_TABLE_FNAME_LEN);
   }
   if (dbname2 != NULL) {
-    memcpy(pTrans->dbname2, dbname2, TSDB_DB_FNAME_LEN);
+    tstrncpy(pTrans->dbname2, dbname2, TSDB_TABLE_FNAME_LEN);
   }
 }
```
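The encode/decode and `mndTransSetDbName` hunks widen the name fields to `TSDB_TABLE_FNAME_LEN` and swap the fixed-length `memcpy` for a bounded copy. A fixed-length `memcpy` from a shorter C string reads past the source buffer; a bounded copy that forces a terminator does not. A sketch, where `copy_bounded` is assumed to behave like `tstrncpy` (strncpy plus a forced NUL terminator); the real TDengine helper may differ in detail:

```c
// Sketch of why the change swaps memcpy for a bounded copy. copy_bounded is
// a stand-in for tstrncpy, assumed to be strncpy plus a forced terminator.
#include <stdio.h>
#include <string.h>

#define NAME_LEN 16

static void copy_bounded(char *dst, const char *src, size_t size) {
  strncpy(dst, src, size - 1);  // never write past dst
  dst[size - 1] = '\0';         // always leave dst NUL-terminated
}

int main(void) {
  char dbname[NAME_LEN];
  const char *src = "db1";  // much shorter than NAME_LEN

  // memcpy(dbname, src, NAME_LEN) would read NAME_LEN bytes from a 4-byte
  // string literal: an out-of-bounds read. The bounded copy reads only what
  // is actually there and terminates the destination.
  copy_bounded(dbname, src, sizeof(dbname));
  printf("%s\n", dbname);
  return 0;
}
```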
```diff
@@ -1289,6 +1289,19 @@ static bool mndTransPerformRedoActionStage(SMnode *pMnode, STrans *pTrans) {
   } else {
     pTrans->code = terrno;
     if (pTrans->policy == TRN_POLICY_ROLLBACK) {
+      if (pTrans->lastAction != 0) {
+        STransAction *pAction = taosArrayGet(pTrans->redoActions, pTrans->lastAction);
+        if (pAction->retryCode != 0 && pAction->retryCode != pAction->errCode) {
+          if (pTrans->failedTimes < 6) {
+            mError("trans:%d, stage keep on redoAction since action:%d code:0x%x not 0x%x, failedTimes:%d", pTrans->id,
+                   pTrans->lastAction, pTrans->code, pAction->retryCode, pTrans->failedTimes);
+            taosMsleep(1000);
+            continueExec = true;
+            return true;
+          }
+        }
+      }
+
       pTrans->stage = TRN_STAGE_ROLLBACK;
       mError("trans:%d, stage from redoAction to rollback since %s", pTrans->id, terrstr());
       continueExec = true;
```
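The new block in `mndTransPerformRedoActionStage` keeps re-running the failed redo action, with a one-second sleep between attempts and a cap of six failures, whenever the action's error code differs from its declared retryable code; only after that budget is spent does the transaction fall through to rollback. A compact, self-contained sketch of the same bounded-retry shape (`run_action` and `rollback` are illustrative stubs, not TDengine functions):

```c
// Bounded retry with a fixed delay before giving up and rolling back.
#include <stdio.h>
#include <unistd.h>  // sleep(); POSIX is assumed here

static int attempts = 0;
static int run_action(void) { return ++attempts < 3 ? -1 : 0; }  // succeeds on 3rd try
static void rollback(void) { printf("rolling back\n"); }

int main(void) {
  const int kMaxFailures = 6;  // mirrors the failedTimes < 6 cap in the diff
  int failedTimes = 0;

  for (;;) {
    if (run_action() == 0) {
      printf("action succeeded after %d failures\n", failedTimes);
      return 0;
    }
    if (++failedTimes >= kMaxFailures) break;  // retry budget exhausted
    sleep(1);  // analogous to taosMsleep(1000) in the diff
  }

  rollback();
  return 1;
}
```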
```diff
@@ -268,6 +268,7 @@ struct SVnode {
   tsem_t    canCommit;
   int64_t   sync;
   int32_t   blockCount;
+  bool      restored;
   tsem_t    syncSem;
   SQHandle* pQuery;
 };
```
```diff
@@ -178,7 +178,7 @@ int metaCreateSTable(SMeta *pMeta, int64_t version, SVCreateStbReq *pReq) {
   if (metaGetTableEntryByName(&mr, pReq->name) == 0) {
     // TODO: just for pass case
 #if 0
-    terrno = TSDB_CODE_TDB_TABLE_ALREADY_EXIST;
+    terrno = TSDB_CODE_TDB_STB_ALREADY_EXIST;
     metaReaderClear(&mr);
     return -1;
 #else
@@ -223,7 +223,7 @@ int metaDropSTable(SMeta *pMeta, int64_t verison, SVDropStbReq *pReq, SArray *tb
   // check if super table exists
   rc = tdbTbGet(pMeta->pNameIdx, pReq->name, strlen(pReq->name) + 1, &pData, &nData);
   if (rc < 0 || *(tb_uid_t *)pData != pReq->suid) {
-    terrno = TSDB_CODE_VND_TABLE_NOT_EXIST;
+    terrno = TSDB_CODE_TDB_STB_NOT_EXIST;
     return -1;
   }
```
```diff
@@ -1395,10 +1395,26 @@ void tsdbCalcColDataSMA(SColData *pColData, SColumnDataAgg *pColAgg) {
       break;
     case TSDB_DATA_TYPE_BOOL:
       break;
-    case TSDB_DATA_TYPE_TINYINT:
+    case TSDB_DATA_TYPE_TINYINT: {
+      pColAgg->sum += colVal.value.i8;
+      if (pColAgg->min > colVal.value.i8) {
+        pColAgg->min = colVal.value.i8;
+      }
+      if (pColAgg->max < colVal.value.i8) {
+        pColAgg->max = colVal.value.i8;
+      }
       break;
-    case TSDB_DATA_TYPE_SMALLINT:
+    }
+    case TSDB_DATA_TYPE_SMALLINT: {
+      pColAgg->sum += colVal.value.i16;
+      if (pColAgg->min > colVal.value.i16) {
+        pColAgg->min = colVal.value.i16;
+      }
+      if (pColAgg->max < colVal.value.i16) {
+        pColAgg->max = colVal.value.i16;
+      }
       break;
+    }
     case TSDB_DATA_TYPE_INT: {
       pColAgg->sum += colVal.value.i32;
       if (pColAgg->min > colVal.value.i32) {
@@ -1419,24 +1435,79 @@ void tsdbCalcColDataSMA(SColData *pColData, SColumnDataAgg *pColAgg) {
       }
       break;
     }
-    case TSDB_DATA_TYPE_FLOAT:
+    case TSDB_DATA_TYPE_FLOAT: {
+      pColAgg->sum += colVal.value.f;
+      if (pColAgg->min > colVal.value.f) {
+        pColAgg->min = colVal.value.f;
+      }
+      if (pColAgg->max < colVal.value.f) {
+        pColAgg->max = colVal.value.f;
+      }
       break;
-    case TSDB_DATA_TYPE_DOUBLE:
+    }
+    case TSDB_DATA_TYPE_DOUBLE: {
+      pColAgg->sum += colVal.value.d;
+      if (pColAgg->min > colVal.value.d) {
+        pColAgg->min = colVal.value.d;
+      }
+      if (pColAgg->max < colVal.value.d) {
+        pColAgg->max = colVal.value.d;
+      }
       break;
+    }
     case TSDB_DATA_TYPE_VARCHAR:
       break;
-    case TSDB_DATA_TYPE_TIMESTAMP:
+    case TSDB_DATA_TYPE_TIMESTAMP: {
+      if (pColAgg->min > colVal.value.i64) {
+        pColAgg->min = colVal.value.i64;
+      }
+      if (pColAgg->max < colVal.value.i64) {
+        pColAgg->max = colVal.value.i64;
+      }
       break;
+    }
     case TSDB_DATA_TYPE_NCHAR:
       break;
-    case TSDB_DATA_TYPE_UTINYINT:
+    case TSDB_DATA_TYPE_UTINYINT: {
+      pColAgg->sum += colVal.value.u8;
+      if (pColAgg->min > colVal.value.u8) {
+        pColAgg->min = colVal.value.u8;
+      }
+      if (pColAgg->max < colVal.value.u8) {
+        pColAgg->max = colVal.value.u8;
+      }
       break;
-    case TSDB_DATA_TYPE_USMALLINT:
+    }
+    case TSDB_DATA_TYPE_USMALLINT: {
+      pColAgg->sum += colVal.value.u16;
+      if (pColAgg->min > colVal.value.u16) {
+        pColAgg->min = colVal.value.u16;
+      }
+      if (pColAgg->max < colVal.value.u16) {
+        pColAgg->max = colVal.value.u16;
+      }
       break;
-    case TSDB_DATA_TYPE_UINT:
+    }
+    case TSDB_DATA_TYPE_UINT: {
+      pColAgg->sum += colVal.value.u32;
+      if (pColAgg->min > colVal.value.u32) {
+        pColAgg->min = colVal.value.u32;
+      }
+      if (pColAgg->max < colVal.value.u32) {
+        pColAgg->max = colVal.value.u32;
+      }
       break;
-    case TSDB_DATA_TYPE_UBIGINT:
+    }
+    case TSDB_DATA_TYPE_UBIGINT: {
+      pColAgg->sum += colVal.value.u64;
+      if (pColAgg->min > colVal.value.u64) {
+        pColAgg->min = colVal.value.u64;
+      }
+      if (pColAgg->max < colVal.value.u64) {
+        pColAgg->max = colVal.value.u64;
+      }
       break;
+    }
     case TSDB_DATA_TYPE_JSON:
       break;
     case TSDB_DATA_TYPE_VARBINARY:
```
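Each new case above is the same three-step aggregate (accumulate `sum`, lower `min`, raise `max`) instantiated for a different member of the value union; the TIMESTAMP case tracks only min/max, since summing timestamps is not meaningful. A self-contained version of the per-type logic for `int8_t` values (`ColumnAgg` is a simplified stand-in for `SColumnDataAgg`):

```c
// The per-type logic added in the diff, as a standalone aggregate over int8_t.
#include <stdint.h>
#include <stdio.h>

typedef struct {
  int64_t sum;
  int64_t min;
  int64_t max;
} ColumnAgg;  // simplified stand-in for SColumnDataAgg

static void agg_i8(const int8_t *vals, int n, ColumnAgg *agg) {
  agg->sum = 0;
  agg->min = INT64_MAX;  // start wide so the first value wins
  agg->max = INT64_MIN;
  for (int i = 0; i < n; ++i) {
    agg->sum += vals[i];
    if (agg->min > vals[i]) agg->min = vals[i];
    if (agg->max < vals[i]) agg->max = vals[i];
  }
}

int main(void) {
  int8_t vals[] = {3, -7, 12, 0};
  ColumnAgg agg;
  agg_i8(vals, 4, &agg);
  printf("sum=%lld min=%lld max=%lld\n", (long long)agg.sum, (long long)agg.min,
         (long long)agg.max);
  return 0;
}
```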
```diff
@@ -16,23 +16,28 @@
 #define _DEFAULT_SOURCE
 #include "vnd.h"

+#define BATCH_DISABLE 1
+
 static inline bool vnodeIsMsgBlock(tmsg_t type) {
   return (type == TDMT_VND_CREATE_TABLE) || (type == TDMT_VND_CREATE_TABLE) || (type == TDMT_VND_CREATE_TABLE) ||
-         (type == TDMT_VND_ALTER_TABLE) || (type == TDMT_VND_DROP_TABLE) || (type == TDMT_VND_UPDATE_TAG_VAL);
+         (type == TDMT_VND_ALTER_TABLE) || (type == TDMT_VND_DROP_TABLE) || (type == TDMT_VND_UPDATE_TAG_VAL) ||
+         (type == TDMT_VND_ALTER_REPLICA);
 }

 static inline bool vnodeIsMsgWeak(tmsg_t type) { return false; }

 static inline void vnodeWaitBlockMsg(SVnode *pVnode, const SRpcMsg *pMsg) {
   if (vnodeIsMsgBlock(pMsg->msgType)) {
-    vTrace("vgId:%d, msg:%p wait block, type:%s", pVnode->config.vgId, pMsg, TMSG_INFO(pMsg->msgType));
+    const STraceId *trace = &pMsg->info.traceId;
+    vGTrace("vgId:%d, msg:%p wait block, type:%s", pVnode->config.vgId, pMsg, TMSG_INFO(pMsg->msgType));
     tsem_wait(&pVnode->syncSem);
   }
 }

 static inline void vnodePostBlockMsg(SVnode *pVnode, const SRpcMsg *pMsg) {
   if (vnodeIsMsgBlock(pMsg->msgType)) {
-    vTrace("vgId:%d, msg:%p post block, type:%s", pVnode->config.vgId, pMsg, TMSG_INFO(pMsg->msgType));
+    const STraceId *trace = &pMsg->info.traceId;
+    vGTrace("vgId:%d, msg:%p post block, type:%s", pVnode->config.vgId, pMsg, TMSG_INFO(pMsg->msgType));
     tsem_post(&pVnode->syncSem);
   }
 }

@@ -124,60 +129,147 @@ void vnodeRedirectRpcMsg(SVnode *pVnode, SRpcMsg *pMsg) {
   tmsgSendRedirectRsp(&rsp, &newEpSet);
 }

+static void inline vnodeHandleWriteMsg(SVnode *pVnode, SRpcMsg *pMsg) {
+  SRpcMsg rsp = {.code = pMsg->code, .info = pMsg->info};
+  if (vnodeProcessWriteMsg(pVnode, pMsg, pMsg->info.conn.applyIndex, &rsp) < 0) {
+    rsp.code = terrno;
+    const STraceId *trace = &pMsg->info.traceId;
+    vGError("vgId:%d, msg:%p failed to apply right now since %s", pVnode->config.vgId, pMsg, terrstr());
+  }
+  if (rsp.info.handle != NULL) {
+    tmsgSendRsp(&rsp);
+  }
+}
+
+static void vnodeHandleProposeError(SVnode *pVnode, SRpcMsg *pMsg, int32_t code) {
+  if (code == TSDB_CODE_SYN_NOT_LEADER) {
+    vnodeRedirectRpcMsg(pVnode, pMsg);
+  } else {
+    const STraceId *trace = &pMsg->info.traceId;
+    vGError("vgId:%d, msg:%p failed to propose since %s, code:0x%x", pVnode->config.vgId, pMsg, tstrerror(code), code);
+    SRpcMsg rsp = {.code = code, .info = pMsg->info};
+    if (rsp.info.handle != NULL) {
+      tmsgSendRsp(&rsp);
+    }
+  }
+}
+
+static void vnodeHandleAlterReplicaReq(SVnode *pVnode, SRpcMsg *pMsg) {
+  int32_t code = vnodeProcessAlterReplicaReq(pVnode, pMsg);
+
+  if (code > 0) {
+    ASSERT(0);
+  } else if (code == 0) {
+    vnodeWaitBlockMsg(pVnode, pMsg);
+  } else {
+    if (terrno != 0) code = terrno;
+    vnodeHandleProposeError(pVnode, pMsg, code);
+  }
+
+  const STraceId *trace = &pMsg->info.traceId;
+  vGTrace("vgId:%d, msg:%p is freed, code:0x%x", pVnode->config.vgId, pMsg, code);
+  rpcFreeCont(pMsg->pCont);
+  taosFreeQitem(pMsg);
+}
+
+static void inline vnodeProposeBatchMsg(SVnode *pVnode, SRpcMsg **pMsgArr, bool *pIsWeakArr, int32_t *arrSize) {
+  if (*arrSize <= 0) return;
+
+#if BATCH_DISABLE
+  int32_t code = syncPropose(pVnode->sync, pMsgArr[0], pIsWeakArr[0]);
+#else
+  int32_t code = syncProposeBatch(pVnode->sync, pMsgArr, pIsWeakArr, *arrSize);
+#endif
+
+  if (code > 0) {
+    for (int32_t i = 0; i < *arrSize; ++i) {
+      vnodeHandleWriteMsg(pVnode, pMsgArr[i]);
+    }
+  } else if (code == 0) {
+    vnodeWaitBlockMsg(pVnode, pMsgArr[*arrSize - 1]);
+  } else {
+    if (terrno != 0) code = terrno;
+    for (int32_t i = 0; i < *arrSize; ++i) {
+      vnodeHandleProposeError(pVnode, pMsgArr[i], code);
+    }
+  }
+
+  for (int32_t i = 0; i < *arrSize; ++i) {
+    SRpcMsg *pMsg = pMsgArr[i];
+    const STraceId *trace = &pMsg->info.traceId;
+    vGTrace("vgId:%d, msg:%p is freed, code:0x%x", pVnode->config.vgId, pMsg, code);
+    rpcFreeCont(pMsg->pCont);
+    taosFreeQitem(pMsg);
+  }
+
+  *arrSize = 0;
+}
+
 void vnodeProposeWriteMsg(SQueueInfo *pInfo, STaosQall *qall, int32_t numOfMsgs) {
   SVnode  *pVnode = pInfo->ahandle;
   int32_t  vgId = pVnode->config.vgId;
   int32_t  code = 0;
   SRpcMsg *pMsg = NULL;
+  int32_t  arrayPos = 0;
+  SRpcMsg **pMsgArr = taosMemoryCalloc(numOfMsgs, sizeof(SRpcMsg*));
+  bool    *pIsWeakArr = taosMemoryCalloc(numOfMsgs, sizeof(bool));
   vTrace("vgId:%d, get %d msgs from vnode-write queue", vgId, numOfMsgs);

-  for (int32_t m = 0; m < numOfMsgs; m++) {
+  for (int32_t msg = 0; msg < numOfMsgs; msg++) {
     if (taosGetQitem(qall, (void **)&pMsg) == 0) continue;
+    bool isWeak = vnodeIsMsgWeak(pMsg->msgType);
+    bool isBlock = vnodeIsMsgBlock(pMsg->msgType);

     const STraceId *trace = &pMsg->info.traceId;
-    vGTrace("vgId:%d, msg:%p get from vnode-write queue handle:%p", vgId, pMsg, pMsg->info.handle);
+    vGTrace("vgId:%d, msg:%p get from vnode-write queue, weak:%d block:%d msg:%d:%d pos:%d, handle:%p", vgId, pMsg,
+            isWeak, isBlock, msg, numOfMsgs, arrayPos, pMsg->info.handle);
+
+    if (!pVnode->restored) {
+      vGError("vgId:%d, msg:%p failed to process since not leader", vgId, pMsg);
+      terrno = TSDB_CODE_APP_NOT_READY;
+      vnodeHandleProposeError(pVnode, pMsg, TSDB_CODE_APP_NOT_READY);
+      rpcFreeCont(pMsg->pCont);
+      taosFreeQitem(pMsg);
+      continue;
+    }
+
+    if (pMsgArr == NULL || pIsWeakArr == NULL) {
+      vGError("vgId:%d, msg:%p failed to process since out of memory", vgId, pMsg);
+      terrno = TSDB_CODE_OUT_OF_MEMORY;
+      vnodeHandleProposeError(pVnode, pMsg, terrno);
+      rpcFreeCont(pMsg->pCont);
+      taosFreeQitem(pMsg);
+      continue;
+    }

     code = vnodePreProcessWriteMsg(pVnode, pMsg);
     if (code != 0) {
-      vError("vgId:%d, msg:%p failed to pre-process since %s", vgId, pMsg, terrstr());
-    } else {
-      if (pMsg->msgType == TDMT_VND_ALTER_REPLICA) {
-        code = vnodeProcessAlterReplicaReq(pVnode, pMsg);
-      } else {
-        code = syncPropose(pVnode->sync, pMsg, vnodeIsMsgWeak(pMsg->msgType));
-        if (code > 0) {
-          SRpcMsg rsp = {.code = pMsg->code, .info = pMsg->info};
-          if (vnodeProcessWriteMsg(pVnode, pMsg, pMsg->info.conn.applyIndex, &rsp) < 0) {
-            rsp.code = terrno;
-            vError("vgId:%d, msg:%p failed to apply right now since %s", vgId, pMsg, terrstr());
-          }
-          if (rsp.info.handle != NULL) {
-            tmsgSendRsp(&rsp);
-          }
-        } else if (code == 0) {
-          vnodeWaitBlockMsg(pVnode, pMsg);
-        } else {
-        }
-      }
-    }
+      vGError("vgId:%d, msg:%p failed to pre-process since %s", vgId, pMsg, terrstr());
+      rpcFreeCont(pMsg->pCont);
+      taosFreeQitem(pMsg);
+      continue;
+    }

-    if (code < 0) {
-      if (terrno == TSDB_CODE_SYN_NOT_LEADER) {
-        vnodeRedirectRpcMsg(pVnode, pMsg);
-      } else {
-        if (terrno != 0) code = terrno;
-        vError("vgId:%d, msg:%p failed to propose since %s, code:0x%x", vgId, pMsg, tstrerror(code), code);
-        SRpcMsg rsp = {.code = code, .info = pMsg->info};
-        if (rsp.info.handle != NULL) {
-          tmsgSendRsp(&rsp);
-        }
-      }
-    }
+    if (pMsg->msgType == TDMT_VND_ALTER_REPLICA) {
+      vnodeHandleAlterReplicaReq(pVnode, pMsg);
+      continue;
+    }

-    vGTrace("vgId:%d, msg:%p is freed, code:0x%x", vgId, pMsg, code);
-    rpcFreeCont(pMsg->pCont);
-    taosFreeQitem(pMsg);
+    if (isBlock || BATCH_DISABLE) {
+      vnodeProposeBatchMsg(pVnode, pMsgArr, pIsWeakArr, &arrayPos);
+    }
+
+    pMsgArr[arrayPos] = pMsg;
+    pIsWeakArr[arrayPos] = isWeak;
+    arrayPos++;
+
+    if (isBlock || msg == numOfMsgs - 1 || BATCH_DISABLE) {
+      vnodeProposeBatchMsg(pVnode, pMsgArr, pIsWeakArr, &arrayPos);
+    }
   }
+
+  taosMemoryFree(pMsgArr);
+  taosMemoryFree(pIsWeakArr);
 }
```
```diff
 void vnodeApplyWriteMsg(SQueueInfo *pInfo, STaosQall *qall, int32_t numOfMsgs) {
@@ -527,6 +619,12 @@ static void vnodeLeaderTransfer(struct SSyncFSM *pFsm, const SRpcMsg *pMsg, SFsm
   SVnode *pVnode = pFsm->data;
 }

+static void vnodeRestoreFinish(struct SSyncFSM *pFsm) {
+  SVnode *pVnode = pFsm->data;
+  pVnode->restored = true;
+  vDebug("vgId:%d, sync restore finished", pVnode->config.vgId);
+}
+
 static SSyncFSM *vnodeSyncMakeFsm(SVnode *pVnode) {
   SSyncFSM *pFsm = taosMemoryCalloc(1, sizeof(SSyncFSM));
   pFsm->data = pVnode;
@@ -534,7 +632,7 @@ static SSyncFSM *vnodeSyncMakeFsm(SVnode *pVnode) {
   pFsm->FpPreCommitCb = vnodeSyncPreCommitMsg;
   pFsm->FpRollBackCb = vnodeSyncRollBackMsg;
   pFsm->FpGetSnapshotInfo = vnodeSyncGetSnapshot;
-  pFsm->FpRestoreFinishCb = NULL;
+  pFsm->FpRestoreFinishCb = vnodeRestoreFinish;
   pFsm->FpLeaderTransferCb = vnodeLeaderTransfer;
   pFsm->FpReConfigCb = vnodeSyncReconfig;
   pFsm->FpSnapshotStartRead = vnodeSnapshotStartRead;
@@ -588,11 +686,10 @@ bool vnodeIsLeader(SVnode *pVnode) {
     return false;
   }

-  // todo
-  // if (!pVnode->restored) {
-  //   terrno = TSDB_CODE_APP_NOT_READY;
-  //   return false;
-  // }
+  if (!pVnode->restored) {
+    terrno = TSDB_CODE_APP_NOT_READY;
+    return false;
+  }

   return true;
 }
```
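The rewrite above restructures the write path around `vnodeProposeBatchMsg`: messages pulled from the queue accumulate in `pMsgArr`, the pending batch is flushed before a blocking message is appended, and flushed again once a blocking message or the last message of the round has been appended. With `BATCH_DISABLE` set to 1 each flush proposes a single message via `syncPropose`, so the old one-at-a-time behavior is preserved while the batch plumbing is in place. It also gates writes on `pVnode->restored`, which the new `vnodeRestoreFinish` callback sets once sync restore completes. A toy model of just the flush-on-boundary control flow, with ints in place of `SRpcMsg` and a print in place of the propose call (all names illustrative):

```c
// Sketch of the batch-and-flush control flow in vnodeProposeWriteMsg.
#include <stdbool.h>
#include <stdio.h>

#define BATCH_DISABLE 0  // set to 1 to force one-message batches, as the diff does

static void flush(int *batch, int *size) {
  (void)batch;
  if (*size <= 0) return;
  printf("propose batch of %d\n", *size);  // stands in for syncPropose(Batch)
  *size = 0;
}

int main(void) {
  int  msgs[]  = {1, 2, 3, 4, 5};
  bool block[] = {false, false, true, false, false};  // msg 3 is "blocking"
  int  n = 5;
  int  batch[5];
  int  size = 0;

  for (int i = 0; i < n; ++i) {
    if (block[i] || BATCH_DISABLE) flush(batch, &size);  // drain before a blocking msg
    batch[size++] = msgs[i];
    if (block[i] || i == n - 1 || BATCH_DISABLE) flush(batch, &size);  // drain after
  }
  return 0;
}
```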
```diff
@@ -135,7 +135,7 @@ int32_t qExplainGenerateResChildren(SPhysiNode *pNode, SExplainGroup *group, SNo
       break;
     }
     case QUERY_NODE_PHYSICAL_PLAN_MERGE_JOIN: {
-      SJoinPhysiNode *pJoinNode = (SJoinPhysiNode *)pNode;
+      SSortMergeJoinPhysiNode *pJoinNode = (SSortMergeJoinPhysiNode *)pNode;
       pPhysiChildren = pJoinNode->node.pChildren;
       break;
     }
@@ -434,7 +434,8 @@ int32_t qExplainResNodeToRowsImpl(SExplainResNode *pResNode, SExplainCtx *ctx, i
     case QUERY_NODE_PHYSICAL_PLAN_TABLE_SCAN: {
       STableScanPhysiNode *pTblScanNode = (STableScanPhysiNode *)pNode;
       EXPLAIN_ROW_NEW(level,
-                      QUERY_NODE_PHYSICAL_PLAN_TABLE_MERGE_SCAN == pNode->type ? EXPLAIN_TBL_MERGE_SCAN_FORMAT : EXPLAIN_TBL_SCAN_FORMAT,
+                      QUERY_NODE_PHYSICAL_PLAN_TABLE_MERGE_SCAN == pNode->type ? EXPLAIN_TBL_MERGE_SCAN_FORMAT
+                                                                               : EXPLAIN_TBL_SCAN_FORMAT,
                       pTblScanNode->scan.tableName.tname);
       EXPLAIN_ROW_APPEND(EXPLAIN_LEFT_PARENTHESIS_FORMAT);
       if (pResNode->pExecInfo) {
@@ -551,7 +552,7 @@ int32_t qExplainResNodeToRowsImpl(SExplainResNode *pResNode, SExplainCtx *ctx, i
       EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
       if (pSTblScanNode->scan.pScanPseudoCols) {
         EXPLAIN_ROW_APPEND(EXPLAIN_PSEUDO_COLUMNS_FORMAT, pSTblScanNode->scan.pScanPseudoCols->length);
         EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
       }
       EXPLAIN_ROW_APPEND(EXPLAIN_WIDTH_FORMAT, pSTblScanNode->scan.node.pOutputDataBlockDesc->totalRowSize);
       EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
@@ -613,7 +614,7 @@ int32_t qExplainResNodeToRowsImpl(SExplainResNode *pResNode, SExplainCtx *ctx, i
       break;
     }
     case QUERY_NODE_PHYSICAL_PLAN_MERGE_JOIN: {
-      SJoinPhysiNode *pJoinNode = (SJoinPhysiNode *)pNode;
+      SSortMergeJoinPhysiNode *pJoinNode = (SSortMergeJoinPhysiNode *)pNode;
       EXPLAIN_ROW_NEW(level, EXPLAIN_JOIN_FORMAT, EXPLAIN_JOIN_STRING(pJoinNode->joinType));
       EXPLAIN_ROW_APPEND(EXPLAIN_LEFT_PARENTHESIS_FORMAT);
       if (pResNode->pExecInfo) {
@@ -1180,7 +1181,7 @@ int32_t qExplainResNodeToRowsImpl(SExplainResNode *pResNode, SExplainCtx *ctx, i
       EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
       if (pDistScanNode->pScanPseudoCols) {
         EXPLAIN_ROW_APPEND(EXPLAIN_PSEUDO_COLUMNS_FORMAT, pDistScanNode->pScanPseudoCols->length);
         EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
       }
       EXPLAIN_ROW_APPEND(EXPLAIN_WIDTH_FORMAT, pDistScanNode->node.pOutputDataBlockDesc->totalRowSize);
       EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
@@ -1367,7 +1368,7 @@ int32_t qExplainResNodeToRowsImpl(SExplainResNode *pResNode, SExplainCtx *ctx, i
         EXPLAIN_ROW_APPEND(EXPLAIN_FUNCTIONS_FORMAT, pInterpNode->pFuncs->length);
         EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
       }

       EXPLAIN_ROW_APPEND(EXPLAIN_MODE_FORMAT, nodesGetFillModeString(pInterpNode->fillMode));
       EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);

@@ -1419,7 +1420,7 @@ int32_t qExplainResNodeToRowsImpl(SExplainResNode *pResNode, SExplainCtx *ctx, i
         }
       }
       break;
     }
     default:
       qError("not supported physical node type %d", pNode->type);
       return TSDB_CODE_QRY_APP_ERROR;
```
```diff
@@ -320,6 +320,47 @@ typedef struct STableScanInfo {
   int8_t noTable;
 } STableScanInfo;

+typedef struct STableMergeScanInfo {
+  STableListInfo* tableListInfo;
+  int32_t         tableStartIndex;
+  int32_t         tableEndIndex;
+  bool            hasGroupId;
+  uint64_t        groupId;
+  SArray*         dataReaders;  // array of tsdbReaderT*
+  SReadHandle     readHandle;
+  int32_t         bufPageSize;
+  uint32_t        sortBufSize;  // max buffer size for in-memory sort
+  SArray*         pSortInfo;
+  SSortHandle*    pSortHandle;
+
+  SSDataBlock*    pSortInputBlock;
+  int64_t         startTs;  // sort start time
+  SArray*         sortSourceParams;
+
+  SFileBlockLoadRecorder readRecorder;
+  int64_t         numOfRows;
+  SScanInfo       scanInfo;
+  int32_t         scanTimes;
+  SNode*          pFilterNode;  // filter info, which is push down by optimizer
+  SqlFunctionCtx* pCtx;  // which belongs to the direct upstream operator operator query context
+  SResultRowInfo* pResultRowInfo;
+  int32_t*        rowEntryInfoOffset;
+  SExprInfo*      pExpr;
+  SSDataBlock*    pResBlock;
+  SArray*         pColMatchInfo;
+  int32_t         numOfOutput;
+
+  SExprSupp       pseudoSup;
+
+  SQueryTableDataCond cond;
+  int32_t         scanFlag;  // table scan flag to denote if it is a repeat/reverse/main scan
+  int32_t         dataBlockLoadFlag;
+  // if the upstream is an interval operator, the interval info is also kept here to get the time
+  // window to check if current data block needs to be loaded.
+  SInterval       interval;
+  SSampleExecInfo sample;  // sample execution info
+} STableMergeScanInfo;
+
 typedef struct STagScanInfo {
   SColumnInfo *pCols;
   SSDataBlock *pRes;
@@ -886,7 +927,7 @@ SOperatorInfo* createPartitionOperatorInfo(SOperatorInfo* downstream, SPartition

 SOperatorInfo* createTimeSliceOperatorInfo(SOperatorInfo* downstream, SPhysiNode* pNode, SExecTaskInfo* pTaskInfo);

-SOperatorInfo* createMergeJoinOperatorInfo(SOperatorInfo** pDownstream, int32_t numOfDownstream, SJoinPhysiNode* pJoinNode,
+SOperatorInfo* createMergeJoinOperatorInfo(SOperatorInfo** pDownstream, int32_t numOfDownstream, SSortMergeJoinPhysiNode* pJoinNode,
                                            SExecTaskInfo* pTaskInfo);

 SOperatorInfo* createStreamSessionAggOperatorInfo(SOperatorInfo* downstream,
@@ -959,6 +1000,7 @@ int32_t updateSessionWindowInfo(SResultWindowInfo* pWinInfo, TSKEY* pStartTs,
 bool functionNeedToExecute(SqlFunctionCtx* pCtx);
 bool isCloseWindow(STimeWindow* pWin, STimeWindowAggSupp* pSup);
 void appendOneRow(SSDataBlock* pBlock, TSKEY* pStartTs, TSKEY* pEndTs, uint64_t* pUid);
+void printDataBlock(SSDataBlock* pBlock, const char* flag);

 int32_t finalizeResultRowIntoResultDataBlock(SDiskbasedBuf* pBuf, SResultRowPosition* resultRowPosition,
                                              SqlFunctionCtx* pCtx, SExprInfo* pExprInfo, int32_t numOfExprs, const int32_t* rowCellOffset,
```
```diff
@@ -182,7 +182,8 @@ static int32_t getDataBlock(SDataSinkHandle* pHandle, SOutputData* pOutput) {
     return TSDB_CODE_SUCCESS;
   }
   SDataCacheEntry* pEntry = (SDataCacheEntry*)(pDeleter->nextOutput.pData);
   memcpy(pOutput->pData, pEntry->data, pEntry->dataLen);
+  pDeleter->pParam->pUidList = NULL;
   pOutput->numOfRows = pEntry->numOfRows;
   pOutput->numOfCols = pEntry->numOfCols;
   pOutput->compressed = pEntry->compressed;
@@ -205,6 +206,8 @@ static int32_t destroyDataSinker(SDataSinkHandle* pHandle) {
   SDataDeleterHandle* pDeleter = (SDataDeleterHandle*)pHandle;
   atomic_sub_fetch_64(&gDataSinkStat.cachedSize, pDeleter->cachedSize);
   taosMemoryFreeClear(pDeleter->nextOutput.pData);
+  taosArrayDestroy(pDeleter->pParam->pUidList);
+  taosMemoryFree(pDeleter->pParam);
   while (!taosQueueEmpty(pDeleter->pDataBlocks)) {
     SDataDeleterBuf* pBuf = NULL;
     taosReadQitem(pDeleter->pDataBlocks, (void**)&pBuf);
```
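The two dataDeleter hunks pair up: `getDataBlock` clears `pDeleter->pParam->pUidList` once its contents have been handed to the consumer, and `destroyDataSinker` frees whatever the sink still owns. Clearing the pointer at the handoff makes the later destroy a no-op for that buffer rather than a double free. A sketch of the ownership-transfer pattern (names illustrative, not TDengine APIs):

```c
// Once a buffer has been passed to the consumer, the producer NULLs its own
// pointer so the destructor cannot free it a second time; free(NULL) is a
// defined no-op in C.
#include <stdlib.h>

typedef struct {
  int *uid_list;  // owned by the sink until handed off
} Sink;

static int *take_output(Sink *s) {
  int *out = s->uid_list;
  s->uid_list = NULL;  // ownership moves to the caller
  return out;
}

static void destroy_sink(Sink *s) {
  free(s->uid_list);  // no-op if the buffer was already handed off
}

int main(void) {
  Sink s = {.uid_list = malloc(4 * sizeof(int))};
  int *mine = take_output(&s);
  destroy_sink(&s);  // safe: does not touch the handed-off buffer
  free(mine);        // consumer frees what it now owns
  return 0;
}
```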
```diff
@@ -416,6 +416,7 @@ int32_t qExecTask(qTaskInfo_t tinfo, SSDataBlock** pRes, uint64_t* useconds) {
   }

   if (isTaskKilled(pTaskInfo)) {
+    atomic_store_64(&pTaskInfo->owner, 0);
     qDebug("%s already killed, abort", GET_TASKID(pTaskInfo));
     return TSDB_CODE_SUCCESS;
   }
```
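The added `atomic_store_64` releases the task's `owner` slot before the early return on the killed path; without it, the owner field would keep pointing at a thread that has already abandoned the task. A sketch of releasing an owner flag on every exit path, using C11 atomics in place of TDengine's atomic wrappers:

```c
// Release an ownership flag on every exit path, including early returns.
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct {
  _Atomic long owner;  // id of the thread executing the task; 0 = free
  bool killed;
} Task;

static int exec_task(Task *t, long self) {
  long expected = 0;
  if (!atomic_compare_exchange_strong(&t->owner, &expected, self)) {
    return -1;  // someone else owns the task
  }
  if (t->killed) {
    atomic_store(&t->owner, 0);  // release before the early return
    printf("already killed, abort\n");
    return 0;
  }
  // ... run the task ...
  atomic_store(&t->owner, 0);  // release on the normal path too
  return 0;
}

int main(void) {
  Task t = {.owner = 0, .killed = true};
  return exec_task(&t, 42) == 0 ? 0 : 1;
}
```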
```diff
@@ -1325,7 +1325,7 @@ void doFilter(const SNode* pFilterNode, SSDataBlock* pBlock, const SArray* pColM
   extractQualifiedTupleByFilterResult(pBlock, rowRes, keep);

   if (pColMatchInfo != NULL) {
-    for(int32_t i = 0; i < taosArrayGetSize(pColMatchInfo); ++i) {
+    for (int32_t i = 0; i < taosArrayGetSize(pColMatchInfo); ++i) {
       SColMatchInfo* pInfo = taosArrayGet(pColMatchInfo, i);
       if (pInfo->colId == PRIMARYKEY_TIMESTAMP_COL_ID) {
         SColumnInfoData* pColData = taosArrayGet(pBlock->pDataBlock, pInfo->targetSlotId);
@@ -1646,10 +1646,10 @@ void queryCostStatis(SExecTaskInfo* pTaskInfo) {
   SFileBlockLoadRecorder* pRecorder = pSummary->pRecoder;
   if (pSummary->pRecoder != NULL) {
     qDebug(
-        "%s :cost summary: elapsed time:%.2f ms, total blocks:%d, load block SMA:%d, load data block:%d, total rows:%"
-        PRId64 ", check rows:%" PRId64, GET_TASKID(pTaskInfo), pSummary->elapsedTime / 1000.0,
-        pRecorder->totalBlocks, pRecorder->loadBlockStatis, pRecorder->loadBlocks, pRecorder->totalRows,
-        pRecorder->totalCheckedRows);
+        "%s :cost summary: elapsed time:%.2f ms, total blocks:%d, load block SMA:%d, load data block:%d, total "
+        "rows:%" PRId64 ", check rows:%" PRId64,
+        GET_TASKID(pTaskInfo), pSummary->elapsedTime / 1000.0, pRecorder->totalBlocks, pRecorder->loadBlockStatis,
+        pRecorder->loadBlocks, pRecorder->totalRows, pRecorder->totalCheckedRows);
   }

   // qDebug("QInfo:0x%"PRIx64" :cost summary: winResPool size:%.2f Kb, numOfWin:%"PRId64", tableInfoSize:%.2f Kb,
```
```diff
@@ -2783,11 +2783,16 @@ int32_t getTableScanInfo(SOperatorInfo* pOperator, int32_t* order, int32_t* scan
     *order = TSDB_ORDER_ASC;
     *scanFlag = MAIN_SCAN;
     return TSDB_CODE_SUCCESS;
-  } else if (type == QUERY_NODE_PHYSICAL_PLAN_TABLE_SCAN || type == QUERY_NODE_PHYSICAL_PLAN_TABLE_MERGE_SCAN) {
+  } else if (type == QUERY_NODE_PHYSICAL_PLAN_TABLE_SCAN) {
     STableScanInfo* pTableScanInfo = pOperator->info;
     *order = pTableScanInfo->cond.order;
     *scanFlag = pTableScanInfo->scanFlag;
     return TSDB_CODE_SUCCESS;
+  } else if (type == QUERY_NODE_PHYSICAL_PLAN_TABLE_MERGE_SCAN) {
+    STableMergeScanInfo* pTableScanInfo = pOperator->info;
+    *order = pTableScanInfo->cond.order;
+    *scanFlag = pTableScanInfo->scanFlag;
+    return TSDB_CODE_SUCCESS;
   } else {
     if (pOperator->pDownstream == NULL || pOperator->pDownstream[0] == NULL) {
       return TSDB_CODE_INVALID_PARA;
@@ -3728,7 +3733,7 @@ SSchemaWrapper* extractQueriedColumnSchema(SScanPhysiNode* pScanNode) {
 }

   // this the tags and pseudo function columns, we only keep the tag columns
-  for(int32_t i = 0; i < numOfTags; ++i) {
+  for (int32_t i = 0; i < numOfTags; ++i) {
     STargetNode* pNode = (STargetNode*)nodesListGetNode(pScanNode->pScanPseudoCols, i);

     int32_t type = nodeType(pNode->pExpr);
@@ -3844,7 +3849,7 @@ int32_t generateGroupIdMap(STableListInfo* pTableListInfo, SReadHandle* pHandle,
   int32_t groupNum = 0;
   for (int32_t i = 0; i < taosArrayGetSize(pTableListInfo->pTableList); i++) {
     STableKeyInfo* info = taosArrayGet(pTableListInfo->pTableList, i);
     int32_t code = getGroupIdFromTagsVal(pHandle->meta, info->uid, group, keyBuf, &info->groupId);
     if (code != TSDB_CODE_SUCCESS) {
       return code;
     }
@@ -4165,7 +4170,7 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo
   } else if (QUERY_NODE_PHYSICAL_PLAN_STREAM_STATE == type) {
     pOptr = createStreamStateAggOperatorInfo(ops[0], pPhyNode, pTaskInfo);
   } else if (QUERY_NODE_PHYSICAL_PLAN_MERGE_JOIN == type) {
-    pOptr = createMergeJoinOperatorInfo(ops, size, (SJoinPhysiNode*)pPhyNode, pTaskInfo);
+    pOptr = createMergeJoinOperatorInfo(ops, size, (SSortMergeJoinPhysiNode*)pPhyNode, pTaskInfo);
   } else if (QUERY_NODE_PHYSICAL_PLAN_FILL == type) {
     pOptr = createFillOperatorInfo(ops[0], (SFillPhysiNode*)pPhyNode, pTaskInfo);
   } else if (QUERY_NODE_PHYSICAL_PLAN_INDEF_ROWS_FUNC == type) {
```
```diff
@@ -28,30 +28,30 @@ static SSDataBlock* doMergeJoin(struct SOperatorInfo* pOperator);
 static void destroyMergeJoinOperator(void* param, int32_t numOfOutput);
 static void extractTimeCondition(SJoinOperatorInfo* Info, SLogicConditionNode* pLogicConditionNode);

-SOperatorInfo* createMergeJoinOperatorInfo(SOperatorInfo** pDownstream, int32_t numOfDownstream, SJoinPhysiNode* pJoinNode,
-                                           SExecTaskInfo* pTaskInfo) {
+SOperatorInfo* createMergeJoinOperatorInfo(SOperatorInfo** pDownstream, int32_t numOfDownstream,
+                                           SSortMergeJoinPhysiNode* pJoinNode, SExecTaskInfo* pTaskInfo) {
   SJoinOperatorInfo* pInfo = taosMemoryCalloc(1, sizeof(SJoinOperatorInfo));
   SOperatorInfo*     pOperator = taosMemoryCalloc(1, sizeof(SOperatorInfo));
   if (pOperator == NULL || pInfo == NULL) {
     goto _error;
   }

   SSDataBlock* pResBlock = createResDataBlock(pJoinNode->node.pOutputDataBlockDesc);

   int32_t    numOfCols = 0;
   SExprInfo* pExprInfo = createExprInfo(pJoinNode->pTargets, NULL, &numOfCols);

   initResultSizeInfo(&pOperator->resultInfo, 4096);

   pInfo->pRes = pResBlock;
   pOperator->name = "MergeJoinOperator";
   pOperator->operatorType = QUERY_NODE_PHYSICAL_PLAN_MERGE_JOIN;
   pOperator->blocking = false;
   pOperator->status = OP_NOT_OPENED;
   pOperator->exprSupp.pExprInfo = pExprInfo;
   pOperator->exprSupp.numOfExprs = numOfCols;
   pOperator->info = pInfo;
   pOperator->pTaskInfo = pTaskInfo;

   SNode* pMergeCondition = pJoinNode->pMergeCondition;
   if (nodeType(pMergeCondition) == QUERY_NODE_OPERATOR) {
@@ -104,7 +104,7 @@ void setJoinColumnInfo(SColumnInfo* pColumn, const SColumnNode* pColumnNode) {
 void destroyMergeJoinOperator(void* param, int32_t numOfOutput) {
   SJoinOperatorInfo* pJoinOperator = (SJoinOperatorInfo*)param;
   nodesDestroyNode(pJoinOperator->pCondAfterMerge);

   taosMemoryFreeClear(param);
 }
```
```diff
@@ -68,10 +68,12 @@ SOperatorInfo* createProjectOperatorInfo(SOperatorInfo* downstream, SProjectPhys
   pInfo->mergeDataBlocks = pProjPhyNode->mergeDataBlock;

   // todo remove it soon
   if (pTaskInfo->execModel == OPTR_EXEC_MODEL_STREAM) {
-    pInfo->mergeDataBlocks = true;
+    pInfo->mergeDataBlocks = false;
   }

   int32_t numOfRows = 4096;
   size_t  keyBufSize = sizeof(int64_t) + sizeof(int64_t) + POINTER_BYTES;

@@ -181,6 +183,16 @@ static int32_t doIngroupLimitOffset(SLimitInfo* pLimitInfo, uint64_t groupId, SS
   return PROJECT_RETRIEVE_DONE;
 }

+void printDataBlock1(SSDataBlock* pBlock, const char* flag) {
+  if (!pBlock || pBlock->info.rows == 0) {
+    qDebug("===stream===printDataBlock: Block is Null or Empty");
+    return;
+  }
+  char* pBuf = NULL;
+  qDebug("%s", dumpBlockData(pBlock, flag, &pBuf));
+  taosMemoryFreeClear(pBuf);
+}
+
 SSDataBlock* doProjectOperation(SOperatorInfo* pOperator) {
   SProjectOperatorInfo* pProjectInfo = pOperator->info;
   SOptrBasicInfo*       pInfo = &pProjectInfo->binfo;
@@ -229,6 +241,7 @@ SSDataBlock* doProjectOperation(SOperatorInfo* pOperator) {

   // for stream interval
   if (pBlock->info.type == STREAM_RETRIEVE) {
+    // printDataBlock1(pBlock, "project1");
     return pBlock;
   }

@@ -302,7 +315,8 @@ SSDataBlock* doProjectOperation(SOperatorInfo* pOperator) {
   if (pOperator->cost.openCost == 0) {
     pOperator->cost.openCost = (taosGetTimestampUs() - st) / 1000.0;
   }

+  // printDataBlock1(p, "project");
   return (p->info.rows > 0) ? p : NULL;
 }

@@ -587,4 +601,4 @@ SSDataBlock* doGenerateSourceData(SOperatorInfo* pOperator) {
   }

   return (pRes->info.rows > 0) ? pRes : NULL;
 }
```
```diff
@@ -274,7 +274,7 @@ static int32_t loadDataBlock(SOperatorInfo* pOperator, STableScanInfo* pTableSca
     qDebug("%s data block filter out, brange:%" PRId64 "-%" PRId64 ", rows:%d", GET_TASKID(pTaskInfo),
            pBlockInfo->window.skey, pBlockInfo->window.ekey, pBlockInfo->rows);
   } else {
-    qDebug("%s data block filter out, elapsed time:%"PRId64, GET_TASKID(pTaskInfo), (et - st));
+    qDebug("%s data block filter out, elapsed time:%" PRId64, GET_TASKID(pTaskInfo), (et - st));
   }

   return TSDB_CODE_SUCCESS;
@@ -1838,11 +1838,14 @@ static SSDataBlock* sysTableScanUserTags(SOperatorInfo* pOperator) {
       int8_t tagType = smr.me.stbEntry.schemaTag.pSchema[i].type;
       pColInfoData = taosArrayGet(p->pDataBlock, 4);
       char tagTypeStr[VARSTR_HEADER_SIZE + 32];
       int tagTypeLen = sprintf(varDataVal(tagTypeStr), "%s", tDataTypes[tagType].name);
       if (tagType == TSDB_DATA_TYPE_VARCHAR) {
-        tagTypeLen += sprintf(varDataVal(tagTypeStr) + tagTypeLen, "(%d)", (int32_t)(smr.me.stbEntry.schemaTag.pSchema[i].bytes - VARSTR_HEADER_SIZE));
+        tagTypeLen += sprintf(varDataVal(tagTypeStr) + tagTypeLen, "(%d)",
+                              (int32_t)(smr.me.stbEntry.schemaTag.pSchema[i].bytes - VARSTR_HEADER_SIZE));
       } else if (tagType == TSDB_DATA_TYPE_NCHAR) {
-        tagTypeLen += sprintf(varDataVal(tagTypeStr) + tagTypeLen, "(%d)", (int32_t)((smr.me.stbEntry.schemaTag.pSchema[i].bytes - VARSTR_HEADER_SIZE) / TSDB_NCHAR_SIZE));
+        tagTypeLen +=
+            sprintf(varDataVal(tagTypeStr) + tagTypeLen, "(%d)",
+                    (int32_t)((smr.me.stbEntry.schemaTag.pSchema[i].bytes - VARSTR_HEADER_SIZE) / TSDB_NCHAR_SIZE));
       }
       varDataSetLen(tagTypeStr, tagTypeLen);
       colDataAppend(pColInfoData, numOfRows, (char*)tagTypeStr, false);
@@ -2527,49 +2530,6 @@ _error:
   return NULL;
 }

-typedef struct STableMergeScanInfo {
-  STableListInfo* tableListInfo;
-  int32_t         tableStartIndex;
-  int32_t         tableEndIndex;
-  bool            hasGroupId;
-  uint64_t        groupId;
-  SArray*         dataReaders;  // array of tsdbReaderT*
-  SReadHandle     readHandle;
-  int32_t         bufPageSize;
-  uint32_t        sortBufSize;  // max buffer size for in-memory sort
-  SArray*         pSortInfo;
-  SSortHandle*    pSortHandle;
-
-  SSDataBlock*    pSortInputBlock;
-  int64_t         startTs;  // sort start time
-  SArray*         sortSourceParams;
-
-  SFileBlockLoadRecorder readRecorder;
-  int64_t         numOfRows;
-  SScanInfo       scanInfo;
-  int32_t         scanTimes;
-  SNode*          pFilterNode;  // filter info, which is push down by optimizer
-  SqlFunctionCtx* pCtx;  // which belongs to the direct upstream operator operator query context
-  SResultRowInfo* pResultRowInfo;
-  int32_t*        rowEntryInfoOffset;
-  SExprInfo*      pExpr;
-  SSDataBlock*    pResBlock;
-  SArray*         pColMatchInfo;
-  int32_t         numOfOutput;
-
-  SExprInfo*      pPseudoExpr;
-  int32_t         numOfPseudoExpr;
-  SqlFunctionCtx* pPseudoCtx;
-
-  SQueryTableDataCond cond;
-  int32_t         scanFlag;  // table scan flag to denote if it is a repeat/reverse/main scan
-  int32_t         dataBlockLoadFlag;
-  // if the upstream is an interval operator, the interval info is also kept here to get the time
-  // window to check if current data block needs to be loaded.
-  SInterval       interval;
-  SSampleExecInfo sample;  // sample execution info
-} STableMergeScanInfo;
-
 int32_t createScanTableListInfo(SScanPhysiNode* pScanNode, SNodeList* pGroupTags, bool groupSort, SReadHandle* pHandle,
                                 STableListInfo* pTableListInfo, SNode* pTagCond, SNode* pTagIndexCond,
                                 const char* idStr) {
@@ -2700,9 +2660,9 @@ static int32_t loadDataBlockFromOneTable(SOperatorInfo* pOperator, STableMergeSc
   relocateColumnData(pBlock, pTableScanInfo->pColMatchInfo, pCols, true);

   // currently only the tbname pseudo column
-  if (pTableScanInfo->numOfPseudoExpr > 0) {
-    int32_t code = addTagPseudoColumnData(&pTableScanInfo->readHandle, pTableScanInfo->pPseudoExpr,
-                                          pTableScanInfo->numOfPseudoExpr, pBlock, GET_TASKID(pTaskInfo));
+  if (pTableScanInfo->pseudoSup.numOfExprs > 0) {
+    int32_t code = addTagPseudoColumnData(&pTableScanInfo->readHandle, pTableScanInfo->pseudoSup.pExprInfo,
+                                          pTableScanInfo->pseudoSup.numOfExprs, pBlock, GET_TASKID(pTaskInfo));
     if (code != TSDB_CODE_SUCCESS) {
       longjmp(pTaskInfo->env, code);
     }
@@ -2869,29 +2829,31 @@ int32_t stopGroupTableMergeScan(SOperatorInfo* pOperator) {
   STableMergeScanInfo* pInfo = pOperator->info;
   SExecTaskInfo*       pTaskInfo = pOperator->pTaskInfo;

-  tsortDestroySortHandle(pInfo->pSortHandle);
+  size_t numReaders = taosArrayGetSize(pInfo->dataReaders);
+
+  for (int32_t i = 0; i < numReaders; ++i) {
+    STableMergeScanSortSourceParam* param = taosArrayGet(pInfo->sortSourceParams, i);
+    blockDataDestroy(param->inputBlock);
+  }
   taosArrayClear(pInfo->sortSourceParams);

-  for (int32_t i = 0; i < taosArrayGetSize(pInfo->dataReaders); ++i) {
+  tsortDestroySortHandle(pInfo->pSortHandle);
+
+  for (int32_t i = 0; i < numReaders; ++i) {
     STsdbReader* reader = taosArrayGetP(pInfo->dataReaders, i);
     tsdbReaderClose(reader);
   }

   taosArrayDestroy(pInfo->dataReaders);
   pInfo->dataReaders = NULL;
   return TSDB_CODE_SUCCESS;
 }

-SSDataBlock* getSortedTableMergeScanBlockData(SSortHandle* pHandle, int32_t capacity, SOperatorInfo* pOperator) {
+SSDataBlock* getSortedTableMergeScanBlockData(SSortHandle* pHandle, SSDataBlock* pResBlock, int32_t capacity, SOperatorInfo* pOperator) {
   STableMergeScanInfo* pInfo = pOperator->info;
   SExecTaskInfo*       pTaskInfo = pOperator->pTaskInfo;

-  SSDataBlock* p = tsortGetSortedDataBlock(pHandle);
-  if (p == NULL) {
-    return NULL;
-  }
-
-  blockDataEnsureCapacity(p, capacity);
+  blockDataCleanup(pResBlock);
+  blockDataEnsureCapacity(pResBlock, capacity);

   while (1) {
     STupleHandle* pTupleHandle = tsortNextTuple(pHandle);
@@ -2899,14 +2861,15 @@ SSDataBlock* getSortedTableMergeScanBlockData(SSortHandle* pHandle, int32_t capa
       break;
     }

-    appendOneRowToDataBlock(p, pTupleHandle);
-    if (p->info.rows >= capacity) {
+    appendOneRowToDataBlock(pResBlock, pTupleHandle);
+    if (pResBlock->info.rows >= capacity) {
       break;
     }
   }

-  qDebug("%s get sorted row blocks, rows:%d", GET_TASKID(pTaskInfo), p->info.rows);
-  return (p->info.rows > 0) ? p : NULL;
+  qDebug("%s get sorted row blocks, rows:%d", GET_TASKID(pTaskInfo), pResBlock->info.rows);
+  return (pResBlock->info.rows > 0) ? pResBlock : NULL;
 }

 SSDataBlock* doTableMergeScan(SOperatorInfo* pOperator) {
@@ -2935,7 +2898,7 @@ SSDataBlock* doTableMergeScan(SOperatorInfo* pOperator) {
   }
   SSDataBlock* pBlock = NULL;
   while (pInfo->tableStartIndex < tableListSize) {
-    pBlock = getSortedTableMergeScanBlockData(pInfo->pSortHandle, pOperator->resultInfo.capacity, pOperator);
+    pBlock = getSortedTableMergeScanBlockData(pInfo->pSortHandle, pInfo->pResBlock, pOperator->resultInfo.capacity, pOperator);
     if (pBlock != NULL) {
       pBlock->info.groupId = pInfo->groupId;
       pOperator->resultInfo.totalRows += pBlock->info.rows;
@@ -2959,6 +2922,7 @@ SSDataBlock* doTableMergeScan(SOperatorInfo* pOperator) {
 void destroyTableMergeScanOperatorInfo(void* param, int32_t numOfOutput) {
   STableMergeScanInfo* pTableScanInfo = (STableMergeScanInfo*)param;
   cleanupQueryTableDataCond(&pTableScanInfo->cond);
+  taosArrayDestroy(pTableScanInfo->sortSourceParams);

   for (int32_t i = 0; i < taosArrayGetSize(pTableScanInfo->dataReaders); ++i) {
     STsdbReader* reader = taosArrayGetP(pTableScanInfo->dataReaders, i);
@@ -2974,7 +2938,9 @@ void destroyTableMergeScanOperatorInfo(void* param, int32_t numOfOutput) {
   pTableScanInfo->pSortInputBlock = blockDataDestroy(pTableScanInfo->pSortInputBlock);

   taosArrayDestroy(pTableScanInfo->pSortInfo);
+  cleanupExprSupp(&pTableScanInfo->pseudoSup);

+  taosMemoryFreeClear(pTableScanInfo->rowEntryInfoOffset);
   taosMemoryFreeClear(param);
 }

@@ -3031,8 +2997,9 @@ SOperatorInfo* createTableMergeScanOperatorInfo(STableScanPhysiNode* pTableScanN
   }

   if (pTableScanNode->scan.pScanPseudoCols != NULL) {
-    pInfo->pPseudoExpr = createExprInfo(pTableScanNode->scan.pScanPseudoCols, NULL, &pInfo->numOfPseudoExpr);
-    pInfo->pPseudoCtx = createSqlFunctionCtx(pInfo->pPseudoExpr, pInfo->numOfPseudoExpr, &pInfo->rowEntryInfoOffset);
+    SExprSupp* pSup = &pInfo->pseudoSup;
+    pSup->pExprInfo = createExprInfo(pTableScanNode->scan.pScanPseudoCols, NULL, &pSup->numOfExprs);
+    pSup->pCtx = createSqlFunctionCtx(pSup->pExprInfo, pSup->numOfExprs, &pSup->rowEntryInfoOffset);
   }

   pInfo->scanInfo = (SScanInfo){.numOfAsc = pTableScanNode->scanSeq[0], .numOfDesc = pTableScanNode->scanSeq[1]};
```
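`getSortedTableMergeScanBlockData` now takes the caller-owned `pResBlock`, clears it, and fills it in place instead of fetching a fresh block from the sort handle on every call; `doTableMergeScan` passes `pInfo->pResBlock`, so one block is recycled across calls. A sketch of that reuse pattern under the assumption that cleanup keeps capacity while dropping rows (all names illustrative, not TDengine APIs):

```c
// Clear and re-ensure capacity on a caller-owned buffer each call, rather
// than allocating a new result block per call.
#include <stdio.h>
#include <stdlib.h>

typedef struct {
  int *rows;
  int  count;
  int  capacity;
} Block;

static void block_cleanup(Block *b) { b->count = 0; }  // keep memory, drop rows

static void block_ensure_capacity(Block *b, int capacity) {
  if (b->capacity < capacity) {
    b->rows = realloc(b->rows, capacity * sizeof(int));  // unchecked: sketch only
    b->capacity = capacity;
  }
}

// Fill the caller's block instead of allocating one per call.
static Block *fill_sorted(Block *out, int capacity) {
  block_cleanup(out);
  block_ensure_capacity(out, capacity);
  while (out->count < capacity) {
    out->rows[out->count] = out->count;  // stand-in for appendOneRowToDataBlock
    out->count++;
  }
  return out->count > 0 ? out : NULL;
}

int main(void) {
  Block res = {0};
  for (int call = 0; call < 3; ++call) {
    if (fill_sorted(&res, 4)) printf("call %d: %d rows\n", call, res.count);
  }
  free(res.rows);
  return 0;
}
```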
```diff
@@ -2769,7 +2769,7 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {

   SExprSupp* pSup = &pOperator->exprSupp;

-  qDebug("interval status %d %s", pOperator->status, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
+  qDebug("interval status %d %s", pOperator->status, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");

   if (pOperator->status == OP_EXEC_DONE) {
     return NULL;
@@ -2778,7 +2778,7 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
   if (pInfo->pPullDataRes->info.rows != 0) {
     // process the rest of the data
     ASSERT(IS_FINAL_OP(pInfo));
-    printDataBlock(pInfo->pPullDataRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
+    printDataBlock(pInfo->pPullDataRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
     return pInfo->pPullDataRes;
   }

@@ -2793,20 +2793,20 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
       }
       return NULL;
     }
-    printDataBlock(pInfo->binfo.pRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
+    printDataBlock(pInfo->binfo.pRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
     return pInfo->binfo.pRes;
   } else {
     if (!IS_FINAL_OP(pInfo)) {
       doBuildResultDatablock(pOperator, &pInfo->binfo, &pInfo->groupResInfo, pInfo->aggSup.pResultBuf);
       if (pInfo->binfo.pRes->info.rows != 0) {
-        printDataBlock(pInfo->binfo.pRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
+        printDataBlock(pInfo->binfo.pRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
         return pInfo->binfo.pRes;
       }
     }
     if (pInfo->pUpdateRes->info.rows != 0 && pInfo->returnUpdate) {
       pInfo->returnUpdate = false;
       ASSERT(!IS_FINAL_OP(pInfo));
-      printDataBlock(pInfo->pUpdateRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
+      printDataBlock(pInfo->pUpdateRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
       // process the rest of the data
       return pInfo->pUpdateRes;
     }
@@ -2814,13 +2814,13 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
     // if (pInfo->pPullDataRes->info.rows != 0) {
     //   // process the rest of the data
     //   ASSERT(IS_FINAL_OP(pInfo));
-    //   printDataBlock(pInfo->pPullDataRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
+    //   printDataBlock(pInfo->pPullDataRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
     //   return pInfo->pPullDataRes;
     // }
     doBuildDeleteResult(pInfo->pDelWins, &pInfo->delIndex, pInfo->pDelRes);
     if (pInfo->pDelRes->info.rows != 0) {
       // process the rest of the data
-      printDataBlock(pInfo->pDelRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
+      printDataBlock(pInfo->pDelRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
       return pInfo->pDelRes;
     }
   }
@@ -2831,10 +2831,10 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
```
@ -2831,10 +2831,10 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
|
||||||
clearSpecialDataBlock(pInfo->pUpdateRes);
|
clearSpecialDataBlock(pInfo->pUpdateRes);
|
||||||
removeDeleteResults(pUpdated, pInfo->pDelWins);
|
removeDeleteResults(pUpdated, pInfo->pDelWins);
|
||||||
pOperator->status = OP_RES_TO_RETURN;
|
pOperator->status = OP_RES_TO_RETURN;
|
||||||
qDebug("%s return data", IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
|
qDebug("%s return data", IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
|
||||||
break;
|
break;
|
||||||
}
|
}
|
||||||
printDataBlock(pBlock, IS_FINAL_OP(pInfo) ? "interval Final recv" : "interval Semi recv");
|
printDataBlock(pBlock, IS_FINAL_OP(pInfo) ? "interval final recv" : "interval semi recv");
|
||||||
maxTs = TMAX(maxTs, pBlock->info.window.ekey);
|
maxTs = TMAX(maxTs, pBlock->info.window.ekey);
|
||||||
|
|
||||||
if (pBlock->info.type == STREAM_NORMAL || pBlock->info.type == STREAM_PULL_DATA ||
|
if (pBlock->info.type == STREAM_NORMAL || pBlock->info.type == STREAM_PULL_DATA ||
|
||||||
|
@ -2934,20 +2934,20 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
|
||||||
if (pInfo->pPullDataRes->info.rows != 0) {
|
if (pInfo->pPullDataRes->info.rows != 0) {
|
||||||
// process the rest of the data
|
// process the rest of the data
|
||||||
ASSERT(IS_FINAL_OP(pInfo));
|
ASSERT(IS_FINAL_OP(pInfo));
|
||||||
printDataBlock(pInfo->pPullDataRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
|
printDataBlock(pInfo->pPullDataRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
|
||||||
return pInfo->pPullDataRes;
|
return pInfo->pPullDataRes;
|
||||||
}
|
}
|
||||||
|
|
||||||
doBuildResultDatablock(pOperator, &pInfo->binfo, &pInfo->groupResInfo, pInfo->aggSup.pResultBuf);
|
doBuildResultDatablock(pOperator, &pInfo->binfo, &pInfo->groupResInfo, pInfo->aggSup.pResultBuf);
|
||||||
if (pInfo->binfo.pRes->info.rows != 0) {
|
if (pInfo->binfo.pRes->info.rows != 0) {
|
||||||
printDataBlock(pInfo->binfo.pRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
|
printDataBlock(pInfo->binfo.pRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
|
||||||
return pInfo->binfo.pRes;
|
return pInfo->binfo.pRes;
|
||||||
}
|
}
|
||||||
|
|
||||||
if (pInfo->pUpdateRes->info.rows != 0 && pInfo->returnUpdate) {
|
if (pInfo->pUpdateRes->info.rows != 0 && pInfo->returnUpdate) {
|
||||||
pInfo->returnUpdate = false;
|
pInfo->returnUpdate = false;
|
||||||
ASSERT(!IS_FINAL_OP(pInfo));
|
ASSERT(!IS_FINAL_OP(pInfo));
|
||||||
printDataBlock(pInfo->pUpdateRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
|
printDataBlock(pInfo->pUpdateRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
|
||||||
// process the rest of the data
|
// process the rest of the data
|
||||||
return pInfo->pUpdateRes;
|
return pInfo->pUpdateRes;
|
||||||
}
|
}
|
||||||
|
@ -2955,7 +2955,7 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
|
||||||
doBuildDeleteResult(pInfo->pDelWins, &pInfo->delIndex, pInfo->pDelRes);
|
doBuildDeleteResult(pInfo->pDelWins, &pInfo->delIndex, pInfo->pDelRes);
|
||||||
if (pInfo->pDelRes->info.rows != 0) {
|
if (pInfo->pDelRes->info.rows != 0) {
|
||||||
// process the rest of the data
|
// process the rest of the data
|
||||||
printDataBlock(pInfo->pDelRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
|
printDataBlock(pInfo->pDelRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
|
||||||
return pInfo->pDelRes;
|
return pInfo->pDelRes;
|
||||||
}
|
}
|
||||||
// ASSERT(false);
|
// ASSERT(false);
|
||||||
|
@ -3815,14 +3815,14 @@ static SSDataBlock* doStreamSessionAgg(SOperatorInfo* pOperator) {
|
||||||
} else if (pOperator->status == OP_RES_TO_RETURN) {
|
} else if (pOperator->status == OP_RES_TO_RETURN) {
|
||||||
doBuildDeleteDataBlock(pInfo->pStDeleted, pInfo->pDelRes, &pInfo->pDelIterator);
|
doBuildDeleteDataBlock(pInfo->pStDeleted, pInfo->pDelRes, &pInfo->pDelIterator);
|
||||||
if (pInfo->pDelRes->info.rows > 0) {
|
if (pInfo->pDelRes->info.rows > 0) {
|
||||||
printDataBlock(pInfo->pDelRes, IS_FINAL_OP(pInfo) ? "Final Session" : "Single Session");
|
printDataBlock(pInfo->pDelRes, IS_FINAL_OP(pInfo) ? "final session" : "single session");
|
||||||
return pInfo->pDelRes;
|
return pInfo->pDelRes;
|
||||||
}
|
}
|
||||||
doBuildResultDatablock(pOperator, pBInfo, &pInfo->groupResInfo, pInfo->streamAggSup.pResultBuf);
|
doBuildResultDatablock(pOperator, pBInfo, &pInfo->groupResInfo, pInfo->streamAggSup.pResultBuf);
|
||||||
if (pBInfo->pRes->info.rows == 0 || !hasDataInGroupInfo(&pInfo->groupResInfo)) {
|
if (pBInfo->pRes->info.rows == 0 || !hasDataInGroupInfo(&pInfo->groupResInfo)) {
|
||||||
doSetOperatorCompleted(pOperator);
|
doSetOperatorCompleted(pOperator);
|
||||||
}
|
}
|
||||||
printDataBlock(pBInfo->pRes, IS_FINAL_OP(pInfo) ? "Final Session" : "Single Session");
|
printDataBlock(pBInfo->pRes, IS_FINAL_OP(pInfo) ? "final session" : "single session");
|
||||||
return pBInfo->pRes->info.rows == 0 ? NULL : pBInfo->pRes;
|
return pBInfo->pRes->info.rows == 0 ? NULL : pBInfo->pRes;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -3835,7 +3835,7 @@ static SSDataBlock* doStreamSessionAgg(SOperatorInfo* pOperator) {
|
||||||
if (pBlock == NULL) {
|
if (pBlock == NULL) {
|
||||||
break;
|
break;
|
||||||
}
|
}
|
||||||
printDataBlock(pBlock, IS_FINAL_OP(pInfo) ? "Final Session Recv" : "Single Session Recv");
|
printDataBlock(pBlock, IS_FINAL_OP(pInfo) ? "final session recv" : "single session recv");
|
||||||
|
|
||||||
if (pBlock->info.type == STREAM_CLEAR) {
|
if (pBlock->info.type == STREAM_CLEAR) {
|
||||||
SArray* pWins = taosArrayInit(16, sizeof(SResultWindowInfo));
|
SArray* pWins = taosArrayInit(16, sizeof(SResultWindowInfo));
|
||||||
|
@ -3912,11 +3912,11 @@ static SSDataBlock* doStreamSessionAgg(SOperatorInfo* pOperator) {
|
||||||
blockDataEnsureCapacity(pInfo->binfo.pRes, pOperator->resultInfo.capacity);
|
blockDataEnsureCapacity(pInfo->binfo.pRes, pOperator->resultInfo.capacity);
|
||||||
doBuildDeleteDataBlock(pInfo->pStDeleted, pInfo->pDelRes, &pInfo->pDelIterator);
|
doBuildDeleteDataBlock(pInfo->pStDeleted, pInfo->pDelRes, &pInfo->pDelIterator);
|
||||||
if (pInfo->pDelRes->info.rows > 0) {
|
if (pInfo->pDelRes->info.rows > 0) {
|
||||||
printDataBlock(pInfo->pDelRes, IS_FINAL_OP(pInfo) ? "Final Session" : "Single Session");
|
printDataBlock(pInfo->pDelRes, IS_FINAL_OP(pInfo) ? "final session" : "single session");
|
||||||
return pInfo->pDelRes;
|
return pInfo->pDelRes;
|
||||||
}
|
}
|
||||||
doBuildResultDatablock(pOperator, &pInfo->binfo, &pInfo->groupResInfo, pInfo->streamAggSup.pResultBuf);
|
doBuildResultDatablock(pOperator, &pInfo->binfo, &pInfo->groupResInfo, pInfo->streamAggSup.pResultBuf);
|
||||||
printDataBlock(pBInfo->pRes, IS_FINAL_OP(pInfo) ? "Final Session" : "Single Session");
|
printDataBlock(pBInfo->pRes, IS_FINAL_OP(pInfo) ? "final session" : "single session");
|
||||||
return pBInfo->pRes->info.rows == 0 ? NULL : pBInfo->pRes;
|
return pBInfo->pRes->info.rows == 0 ? NULL : pBInfo->pRes;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -3955,21 +3955,21 @@ static SSDataBlock* doStreamSessionSemiAgg(SOperatorInfo* pOperator) {
|
||||||
} else if (pOperator->status == OP_RES_TO_RETURN) {
|
} else if (pOperator->status == OP_RES_TO_RETURN) {
|
||||||
doBuildResultDatablock(pOperator, pBInfo, &pInfo->groupResInfo, pInfo->streamAggSup.pResultBuf);
|
doBuildResultDatablock(pOperator, pBInfo, &pInfo->groupResInfo, pInfo->streamAggSup.pResultBuf);
|
||||||
if (pBInfo->pRes->info.rows > 0) {
|
if (pBInfo->pRes->info.rows > 0) {
|
||||||
printDataBlock(pBInfo->pRes, "Semi Session");
|
printDataBlock(pBInfo->pRes, "sems session");
|
||||||
return pBInfo->pRes;
|
return pBInfo->pRes;
|
||||||
}
|
}
|
||||||
|
|
||||||
// doBuildDeleteDataBlock(pInfo->pStDeleted, pInfo->pDelRes, &pInfo->pDelIterator);
|
// doBuildDeleteDataBlock(pInfo->pStDeleted, pInfo->pDelRes, &pInfo->pDelIterator);
|
||||||
if (pInfo->pDelRes->info.rows > 0 && !pInfo->returnDelete) {
|
if (pInfo->pDelRes->info.rows > 0 && !pInfo->returnDelete) {
|
||||||
pInfo->returnDelete = true;
|
pInfo->returnDelete = true;
|
||||||
printDataBlock(pInfo->pDelRes, "Semi Session");
|
printDataBlock(pInfo->pDelRes, "sems session");
|
||||||
return pInfo->pDelRes;
|
return pInfo->pDelRes;
|
||||||
}
|
}
|
||||||
|
|
||||||
if (pInfo->pUpdateRes->info.rows > 0) {
|
if (pInfo->pUpdateRes->info.rows > 0) {
|
||||||
// process the rest of the data
|
// process the rest of the data
|
||||||
pOperator->status = OP_OPENED;
|
pOperator->status = OP_OPENED;
|
||||||
printDataBlock(pInfo->pUpdateRes, "Semi Session");
|
printDataBlock(pInfo->pUpdateRes, "sems session");
|
||||||
return pInfo->pUpdateRes;
|
return pInfo->pUpdateRes;
|
||||||
}
|
}
|
||||||
// semi interval operator clear disk buffer
|
// semi interval operator clear disk buffer
|
||||||
|
@ -4033,21 +4033,21 @@ static SSDataBlock* doStreamSessionSemiAgg(SOperatorInfo* pOperator) {
|
||||||
|
|
||||||
doBuildResultDatablock(pOperator, pBInfo, &pInfo->groupResInfo, pInfo->streamAggSup.pResultBuf);
|
doBuildResultDatablock(pOperator, pBInfo, &pInfo->groupResInfo, pInfo->streamAggSup.pResultBuf);
|
||||||
if (pBInfo->pRes->info.rows > 0) {
|
if (pBInfo->pRes->info.rows > 0) {
|
||||||
printDataBlock(pBInfo->pRes, "Semi Session");
|
printDataBlock(pBInfo->pRes, "sems session");
|
||||||
return pBInfo->pRes;
|
return pBInfo->pRes;
|
||||||
}
|
}
|
||||||
|
|
||||||
// doBuildDeleteDataBlock(pInfo->pStDeleted, pInfo->pDelRes, &pInfo->pDelIterator);
|
// doBuildDeleteDataBlock(pInfo->pStDeleted, pInfo->pDelRes, &pInfo->pDelIterator);
|
||||||
if (pInfo->pDelRes->info.rows > 0 && !pInfo->returnDelete) {
|
if (pInfo->pDelRes->info.rows > 0 && !pInfo->returnDelete) {
|
||||||
pInfo->returnDelete = true;
|
pInfo->returnDelete = true;
|
||||||
printDataBlock(pInfo->pDelRes, "Semi Session");
|
printDataBlock(pInfo->pDelRes, "sems session");
|
||||||
return pInfo->pDelRes;
|
return pInfo->pDelRes;
|
||||||
}
|
}
|
||||||
|
|
||||||
if (pInfo->pUpdateRes->info.rows > 0) {
|
if (pInfo->pUpdateRes->info.rows > 0) {
|
||||||
// process the rest of the data
|
// process the rest of the data
|
||||||
pOperator->status = OP_OPENED;
|
pOperator->status = OP_OPENED;
|
||||||
printDataBlock(pInfo->pUpdateRes, "Semi Session");
|
printDataBlock(pInfo->pUpdateRes, "sems session");
|
||||||
return pInfo->pUpdateRes;
|
return pInfo->pUpdateRes;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
|
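Apart from the session-semi hunks, this stretch only normalizes the debug labels passed to printDataBlock and qDebug to lowercase. If one wanted to avoid repeating the ternary at every call site, a tiny helper could centralize the label choice; this is a sketch of that idea, not the actual TDengine code:

```c
#include <stdbool.h>
#include <stdio.h>

// Hypothetical helper: one place to derive the final/semi label that the
// diff repeats at every printDataBlock()/qDebug() call site.
static inline const char* intervalOpLabel(bool isFinal) {
  return isFinal ? "interval final" : "interval semi";
}

int main(void) {
  printf("%s\n", intervalOpLabel(true));   // interval final
  printf("%s\n", intervalOpLabel(false));  // interval semi
  return 0;
}
```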
@@ -557,11 +557,13 @@ static int32_t translateApercentileImpl(SFunctionNode* pFunc, char* pErrBuf, int
     pFunc->node.resType =
         (SDataType){.bytes = getApercentileMaxSize() + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_BINARY};
   } else {
-    if (1 != numOfParams) {
+    // original percent param is reserved
+    if (2 != numOfParams) {
       return invaildFuncParaNumErrMsg(pErrBuf, len, pFunc->functionName);
     }
     uint8_t para1Type = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 0))->resType.type;
-    if (TSDB_DATA_TYPE_BINARY != para1Type) {
+    uint8_t para2Type = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 1))->resType.type;
+    if (TSDB_DATA_TYPE_BINARY != para1Type || !IS_INTEGER_TYPE(para2Type)) {
       return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
     }

@@ -621,7 +623,7 @@ static int32_t translateTopBot(SFunctionNode* pFunc, char* pErrBuf, int32_t len)
   return TSDB_CODE_SUCCESS;
 }

-int32_t topBotCreateMergePara(SNodeList* pRawParameters, SNode* pPartialRes, SNodeList** pParameters) {
+static int32_t reserveFirstMergeParam(SNodeList* pRawParameters, SNode* pPartialRes, SNodeList** pParameters) {
   int32_t code = nodesListMakeAppend(pParameters, pPartialRes);
   if (TSDB_CODE_SUCCESS == code) {
     code = nodesListStrictAppend(*pParameters, nodesCloneNode(nodesListGetNode(pRawParameters, 1)));
@@ -629,6 +631,14 @@ int32_t topBotCreateMergePara(SNodeList* pRawParameters, SNode* pPartialRes, SNo
   return TSDB_CODE_SUCCESS;
 }

+int32_t topBotCreateMergeParam(SNodeList* pRawParameters, SNode* pPartialRes, SNodeList** pParameters) {
+  return reserveFirstMergeParam(pRawParameters, pPartialRes, pParameters);
+}
+
+int32_t apercentileCreateMergeParam(SNodeList* pRawParameters, SNode* pPartialRes, SNodeList** pParameters) {
+  return reserveFirstMergeParam(pRawParameters, pPartialRes, pParameters);
+}
+
 static int32_t translateSpread(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
   if (1 != LIST_LENGTH(pFunc->pParameterList)) {
     return invaildFuncParaNumErrMsg(pErrBuf, len, pFunc->functionName);
@@ -1532,7 +1542,7 @@ static int32_t translateDiff(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
   }

   uint8_t resType;
-  if (IS_SIGNED_NUMERIC_TYPE(colType) || TSDB_DATA_TYPE_BOOL == colType) {
+  if (IS_SIGNED_NUMERIC_TYPE(colType) || TSDB_DATA_TYPE_BOOL == colType || TSDB_DATA_TYPE_TIMESTAMP == colType) {
     resType = TSDB_DATA_TYPE_BIGINT;
   } else {
     resType = TSDB_DATA_TYPE_DOUBLE;
@@ -2068,7 +2078,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
     .invertFunc = NULL,
     .combineFunc = apercentileCombine,
     .pPartialFunc = "_apercentile_partial",
-    .pMergeFunc = "_apercentile_merge"
+    .pMergeFunc = "_apercentile_merge",
+    .createMergeParaFuc = apercentileCreateMergeParam
   },
   {
     .name = "_apercentile_partial",
@@ -2107,7 +2118,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
     .combineFunc = topCombine,
     .pPartialFunc = "top",
     .pMergeFunc = "top",
-    .createMergeParaFuc = topBotCreateMergePara
+    .createMergeParaFuc = topBotCreateMergeParam
   },
   {
     .name = "bottom",
@@ -2122,7 +2133,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
     .combineFunc = bottomCombine,
     .pPartialFunc = "bottom",
     .pMergeFunc = "bottom",
-    .createMergeParaFuc = topBotCreateMergePara
+    .createMergeParaFuc = topBotCreateMergeParam
   },
   {
     .name = "spread",
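The apercentile change reserves the original percent argument across the partial/merge split: _apercentile_merge now expects two parameters (the BINARY partial state plus the integer percent), and the shared reserveFirstMergeParam() copies parameter 1 of the raw call into the merge call's parameter list, which is why the same helper serves top, bottom, and apercentile. A rough, self-contained illustration of that list rebuilding, using simplified stand-in types rather than the real SNodeList API:

```c
#include <stdio.h>

// Simplified stand-ins: the real helper works on SNodeList*/SNode*.
typedef struct Node { const char* desc; } Node;

// Mirrors reserveFirstMergeParam(): the merge call's parameters are the
// partial result followed by the original second argument (the percent
// for apercentile, the k for top/bottom).
static void buildMergeParams(const Node* pPartialRes, const Node* pRawParams,
                             int nRaw, Node* pOut, int* nOut) {
  *nOut = 0;
  pOut[(*nOut)++] = *pPartialRes;
  if (nRaw > 1) {
    pOut[(*nOut)++] = pRawParams[1];  // keep the reserved scalar argument
  }
}

int main(void) {
  Node raw[2] = {{"col"}, {"50"}};
  Node partial = {"_apercentile_partial(col, 50)"};
  Node merged[2];
  int  n = 0;
  buildMergeParams(&partial, raw, 2, merged, &n);
  for (int i = 0; i < n; ++i) printf("merge param %d: %s\n", i, merged[i].desc);
  return 0;
}
```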
@@ -877,7 +877,7 @@ void udfcUvHandleError(SClientUvConn *conn);
 void onUdfcPipeRead(uv_stream_t *client, ssize_t nread, const uv_buf_t *buf);
 void onUdfcPipeWrite(uv_write_t *write, int status);
 void onUdfcPipeConnect(uv_connect_t *connect, int status);
-int32_t udfcCreateUvTask(SClientUdfTask *task, int8_t uvTaskType, SClientUvTaskNode **pUvTask);
+int32_t udfcInitializeUvTask(SClientUdfTask *task, int8_t uvTaskType, SClientUvTaskNode *uvTask);
 int32_t udfcQueueUvTask(SClientUvTaskNode *uvTask);
 int32_t udfcStartUvTask(SClientUvTaskNode *uvTask);
 void udfcAsyncTaskCb(uv_async_t *async);
@@ -1376,8 +1376,7 @@ void onUdfcPipeConnect(uv_connect_t *connect, int status) {
   uv_sem_post(&uvTask->taskSem);
 }

-int32_t udfcCreateUvTask(SClientUdfTask *task, int8_t uvTaskType, SClientUvTaskNode **pUvTask) {
-  SClientUvTaskNode *uvTask = taosMemoryCalloc(1, sizeof(SClientUvTaskNode));
+int32_t udfcInitializeUvTask(SClientUdfTask *task, int8_t uvTaskType, SClientUvTaskNode *uvTask) {
   uvTask->type = uvTaskType;
   uvTask->udfc = task->session->udfc;

@@ -1412,7 +1411,6 @@ int32_t udfcCreateUvTask(SClientUdfTask *task, int8_t uvTaskType, SClientUvTaskN
   }
   uv_sem_init(&uvTask->taskSem, 0);

-  *pUvTask = uvTask;
   return 0;
 }

@@ -1615,10 +1613,10 @@ int32_t udfcClose() {
 }

 int32_t udfcRunUdfUvTask(SClientUdfTask *task, int8_t uvTaskType) {
-  SClientUvTaskNode *uvTask = NULL;
+  SClientUvTaskNode *uvTask = taosMemoryCalloc(1, sizeof(SClientUvTaskNode));

-  udfcCreateUvTask(task, uvTaskType, &uvTask);
   fnDebug("udfc client task: %p created uvTask: %p. pipe: %p", task, uvTask, task->session->udfUvPipe);

+  udfcInitializeUvTask(task, uvTaskType, uvTask);
   udfcQueueUvTask(uvTask);
   udfcGetUdfTaskResultFromUvTask(task, uvTask);
   if (uvTaskType == UV_TASK_CONNECT) {
@@ -1629,6 +1627,8 @@ int32_t udfcRunUdfUvTask(SClientUdfTask *task, int8_t uvTaskType) {
   taosMemoryFree(uvTask->reqBuf.base);
   uvTask->reqBuf.base = NULL;
   taosMemoryFree(uvTask);
+  fnDebug("udfc freed uvTask: %p", task);
+
   uvTask = NULL;
   return task->errCode;
 }
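The net effect of the udfc hunks is an ownership cleanup: udfcRunUdfUvTask now allocates the task node itself, udfcInitializeUvTask (formerly udfcCreateUvTask) only fills in an already-allocated node, and the function that allocated is the one that frees. Condensed from the hunks above, with error handling and the UV_TASK_CONNECT branch elided:

```c
int32_t udfcRunUdfUvTask(SClientUdfTask *task, int8_t uvTaskType) {
  // allocate here, so this function owns uvTask for its whole lifetime
  SClientUvTaskNode *uvTask = taosMemoryCalloc(1, sizeof(SClientUvTaskNode));

  udfcInitializeUvTask(task, uvTaskType, uvTask);  // fill in, no allocation
  udfcQueueUvTask(uvTask);                         // hand off to the uv loop
  udfcGetUdfTaskResultFromUvTask(task, uvTask);    // wait for the result

  taosMemoryFree(uvTask->reqBuf.base);             // free in the same scope
  uvTask->reqBuf.base = NULL;
  taosMemoryFree(uvTask);
  return task->errCode;
}
```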
@@ -1,7 +1,12 @@
 #include <string.h>
 #include <stdlib.h>
 #include <stdio.h>
+#ifdef LINUX
+#include <unistd.h>
+#endif
+#ifdef WINDOWS
+#include <windows.h>
+#endif
 #include "taosudf.h"


@@ -35,6 +40,12 @@ DLL_EXPORT int32_t udf1(SUdfDataBlock* block, SUdfColumn *resultCol) {
       udfColDataSet(resultCol, i, (char *)&luckyNum, false);
     }
   }
+  // to simulate actual processing delay by udf
+#ifdef LINUX
+  usleep(1 * 1000);  // usleep takes sleep time in us (1 millionth of a second)
+#endif
+#ifdef WINDOWS
+  Sleep(1);
+#endif
   return 0;
 }
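The example UDF now sleeps roughly one millisecond per block to simulate processing cost, with platform guards because usleep() is POSIX-only and Sleep() is Win32-only (LINUX and WINDOWS are TDengine's own build macros). A self-contained version of the same portability shim, using the standard _WIN32 predefined macro instead:

```c
#include <stdio.h>

#ifdef _WIN32
#include <windows.h>
static void sleepMillis(unsigned ms) { Sleep(ms); }          // Win32 takes ms
#else
#include <unistd.h>
static void sleepMillis(unsigned ms) { usleep(ms * 1000); }  // POSIX takes us
#endif

int main(void) {
  sleepMillis(1);  // ~1 ms, the same delay the patched udf1() simulates
  puts("slept ~1 ms");
  return 0;
}
```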
@@ -375,6 +375,7 @@ static int32_t logicJoinCopy(const SJoinLogicNode* pSrc, SJoinLogicNode* pDst) {
   CLONE_NODE_FIELD(pMergeCondition);
   CLONE_NODE_FIELD(pOnConditions);
   COPY_SCALAR_FIELD(isSingleTableJoin);
+  COPY_SCALAR_FIELD(inputTsOrder);
   return TSDB_CODE_SUCCESS;
 }

@@ -440,6 +441,7 @@ static int32_t logicWindowCopy(const SWindowLogicNode* pSrc, SWindowLogicNode* p
   COPY_SCALAR_FIELD(watermark);
   COPY_SCALAR_FIELD(igExpired);
   COPY_SCALAR_FIELD(windowAlgo);
+  COPY_SCALAR_FIELD(inputTsOrder);
   return TSDB_CODE_SUCCESS;
 }
@@ -1717,7 +1717,7 @@ static const char* jkJoinPhysiPlanOnConditions = "OnConditions";
 static const char* jkJoinPhysiPlanTargets = "Targets";

 static int32_t physiJoinNodeToJson(const void* pObj, SJson* pJson) {
-  const SJoinPhysiNode* pNode = (const SJoinPhysiNode*)pObj;
+  const SSortMergeJoinPhysiNode* pNode = (const SSortMergeJoinPhysiNode*)pObj;

   int32_t code = physicPlanNodeToJson(pObj, pJson);
   if (TSDB_CODE_SUCCESS == code) {
@@ -1737,7 +1737,7 @@ static int32_t physiJoinNodeToJson(const void* pObj, SJson* pJson) {
 }

 static int32_t jsonToPhysiJoinNode(const SJson* pJson, void* pObj) {
-  SJoinPhysiNode* pNode = (SJoinPhysiNode*)pObj;
+  SSortMergeJoinPhysiNode* pNode = (SSortMergeJoinPhysiNode*)pObj;

   int32_t code = jsonToPhysicPlanNode(pJson, pObj);
   if (TSDB_CODE_SUCCESS == code) {
@@ -468,7 +468,7 @@ static EDealRes dispatchPhysiPlan(SNode* pNode, ETraversalOrder order, FNodeWalk
       break;
     }
     case QUERY_NODE_PHYSICAL_PLAN_MERGE_JOIN: {
-      SJoinPhysiNode* pJoin = (SJoinPhysiNode*)pNode;
+      SSortMergeJoinPhysiNode* pJoin = (SSortMergeJoinPhysiNode*)pNode;
       res = walkPhysiNode((SPhysiNode*)pNode, order, walker, pContext);
       if (DEAL_RES_ERROR != res && DEAL_RES_END != res) {
         res = walkPhysiPlan(pJoin->pMergeCondition, order, walker, pContext);
@@ -287,7 +287,7 @@ SNode* nodesMakeNode(ENodeType type) {
     case QUERY_NODE_PHYSICAL_PLAN_PROJECT:
       return makeNode(type, sizeof(SProjectPhysiNode));
     case QUERY_NODE_PHYSICAL_PLAN_MERGE_JOIN:
-      return makeNode(type, sizeof(SJoinPhysiNode));
+      return makeNode(type, sizeof(SSortMergeJoinPhysiNode));
     case QUERY_NODE_PHYSICAL_PLAN_HASH_AGG:
       return makeNode(type, sizeof(SAggPhysiNode));
     case QUERY_NODE_PHYSICAL_PLAN_EXCHANGE:
@@ -883,7 +883,7 @@ void nodesDestroyNode(SNode* pNode) {
       break;
     }
     case QUERY_NODE_PHYSICAL_PLAN_MERGE_JOIN: {
-      SJoinPhysiNode* pPhyNode = (SJoinPhysiNode*)pNode;
+      SSortMergeJoinPhysiNode* pPhyNode = (SSortMergeJoinPhysiNode*)pNode;
       destroyPhysiNode((SPhysiNode*)pPhyNode);
       nodesDestroyNode(pPhyNode->pMergeCondition);
       nodesDestroyNode(pPhyNode->pOnConditions);
@@ -55,7 +55,11 @@ typedef enum EDatabaseOptionType {
   DB_OPTION_VGROUPS,
   DB_OPTION_SINGLE_STABLE,
   DB_OPTION_RETENTIONS,
-  DB_OPTION_SCHEMALESS
+  DB_OPTION_SCHEMALESS,
+  DB_OPTION_WAL_RETENTION_PERIOD,
+  DB_OPTION_WAL_RETENTION_SIZE,
+  DB_OPTION_WAL_ROLL_PERIOD,
+  DB_OPTION_WAL_SEGMENT_SIZE
 } EDatabaseOptionType;

 typedef enum ETableOptionType {
@@ -90,7 +94,7 @@ SNode* createValueNode(SAstCreateContext* pCxt, int32_t dataType, const SToken*
 SNode* createDurationValueNode(SAstCreateContext* pCxt, const SToken* pLiteral);
 SNode* createDefaultDatabaseCondValue(SAstCreateContext* pCxt);
 SNode* createPlaceholderValueNode(SAstCreateContext* pCxt, const SToken* pLiteral);
-SNode* setProjectionAlias(SAstCreateContext* pCxt, SNode* pNode, const SToken* pAlias);
+SNode* setProjectionAlias(SAstCreateContext* pCxt, SNode* pNode, SToken* pAlias);
 SNode* createLogicConditionNode(SAstCreateContext* pCxt, ELogicConditionType type, SNode* pParam1, SNode* pParam2);
 SNode* createOperatorNode(SAstCreateContext* pCxt, EOperatorType type, SNode* pLeft, SNode* pRight);
 SNode* createBetweenAnd(SAstCreateContext* pCxt, SNode* pExpr, SNode* pLeft, SNode* pRight);
@@ -191,6 +191,20 @@ db_options(A) ::= db_options(B) VGROUPS NK_INTEGER(C).
 db_options(A) ::= db_options(B) SINGLE_STABLE NK_INTEGER(C).                      { A = setDatabaseOption(pCxt, B, DB_OPTION_SINGLE_STABLE, &C); }
 db_options(A) ::= db_options(B) RETENTIONS retention_list(C).                     { A = setDatabaseOption(pCxt, B, DB_OPTION_RETENTIONS, C); }
 db_options(A) ::= db_options(B) SCHEMALESS NK_INTEGER(C).                         { A = setDatabaseOption(pCxt, B, DB_OPTION_SCHEMALESS, &C); }
+db_options(A) ::= db_options(B) WAL_RETENTION_PERIOD NK_INTEGER(C).               { A = setDatabaseOption(pCxt, B, DB_OPTION_WAL_RETENTION_PERIOD, &C); }
+db_options(A) ::= db_options(B) WAL_RETENTION_PERIOD NK_MINUS(D) NK_INTEGER(C).   {
+                                                                                    SToken t = D;
+                                                                                    t.n = (C.z + C.n) - D.z;
+                                                                                    A = setDatabaseOption(pCxt, B, DB_OPTION_WAL_RETENTION_PERIOD, &t);
+                                                                                  }
+db_options(A) ::= db_options(B) WAL_RETENTION_SIZE NK_INTEGER(C).                 { A = setDatabaseOption(pCxt, B, DB_OPTION_WAL_RETENTION_SIZE, &C); }
+db_options(A) ::= db_options(B) WAL_RETENTION_SIZE NK_MINUS(D) NK_INTEGER(C).     {
+                                                                                    SToken t = D;
+                                                                                    t.n = (C.z + C.n) - D.z;
+                                                                                    A = setDatabaseOption(pCxt, B, DB_OPTION_WAL_RETENTION_SIZE, &t);
+                                                                                  }
+db_options(A) ::= db_options(B) WAL_ROLL_PERIOD NK_INTEGER(C).                    { A = setDatabaseOption(pCxt, B, DB_OPTION_WAL_ROLL_PERIOD, &C); }
+db_options(A) ::= db_options(B) WAL_SEGMENT_SIZE NK_INTEGER(C).                   { A = setDatabaseOption(pCxt, B, DB_OPTION_WAL_SEGMENT_SIZE, &C); }

 alter_db_options(A) ::= alter_db_option(B).                                       { A = createAlterDatabaseOptions(pCxt); A = setAlterDatabaseOption(pCxt, A, &B); }
 alter_db_options(A) ::= alter_db_options(B) alter_db_option(C).                   { A = setAlterDatabaseOption(pCxt, B, &C); }
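The grammar accepts -1 for the retention options by matching NK_MINUS followed by NK_INTEGER and then widening the minus token to cover both lexemes: t.n = (C.z + C.n) - D.z measures from the start of the minus sign (D.z) to the end of the integer (C.z + C.n), so setDatabaseOption later sees the signed literal as one token. A small self-contained illustration of that pointer arithmetic, with a toy token struct standing in for the parser's SToken:

```c
#include <stdio.h>
#include <stdint.h>

// Toy version of the parser's SToken: z points into the SQL text, n is
// the lexeme length in bytes.
typedef struct Token { const char* z; uint32_t n; } Token;

int main(void) {
  const char* sql = "WAL_RETENTION_PERIOD -1";
  Token minus = { sql + 21, 1 };  // "-"
  Token num   = { sql + 22, 1 };  // "1"

  Token t = minus;
  t.n = (uint32_t)((num.z + num.n) - minus.z);      // widen to cover "-1"
  printf("signed literal: %.*s\n", (int)t.n, t.z);  // prints -1
  return 0;
}
```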
@@ -527,6 +527,7 @@ SNode* createTempTableNode(SAstCreateContext* pCxt, SNode* pSubquery, const STok
   }
   if (QUERY_NODE_SELECT_STMT == nodeType(pSubquery)) {
     strcpy(((SSelectStmt*)pSubquery)->stmtName, tempTable->table.tableAlias);
+    ((SSelectStmt*)pSubquery)->isSubquery = true;
   } else if (QUERY_NODE_SET_OPERATOR == nodeType(pSubquery)) {
     strcpy(((SSetOperator*)pSubquery)->stmtName, tempTable->table.tableAlias);
   }
@@ -637,8 +638,9 @@ SNode* createInterpTimeRange(SAstCreateContext* pCxt, SNode* pStart, SNode* pEnd
   return createBetweenAnd(pCxt, createPrimaryKeyCol(pCxt), pStart, pEnd);
 }

-SNode* setProjectionAlias(SAstCreateContext* pCxt, SNode* pNode, const SToken* pAlias) {
+SNode* setProjectionAlias(SAstCreateContext* pCxt, SNode* pNode, SToken* pAlias) {
   CHECK_PARSER_STATUS(pCxt);
+  trimEscape(pAlias);
   int32_t len = TMIN(sizeof(((SExprNode*)pNode)->aliasName) - 1, pAlias->n);
   strncpy(((SExprNode*)pNode)->aliasName, pAlias->z, len);
   ((SExprNode*)pNode)->aliasName[len] = '\0';
@@ -892,6 +894,18 @@ SNode* setDatabaseOption(SAstCreateContext* pCxt, SNode* pOptions, EDatabaseOpti
     case DB_OPTION_RETENTIONS:
       ((SDatabaseOptions*)pOptions)->pRetentions = pVal;
       break;
+    case DB_OPTION_WAL_RETENTION_PERIOD:
+      ((SDatabaseOptions*)pOptions)->walRetentionPeriod = taosStr2Int32(((SToken*)pVal)->z, NULL, 10);
+      break;
+    case DB_OPTION_WAL_RETENTION_SIZE:
+      ((SDatabaseOptions*)pOptions)->walRetentionSize = taosStr2Int32(((SToken*)pVal)->z, NULL, 10);
+      break;
+    case DB_OPTION_WAL_ROLL_PERIOD:
+      ((SDatabaseOptions*)pOptions)->walRollPeriod = taosStr2Int32(((SToken*)pVal)->z, NULL, 10);
+      break;
+    case DB_OPTION_WAL_SEGMENT_SIZE:
+      ((SDatabaseOptions*)pOptions)->walSegmentSize = taosStr2Int32(((SToken*)pVal)->z, NULL, 10);
+      break;
     default:
       break;
   }
@@ -739,12 +739,13 @@ static int32_t parseBoundColumns(SInsertParseContext* pCxt, SParsedDataColInfo*
   return TSDB_CODE_SUCCESS;
 }

-static void buildCreateTbReq(SVCreateTbReq* pTbReq, const char* tname, STag* pTag, int64_t suid, const char* sname, SArray* tagName, uint8_t tagNum) {
+static void buildCreateTbReq(SVCreateTbReq* pTbReq, const char* tname, STag* pTag, int64_t suid, const char* sname,
+                             SArray* tagName, uint8_t tagNum) {
   pTbReq->type = TD_CHILD_TABLE;
   pTbReq->name = strdup(tname);
   pTbReq->ctb.suid = suid;
   pTbReq->ctb.tagNum = tagNum;
-  if(sname) pTbReq->ctb.name = strdup(sname);
+  if (sname) pTbReq->ctb.name = strdup(sname);
   pTbReq->ctb.pTag = (uint8_t*)pTag;
   pTbReq->ctb.tagName = taosArrayDup(tagName);
   pTbReq->commentLen = -1;
@@ -969,7 +970,7 @@ static int32_t parseTagsClause(SInsertParseContext* pCxt, SSchema* pSchema, uint
     }

     SSchema* pTagSchema = &pSchema[pCxt->tags.boundColumns[i]];
     char tmpTokenBuf[TSDB_MAX_BYTES_PER_ROW] = {0};  // todo this can be optimize with parse column
     code = checkAndTrimValue(&sToken, tmpTokenBuf, &pCxt->msg);
     if (code != TSDB_CODE_SUCCESS) {
       goto end;
@@ -1012,7 +1013,8 @@ static int32_t parseTagsClause(SInsertParseContext* pCxt, SSchema* pSchema, uint
     goto end;
   }

-  buildCreateTbReq(&pCxt->createTblReq, tName, pTag, pCxt->pTableMeta->suid, pCxt->sTableName, tagName, pCxt->pTableMeta->tableInfo.numOfTags);
+  buildCreateTbReq(&pCxt->createTblReq, tName, pTag, pCxt->pTableMeta->suid, pCxt->sTableName, tagName,
+                   pCxt->pTableMeta->tableInfo.numOfTags);

 end:
   for (int i = 0; i < taosArrayGetSize(pTagVals); ++i) {
@@ -1650,7 +1652,6 @@ static int32_t skipUsingClause(SInsertParseSyntaxCxt* pCxt) {
 static int32_t collectTableMetaKey(SInsertParseSyntaxCxt* pCxt, SToken* pTbToken) {
   SName name;
   CHECK_CODE(createSName(&name, pTbToken, pCxt->pComCxt->acctId, pCxt->pComCxt->db, &pCxt->msg));
-  CHECK_CODE(reserveDbCfgInCache(pCxt->pComCxt->acctId, name.dbname, pCxt->pMetaCache));
   CHECK_CODE(reserveUserAuthInCacheExt(pCxt->pComCxt->pUser, &name, AUTH_TYPE_WRITE, pCxt->pMetaCache));
   CHECK_CODE(reserveTableMetaInCacheExt(&name, pCxt->pMetaCache));
   CHECK_CODE(reserveTableVgroupInCacheExt(&name, pCxt->pMetaCache));
@@ -2332,7 +2333,8 @@ int32_t smlBindData(void* handle, SArray* tags, SArray* colsSchema, SArray* cols
     return ret;
   }

-  buildCreateTbReq(&smlHandle->tableExecHandle.createTblReq, tableName, pTag, pTableMeta->suid, NULL, tagName, pTableMeta->tableInfo.numOfTags);
+  buildCreateTbReq(&smlHandle->tableExecHandle.createTblReq, tableName, pTag, pTableMeta->suid, NULL, tagName,
+                   pTableMeta->tableInfo.numOfTags);
   taosArrayDestroy(tagName);

   smlHandle->tableExecHandle.createTblReq.ctb.name = taosMemoryMalloc(sTableNameLen + 1);
@@ -234,6 +234,10 @@ static SKeyword keywordTable[] = {
     {"VGROUPS", TK_VGROUPS},
     {"VNODES", TK_VNODES},
     {"WAL", TK_WAL},
+    {"WAL_RETENTION_PERIOD", TK_WAL_RETENTION_PERIOD},
+    {"WAL_RETENTION_SIZE", TK_WAL_RETENTION_SIZE},
+    {"WAL_ROLL_PERIOD", TK_WAL_ROLL_PERIOD},
+    {"WAL_SEGMENT_SIZE", TK_WAL_SEGMENT_SIZE},
     {"WATERMARK", TK_WATERMARK},
     {"WHERE", TK_WHERE},
     {"WINDOW_CLOSE", TK_WINDOW_CLOSE},
@@ -2984,6 +2984,10 @@ static int32_t buildCreateDbReq(STranslateContext* pCxt, SCreateDatabaseStmt* pS
   pReq->cacheLast = pStmt->pOptions->cacheModel;
   pReq->cacheLastSize = pStmt->pOptions->cacheLastSize;
   pReq->schemaless = pStmt->pOptions->schemaless;
+  pReq->walRetentionPeriod = pStmt->pOptions->walRetentionPeriod;
+  pReq->walRetentionSize = pStmt->pOptions->walRetentionSize;
+  pReq->walRollPeriod = pStmt->pOptions->walRollPeriod;
+  pReq->walSegmentSize = pStmt->pOptions->walSegmentSize;
   pReq->ignoreExist = pStmt->ignoreExists;
   return buildCreateDbRetentions(pStmt->pOptions->pRetentions, pReq);
 }
@@ -3252,6 +3256,21 @@ static int32_t checkDatabaseOptions(STranslateContext* pCxt, const char* pDbName
   if (TSDB_CODE_SUCCESS == code) {
     code = checkDbEnumOption(pCxt, "schemaless", pOptions->schemaless, TSDB_DB_SCHEMALESS_ON, TSDB_DB_SCHEMALESS_OFF);
   }
+  if (TSDB_CODE_SUCCESS == code) {
+    code = checkDbRangeOption(pCxt, "walRetentionPeriod", pOptions->walRetentionPeriod,
+                              TSDB_DB_MIN_WAL_RETENTION_PERIOD, INT32_MAX);
+  }
+  if (TSDB_CODE_SUCCESS == code) {
+    code = checkDbRangeOption(pCxt, "walRetentionSize", pOptions->walRetentionSize, TSDB_DB_MIN_WAL_RETENTION_SIZE,
+                              INT32_MAX);
+  }
+  if (TSDB_CODE_SUCCESS == code) {
+    code = checkDbRangeOption(pCxt, "walRollPeriod", pOptions->walRollPeriod, TSDB_DB_MIN_WAL_ROLL_PERIOD, INT32_MAX);
+  }
+  if (TSDB_CODE_SUCCESS == code) {
+    code =
+        checkDbRangeOption(pCxt, "walSegmentSize", pOptions->walSegmentSize, TSDB_DB_MIN_WAL_SEGMENT_SIZE, INT32_MAX);
+  }
   if (TSDB_CODE_SUCCESS == code) {
     code = checkOptionsDependency(pCxt, pDbName, pOptions);
   }
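checkDatabaseOptions validates each new WAL option with the short-circuit idiom used throughout the translator: every check runs only while code is still TSDB_CODE_SUCCESS, so the first failing range check wins and the rest are skipped without nesting. A generic, self-contained restatement of the pattern (the -1 minimum is an assumption, consistent with the test below exercising WAL_RETENTION_PERIOD -1):

```c
#include <stdio.h>
#include <limits.h>

#define CODE_SUCCESS 0

static int checkRange(const char* name, int val, int min, int max) {
  (void)name;  // a real implementation would report the offending option
  return (val >= min && val <= max) ? CODE_SUCCESS : -1;
}

int main(void) {
  int code = CODE_SUCCESS;
  // Each check only fires while the previous ones succeeded, mirroring
  // the chained `if (TSDB_CODE_SUCCESS == code)` blocks in the diff.
  if (CODE_SUCCESS == code) code = checkRange("walRetentionPeriod", -1, -1, INT_MAX);
  if (CODE_SUCCESS == code) code = checkRange("walRollPeriod", 10, 0, INT_MAX);
  printf("validation %s\n", CODE_SUCCESS == code ? "passed" : "failed");
  return 0;
}
```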
@@ -92,7 +92,7 @@ static char* getSyntaxErrFormat(int32_t errCode) {
     case TSDB_CODE_PAR_INTER_SLIDING_TOO_BIG:
       return "sliding value no larger than the interval value";
     case TSDB_CODE_PAR_INTER_SLIDING_TOO_SMALL:
-      return "sliding value can not less than 1% of interval value";
+      return "sliding value can not less than 1%% of interval value";
     case TSDB_CODE_PAR_ONLY_ONE_JSON_TAG:
       return "Only one tag if there is a json tag";
     case TSDB_CODE_PAR_INCORRECT_NUM_OF_COL:

(One file's diff is omitted here; the original page suppressed it because it is too large.)
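The one-character fix above matters because getSyntaxErrFormat's return value is fed to a printf-style formatter: a bare % in "1% of interval value" would start a conversion specification and consume a garbage argument, while %% emits a literal percent sign. For example:

```c
#include <stdio.h>

int main(void) {
  char buf[64];
  // %% in the format string collapses to a single literal '%' on output.
  snprintf(buf, sizeof(buf), "sliding value can not less than 1%% of interval value");
  puts(buf);  // sliding value can not less than 1% of interval value
  return 0;
}
```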
@@ -77,6 +77,10 @@ TEST_F(ParserInitialCTest, createBnode) {
  *     | WAL value
  *     | VGROUPS value
  *     | SINGLE_STABLE {0 | 1}
+ *     | WAL_RETENTION_PERIOD value
+ *     | WAL_ROLL_PERIOD value
+ *     | WAL_RETENTION_SIZE value
+ *     | WAL_SEGMENT_SIZE value
  * }
  */
 TEST_F(ParserInitialCTest, createDatabase) {
@@ -149,6 +153,10 @@ TEST_F(ParserInitialCTest, createDatabase) {
     ++expect.numOfRetensions;
   };
   auto setDbSchemalessFunc = [&](int8_t schemaless) { expect.schemaless = schemaless; };
+  auto setDbWalRetentionPeriod = [&](int32_t walRetentionPeriod) { expect.walRetentionPeriod = walRetentionPeriod; };
+  auto setDbWalRetentionSize = [&](int32_t walRetentionSize) { expect.walRetentionSize = walRetentionSize; };
+  auto setDbWalRollPeriod = [&](int32_t walRollPeriod) { expect.walRollPeriod = walRollPeriod; };
+  auto setDbWalSegmentSize = [&](int32_t walSegmentSize) { expect.walSegmentSize = walSegmentSize; };

   setCheckDdlFunc([&](const SQuery* pQuery, ParserStage stage) {
     ASSERT_EQ(nodeType(pQuery->pRoot), QUERY_NODE_CREATE_DATABASE_STMT);
@@ -175,6 +183,10 @@ TEST_F(ParserInitialCTest, createDatabase) {
     ASSERT_EQ(req.strict, expect.strict);
     ASSERT_EQ(req.cacheLast, expect.cacheLast);
     ASSERT_EQ(req.cacheLastSize, expect.cacheLastSize);
+    ASSERT_EQ(req.walRetentionPeriod, expect.walRetentionPeriod);
+    ASSERT_EQ(req.walRetentionSize, expect.walRetentionSize);
+    ASSERT_EQ(req.walRollPeriod, expect.walRollPeriod);
+    ASSERT_EQ(req.walSegmentSize, expect.walSegmentSize);
     // ASSERT_EQ(req.schemaless, expect.schemaless);
     ASSERT_EQ(req.ignoreExist, expect.ignoreExist);
     ASSERT_EQ(req.numOfRetensions, expect.numOfRetensions);
@@ -219,6 +231,10 @@ TEST_F(ParserInitialCTest, createDatabase) {
   setDbVgroupsFunc(100);
   setDbSingleStableFunc(1);
   setDbSchemalessFunc(1);
+  setDbWalRetentionPeriod(-1);
+  setDbWalRetentionSize(-1);
+  setDbWalRollPeriod(10);
+  setDbWalSegmentSize(20);
   run("CREATE DATABASE IF NOT EXISTS wxy_db "
       "BUFFER 64 "
       "CACHEMODEL 'last_value' "
@@ -238,7 +254,11 @@ TEST_F(ParserInitialCTest, createDatabase) {
       "WAL 2 "
       "VGROUPS 100 "
       "SINGLE_STABLE 1 "
-      "SCHEMALESS 1");
+      "SCHEMALESS 1 "
+      "WAL_RETENTION_PERIOD -1 "
+      "WAL_RETENTION_SIZE -1 "
+      "WAL_ROLL_PERIOD 10 "
+      "WAL_SEGMENT_SIZE 20");
   clearCreateDbReq();

   setCreateDbReqFunc("wxy_db", 1);
@@ -339,6 +339,7 @@ static int32_t createJoinLogicNode(SLogicPlanContext* pCxt, SSelectStmt* pSelect

   pJoin->joinType = pJoinTable->joinType;
   pJoin->isSingleTableJoin = pJoinTable->table.singleTable;
+  pJoin->inputTsOrder = ORDER_ASC;
   pJoin->node.groupAction = GROUP_ACTION_CLEAR;
   pJoin->node.requireDataOrder = DATA_ORDER_LEVEL_GLOBAL;
   pJoin->node.requireDataOrder = DATA_ORDER_LEVEL_GLOBAL;
@@ -625,14 +626,14 @@ static int32_t createInterpFuncLogicNode(SLogicPlanContext* pCxt, SSelectStmt* p

 static int32_t createWindowLogicNodeFinalize(SLogicPlanContext* pCxt, SSelectStmt* pSelect, SWindowLogicNode* pWindow,
                                              SLogicNode** pLogicNode) {
-  int32_t code = nodesCollectFuncs(pSelect, SQL_CLAUSE_WINDOW, fmIsWindowClauseFunc, &pWindow->pFuncs);
-
   if (pCxt->pPlanCxt->streamQuery) {
     pWindow->triggerType = pCxt->pPlanCxt->triggerType;
     pWindow->watermark = pCxt->pPlanCxt->watermark;
     pWindow->igExpired = pCxt->pPlanCxt->igExpired;
   }
+  pWindow->inputTsOrder = ORDER_ASC;

+  int32_t code = nodesCollectFuncs(pSelect, SQL_CLAUSE_WINDOW, fmIsWindowClauseFunc, &pWindow->pFuncs);
   if (TSDB_CODE_SUCCESS == code) {
     code = rewriteExprsForSelect(pWindow->pFuncs, pSelect, SQL_CLAUSE_WINDOW);
   }
@@ -861,7 +862,8 @@ static int32_t createProjectLogicNode(SLogicPlanContext* pCxt, SSelectStmt* pSel

   TSWAP(pProject->node.pLimit, pSelect->pLimit);
   TSWAP(pProject->node.pSlimit, pSelect->pSlimit);
-  pProject->node.groupAction = GROUP_ACTION_CLEAR;
+  pProject->node.groupAction =
+      (!pSelect->isSubquery && pCxt->pPlanCxt->streamQuery) ? GROUP_ACTION_KEEP : GROUP_ACTION_CLEAR;
   pProject->node.requireDataOrder = DATA_ORDER_LEVEL_NONE;
   pProject->node.resultDataOrder = DATA_ORDER_LEVEL_NONE;
@@ -993,25 +993,28 @@ static bool sortPriKeyOptMayBeOptimized(SLogicNode* pNode) {
 }

 static int32_t sortPriKeyOptGetScanNodesImpl(SLogicNode* pNode, bool* pNotOptimize, SNodeList** pScanNodes) {
-  int32_t code = TSDB_CODE_SUCCESS;
-
   switch (nodeType(pNode)) {
-    case QUERY_NODE_LOGIC_PLAN_SCAN:
-      if (TSDB_SUPER_TABLE != ((SScanLogicNode*)pNode)->tableType) {
-        return nodesListMakeAppend(pScanNodes, (SNode*)pNode);
+    case QUERY_NODE_LOGIC_PLAN_SCAN: {
+      SScanLogicNode* pScan = (SScanLogicNode*)pNode;
+      if (NULL != pScan->pGroupTags) {
+        *pNotOptimize = true;
+        return TSDB_CODE_SUCCESS;
       }
-      break;
-    case QUERY_NODE_LOGIC_PLAN_JOIN:
-      code =
+      return nodesListMakeAppend(pScanNodes, (SNode*)pNode);
+    }
+    case QUERY_NODE_LOGIC_PLAN_JOIN: {
+      int32_t code =
           sortPriKeyOptGetScanNodesImpl((SLogicNode*)nodesListGetNode(pNode->pChildren, 0), pNotOptimize, pScanNodes);
       if (TSDB_CODE_SUCCESS == code) {
         code =
             sortPriKeyOptGetScanNodesImpl((SLogicNode*)nodesListGetNode(pNode->pChildren, 1), pNotOptimize, pScanNodes);
       }
       return code;
+    }
     case QUERY_NODE_LOGIC_PLAN_AGG:
+    case QUERY_NODE_LOGIC_PLAN_PARTITION:
       *pNotOptimize = true;
-      return code;
+      return TSDB_CODE_SUCCESS;
     default:
       break;
   }
@@ -1037,17 +1040,33 @@ static EOrder sortPriKeyOptGetPriKeyOrder(SSortLogicNode* pSort) {
   return ((SOrderByExprNode*)nodesListGetNode(pSort->pSortKeys, 0))->order;
 }

+static void sortPriKeyOptSetParentOrder(SLogicNode* pNode, EOrder order) {
+  if (NULL == pNode) {
+    return;
+  }
+  if (QUERY_NODE_LOGIC_PLAN_WINDOW == nodeType(pNode)) {
+    ((SWindowLogicNode*)pNode)->inputTsOrder = order;
+  } else if (QUERY_NODE_LOGIC_PLAN_JOIN == nodeType(pNode)) {
+    ((SJoinLogicNode*)pNode)->inputTsOrder = order;
+  }
+  sortPriKeyOptSetParentOrder(pNode->pParent, order);
+}
+
 static int32_t sortPriKeyOptApply(SOptimizeContext* pCxt, SLogicSubplan* pLogicSubplan, SSortLogicNode* pSort,
                                   SNodeList* pScanNodes) {
   EOrder order = sortPriKeyOptGetPriKeyOrder(pSort);
-  if (ORDER_DESC == order) {
-    SNode* pScanNode = NULL;
-    FOREACH(pScanNode, pScanNodes) {
-      SScanLogicNode* pScan = (SScanLogicNode*)pScanNode;
-      if (pScan->scanSeq[0] > 0) {
-        TSWAP(pScan->scanSeq[0], pScan->scanSeq[1]);
-      }
+  SNode* pScanNode = NULL;
+  FOREACH(pScanNode, pScanNodes) {
+    SScanLogicNode* pScan = (SScanLogicNode*)pScanNode;
+    if (ORDER_DESC == order && pScan->scanSeq[0] > 0) {
+      TSWAP(pScan->scanSeq[0], pScan->scanSeq[1]);
     }
+    if (TSDB_SUPER_TABLE == pScan->tableType) {
+      pScan->scanType = SCAN_TYPE_TABLE_MERGE;
+      pScan->node.resultDataOrder = DATA_ORDER_LEVEL_GLOBAL;
+      pScan->node.requireDataOrder = DATA_ORDER_LEVEL_GLOBAL;
+    }
+    sortPriKeyOptSetParentOrder(pScan->node.pParent, order);
   }

   SLogicNode* pChild = (SLogicNode*)nodesListGetNode(pSort->node.pChildren, 0);
@@ -1613,10 +1632,10 @@ static void alignProjectionWithTarget(SLogicNode* pNode) {
   }

   SProjectLogicNode* pProjectNode = (SProjectLogicNode*)pNode;
   SNode* pProjection = NULL;
   FOREACH(pProjection, pProjectNode->pProjections) {
     SNode* pTarget = NULL;
     bool keep = false;
     FOREACH(pTarget, pNode->pTargets) {
       if (0 == strcmp(((SColumnNode*)pProjection)->node.aliasName, ((SColumnNode*)pTarget)->colName)) {
         keep = true;
@@ -2214,7 +2233,7 @@ static bool tagScanMayBeOptimized(SLogicNode* pNode) {
       !planOptNodeListHasTbname(pAgg->pGroupKeys)) {
     return false;
   }

   SNode* pGroupKey = NULL;
   FOREACH(pGroupKey, pAgg->pGroupKeys) {
     SNode* pGroup = NULL;
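Two behavioral changes ride along with the restructuring here: scans with group tags now opt out of the sort-on-primary-key optimization entirely, and when the optimization does fire, the chosen timestamp order is pushed up the parent chain so every window and join above the scan agrees on inputTsOrder (that is what sortPriKeyOptSetParentOrder's tail recursion does). A stripped-down model of that upward propagation, with toy node types in place of the planner's:

```c
#include <stdio.h>

typedef enum { NODE_SCAN, NODE_WINDOW, NODE_JOIN, NODE_OTHER } NodeKind;
typedef enum { TS_ASC = 1, TS_DESC = 2 } TsOrder;

typedef struct PlanNode {
  NodeKind         kind;
  TsOrder          inputTsOrder;  // meaningful for WINDOW/JOIN here
  struct PlanNode* pParent;
} PlanNode;

// Mirrors sortPriKeyOptSetParentOrder(): stamp the order on every
// window/join ancestor, recursing until the plan root is passed.
static void setParentOrder(PlanNode* p, TsOrder order) {
  if (p == NULL) return;
  if (p->kind == NODE_WINDOW || p->kind == NODE_JOIN) p->inputTsOrder = order;
  setParentOrder(p->pParent, order);
}

int main(void) {
  PlanNode win  = {NODE_WINDOW, TS_ASC, NULL};
  PlanNode join = {NODE_JOIN, TS_ASC, &win};
  PlanNode scan = {NODE_SCAN, TS_ASC, &join};
  setParentOrder(scan.pParent, TS_DESC);
  printf("join=%d window=%d\n", join.inputTsOrder, win.inputTsOrder);  // 2 2
  return 0;
}
```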
@ -415,7 +415,6 @@ static int32_t createScanPhysiNodeFinalize(SPhysiPlanContext* pCxt, SSubplan* pS
|
||||||
SScanPhysiNode* pScanPhysiNode, SPhysiNode** pPhyNode) {
|
SScanPhysiNode* pScanPhysiNode, SPhysiNode** pPhyNode) {
|
||||||
int32_t code = createScanCols(pCxt, pScanPhysiNode, pScanLogicNode->pScanCols);
|
int32_t code = createScanCols(pCxt, pScanPhysiNode, pScanLogicNode->pScanCols);
|
||||||
if (TSDB_CODE_SUCCESS == code) {
|
if (TSDB_CODE_SUCCESS == code) {
|
||||||
// Data block describe also needs to be set without scanning column, such as SELECT COUNT(*) FROM t
|
|
||||||
code = addDataBlockSlots(pCxt, pScanPhysiNode->pScanCols, pScanPhysiNode->node.pOutputDataBlockDesc);
|
code = addDataBlockSlots(pCxt, pScanPhysiNode->pScanCols, pScanPhysiNode->node.pOutputDataBlockDesc);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@@ -622,8 +621,8 @@ static int32_t createScanPhysiNode(SPhysiPlanContext* pCxt, SSubplan* pSubplan,
 
 static int32_t createJoinPhysiNode(SPhysiPlanContext* pCxt, SNodeList* pChildren, SJoinLogicNode* pJoinLogicNode,
                                    SPhysiNode** pPhyNode) {
-  SJoinPhysiNode* pJoin =
-      (SJoinPhysiNode*)makePhysiNode(pCxt, (SLogicNode*)pJoinLogicNode, QUERY_NODE_PHYSICAL_PLAN_MERGE_JOIN);
+  SSortMergeJoinPhysiNode* pJoin =
+      (SSortMergeJoinPhysiNode*)makePhysiNode(pCxt, (SLogicNode*)pJoinLogicNode, QUERY_NODE_PHYSICAL_PLAN_MERGE_JOIN);
  if (NULL == pJoin) {
    return TSDB_CODE_OUT_OF_MEMORY;
  }
@@ -975,6 +974,9 @@ static int32_t createInterpFuncPhysiNode(SPhysiPlanContext* pCxt, SNodeList* pCh
 }
 
 static bool projectCanMergeDataBlock(SProjectLogicNode* pProject) {
+  if (GROUP_ACTION_KEEP == pProject->node.groupAction) {
+    return false;
+  }
  if (DATA_ORDER_LEVEL_NONE == pProject->node.resultDataOrder) {
    return true;
  }
@@ -469,7 +469,7 @@ static int32_t stbSplCreateExchangeNode(SSplitContext* pCxt, SLogicNode* pParent
  return code;
 }
 
-static int32_t stbSplCreateMergeKeysByPrimaryKey(SNode* pPrimaryKey, SNodeList** pMergeKeys) {
+static int32_t stbSplCreateMergeKeysByPrimaryKey(SNode* pPrimaryKey, EOrder order, SNodeList** pMergeKeys) {
  SOrderByExprNode* pMergeKey = (SOrderByExprNode*)nodesMakeNode(QUERY_NODE_ORDER_BY_EXPR);
  if (NULL == pMergeKey) {
    return TSDB_CODE_OUT_OF_MEMORY;
@@ -479,7 +479,7 @@ static int32_t stbSplCreateMergeKeysByPrimaryKey(SNode* pPrimaryKey, SNodeList**
    nodesDestroyNode((SNode*)pMergeKey);
    return TSDB_CODE_OUT_OF_MEMORY;
  }
-  pMergeKey->order = ORDER_ASC;
+  pMergeKey->order = order;
  pMergeKey->nullOrder = NULL_ORDER_FIRST;
  return nodesListMakeStrictAppend(pMergeKeys, (SNode*)pMergeKey);
 }
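`stbSplCreateMergeKeysByPrimaryKey` previously hard-coded `ORDER_ASC` into the merge key; the new `EOrder` parameter lets callers propagate the query's timestamp order. A hedged illustration of why that matters when merging per-vnode sorted streams follows; the enum values and helper are illustrative, not TDengine's own definitions:

```c
#include <stdint.h>

typedef enum { ORDER_ASC = 1, ORDER_DESC = 2 } EOrder;  // illustrative values

// Comparator for a k-way merge of sorted streams: with a hard-coded ASC key,
// a DESC query would interleave rows incorrectly; carrying the order through
// the merge key keeps the merged output consistent with the scan direction.
static int compareTs(int64_t a, int64_t b, EOrder order) {
  if (a == b) return 0;
  int asc = (a < b) ? -1 : 1;
  return (order == ORDER_ASC) ? asc : -asc;
}
```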
@@ -491,7 +491,8 @@ static int32_t stbSplSplitIntervalForBatch(SSplitContext* pCxt, SStableSplitInfo
  ((SWindowLogicNode*)pPartWindow)->windowAlgo = INTERVAL_ALGO_HASH;
  ((SWindowLogicNode*)pInfo->pSplitNode)->windowAlgo = INTERVAL_ALGO_MERGE;
  SNodeList* pMergeKeys = NULL;
-  code = stbSplCreateMergeKeysByPrimaryKey(((SWindowLogicNode*)pInfo->pSplitNode)->pTspk, &pMergeKeys);
+  code = stbSplCreateMergeKeysByPrimaryKey(((SWindowLogicNode*)pInfo->pSplitNode)->pTspk,
+                                           ((SWindowLogicNode*)pInfo->pSplitNode)->inputTsOrder, &pMergeKeys);
  if (TSDB_CODE_SUCCESS == code) {
    code = stbSplCreateMergeNode(pCxt, NULL, pInfo->pSplitNode, pMergeKeys, pPartWindow, true);
  }
@@ -579,7 +580,8 @@ static int32_t stbSplSplitSessionOrStateForBatch(SSplitContext* pCxt, SStableSpl
  SLogicNode* pChild = (SLogicNode*)nodesListGetNode(pWindow->pChildren, 0);
 
  SNodeList* pMergeKeys = NULL;
-  int32_t code = stbSplCreateMergeKeysByPrimaryKey(((SWindowLogicNode*)pWindow)->pTspk, &pMergeKeys);
+  int32_t code = stbSplCreateMergeKeysByPrimaryKey(((SWindowLogicNode*)pWindow)->pTspk,
+                                                   ((SWindowLogicNode*)pWindow)->inputTsOrder, &pMergeKeys);
 
  if (TSDB_CODE_SUCCESS == code) {
    code = stbSplCreateMergeNode(pCxt, pInfo->pSubplan, pChild, pMergeKeys, (SLogicNode*)pChild, true);
@@ -913,27 +915,70 @@ static int32_t stbSplSplitScanNodeWithPartTags(SSplitContext* pCxt, SStableSplit
 }
 
 static SNode* stbSplFindPrimaryKeyFromScan(SScanLogicNode* pScan) {
+  bool find = false;
  SNode* pCol = NULL;
  FOREACH(pCol, pScan->pScanCols) {
    if (PRIMARYKEY_TIMESTAMP_COL_ID == ((SColumnNode*)pCol)->colId) {
+      find = true;
+      break;
+    }
+  }
+  if (!find) {
+    return NULL;
+  }
+  SNode* pTarget = NULL;
+  FOREACH(pTarget, pScan->node.pTargets) {
+    if (nodesEqualNode(pTarget, pCol)) {
      return pCol;
    }
  }
-  return NULL;
+  nodesListStrictAppend(pScan->node.pTargets, nodesCloneNode(pCol));
+  return pCol;
+}
+
+static int32_t stbSplCreateMergeScanNode(SScanLogicNode* pScan, SLogicNode** pOutputMergeScan,
+                                         SNodeList** pOutputMergeKeys) {
+  SNodeList* pChildren = pScan->node.pChildren;
+  pScan->node.pChildren = NULL;
+
+  int32_t code = TSDB_CODE_SUCCESS;
+  SScanLogicNode* pMergeScan = (SScanLogicNode*)nodesCloneNode((SNode*)pScan);
+  if (NULL == pMergeScan) {
+    code = TSDB_CODE_OUT_OF_MEMORY;
+  }
+
+  SNodeList* pMergeKeys = NULL;
+  if (TSDB_CODE_SUCCESS == code) {
+    pMergeScan->scanType = SCAN_TYPE_TABLE_MERGE;
+    pMergeScan->node.pChildren = pChildren;
+    splSetParent((SLogicNode*)pMergeScan);
+    code = stbSplCreateMergeKeysByPrimaryKey(stbSplFindPrimaryKeyFromScan(pMergeScan),
+                                             pMergeScan->scanSeq[0] > 0 ? ORDER_ASC : ORDER_DESC, &pMergeKeys);
+  }
+
+  if (TSDB_CODE_SUCCESS == code) {
+    *pOutputMergeScan = (SLogicNode*)pMergeScan;
+    *pOutputMergeKeys = pMergeKeys;
+  } else {
+    nodesDestroyNode((SNode*)pMergeScan);
+    nodesDestroyList(pMergeKeys);
+  }
+
+  return code;
 }
 
 static int32_t stbSplSplitMergeScanNode(SSplitContext* pCxt, SLogicSubplan* pSubplan, SScanLogicNode* pScan,
                                         bool groupSort) {
-  SNodeList* pMergeKeys = NULL;
-  int32_t code = stbSplCreateMergeKeysByPrimaryKey(stbSplFindPrimaryKeyFromScan(pScan), &pMergeKeys);
+  SLogicNode* pMergeScan = NULL;
+  SNodeList* pMergeKeys = NULL;
+  int32_t code = stbSplCreateMergeScanNode(pScan, &pMergeScan, &pMergeKeys);
  if (TSDB_CODE_SUCCESS == code) {
-    code = stbSplCreateMergeNode(pCxt, pSubplan, (SLogicNode*)pScan, pMergeKeys, (SLogicNode*)pScan, groupSort);
+    code = stbSplCreateMergeNode(pCxt, pSubplan, (SLogicNode*)pScan, pMergeKeys, pMergeScan, groupSort);
  }
  if (TSDB_CODE_SUCCESS == code) {
    code = nodesListMakeStrictAppend(&pSubplan->pChildren,
-                                     (SNode*)splCreateScanSubplan(pCxt, (SLogicNode*)pScan, SPLIT_FLAG_STABLE_SPLIT));
+                                     (SNode*)splCreateScanSubplan(pCxt, pMergeScan, SPLIT_FLAG_STABLE_SPLIT));
  }
-  pScan->scanType = SCAN_TYPE_TABLE_MERGE;
  ++(pCxt->groupId);
  return code;
 }
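The new `stbSplCreateMergeScanNode` follows the `code`-chaining idiom used throughout this file: every step runs only while `code` is still `TSDB_CODE_SUCCESS`, outputs are transferred to the caller on success, and a single `else` branch releases everything on failure. A self-contained sketch of the same shape, with stand-in constants and allocations rather than planner API:

```c
#include <stdint.h>
#include <stdlib.h>

enum { TSDB_CODE_SUCCESS = 0, TSDB_CODE_OUT_OF_MEMORY = -1 };  // stand-in values

// Build two objects; hand both to the caller only if every step succeeded,
// otherwise free whatever was allocated. Mirrors the success/cleanup
// structure of stbSplCreateMergeScanNode.
static int32_t buildPair(void** ppA, void** ppB) {
  int32_t code = TSDB_CODE_SUCCESS;
  void*   a = malloc(16);
  if (a == NULL) code = TSDB_CODE_OUT_OF_MEMORY;

  void* b = NULL;
  if (TSDB_CODE_SUCCESS == code) {
    b = malloc(32);
    if (b == NULL) code = TSDB_CODE_OUT_OF_MEMORY;
  }

  if (TSDB_CODE_SUCCESS == code) {
    *ppA = a;  // ownership moves to the caller
    *ppB = b;
  } else {
    free(a);   // single failure path cleans up partial state
    free(b);
  }
  return code;
}
```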
@@ -978,14 +1023,14 @@ static int32_t stbSplSplitJoinNode(SSplitContext* pCxt, SStableSplitInfo* pInfo)
 }
 
 static int32_t stbSplCreateMergeKeysForPartitionNode(SLogicNode* pPart, SNodeList** pMergeKeys) {
-  SNode* pPrimaryKey =
-      nodesCloneNode(stbSplFindPrimaryKeyFromScan((SScanLogicNode*)nodesListGetNode(pPart->pChildren, 0)));
+  SScanLogicNode* pScan = (SScanLogicNode*)nodesListGetNode(pPart->pChildren, 0);
+  SNode* pPrimaryKey = nodesCloneNode(stbSplFindPrimaryKeyFromScan(pScan));
  if (NULL == pPrimaryKey) {
    return TSDB_CODE_OUT_OF_MEMORY;
  }
  int32_t code = nodesListAppend(pPart->pTargets, pPrimaryKey);
  if (TSDB_CODE_SUCCESS == code) {
-    code = stbSplCreateMergeKeysByPrimaryKey(pPrimaryKey, pMergeKeys);
+    code = stbSplCreateMergeKeysByPrimaryKey(pPrimaryKey, pScan->scanSeq[0] > 0 ? ORDER_ASC : ORDER_DESC, pMergeKeys);
  }
  return code;
 }
@@ -124,7 +124,8 @@ int32_t replaceLogicNode(SLogicSubplan* pSubplan, SLogicNode* pOld, SLogicNode*
 }
 
 static int32_t adjustScanDataRequirement(SScanLogicNode* pScan, EDataOrderLevel requirement) {
-  if (SCAN_TYPE_TABLE != pScan->scanType && SCAN_TYPE_TABLE_MERGE != pScan->scanType) {
+  if ((SCAN_TYPE_TABLE != pScan->scanType && SCAN_TYPE_TABLE_MERGE != pScan->scanType) ||
+      DATA_ORDER_LEVEL_GLOBAL == pScan->node.requireDataOrder) {
    return TSDB_CODE_SUCCESS;
  }
  // The lowest sort level of scan output data is DATA_ORDER_LEVEL_IN_BLOCK
@@ -24,9 +24,10 @@ TEST_F(PlanBasicTest, selectClause) {
  useDb("root", "test");
 
  run("SELECT * FROM t1");
  run("SELECT 1 FROM t1");
+  run("SELECT MAX(c1) c2, c2 FROM t1");
 
  run("SELECT * FROM st1");
  run("SELECT 1 FROM st1");
+  run("SELECT MAX(c1) c2, c2 FROM st1");
 }
 
 TEST_F(PlanBasicTest, whereClause) {
@@ -53,6 +53,8 @@ TEST_F(PlanOptimizeTest, sortPrimaryKey) {
  run("SELECT c1 FROM t1 ORDER BY ts");
 
+  run("SELECT c1 FROM st1 ORDER BY ts");
+
  run("SELECT c1 FROM t1 ORDER BY ts DESC");
 
  run("SELECT COUNT(*) FROM t1 INTERVAL(10S) ORDER BY _WSTART DESC");
@@ -283,6 +283,8 @@ int32_t qwGetDeleteResFromSink(QW_FPARAMS_DEF, SQWTaskCtx *ctx, SDeleteRes *pRes
  pRes->skey = pDelRes->skey;
  pRes->ekey = pDelRes->ekey;
  pRes->affectedRows = pDelRes->affectedRows;
 
+  taosMemoryFree(output.pData);
+
  return TSDB_CODE_SUCCESS;
 }
@@ -26,7 +26,7 @@ static int32_t streamTaskExecImpl(SStreamTask* pTask, void* data, SArray* pRes)
  } else if (pItem->type == STREAM_INPUT__DATA_SUBMIT) {
    ASSERT(pTask->isDataScan);
    SStreamDataSubmit* pSubmit = (SStreamDataSubmit*)data;
-    qDebug("task %d %p set submit input %p %p %d", pTask->taskId, pTask, pSubmit, pSubmit->data, *pSubmit->dataRef);
+    qDebug("task %d %p set submit input %p %p %d 1", pTask->taskId, pTask, pSubmit, pSubmit->data, *pSubmit->dataRef);
    qSetStreamInput(exec, pSubmit->data, STREAM_INPUT__DATA_SUBMIT, false);
  } else if (pItem->type == STREAM_INPUT__DATA_BLOCK || pItem->type == STREAM_INPUT__DATA_RETRIEVE) {
    SStreamDataBlock* pBlock = (SStreamDataBlock*)data;
@@ -72,6 +72,8 @@ static int32_t streamTaskExecImpl(SStreamTask* pTask, void* data, SArray* pRes)
      continue;
    }
 
+    qDebug("task %d(child %d) executed and get block");
+
    SSDataBlock block = {0};
    assignOneDataBlock(&block, output);
    block.info.childId = pTask->selfChildId;
@@ -188,7 +190,7 @@ static SArray* streamExecForQall(SStreamTask* pTask, SArray* pRes) {
    if (pTask->execType == TASK_EXEC__NONE) {
      ASSERT(((SStreamQueueItem*)data)->type == STREAM_INPUT__DATA_BLOCK);
      streamTaskOutput(pTask, data);
-      return pRes;
+      continue;
    }
 
    qDebug("stream task %d exec begin, msg batch: %d", pTask->taskId, cnt);
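The `return pRes` to `continue` change above fixes a batch-draining bug: when a task with `TASK_EXEC__NONE` forwarded a data block, the early `return` abandoned every remaining item that had already been dequeued in the same batch. A toy loop showing the difference, with invented queue contents for illustration:

```c
#include <stdio.h>

// Toy drain loop mirroring the control-flow fix: with `return`, the first
// pass-through item would abort the whole batch; `continue` forwards it and
// keeps consuming the rest.
int main(void) {
  int items[] = {1, 0, 2, 0, 3};  // 0 = pass-through, nonzero = needs exec
  for (int i = 0; i < 5; ++i) {
    if (items[i] == 0) {
      printf("forward item %d\n", i);
      continue;  // was: return — which silently dropped items[i+1..]
    }
    printf("exec item %d\n", i);
  }
  return 0;
}
```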
@@ -26,6 +26,7 @@ extern "C" {
 #include "syncInt.h"
 #include "syncMessage.h"
 #include "taosdef.h"
+#include "tref.h"
 #include "tskiplist.h"
 
 typedef struct SSyncRaftEntry {
@@ -89,6 +90,7 @@ typedef struct SRaftEntryCache {
  SSkipList* pSkipList;
  int32_t maxCount;
  int32_t currentCount;
+  int32_t refMgr;
  TdThreadMutex mutex;
  SSyncNode* pSyncNode;
 } SRaftEntryCache;
@@ -242,9 +242,9 @@ static int32_t syncIOStopInternal(SSyncIO *io) {
 }
 
 static void *syncIOConsumerFunc(void *param) {
-  SSyncIO *io = param;
+  SSyncIO * io = param;
  STaosQall *qall = taosAllocateQall();
-  SRpcMsg *pRpcMsg, rpcMsg;
+  SRpcMsg * pRpcMsg, rpcMsg;
  SQueueInfo qinfo = {0};
 
  while (1) {
@@ -125,7 +125,7 @@ cJSON *syncIndexMgr2Json(SSyncIndexMgr *pSyncIndexMgr) {
 
 char *syncIndexMgr2Str(SSyncIndexMgr *pSyncIndexMgr) {
  cJSON *pJson = syncIndexMgr2Json(pSyncIndexMgr);
-  char *serialized = cJSON_Print(pJson);
+  char * serialized = cJSON_Print(pJson);
  cJSON_Delete(pJson);
  return serialized;
 }
@@ -101,7 +101,7 @@ cJSON *syncCfg2Json(SSyncCfg *pSyncCfg) {
 
 char *syncCfg2Str(SSyncCfg *pSyncCfg) {
  cJSON *pJson = syncCfg2Json(pSyncCfg);
-  char *serialized = cJSON_Print(pJson);
+  char * serialized = cJSON_Print(pJson);
  cJSON_Delete(pJson);
  return serialized;
 }
@@ -109,7 +109,7 @@ char *syncCfg2Str(SSyncCfg *pSyncCfg) {
 char *syncCfg2SimpleStr(SSyncCfg *pSyncCfg) {
  if (pSyncCfg != NULL) {
    int32_t len = 512;
-    char *s = taosMemoryMalloc(len);
+    char * s = taosMemoryMalloc(len);
    memset(s, 0, len);
 
    snprintf(s, len, "{r-num:%d, my:%d, ", pSyncCfg->replicaNum, pSyncCfg->myIndex);
@@ -206,7 +206,7 @@ cJSON *raftCfg2Json(SRaftCfg *pRaftCfg) {
 
 char *raftCfg2Str(SRaftCfg *pRaftCfg) {
  cJSON *pJson = raftCfg2Json(pRaftCfg);
-  char *serialized = cJSON_Print(pJson);
+  char * serialized = cJSON_Print(pJson);
  cJSON_Delete(pJson);
  return serialized;
 }
@@ -285,7 +285,7 @@ int32_t raftCfgFromJson(const cJSON *pRoot, SRaftCfg *pRaftCfg) {
    (pRaftCfg->configIndexArr)[i] = atoll(pIndex->valuestring);
  }
 
-  cJSON *pJsonSyncCfg = cJSON_GetObjectItem(pJson, "SSyncCfg");
+  cJSON * pJsonSyncCfg = cJSON_GetObjectItem(pJson, "SSyncCfg");
  int32_t code = syncCfgFromJson(pJsonSyncCfg, &(pRaftCfg->cfg));
  ASSERT(code == 0);
@@ -23,6 +23,7 @@ SSyncRaftEntry* syncEntryBuild(uint32_t dataLen) {
  memset(pEntry, 0, bytes);
  pEntry->bytes = bytes;
  pEntry->dataLen = dataLen;
+  pEntry->rid = -1;
  return pEntry;
 }
@@ -451,6 +452,11 @@ static char* keyFn(const void* pData) {
 
 static int cmpFn(const void* p1, const void* p2) { return memcmp(p1, p2, sizeof(SyncIndex)); }
 
+static void freeRaftEntry(void* param) {
+  SSyncRaftEntry* pEntry = (SSyncRaftEntry*)param;
+  syncEntryDestory(pEntry);
+}
+
 SRaftEntryCache* raftEntryCacheCreate(SSyncNode* pSyncNode, int32_t maxCount) {
  SRaftEntryCache* pCache = taosMemoryMalloc(sizeof(SRaftEntryCache));
  if (pCache == NULL) {
@@ -466,6 +472,7 @@ SRaftEntryCache* raftEntryCacheCreate(SSyncNode* pSyncNode, int32_t maxCount) {
  }
 
  taosThreadMutexInit(&(pCache->mutex), NULL);
+  pCache->refMgr = taosOpenRef(10, freeRaftEntry);
  pCache->maxCount = maxCount;
  pCache->currentCount = 0;
  pCache->pSyncNode = pSyncNode;
@@ -477,6 +484,10 @@ void raftEntryCacheDestroy(SRaftEntryCache* pCache) {
  if (pCache != NULL) {
    taosThreadMutexLock(&(pCache->mutex));
    tSkipListDestroy(pCache->pSkipList);
+    if (pCache->refMgr != -1) {
+      taosCloseRef(pCache->refMgr);
+      pCache->refMgr = -1;
+    }
    taosThreadMutexUnlock(&(pCache->mutex));
    taosThreadMutexDestroy(&(pCache->mutex));
    taosMemoryFree(pCache);
@@ -498,6 +509,9 @@ int32_t raftEntryCachePutEntry(struct SRaftEntryCache* pCache, SSyncRaftEntry* p
  ASSERT(pSkipListNode != NULL);
  ++(pCache->currentCount);
 
+  pEntry->rid = taosAddRef(pCache->refMgr, pEntry);
+  ASSERT(pEntry->rid >= 0);
+
  do {
    char eventLog[128];
    snprintf(eventLog, sizeof(eventLog), "raft cache add, type:%s,%d, type2:%s,%d, index:%" PRId64 ", bytes:%d",
@@ -520,6 +534,7 @@ int32_t raftEntryCacheGetEntry(struct SRaftEntryCache* pCache, SyncIndex index,
  if (code == 1) {
    *ppEntry = taosMemoryMalloc(pEntry->bytes);
    memcpy(*ppEntry, pEntry, pEntry->bytes);
+    (*ppEntry)->rid = -1;
  } else {
    *ppEntry = NULL;
  }
@@ -541,6 +556,7 @@ int32_t raftEntryCacheGetEntryP(struct SRaftEntryCache* pCache, SyncIndex index,
    SSkipListNode** ppNode = (SSkipListNode**)taosArrayGet(entryPArray, 0);
    ASSERT(*ppNode != NULL);
    *ppEntry = (SSyncRaftEntry*)SL_GET_NODE_DATA(*ppNode);
+    taosAcquireRef(pCache->refMgr, (*ppEntry)->rid);
    code = 1;
 
  } else if (arraySize == 0) {
@@ -600,7 +616,9 @@ int32_t raftEntryCacheClear(struct SRaftEntryCache* pCache, int32_t count) {
    taosArrayPush(delNodeArray, &pNode);
    ++returnCnt;
    SSyncRaftEntry* pEntry = (SSyncRaftEntry*)SL_GET_NODE_DATA(pNode);
-    syncEntryDestory(pEntry);
+
+    // syncEntryDestory(pEntry);
+    taosRemoveRef(pCache->refMgr, pEntry->rid);
  }
  tSkipListDestroyIter(pIter);
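The sync hunks above move `SSyncRaftEntry` lifetime management onto TDengine's reference manager (`tref`): the cache opens a ref set with `freeRaftEntry` as the destructor, `raftEntryCachePutEntry` takes the owning reference, readers acquire and release, and eviction calls `taosRemoveRef` instead of destroying entries directly, so an entry still held by a reader survives until the last reference drops. A condensed usage sketch assembled only from the calls visible in this diff; error handling is omitted and this is not a drop-in snippet:

```c
#include "tref.h"  // taosOpenRef/taosAddRef/taosAcquireRef/taosReleaseRef/taosRemoveRef

typedef struct { int payload; } SEntry;

static void freeEntry(void* p) { /* destructor: runs when the refcount hits 0 */ }

void refLifecycle(SEntry* pEntry) {
  int32_t refMgr = taosOpenRef(10, freeEntry);  // ref set with a destructor
  int64_t rid = taosAddRef(refMgr, pEntry);     // owning reference (the cache)
  SEntry* p = taosAcquireRef(refMgr, rid);      // a reader pins the entry
  // ... reader uses p ...
  taosRemoveRef(refMgr, rid);                   // eviction drops the owning ref;
                                                // the entry lives on for the reader
  taosReleaseRef(refMgr, rid);                  // last ref gone -> freeEntry runs
  taosCloseRef(refMgr);
}
```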
@@ -216,7 +216,7 @@ cJSON *raftStore2Json(SRaftStore *pRaftStore) {
 
 char *raftStore2Str(SRaftStore *pRaftStore) {
  cJSON *pJson = raftStore2Json(pRaftStore);
-  char *serialized = cJSON_Print(pJson);
+  char * serialized = cJSON_Print(pJson);
  cJSON_Delete(pJson);
  return serialized;
 }
@@ -129,7 +129,7 @@ void syncRespCleanByTTL(SSyncRespMgr *pObj, int64_t ttl) {
 
  while (pStub) {
    size_t len;
-    void *key = taosHashGetKey(pStub, &len);
+    void * key = taosHashGetKey(pStub, &len);
    uint64_t *pSeqNum = (uint64_t *)key;
    sum++;
@@ -374,14 +374,14 @@ cJSON *snapshotSender2Json(SSyncSnapshotSender *pSender) {
 
 char *snapshotSender2Str(SSyncSnapshotSender *pSender) {
  cJSON *pJson = snapshotSender2Json(pSender);
-  char *serialized = cJSON_Print(pJson);
+  char * serialized = cJSON_Print(pJson);
  cJSON_Delete(pJson);
  return serialized;
 }
 
 char *snapshotSender2SimpleStr(SSyncSnapshotSender *pSender, char *event) {
  int32_t len = 256;
-  char *s = taosMemoryMalloc(len);
+  char * s = taosMemoryMalloc(len);
 
  SRaftId destId = pSender->pSyncNode->replicasId[pSender->replicaIndex];
  char host[64];
@@ -653,7 +653,7 @@ cJSON *snapshotReceiver2Json(SSyncSnapshotReceiver *pReceiver) {
    cJSON_AddStringToObject(pFromId, "addr", u64buf);
    {
      uint64_t u64 = pReceiver->fromId.addr;
-      cJSON *pTmp = pFromId;
+      cJSON * pTmp = pFromId;
      char host[128] = {0};
      uint16_t port;
      syncUtilU642Addr(u64, host, sizeof(host), &port);
@@ -686,14 +686,14 @@ cJSON *snapshotReceiver2Json(SSyncSnapshotReceiver *pReceiver) {
 
 char *snapshotReceiver2Str(SSyncSnapshotReceiver *pReceiver) {
  cJSON *pJson = snapshotReceiver2Json(pReceiver);
-  char *serialized = cJSON_Print(pJson);
+  char * serialized = cJSON_Print(pJson);
  cJSON_Delete(pJson);
  return serialized;
 }
 
 char *snapshotReceiver2SimpleStr(SSyncSnapshotReceiver *pReceiver, char *event) {
  int32_t len = 256;
-  char *s = taosMemoryMalloc(len);
+  char * s = taosMemoryMalloc(len);
 
  SRaftId fromId = pReceiver->fromId;
  char host[128];
@@ -125,7 +125,7 @@ int32_t SnapshotStartWrite(struct SSyncFSM* pFsm, void* pParam, void** ppWriter)
  return 0;
 }
 
-int32_t SnapshotStopWrite(struct SSyncFSM* pFsm, void* pWriter, bool isApply, SSnapshot *pSnapshot) {
+int32_t SnapshotStopWrite(struct SSyncFSM* pFsm, void* pWriter, bool isApply, SSnapshot* pSnapshot) {
  char logBuf[256] = {0};
  snprintf(logBuf, sizeof(logBuf), "==callback== ==SnapshotStopWrite== pFsm:%p, pWriter:%p, isApply:%d", pFsm, pWriter,
           isApply);
@@ -5,8 +5,8 @@
 #include "syncRaftLog.h"
 #include "syncRaftStore.h"
 #include "syncUtil.h"
-#include "tskiplist.h"
 #include "tref.h"
+#include "tskiplist.h"
 
 void logTest() {
  sTrace("--- sync log test: trace");
@@ -51,7 +51,7 @@ SRaftEntryCache* createCache(int maxCount) {
 }
 
 void test1() {
  int32_t code = 0;
  SRaftEntryCache* pCache = createCache(5);
  for (int i = 0; i < 10; ++i) {
    SSyncRaftEntry* pEntry = createEntry(i);
@@ -68,7 +68,7 @@ void test1() {
 }
 
 void test2() {
  int32_t code = 0;
  SRaftEntryCache* pCache = createCache(5);
  for (int i = 0; i < 10; ++i) {
    SSyncRaftEntry* pEntry = createEntry(i);
@@ -77,7 +77,7 @@ void test2() {
  }
  raftEntryCacheLog2((char*)"==test1 write 5 entries==", pCache);
 
  SyncIndex index = 2;
  SSyncRaftEntry* pEntry = NULL;
 
  code = raftEntryCacheGetEntryP(pCache, index, &pEntry);
@@ -107,7 +107,7 @@ void test2() {
 }
 
 void test3() {
  int32_t code = 0;
  SRaftEntryCache* pCache = createCache(20);
  for (int i = 0; i <= 4; ++i) {
    SSyncRaftEntry* pEntry = createEntry(i);
@@ -122,8 +122,6 @@ void test3() {
  raftEntryCacheLog2((char*)"==test3 write 10 entries==", pCache);
 }
 
-
-
 static void freeObj(void* param) {
  SSyncRaftEntry* pEntry = (SSyncRaftEntry*)param;
  syncEntryLog2((char*)"freeObj: ", pEntry);
@@ -138,19 +136,41 @@ void test4() {
 
  int64_t rid = taosAddRef(testRefId, pEntry);
  sTrace("rid: %ld", rid);
 
  do {
    SSyncRaftEntry* pAcquireEntry = (SSyncRaftEntry*)taosAcquireRef(testRefId, rid);
    syncEntryLog2((char*)"acquire: ", pAcquireEntry);
 
+    taosAcquireRef(testRefId, rid);
    taosAcquireRef(testRefId, rid);
    taosAcquireRef(testRefId, rid);
 
-    taosReleaseRef(testRefId, rid);
-    //taosReleaseRef(testRefId, rid);
+    // taosReleaseRef(testRefId, rid);
+    // taosReleaseRef(testRefId, rid);
  } while (0);
 
  taosRemoveRef(testRefId, rid);
+
+  for (int i = 0; i < 10; ++i) {
+    sTrace("taosReleaseRef, %d", i);
+    taosReleaseRef(testRefId, rid);
+  }
+}
+
+void test5() {
+  int32_t testRefId = taosOpenRef(5, freeObj);
+  for (int i = 0; i < 100; i++) {
+    SSyncRaftEntry* pEntry = createEntry(i);
+    ASSERT(pEntry != NULL);
+
+    int64_t rid = taosAddRef(testRefId, pEntry);
+    sTrace("rid: %ld", rid);
+  }
+
+  for (int64_t rid = 2; rid < 101; rid++) {
+    SSyncRaftEntry* pAcquireEntry = (SSyncRaftEntry*)taosAcquireRef(testRefId, rid);
+    syncEntryLog2((char*)"taosAcquireRef: ", pAcquireEntry);
+  }
 }
 
 int main(int argc, char** argv) {
@@ -158,11 +178,13 @@ int main(int argc, char** argv) {
  tsAsyncLog = 0;
  sDebugFlag = DEBUG_TRACE + DEBUG_SCREEN + DEBUG_FILE + DEBUG_DEBUG;
 
-  test1();
-  test2();
-  test3();
-  //test4();
+  /*
+  test1();
+  test2();
+  test3();
+  */
+  test4();
+  // test5();
 
  return 0;
 }
@@ -30,7 +30,7 @@ int32_t SnapshotStopRead(struct SSyncFSM* pFsm, void* pReader) { return 0; }
 int32_t SnapshotDoRead(struct SSyncFSM* pFsm, void* pReader, void** ppBuf, int32_t* len) { return 0; }
 
 int32_t SnapshotStartWrite(struct SSyncFSM* pFsm, void* pParam, void** ppWriter) { return 0; }
-int32_t SnapshotStopWrite(struct SSyncFSM* pFsm, void* pWriter, bool isApply, SSnapshot *pSnapshot) { return 0; }
+int32_t SnapshotStopWrite(struct SSyncFSM* pFsm, void* pWriter, bool isApply, SSnapshot* pSnapshot) { return 0; }
 int32_t SnapshotDoWrite(struct SSyncFSM* pFsm, void* pWriter, void* pBuf, int32_t len) { return 0; }
 
 SSyncSnapshotReceiver* createReceiver() {
@@ -126,7 +126,7 @@ int32_t SnapshotStartWrite(struct SSyncFSM* pFsm, void* pParam, void** ppWriter)
  return 0;
 }
 
-int32_t SnapshotStopWrite(struct SSyncFSM* pFsm, void* pWriter, bool isApply, SSnapshot *pSnapshot) {
+int32_t SnapshotStopWrite(struct SSyncFSM* pFsm, void* pWriter, bool isApply, SSnapshot* pSnapshot) {
  if (isApply) {
    gSnapshotLastApplyIndex = gFinishLastApplyIndex;
    gSnapshotLastApplyTerm = gFinishLastApplyTerm;
@@ -335,6 +335,7 @@ int32_t cfgSetItem(SConfig *pCfg, const char *name, const char *value, ECfgSrcTy
 }
 
 SConfigItem *cfgGetItem(SConfig *pCfg, const char *name) {
+  if (pCfg == NULL) return NULL;
  int32_t size = taosArrayGetSize(pCfg->array);
  for (int32_t i = 0; i < size; ++i) {
    SConfigItem *pItem = taosArrayGet(pCfg->array, i);
@@ -512,7 +512,7 @@ TAOS_DEFINE_ERROR(TSDB_CODE_PAR_INTER_OFFSET_UNIT, "Cannot use 'year' as
 TAOS_DEFINE_ERROR(TSDB_CODE_PAR_INTER_OFFSET_TOO_BIG, "Interval offset should be shorter than interval")
 TAOS_DEFINE_ERROR(TSDB_CODE_PAR_INTER_SLIDING_UNIT, "Does not support sliding when interval is natural month/year")
 TAOS_DEFINE_ERROR(TSDB_CODE_PAR_INTER_SLIDING_TOO_BIG, "sliding value no larger than the interval value")
-TAOS_DEFINE_ERROR(TSDB_CODE_PAR_INTER_SLIDING_TOO_SMALL, "sliding value can not less than 1% of interval value")
+TAOS_DEFINE_ERROR(TSDB_CODE_PAR_INTER_SLIDING_TOO_SMALL, "sliding value can not less than 1%% of interval value")
 TAOS_DEFINE_ERROR(TSDB_CODE_PAR_ONLY_ONE_JSON_TAG, "Only one tag if there is a json tag")
 TAOS_DEFINE_ERROR(TSDB_CODE_PAR_INCORRECT_NUM_OF_COL, "Query block has incorrect number of result columns")
 TAOS_DEFINE_ERROR(TSDB_CODE_PAR_INCORRECT_TIMESTAMP_VAL, "Incorrect TIMESTAMP value")
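The one-character change above (`1%` to `1%%`) matters because these message strings are later fed through printf-style formatting: a lone `%` starts a conversion specifier and can truncate or garble the rendered message, while `%%` prints a literal percent sign:

```c
#include <stdio.h>

int main(void) {
  // "%%" in a format string renders a single literal '%' in the output.
  printf("sliding value can not less than 1%% of interval value\n");
  return 0;
}
```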
@@ -17,6 +17,7 @@
 #include "tlog.h"
 #include "os.h"
 #include "tutil.h"
+#include "tconfig.h"
 
 #define LOG_MAX_LINE_SIZE (1024)
 #define LOG_MAX_LINE_BUFFER_SIZE (LOG_MAX_LINE_SIZE + 3)
@@ -62,6 +63,7 @@ typedef struct {
  TdThreadMutex logMutex;
 } SLogObj;
 
+extern SConfig *tsCfg;
 static int8_t tsLogInited = 0;
 static SLogObj tsLogObj = {.fileNum = 1};
 static int64_t tsAsyncLogLostLines = 0;
@@ -741,25 +743,3 @@ cmp_end:
 
  return ret;
 }
-
-void taosSetAllDebugFlag(int32_t flag) {
-  if (flag <= 0) return;
-
-  uDebugFlag = flag;
-  rpcDebugFlag = flag;
-  jniDebugFlag = flag;
-  qDebugFlag = flag;
-  cDebugFlag = flag;
-  dDebugFlag = flag;
-  vDebugFlag = flag;
-  mDebugFlag = flag;
-  wDebugFlag = flag;
-  sDebugFlag = flag;
-  tsdbDebugFlag = flag;
-  tqDebugFlag = flag;
-  fsDebugFlag = flag;
-  udfDebugFlag = flag;
-  smaDebugFlag = flag;
-  idxDebugFlag = flag;
-  uInfo("all debug flag are set to %d", flag);
-}
@@ -371,7 +371,9 @@ class ThreadCoordinator:
                 if isinstance(err, CrashGenError): # our own transition failure
                     Logging.info("State transition error")
                     # TODO: saw an error here once, let's print out stack info for err?
-                    traceback.print_stack()
+                    traceback.print_stack() # Stack frame to here.
+                    Logging.info("Caused by:")
+                    traceback.print_exception(*sys.exc_info()) # Ref: https://www.geeksforgeeks.org/how-to-print-exception-stack-trace-in-python/
                     transitionFailed = True
                     self._te = None # Not running any more
                     self._execStats.registerFailure("State transition error: {}".format(err))
@@ -741,7 +743,8 @@ class AnyState:
                 sCnt += 1
                 if (sCnt >= 2):
                     raise CrashGenError(
-                        "Unexpected more than 1 success with task: {}, in task set: {}".format(
+                        "Unexpected more than 1 success at state: {}, with task: {}, in task set: {}".format(
+                            self.__class__.__name__,
                             cls.__name__, # verified just now that isinstance(task, cls)
                             [c.__class__.__name__ for c in tasks]
                         ))
@@ -756,8 +759,11 @@ class AnyState:
             if task.isSuccess():
                 sCnt += 1
         if (exists and sCnt <= 0):
-            raise CrashGenError("Unexpected zero success for task type: {}, from tasks: {}"
-                                .format(cls, tasks))
+            raise CrashGenError("Unexpected zero success at state: {}, with task: {}, in task set: {}".format(
+                self.__class__.__name__,
+                cls.__name__, # verified just now that isinstance(task, cls)
+                [c.__class__.__name__ for c in tasks]
+            ))
 
     def assertNoTask(self, tasks, cls):
         for task in tasks:
@@ -809,8 +815,6 @@ class StateEmpty(AnyState):
         ]
 
     def verifyTasksToState(self, tasks, newState):
-        if Config.getConfig().ignore_errors: # if we are asked to ignore certain errors, let's not verify CreateDB success.
-            return
         if (self.hasSuccess(tasks, TaskCreateDb)
                 ): # at EMPTY, if there's succes in creating DB
             if (not self.hasTask(tasks, TaskDropDb)): # and no drop_db tasks
@@ -995,16 +999,17 @@ class StateMechine:
             dbc.execute("show dnodes")
 
         # Generic Checks, first based on the start state
-        if self._curState.canCreateDb():
-            self._curState.assertIfExistThenSuccess(tasks, TaskCreateDb)
-            # self.assertAtMostOneSuccess(tasks, CreateDbTask) # not really, in
-            # case of multiple creation and drops
+        if not Config.getConfig().ignore_errors: # verify state, only if we are asked not to ignore certain errors.
+            if self._curState.canCreateDb():
+                self._curState.assertIfExistThenSuccess(tasks, TaskCreateDb)
+                # self.assertAtMostOneSuccess(tasks, CreateDbTask) # not really, in
+                # case of multiple creation and drops
 
         if self._curState.canDropDb():
             if gSvcMgr == None: # only if we are running as client-only
                 self._curState.assertIfExistThenSuccess(tasks, TaskDropDb)
                 # self.assertAtMostOneSuccess(tasks, DropDbTask) # not really in
                 # case of drop-create-drop
 
         # if self._state.canCreateFixedTable():
         # self.assertIfExistThenSuccess(tasks, CreateFixedTableTask) # Not true, DB may be dropped
@@ -1026,7 +1031,8 @@ class StateMechine:
         newState = self._findCurrentState(dbc)
         Logging.debug("[STT] New DB state determined: {}".format(newState))
         # can old state move to new state through the tasks?
-        self._curState.verifyTasksToState(tasks, newState)
+        if not Config.getConfig().ignore_errors: # verify state, only if we are asked not to ignore certain errors.
+            self._curState.verifyTasksToState(tasks, newState)
         self._curState = newState
 
     def pickTaskType(self):
@@ -2231,16 +2237,14 @@ class TaskAddData(StateTransitionTask):
 class ThreadStacks: # stack info for all threads
     def __init__(self):
         self._allStacks = {}
-        allFrames = sys._current_frames() # All current stack frames
+        allFrames = sys._current_frames() # All current stack frames, keyed with "ident"
         for th in threading.enumerate(): # For each thread
-            if th.ident is None:
-                continue
-            stack = traceback.extract_stack(allFrames[th.ident]) # Get stack for a thread
-            shortTid = th.ident % 10000
+            stack = traceback.extract_stack(allFrames[th.ident]) #type: ignore # Get stack for a thread
+            shortTid = th.native_id % 10000 #type: ignore
             self._allStacks[shortTid] = stack # Was using th.native_id
 
     def print(self, filteredEndName = None, filterInternal = False):
-        for tIdent, stack in self._allStacks.items(): # for each thread, stack frames top to bottom
+        for shortTid, stack in self._allStacks.items(): # for each thread, stack frames top to bottom
             lastFrame = stack[-1]
             if filteredEndName: # we need to filter out stacks that match this name
                 if lastFrame.name == filteredEndName : # end did not match
@@ -2252,7 +2256,9 @@ class ThreadStacks: # stack info for all threads
                     '__init__']: # the thread that extracted the stack
                 continue # ignore
             # Now print
-            print("\n<----- Thread Info for LWP/ID: {} (most recent call last) <-----".format(tIdent))
+            print("\n<----- Thread Info for LWP/ID: {} (most recent call last) <-----".format(shortTid))
+            lastSqlForThread = DbConn.fetchSqlForThread(shortTid)
+            print("Last SQL statement attempted from thread {} is: {}".format(shortTid, lastSqlForThread))
             stackFrame = 0
             for frame in stack: # was using: reversed(stack)
                 # print(frame)
@@ -27,6 +27,26 @@ class DbConn:
     TYPE_REST = "rest-api"
     TYPE_INVALID = "invalid"
 
+    # class variables
+    lastSqlFromThreads : dict[int, str] = {} # stored by thread id, obtained from threading.current_thread().ident%10000
+
+    @classmethod
+    def saveSqlForCurrentThread(cls, sql: str):
+        '''
+        Let us save the last SQL statement on a per-thread basis, so that when later we
+        run into a dead-lock situation, we can pick out the deadlocked thread, and use
+        that information to find what what SQL statement is stuck.
+        '''
+        th = threading.current_thread()
+        shortTid = th.native_id % 10000 #type: ignore
+        cls.lastSqlFromThreads[shortTid] = sql # Save this for later
+
+    @classmethod
+    def fetchSqlForThread(cls, shortTid : int) -> str :
+        if shortTid not in cls.lastSqlFromThreads:
+            raise CrashGenError("No last-attempted-SQL found for thread id: {}".format(shortTid))
+        return cls.lastSqlFromThreads[shortTid]
+
     @classmethod
     def create(cls, connType, dbTarget):
         if connType == cls.TYPE_NATIVE:
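The new `DbConn` class methods keep a process-wide map from short thread id to the last attempted SQL, so the deadlock dump in `ThreadStacks.print` can report what each stuck thread was executing. For contrast, a minimal C analog of per-thread "last statement" bookkeeping using C11 thread-local storage; unlike the Python dict, a plain thread-local cannot be read from a watchdog thread, which is exactly why the shared map keyed by thread id is used above:

```c
#include <stdio.h>

// C analog of the per-thread "last SQL" bookkeeping: each thread records
// the statement it is about to run. C11 _Thread_local gives every thread
// its own copy without locking, but only that thread can read it back.
static _Thread_local char lastSql[512];

static void saveSqlForCurrentThread(const char* sql) {
  snprintf(lastSql, sizeof(lastSql), "%s", sql);
}

int main(void) {
  saveSqlForCurrentThread("SELECT AVG(tbcol) FROM tb1");
  printf("last SQL on this thread: %s\n", lastSql);
  return 0;
}
```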
@@ -163,6 +183,7 @@ class DbConnRest(DbConn):
 
     def _doSql(self, sql):
         self._lastSql = sql # remember this, last SQL attempted
+        self.saveSqlForCurrentThread(sql) # Save in global structure too. #TODO: combine with above
         try:
             r = requests.post(self._url,
                 data = sql,
@@ -392,6 +413,7 @@ class DbConnNative(DbConn):
                 "Cannot exec SQL unless db connection is open", CrashGenError.DB_CONNECTION_NOT_OPEN)
         Logging.debug("[SQL] Executing SQL: {}".format(sql))
         self._lastSql = sql
+        self.saveSqlForCurrentThread(sql) # Save in global structure too. #TODO: combine with above
         nRows = self._tdSql.execute(sql)
         cls = self.__class__
         cls.totalRequests += 1
@@ -407,6 +429,7 @@ class DbConnNative(DbConn):
                 "Cannot query database until connection is open, restarting?", CrashGenError.DB_CONNECTION_NOT_OPEN)
         Logging.debug("[SQL] Executing SQL: {}".format(sql))
         self._lastSql = sql
+        self.saveSqlForCurrentThread(sql) # Save in global structure too. #TODO: combine with above
         nRows = self._tdSql.query(sql)
         cls = self.__class__
         cls.totalRequests += 1
@@ -518,7 +518,7 @@ class TDDnode:
 
         if self.running != 0:
             if platform.system().lower() == 'windows':
-                psCmd = "for /f %%a in ('wmic process where \"name='taosd.exe' and CommandLine like '%%dnode%d%%'\" get processId ^| xargs echo ^| awk ^'{print $2}^'') do @(ps | grep %%a | awk '{print $1}' | xargs kill -INT )" % (self.index)
+                psCmd = "for /f %%a in ('wmic process where \"name='taosd.exe' and CommandLine like '%%dnode%d%%'\" get processId ^| xargs echo ^| awk ^'{print $2}^' ^&^& echo aa') do @(ps | grep %%a | awk '{print $1}' )" % (self.index)
             else:
                 psCmd = "ps -ef|grep -w %s| grep dnode%d|grep -v grep | awk '{print $2}'" % (toBeKilled,self.index)
             processID = subprocess.check_output(
@@ -224,7 +224,7 @@
 
 # ---- stream
 ./test.sh -f tsim/stream/basic0.sim
-#./test.sh -f tsim/stream/basic1.sim
+./test.sh -f tsim/stream/basic1.sim
 ./test.sh -f tsim/stream/basic2.sim
 ./test.sh -f tsim/stream/drop_stream.sim
 ./test.sh -f tsim/stream/distributeInterval0.sim
@@ -3,6 +3,7 @@ system sh/deploy.sh -n dnode1 -i 1
 system sh/deploy.sh -n dnode2 -i 2
 system sh/deploy.sh -n dnode3 -i 3
 system sh/deploy.sh -n dnode4 -i 4
+system sh/cfg.sh -n dnode1 -c supportVnodes -v 0
 system sh/exec.sh -n dnode1 -s start
 system sh/exec.sh -n dnode2 -s start
 system sh/exec.sh -n dnode3 -s start
@@ -44,6 +45,8 @@ if $data(4)[4] != ready then
  goto step1
 endi
 
+return
+
 print =============== step2: create database
 sql create database db vgroups 1 replica 3
 sql show databases
@@ -1,6 +1,5 @@
 system sh/stop_dnodes.sh
 system sh/deploy.sh -n dnode1 -i 1
-system sh/cfg.sh -n dnode1 -c debugflag -v 131
 system sh/exec.sh -n dnode1 -s start -v
 sql connect
 
@@ -22,78 +21,31 @@ if $data(1)[4] != ready then
  goto step1
 endi
 
-print =============== step2: create db
-sql create database db
+$tbPrefix = tb
+$tbNum = 5
+$rowNum = 10
+
+print =============== step2: prepare data
+sql create database db vgroups 2
 sql use db
-sql create table db.stb (ts timestamp, c1 int, c2 binary(4)) tags(t1 int, t2 float, t3 binary(16)) comment "abd"
-sql create table db.c1 using db.stb tags(101, 102, "103")
-
-print =============== step3: alter stb
-sql_error alter table db.stb add column ts int
-sql alter table db.stb add column c3 int
-sql alter table db.stb add column c4 bigint
-sql alter table db.stb add column c5 binary(12)
-sql alter table db.stb drop column c1
-sql alter table db.stb drop column c4
-sql alter table db.stb MODIFY column c2 binary(32)
-sql alter table db.stb add tag t4 bigint
-sql alter table db.stb add tag c1 int
-sql alter table db.stb add tag t5 binary(12)
-sql alter table db.stb drop tag c1
-sql alter table db.stb drop tag t5
-sql alter table db.stb MODIFY tag t3 binary(32)
-sql alter table db.stb rename tag t1 tx
-sql alter table db.stb comment 'abcde' ;
-sql drop table db.stb
-
-print =============== step4: alter tb
-sql create table tb (ts timestamp, a int)
-sql insert into tb values(now-28d, -28)
-sql select count(a) from tb
-sql alter table tb add column b smallint
-sql insert into tb values(now-25d, -25, 0)
-sql select count(b) from tb
-sql alter table tb add column c tinyint
-sql insert into tb values(now-22d, -22, 3, 0)
-sql select count(c) from tb
-sql alter table tb add column d int
-sql insert into tb values(now-19d, -19, 6, 0, 0)
-sql select count(d) from tb
-sql alter table tb add column e bigint
-sql alter table tb add column f float
-sql alter table tb add column g double
-sql alter table tb add column h binary(10)
-sql select count(a), count(b), count(c), count(d), count(e), count(f), count(g), count(h) from tb
-sql select * from tb order by ts desc
-
-print =============== step5: alter stb and insert data
-sql create table stb (ts timestamp, c1 int, c2 binary(4)) tags(t1 int, t2 float, t3 binary(16)) comment "abd"
-sql show db.stables
-sql describe stb
-sql_error alter table stb add column ts int
-
-sql create table db.ctb using db.stb tags(101, 102, "103")
-sql insert into db.ctb values(now, 1, "2")
-sql show db.tables
-sql select * from db.stb
-sql select * from tb
-
-sql alter table stb add column c3 int
-sql describe stb
-sql select * from db.stb
-sql select * from tb
-sql insert into db.ctb values(now+1s, 1, 2, 3)
-sql select * from db.stb
-
-sql alter table db.stb add column c4 bigint
-sql select * from db.stb
-sql insert into db.ctb values(now+2s, 1, 2, 3, 4)
-
-sql alter table db.stb drop column c1
-sql reset query cache
-sql select * from tb
-sql insert into db.ctb values(now+3s, 2, 3, 4)
-sql select * from db.stb
+sql create table if not exists stb (ts timestamp, tbcol int, tbcol2 float, tbcol3 double) tags (tgcol int unsigned)
+
+$i = 0
+while $i < $tbNum
+  $tb = $tbPrefix . $i
+  sql create table $tb using stb tags( $i )
+  $x = 0
+  while $x < $rowNum
+    $cc = $x * 60000
+    $ms = 1601481600000 + $cc
+    sql insert into $tb values ($ms , $x , $x , $x )
+    $x = $x + 1
+  endw
+  $i = $i + 1
+endw
+
+print =============== step3: tb
+sql select count(1) from tb1
 
 _OVER:
 system sh/exec.sh -n dnode1 -s stop -x SIGINT
@@ -1,5 +1,6 @@
 system sh/stop_dnodes.sh
 system sh/deploy.sh -n dnode1 -i 1
+system sh/cfg.sh -n dnode1 -c debugflag -v 131
 system sh/exec.sh -n dnode1 -s start -v
 sql connect
 
@@ -44,20 +45,11 @@ while $i < $tbNum
  $i = $i + 1
 endw
 
-print =============== step3: avg
-sql select avg(tbcol) from tb1
-sql select avg(tbcol) from tb1 where ts <= 1601481840000
-sql select avg(tbcol) as b from tb1
-sql select avg(tbcol) as b from tb1 interval(1d)
-sql select avg(tbcol) as b from tb1 where ts <= 1601481840000s interval(1m)
-sql select avg(tbcol) as c from stb
-sql select avg(tbcol) as c from stb where ts <= 1601481840000
-sql select avg(tbcol) as c from stb where tgcol < 5 and ts <= 1601481840000
-sql select avg(tbcol) as c from stb interval(1m)
-sql select avg(tbcol) as c from stb interval(1d)
-sql select avg(tbcol) as b from stb where ts <= 1601481840000s interval(1m)
-sql select avg(tbcol) as c from stb group by tgcol
-sql select avg(tbcol) as b from stb where ts <= 1601481840000s partition by tgcol interval(1m)
+print =============== step3: tb
+sql select count(tbcol), avg(tbcol), max(tbcol), min(tbcol), count(tbcol) from tb1 where ts <= 1601481840000 and ts >= 1601481800000 partition by tgcol interval(1m) fill(value, 0)
+
+print =============== step4: stb
+sql select count(tbcol), avg(tbcol), max(tbcol), min(tbcol), count(tbcol) from stb where ts <= 1601481840000 and ts >= 1601481800000 partition by tgcol interval(1m) fill(value, 0)
 
 _OVER:
 system sh/exec.sh -n dnode1 -s stop -x SIGINT
@@ -42,23 +42,50 @@ while $i < $tbNum
 sql insert into $tb values ($ms , $x , $x , $x )
 $x = $x + 1
 endw

+$cc = $x * 60000
+$ms = 1601481600000 + $cc
+sql insert into $tb values ($ms , NULL , NULL , NULL )
 $i = $i + 1
 endw

-print =============== step3: avg
+print =============== step3: tb
 sql select avg(tbcol) from tb1
 sql select avg(tbcol) from tb1 where ts <= 1601481840000
 sql select avg(tbcol) as b from tb1
 sql select avg(tbcol) as b from tb1 interval(1d)
-sql select avg(tbcol) as b from tb1 where ts <= 1601481840000s interval(1m)
+sql select avg(tbcol) as b from tb1 where ts <= 1601481840000 interval(1m)
+sql select bottom(tbcol, 2) from tb1 where ts <= 1601481840000
+sql select top(tbcol, 2) from tb1 where ts <= 1601481840000
+sql select percentile(tbcol, 2) from tb1 where ts <= 1601481840000
+sql select leastsquares(tbcol, 1, 1) as b from tb1 where ts <= 1601481840000
+sql show table distributed tb1
+sql select count(tbcol) as b from tb1 where ts <= 1601481840000 interval(1m)
+sql select diff(tbcol) from tb1 where ts <= 1601481840000
+sql select diff(tbcol) from tb1 where tbcol > 5 and tbcol < 20
+sql select first(tbcol), last(tbcol) as b from tb1 where ts <= 1601481840000 interval(1m)
+sql select count(tbcol), avg(tbcol), max(tbcol), min(tbcol), sum(tbcol), stddev(tbcol) from tb1 where ts <= 1601481840000 partition by tgcol interval(1m)
+#sql select count(tbcol), avg(tbcol), max(tbcol), min(tbcol), count(tbcol) from tb1 where ts <= 1601481840000 and ts >= 1601481800000 partition by tgcol interval(1m) fill(value, 0)
+sql select last_row(*) from tb1 where tbcol > 5 and tbcol < 20

+print =============== step4: stb
 sql select avg(tbcol) as c from stb
 sql select avg(tbcol) as c from stb where ts <= 1601481840000
 sql select avg(tbcol) as c from stb where tgcol < 5 and ts <= 1601481840000
 sql select avg(tbcol) as c from stb interval(1m)
 sql select avg(tbcol) as c from stb interval(1d)
-sql select avg(tbcol) as b from stb where ts <= 1601481840000s interval(1m)
+sql select avg(tbcol) as b from stb where ts <= 1601481840000 interval(1m)
 sql select avg(tbcol) as c from stb group by tgcol
-sql select avg(tbcol) as b from stb where ts <= 1601481840000s partition by tgcol interval(1m)
+sql select avg(tbcol) as b from stb where ts <= 1601481840000 partition by tgcol interval(1m)
+sql show table distributed stb
+sql select count(tbcol) as b from stb where ts <= 1601481840000 partition by tgcol interval(1m)
+sql select diff(tbcol) from stb where ts <= 1601481840000
+sql select first(tbcol), last(tbcol) as c from stb group by tgcol
+sql select first(tbcol), last(tbcol) as b from stb where ts <= 1601481840000 and tbcol2 is null partition by tgcol interval(1m)
+sql select first(tbcol), last(tbcol) as b from stb where ts <= 1601481840000 partition by tgcol interval(1m)
+sql select count(tbcol), avg(tbcol), max(tbcol), min(tbcol), sum(tbcol), stddev(tbcol) from stb where ts <= 1601481840000 partition by tgcol interval(1m)
+#sql select count(tbcol), avg(tbcol), max(tbcol), min(tbcol), count(tbcol) from stb where ts <= 1601481840000 and ts >= 1601481800000 partition by tgcol interval(1m) fill(value, 0)
+sql select last_row(tbcol), stddev(tbcol) from stb where tbcol > 5 and tbcol < 20 group by tgcol

 _OVER:
 system sh/exec.sh -n dnode1 -s stop -x SIGINT
@@ -40,7 +40,7 @@ class ClusterComCheck:
     def checkDnodes(self,dnodeNumbers):
         count=0
         # print(tdSql)
-        while count < 5:
+        while count < 30:
             tdSql.query("show dnodes")
             # tdLog.debug(tdSql.queryResult)
             status=0
@@ -50,13 +50,13 @@ class ClusterComCheck:
             tdLog.info(status)

             if status == dnodeNumbers:
-                tdLog.success("it find cluster with %d dnodes and check that all cluster dnodes are ready within 5s! " %dnodeNumbers)
+                tdLog.success("it find cluster with %d dnodes and check that all cluster dnodes are ready within 30s! " %dnodeNumbers)
                 return True
             count+=1
             time.sleep(1)
         else:
             tdLog.debug(tdSql.queryResult)
-            tdLog.exit("it find cluster with %d dnodes but check that there dnodes are not ready within 5s ! "%dnodeNumbers)
+            tdLog.exit("it find cluster with %d dnodes but check that there dnodes are not ready within 30s ! "%dnodeNumbers)

     def checkDbRows(self,dbNumbers):
         dbNumbers=int(dbNumbers)
@@ -65,14 +65,14 @@ class TDTestCase:
                 is_leader=True

         if count==1 and is_leader:
-            tdLog.info("===== depoly cluster success with 1 mnode as leader =====")
+            tdLog.notice("===== depoly cluster success with 1 mnode as leader =====")
         else:
             tdLog.exit("===== depoly cluster fail with 1 mnode as leader =====")

         for k ,v in self.dnode_list.items():
             if k == mnode_name:
                 if v[3]==0:
-                    tdLog.info("===== depoly cluster mnode only success at {} , support_vnodes is {} ".format(mnode_name,v[3]))
+                    tdLog.notice("===== depoly cluster mnode only success at {} , support_vnodes is {} ".format(mnode_name,v[3]))
                 else:
                     tdLog.exit("===== depoly cluster mnode only fail at {} , support_vnodes is {} ".format(mnode_name,v[3]))
             else:

@@ -115,7 +115,7 @@ class TDTestCase:

         for k , v in vgroups_infos.items():
             if len(v) ==1 and v[0]=="leader":
-                tdLog.info(" === create database replica only 1 role leader check success of vgroup_id {} ======".format(k))
+                tdLog.notice(" === create database replica only 1 role leader check success of vgroup_id {} ======".format(k))
             else:
                 tdLog.exit(" === create database replica only 1 role leader check fail of vgroup_id {} ======".format(k))
@@ -71,14 +71,14 @@ class TDTestCase:
                 is_leader=True

         if count==1 and is_leader:
-            tdLog.info("===== depoly cluster success with 1 mnode as leader =====")
+            tdLog.notice("===== depoly cluster success with 1 mnode as leader =====")
         else:
             tdLog.exit("===== depoly cluster fail with 1 mnode as leader =====")

         for k ,v in self.dnode_list.items():
             if k == mnode_name:
                 if v[3]==0:
-                    tdLog.info("===== depoly cluster mnode only success at {} , support_vnodes is {} ".format(mnode_name,v[3]))
+                    tdLog.notice("===== depoly cluster mnode only success at {} , support_vnodes is {} ".format(mnode_name,v[3]))
                 else:
                     tdLog.exit("===== depoly cluster mnode only fail at {} , support_vnodes is {} ".format(mnode_name,v[3]))
             else:

@@ -121,7 +121,7 @@ class TDTestCase:

         for k , v in vgroups_infos.items():
             if len(v) ==1 and v[0]=="leader":
-                tdLog.info(" === create database replica only 1 role leader check success of vgroup_id {} ======".format(k))
+                tdLog.notice(" === create database replica only 1 role leader check success of vgroup_id {} ======".format(k))
             else:
                 tdLog.exit(" === create database replica only 1 role leader check fail of vgroup_id {} ======".format(k))

@@ -129,7 +129,7 @@ class TDTestCase:
         drop_db_sql = "drop database if exists {}".format(dbname)
         create_db_sql = "create database {} replica {} vgroups {}".format(dbname,replica_num,vgroup_nums)

-        tdLog.info(" ==== create database {} and insert rows begin =====".format(dbname))
+        tdLog.notice(" ==== create database {} and insert rows begin =====".format(dbname))
         tdSql.execute(drop_db_sql)
         tdSql.execute(create_db_sql)
         tdSql.execute("use {}".format(dbname))

@@ -155,7 +155,7 @@ class TDTestCase:
                 ts = self.ts + 1000*row_num
                 tdSql.execute(f"insert into {sub_tbname} values ({ts}, {row_num} ,{row_num}, 10 ,1 ,{row_num} ,{row_num},true,'bin_{row_num}','nchar_{row_num}',now) ")

-        tdLog.info(" ==== create database {} and insert rows execute end =====".format(dbname))
+        tdLog.notice(" ==== create database {} and insert rows execute end =====".format(dbname))

     def check_insert_status(self, dbname, tb_nums , row_nums):
         tdSql.execute("use {}".format(dbname))
@@ -71,14 +71,14 @@ class TDTestCase:
                 is_leader=True

         if count==1 and is_leader:
-            tdLog.info("===== depoly cluster success with 1 mnode as leader =====")
+            tdLog.notice("===== depoly cluster success with 1 mnode as leader =====")
         else:
             tdLog.exit("===== depoly cluster fail with 1 mnode as leader =====")

         for k ,v in self.dnode_list.items():
             if k == mnode_name:
                 if v[3]==0:
-                    tdLog.info("===== depoly cluster mnode only success at {} , support_vnodes is {} ".format(mnode_name,v[3]))
+                    tdLog.notice("===== depoly cluster mnode only success at {} , support_vnodes is {} ".format(mnode_name,v[3]))
                 else:
                     tdLog.exit("===== depoly cluster mnode only fail at {} , support_vnodes is {} ".format(mnode_name,v[3]))
             else:

@@ -121,7 +121,7 @@ class TDTestCase:

         for k , v in vgroups_infos.items():
             if len(v) ==1 and v[0]=="leader":
-                tdLog.info(" === create database replica only 1 role leader check success of vgroup_id {} ======".format(k))
+                tdLog.notice(" === create database replica only 1 role leader check success of vgroup_id {} ======".format(k))
             else:
                 tdLog.exit(" === create database replica only 1 role leader check fail of vgroup_id {} ======".format(k))

@@ -129,7 +129,7 @@ class TDTestCase:
         drop_db_sql = "drop database if exists {}".format(dbname)
         create_db_sql = "create database {} replica {} vgroups {}".format(dbname,replica_num,vgroup_nums)

-        tdLog.info(" ==== create database {} and insert rows begin =====".format(dbname))
+        tdLog.notice(" ==== create database {} and insert rows begin =====".format(dbname))
         tdSql.execute(drop_db_sql)
         tdSql.execute(create_db_sql)
         tdSql.execute("use {}".format(dbname))

@@ -155,7 +155,7 @@ class TDTestCase:
                 ts = self.ts + 1000*row_num
                 tdSql.execute(f"insert into {sub_tbname} values ({ts}, {row_num} ,{row_num}, 10 ,1 ,{row_num} ,{row_num},true,'bin_{row_num}','nchar_{row_num}',now) ")

-        tdLog.info(" ==== create database {} and insert rows execute end =====".format(dbname))
+        tdLog.notice(" ==== create database {} and insert rows execute end =====".format(dbname))

     def check_insert_status(self, dbname, tb_nums , row_nums):
         tdSql.execute("use {}".format(dbname))
@@ -80,14 +80,14 @@ class TDTestCase:
                 is_leader=True

         if count==1 and is_leader:
-            tdLog.info("===== depoly cluster success with 1 mnode as leader =====")
+            tdLog.notice("===== depoly cluster success with 1 mnode as leader =====")
         else:
             tdLog.exit("===== depoly cluster fail with 1 mnode as leader =====")

         for k ,v in self.dnode_list.items():
             if k == mnode_name:
                 if v[3]==0:
-                    tdLog.info("===== depoly cluster mnode only success at {} , support_vnodes is {} ".format(mnode_name,v[3]))
+                    tdLog.notice("===== depoly cluster mnode only success at {} , support_vnodes is {} ".format(mnode_name,v[3]))
                 else:
                     tdLog.exit("===== depoly cluster mnode only fail at {} , support_vnodes is {} ".format(mnode_name,v[3]))
             else:

@@ -130,7 +130,7 @@ class TDTestCase:

         for k , v in vgroups_infos.items():
             if len(v) ==1 and v[0]=="leader":
-                tdLog.info(" === create database replica only 1 role leader check success of vgroup_id {} ======".format(k))
+                tdLog.notice(" === create database replica only 1 role leader check success of vgroup_id {} ======".format(k))
             else:
                 tdLog.exit(" === create database replica only 1 role leader check fail of vgroup_id {} ======".format(k))

@@ -138,7 +138,7 @@ class TDTestCase:
         drop_db_sql = "drop database if exists {}".format(dbname)
         create_db_sql = "create database {} replica {} vgroups {}".format(dbname,replica_num,vgroup_nums)

-        tdLog.info(" ==== create database {} and insert rows begin =====".format(dbname))
+        tdLog.notice(" ==== create database {} and insert rows begin =====".format(dbname))
         tdSql.execute(drop_db_sql)
         tdSql.execute(create_db_sql)
         tdSql.execute("use {}".format(dbname))

@@ -161,7 +161,7 @@ class TDTestCase:
                 ts = self.ts + self.ts_step*row_num
                 tdSql.execute(f"insert into {sub_tbname} values ({ts}, {row_num} ,{row_num}, 10 ,1 ,{row_num} ,{row_num},true,'bin_{row_num}','nchar_{row_num}',now) ")

-        tdLog.info(" ==== stable {} insert rows execute end =====".format(stablename))
+        tdLog.notice(" ==== stable {} insert rows execute end =====".format(stablename))

     def append_rows_of_exists_tables(self,dbname ,stablename , tbname , append_nums ):

@@ -170,7 +170,7 @@ class TDTestCase:
         for row_num in range(append_nums):
             tdSql.execute(f"insert into {tbname} values (now, {row_num} ,{row_num}, 10 ,1 ,{row_num} ,{row_num},true,'bin_{row_num}','nchar_{row_num}',now) ")
             # print(f"insert into {tbname} values (now, {row_num} ,{row_num}, 10 ,1 ,{row_num} ,{row_num},true,'bin_{row_num}','nchar_{row_num}',now) ")
-        tdLog.info(" ==== append new rows of table {} belongs to stable {} execute end =====".format(tbname,stablename))
+        tdLog.notice(" ==== append new rows of table {} belongs to stable {} execute end =====".format(tbname,stablename))
         os.system("taos -s 'select count(*) from {}.{}';".format(dbname,stablename))

     def check_insert_rows(self, dbname, stablename , tb_nums , row_nums, append_rows):

@@ -197,7 +197,7 @@ class TDTestCase:
             time.sleep(0.1)
             tdSql.query("select count(*) from {}.{}".format(dbname,stablename))
             status_OK = self.mycheckData("select count(*) from {}.{}".format(dbname,stablename) ,0 , 0 , tb_nums*row_nums+append_rows)
-            tdLog.info(" ==== check insert rows first failed , this is {}_th retry check rows of database {}".format(count , dbname))
+            tdLog.debug(" ==== check insert rows first failed , this is {}_th retry check rows of database {}".format(count , dbname))
             count += 1


@@ -218,7 +218,7 @@ class TDTestCase:
             time.sleep(0.1)
             tdSql.query("select distinct tbname from {}.{}".format(dbname,stablename))
             status_OK = self.mycheckRows("select distinct tbname from {}.{}".format(dbname,stablename) ,tb_nums)
-            tdLog.info(" ==== check insert tbnames first failed , this is {}_th retry check tbnames of database {}".format(count , dbname))
+            tdLog.debug(" ==== check insert tbnames first failed , this is {}_th retry check tbnames of database {}".format(count , dbname))
             count += 1
     def _get_stop_dnode_id(self,dbname):
         tdSql.query("show {}.vgroups".format(dbname))

@@ -255,8 +255,8 @@ class TDTestCase:
         while status !="offline":
             time.sleep(0.1)
             status = _get_status()
-            # tdLog.info("==== stop dnode has not been stopped , endpoint is {}".format(self.stop_dnode))
-        tdLog.info("==== stop_dnode has stopped , id is {}".format(self.stop_dnode_id))
+            # tdLog.notice("==== stop dnode has not been stopped , endpoint is {}".format(self.stop_dnode))
+        tdLog.notice("==== stop_dnode has stopped , id is {} ====".format(self.stop_dnode_id))

     def wait_start_dnode_OK(self):

@@ -277,8 +277,8 @@ class TDTestCase:
         while status !="ready":
             time.sleep(0.1)
             status = _get_status()
-            # tdLog.info("==== stop dnode has not been stopped , endpoint is {}".format(self.stop_dnode))
-        tdLog.info("==== stop_dnode has restart , id is {}".format(self.stop_dnode_id))
+            # tdLog.notice("==== stop dnode has not been stopped , endpoint is {}".format(self.stop_dnode))
+        tdLog.notice("==== stop_dnode has restart , id is {} ====".format(self.stop_dnode_id))

     def _parse_datetime(self,timestr):
         try:

@@ -342,9 +342,9 @@ class TDTestCase:
         elif isinstance(data, str):
             tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
                         (sql, row, col, tdSql.queryResult[row][col], data))
-        elif isinstance(data, datetime.date):
-            tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
-                        (sql, row, col, tdSql.queryResult[row][col], data))
+        # elif isinstance(data, datetime.date):
+        #     tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
+        #                 (sql, row, col, tdSql.queryResult[row][col], data))
         elif isinstance(data, float):
             tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
                         (sql, row, col, tdSql.queryResult[row][col], data))

@@ -389,15 +389,15 @@ class TDTestCase:
         # append rows of stablename when dnode stop

         tbname = "sub_{}_{}".format(stablename , 0)
-        tdLog.info(" ==== begin append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
+        tdLog.notice(" ==== begin append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
         self.append_rows_of_exists_tables(db_name ,stablename , tbname , 100 )
-        tdLog.info(" ==== check append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
+        tdLog.notice(" ==== check append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
         self.check_insert_rows(db_name ,stablename ,tb_nums=10 , row_nums= 10 ,append_rows=100)

         # create new stables
-        tdLog.info(" ==== create new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
+        tdLog.notice(" ==== create new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
         self.create_stable_insert_datas(dbname = db_name , stablename = 'new_stb1' , tb_nums= 10 ,row_nums= 10 )
-        tdLog.info(" ==== check new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
+        tdLog.notice(" ==== check new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
         self.check_insert_rows(db_name ,'new_stb1' ,tb_nums=10 , row_nums= 10 ,append_rows=0)

         # begin start dnode

@@ -409,9 +409,9 @@ class TDTestCase:
             tdLog.exit(" ==== restart dnode {} cost too much time , please check ====".format(self.stop_dnode_id))

         # create new stables again
-        tdLog.info(" ==== create new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
+        tdLog.notice(" ==== create new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
         self.create_stable_insert_datas(dbname = db_name , stablename = 'new_stb2' , tb_nums= 10 ,row_nums= 10 )
-        tdLog.info(" ==== check new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
+        tdLog.notice(" ==== check new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
         self.check_insert_rows(db_name ,'new_stb2' ,tb_nums=10 , row_nums= 10 ,append_rows=0)

     def unsync_run_case(self):

@@ -447,7 +447,7 @@ class TDTestCase:
         self.create_database(dbname = db_name ,replica_num= self.replica , vgroup_nums= 1)
         self.create_stable_insert_datas(dbname = db_name , stablename = stablename , tb_nums= 10 ,row_nums= 10 )

-        tdLog.info(" ===== restart dnode of database {} in an unsync threading ===== ".format(db_name))
+        tdLog.notice(" ===== restart dnode of database {} in an unsync threading ===== ".format(db_name))

         # create sync threading and start it
         self.current_thread = _create_threading(db_name)

@@ -457,21 +457,21 @@ class TDTestCase:
         self.check_insert_rows(db_name ,stablename ,tb_nums=10 , row_nums= 10 ,append_rows=0)

         tbname = "sub_{}_{}".format(stablename , 0)
-        tdLog.info(" ==== begin append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
+        tdLog.notice(" ==== begin append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
         self.append_rows_of_exists_tables(db_name ,stablename , tbname , 100 )
-        tdLog.info(" ==== check append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
+        tdLog.notice(" ==== check append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
         self.check_insert_rows(db_name ,stablename ,tb_nums=10 , row_nums= 10 ,append_rows=100)

         # create new stables
-        tdLog.info(" ==== create new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
+        tdLog.notice(" ==== create new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
         self.create_stable_insert_datas(dbname = db_name , stablename = 'new_stb1' , tb_nums= 10 ,row_nums= 10 )
-        tdLog.info(" ==== check new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
+        tdLog.notice(" ==== check new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
         self.check_insert_rows(db_name ,'new_stb1' ,tb_nums=10 , row_nums= 10 ,append_rows=0)

         # create new stables again
-        tdLog.info(" ==== create new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
+        tdLog.notice(" ==== create new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
         self.create_stable_insert_datas(dbname = db_name , stablename = 'new_stb2' , tb_nums= 10 ,row_nums= 10 )
-        tdLog.info(" ==== check new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
+        tdLog.notice(" ==== check new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
         self.check_insert_rows(db_name ,'new_stb2' ,tb_nums=10 , row_nums= 10 ,append_rows=0)

         self.current_thread.join()
@@ -80,14 +80,14 @@ class TDTestCase:
                 is_leader=True

         if count==1 and is_leader:
-            tdLog.info("===== depoly cluster success with 1 mnode as leader =====")
+            tdLog.notice("===== depoly cluster success with 1 mnode as leader =====")
         else:
             tdLog.exit("===== depoly cluster fail with 1 mnode as leader =====")

         for k ,v in self.dnode_list.items():
             if k == mnode_name:
                 if v[3]==0:
-                    tdLog.info("===== depoly cluster mnode only success at {} , support_vnodes is {} ".format(mnode_name,v[3]))
+                    tdLog.notice("===== depoly cluster mnode only success at {} , support_vnodes is {} ".format(mnode_name,v[3]))
                 else:
                     tdLog.exit("===== depoly cluster mnode only fail at {} , support_vnodes is {} ".format(mnode_name,v[3]))
             else:

@@ -130,7 +130,7 @@ class TDTestCase:

         for k , v in vgroups_infos.items():
             if len(v) ==1 and v[0]=="leader":
-                tdLog.info(" === create database replica only 1 role leader check success of vgroup_id {} ======".format(k))
+                tdLog.notice(" === create database replica only 1 role leader check success of vgroup_id {} ======".format(k))
             else:
                 tdLog.exit(" === create database replica only 1 role leader check fail of vgroup_id {} ======".format(k))

@@ -138,7 +138,7 @@ class TDTestCase:
         drop_db_sql = "drop database if exists {}".format(dbname)
         create_db_sql = "create database {} replica {} vgroups {}".format(dbname,replica_num,vgroup_nums)

-        tdLog.info(" ==== create database {} and insert rows begin =====".format(dbname))
+        tdLog.notice(" ==== create database {} and insert rows begin =====".format(dbname))
         tdSql.execute(drop_db_sql)
         tdSql.execute(create_db_sql)
         tdSql.execute("use {}".format(dbname))

@@ -161,7 +161,7 @@ class TDTestCase:
                 ts = self.ts + self.ts_step*row_num
                 tdSql.execute(f"insert into {sub_tbname} values ({ts}, {row_num} ,{row_num}, 10 ,1 ,{row_num} ,{row_num},true,'bin_{row_num}','nchar_{row_num}',now) ")

-        tdLog.info(" ==== stable {} insert rows execute end =====".format(stablename))
+        tdLog.notice(" ==== stable {} insert rows execute end =====".format(stablename))

     def append_rows_of_exists_tables(self,dbname ,stablename , tbname , append_nums ):

@@ -170,7 +170,7 @@ class TDTestCase:
         for row_num in range(append_nums):
             tdSql.execute(f"insert into {tbname} values (now, {row_num} ,{row_num}, 10 ,1 ,{row_num} ,{row_num},true,'bin_{row_num}','nchar_{row_num}',now) ")
             # print(f"insert into {tbname} values (now, {row_num} ,{row_num}, 10 ,1 ,{row_num} ,{row_num},true,'bin_{row_num}','nchar_{row_num}',now) ")
-        tdLog.info(" ==== append new rows of table {} belongs to stable {} execute end =====".format(tbname,stablename))
+        tdLog.notice(" ==== append new rows of table {} belongs to stable {} execute end =====".format(tbname,stablename))
         os.system("taos -s 'select count(*) from {}.{}';".format(dbname,stablename))

     def check_insert_rows(self, dbname, stablename , tb_nums , row_nums, append_rows):

@@ -197,7 +197,7 @@ class TDTestCase:
             time.sleep(0.1)
             tdSql.query("select count(*) from {}.{}".format(dbname,stablename))
             status_OK = self.mycheckData("select count(*) from {}.{}".format(dbname,stablename) ,0 , 0 , tb_nums*row_nums+append_rows)
-            tdLog.info(" ==== check insert rows first failed , this is {}_th retry check rows of database {}".format(count , dbname))
+            tdLog.notice(" ==== check insert rows first failed , this is {}_th retry check rows of database {}".format(count , dbname))
             count += 1


@@ -218,7 +218,7 @@ class TDTestCase:
             time.sleep(0.1)
             tdSql.query("select distinct tbname from {}.{}".format(dbname,stablename))
             status_OK = self.mycheckRows("select distinct tbname from {}.{}".format(dbname,stablename) ,tb_nums)
-            tdLog.info(" ==== check insert tbnames first failed , this is {}_th retry check tbnames of database {}".format(count , dbname))
+            tdLog.notice(" ==== check insert tbnames first failed , this is {}_th retry check tbnames of database {}".format(count , dbname))
             count += 1

     def _get_stop_dnode_id(self,dbname):

@@ -256,8 +256,8 @@ class TDTestCase:
         while status !="offline":
             time.sleep(0.1)
             status = _get_status()
-            # tdLog.info("==== stop dnode has not been stopped , endpoint is {}".format(self.stop_dnode))
-        tdLog.info("==== stop_dnode has stopped , id is {}".format(self.stop_dnode_id))
+            # tdLog.notice("==== stop dnode has not been stopped , endpoint is {}".format(self.stop_dnode))
+        tdLog.notice("==== stop_dnode has stopped , id is {}".format(self.stop_dnode_id))

     def wait_start_dnode_OK(self):

@@ -278,8 +278,8 @@ class TDTestCase:
         while status !="ready":
             time.sleep(0.1)
             status = _get_status()
-            # tdLog.info("==== stop dnode has not been stopped , endpoint is {}".format(self.stop_dnode))
-        tdLog.info("==== stop_dnode has restart , id is {}".format(self.stop_dnode_id))
+            # tdLog.notice("==== stop dnode has not been stopped , endpoint is {}".format(self.stop_dnode))
+        tdLog.notice("==== stop_dnode has restart , id is {}".format(self.stop_dnode_id))

     def _parse_datetime(self,timestr):
         try:

@@ -343,9 +343,9 @@ class TDTestCase:
         elif isinstance(data, str):
             tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
                         (sql, row, col, tdSql.queryResult[row][col], data))
-        elif isinstance(data, datetime.date):
-            tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
-                        (sql, row, col, tdSql.queryResult[row][col], data))
+        # elif isinstance(data, datetime.date):
+        #     tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
+        #                 (sql, row, col, tdSql.queryResult[row][col], data))
         elif isinstance(data, float):
             tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
                         (sql, row, col, tdSql.queryResult[row][col], data))

@@ -390,15 +390,15 @@ class TDTestCase:
         # append rows of stablename when dnode stop

         tbname = "sub_{}_{}".format(stablename , 0)
-        tdLog.info(" ==== begin append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
+        tdLog.notice(" ==== begin append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
         self.append_rows_of_exists_tables(db_name ,stablename , tbname , 100 )
-        tdLog.info(" ==== check append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
+        tdLog.notice(" ==== check append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
         self.check_insert_rows(db_name ,stablename ,tb_nums=10 , row_nums= 10 ,append_rows=100)

         # create new stables
-        tdLog.info(" ==== create new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
+        tdLog.notice(" ==== create new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
         self.create_stable_insert_datas(dbname = db_name , stablename = 'new_stb1' , tb_nums= 10 ,row_nums= 10 )
-        tdLog.info(" ==== check new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
+        tdLog.notice(" ==== check new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
         self.check_insert_rows(db_name ,'new_stb1' ,tb_nums=10 , row_nums= 10 ,append_rows=0)

         # begin start dnode

@@ -410,9 +410,9 @@ class TDTestCase:
             tdLog.exit(" ==== restart dnode {} cost too much time , please check ====".format(self.stop_dnode_id))

         # create new stables again
-        tdLog.info(" ==== create new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
+        tdLog.notice(" ==== create new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
         self.create_stable_insert_datas(dbname = db_name , stablename = 'new_stb2' , tb_nums= 10 ,row_nums= 10 )
-        tdLog.info(" ==== check new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
+        tdLog.notice(" ==== check new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
         self.check_insert_rows(db_name ,'new_stb2' ,tb_nums=10 , row_nums= 10 ,append_rows=0)

     def unsync_run_case(self):

@@ -448,7 +448,7 @@ class TDTestCase:
         self.create_database(dbname = db_name ,replica_num= self.replica , vgroup_nums= 1)
         self.create_stable_insert_datas(dbname = db_name , stablename = stablename , tb_nums= 10 ,row_nums= 10 )

-        tdLog.info(" ===== restart dnode of database {} in an unsync threading ===== ".format(db_name))
+        tdLog.notice(" ===== restart dnode of database {} in an unsync threading ===== ".format(db_name))

         # create sync threading and start it
         self.current_thread = _create_threading(db_name)

@@ -458,21 +458,21 @@ class TDTestCase:
         self.check_insert_rows(db_name ,stablename ,tb_nums=10 , row_nums= 10 ,append_rows=0)

         tbname = "sub_{}_{}".format(stablename , 0)
-        tdLog.info(" ==== begin append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
+        tdLog.notice(" ==== begin append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
         self.append_rows_of_exists_tables(db_name ,stablename , tbname , 100 )
-        tdLog.info(" ==== check append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
+        tdLog.notice(" ==== check append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
         self.check_insert_rows(db_name ,stablename ,tb_nums=10 , row_nums= 10 ,append_rows=100)

         # create new stables
-        tdLog.info(" ==== create new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
+        tdLog.notice(" ==== create new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
         self.create_stable_insert_datas(dbname = db_name , stablename = 'new_stb1' , tb_nums= 10 ,row_nums= 10 )
-        tdLog.info(" ==== check new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
+        tdLog.notice(" ==== check new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
         self.check_insert_rows(db_name ,'new_stb1' ,tb_nums=10 , row_nums= 10 ,append_rows=0)

         # create new stables again
-        tdLog.info(" ==== create new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
+        tdLog.notice(" ==== create new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
         self.create_stable_insert_datas(dbname = db_name , stablename = 'new_stb2' , tb_nums= 10 ,row_nums= 10 )
-        tdLog.info(" ==== check new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
+        tdLog.notice(" ==== check new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
         self.check_insert_rows(db_name ,'new_stb2' ,tb_nums=10 , row_nums= 10 ,append_rows=0)

         self.current_thread.join()
@ -80,14 +80,14 @@ class TDTestCase:
|
||||||
is_leader=True
|
is_leader=True
|
||||||
|
|
||||||
if count==1 and is_leader:
|
if count==1 and is_leader:
|
||||||
tdLog.info("===== depoly cluster success with 1 mnode as leader =====")
|
tdLog.notice("===== depoly cluster success with 1 mnode as leader =====")
|
||||||
else:
|
else:
|
||||||
tdLog.exit("===== depoly cluster fail with 1 mnode as leader =====")
|
tdLog.exit("===== depoly cluster fail with 1 mnode as leader =====")
|
||||||
|
|
||||||
for k ,v in self.dnode_list.items():
|
for k ,v in self.dnode_list.items():
|
||||||
if k == mnode_name:
|
if k == mnode_name:
|
||||||
if v[3]==0:
|
if v[3]==0:
|
||||||
tdLog.info("===== depoly cluster mnode only success at {} , support_vnodes is {} ".format(mnode_name,v[3]))
|
tdLog.notice("===== depoly cluster mnode only success at {} , support_vnodes is {} ".format(mnode_name,v[3]))
|
||||||
else:
|
else:
|
||||||
tdLog.exit("===== depoly cluster mnode only fail at {} , support_vnodes is {} ".format(mnode_name,v[3]))
|
tdLog.exit("===== depoly cluster mnode only fail at {} , support_vnodes is {} ".format(mnode_name,v[3]))
|
||||||
else:
|
else:
|
||||||
|
@ -130,7 +130,7 @@ class TDTestCase:
|
||||||
|
|
||||||
for k , v in vgroups_infos.items():
|
for k , v in vgroups_infos.items():
|
||||||
if len(v) ==1 and v[0]=="leader":
|
if len(v) ==1 and v[0]=="leader":
|
||||||
tdLog.info(" === create database replica only 1 role leader check success of vgroup_id {} ======".format(k))
|
tdLog.notice(" === create database replica only 1 role leader check success of vgroup_id {} ======".format(k))
|
||||||
else:
|
else:
|
||||||
tdLog.exit(" === create database replica only 1 role leader check fail of vgroup_id {} ======".format(k))
|
tdLog.exit(" === create database replica only 1 role leader check fail of vgroup_id {} ======".format(k))
|
||||||
|
|
||||||
|
@ -138,7 +138,7 @@ class TDTestCase:
|
||||||
drop_db_sql = "drop database if exists {}".format(dbname)
|
drop_db_sql = "drop database if exists {}".format(dbname)
|
||||||
create_db_sql = "create database {} replica {} vgroups {}".format(dbname,replica_num,vgroup_nums)
|
create_db_sql = "create database {} replica {} vgroups {}".format(dbname,replica_num,vgroup_nums)
|
||||||
|
|
||||||
tdLog.info(" ==== create database {} and insert rows begin =====".format(dbname))
|
tdLog.notice(" ==== create database {} and insert rows begin =====".format(dbname))
|
||||||
tdSql.execute(drop_db_sql)
|
tdSql.execute(drop_db_sql)
|
||||||
tdSql.execute(create_db_sql)
|
tdSql.execute(create_db_sql)
|
||||||
tdSql.execute("use {}".format(dbname))
|
tdSql.execute("use {}".format(dbname))
|
||||||
|
@ -161,7 +161,7 @@ class TDTestCase:
|
||||||
ts = self.ts + self.ts_step*row_num
|
ts = self.ts + self.ts_step*row_num
|
||||||
tdSql.execute(f"insert into {sub_tbname} values ({ts}, {row_num} ,{row_num}, 10 ,1 ,{row_num} ,{row_num},true,'bin_{row_num}','nchar_{row_num}',now) ")
|
tdSql.execute(f"insert into {sub_tbname} values ({ts}, {row_num} ,{row_num}, 10 ,1 ,{row_num} ,{row_num},true,'bin_{row_num}','nchar_{row_num}',now) ")
|
||||||
|
|
||||||
tdLog.info(" ==== stable {} insert rows execute end =====".format(stablename))
|
tdLog.notice(" ==== stable {} insert rows execute end =====".format(stablename))
|
||||||
|
|
||||||
def append_rows_of_exists_tables(self,dbname ,stablename , tbname , append_nums ):
|
def append_rows_of_exists_tables(self,dbname ,stablename , tbname , append_nums ):
|
||||||
|
|
||||||
|
@ -170,7 +170,7 @@ class TDTestCase:
|
||||||
for row_num in range(append_nums):
|
for row_num in range(append_nums):
|
||||||
tdSql.execute(f"insert into {tbname} values (now, {row_num} ,{row_num}, 10 ,1 ,{row_num} ,{row_num},true,'bin_{row_num}','nchar_{row_num}',now) ")
|
tdSql.execute(f"insert into {tbname} values (now, {row_num} ,{row_num}, 10 ,1 ,{row_num} ,{row_num},true,'bin_{row_num}','nchar_{row_num}',now) ")
|
||||||
# print(f"insert into {tbname} values (now, {row_num} ,{row_num}, 10 ,1 ,{row_num} ,{row_num},true,'bin_{row_num}','nchar_{row_num}',now) ")
|
# print(f"insert into {tbname} values (now, {row_num} ,{row_num}, 10 ,1 ,{row_num} ,{row_num},true,'bin_{row_num}','nchar_{row_num}',now) ")
|
||||||
tdLog.info(" ==== append new rows of table {} belongs to stable {} execute end =====".format(tbname,stablename))
|
tdLog.notice(" ==== append new rows of table {} belongs to stable {} execute end =====".format(tbname,stablename))
|
||||||
os.system("taos -s 'select count(*) from {}.{}';".format(dbname,stablename))
|
os.system("taos -s 'select count(*) from {}.{}';".format(dbname,stablename))
|
||||||
|
|
||||||
def check_insert_rows(self, dbname, stablename , tb_nums , row_nums, append_rows):
|
def check_insert_rows(self, dbname, stablename , tb_nums , row_nums, append_rows):
|
||||||
|
@ -197,7 +197,7 @@ class TDTestCase:
|
||||||
time.sleep(0.1)
|
time.sleep(0.1)
|
||||||
tdSql.query("select count(*) from {}.{}".format(dbname,stablename))
|
tdSql.query("select count(*) from {}.{}".format(dbname,stablename))
|
||||||
status_OK = self.mycheckData("select count(*) from {}.{}".format(dbname,stablename) ,0 , 0 , tb_nums*row_nums+append_rows)
|
status_OK = self.mycheckData("select count(*) from {}.{}".format(dbname,stablename) ,0 , 0 , tb_nums*row_nums+append_rows)
|
||||||
tdLog.info(" ==== check insert rows first failed , this is {}_th retry check rows of database {}".format(count , dbname))
|
tdLog.notice(" ==== check insert rows first failed , this is {}_th retry check rows of database {}".format(count , dbname))
|
||||||
count += 1
|
count += 1
|
||||||
|
|
||||||
|
|
||||||
|
@ -218,7 +218,7 @@ class TDTestCase:
|
||||||
time.sleep(0.1)
|
time.sleep(0.1)
|
||||||
tdSql.query("select distinct tbname from {}.{}".format(dbname,stablename))
|
tdSql.query("select distinct tbname from {}.{}".format(dbname,stablename))
|
||||||
status_OK = self.mycheckRows("select distinct tbname from {}.{}".format(dbname,stablename) ,tb_nums)
|
status_OK = self.mycheckRows("select distinct tbname from {}.{}".format(dbname,stablename) ,tb_nums)
|
||||||
tdLog.info(" ==== check insert tbnames first failed , this is {}_th retry check tbnames of database {}".format(count , dbname))
|
tdLog.notice(" ==== check insert tbnames first failed , this is {}_th retry check tbnames of database {}".format(count , dbname))
|
||||||
count += 1
|
count += 1
|
||||||
|
|
||||||
    def _get_stop_dnode_id(self,dbname):

@@ -256,8 +256,8 @@ class TDTestCase:
        while status !="offline":
            time.sleep(0.1)
            status = _get_status()
-           # tdLog.info("==== stop dnode has not been stopped , endpoint is {}".format(self.stop_dnode))
+           # tdLog.notice("==== stop dnode has not been stopped , endpoint is {}".format(self.stop_dnode))
-       tdLog.info("==== stop_dnode has stopped , id is {}".format(self.stop_dnode_id))
+       tdLog.notice("==== stop_dnode has stopped , id is {}".format(self.stop_dnode_id))

    def wait_start_dnode_OK(self):

@@ -278,8 +278,8 @@ class TDTestCase:
        while status !="ready":
            time.sleep(0.1)
            status = _get_status()
-           # tdLog.info("==== stop dnode has not been stopped , endpoint is {}".format(self.stop_dnode))
+           # tdLog.notice("==== stop dnode has not been stopped , endpoint is {}".format(self.stop_dnode))
-       tdLog.info("==== stop_dnode has restart , id is {}".format(self.stop_dnode_id))
+       tdLog.notice("==== stop_dnode has restart , id is {}".format(self.stop_dnode_id))

    def _parse_datetime(self,timestr):
        try:

@@ -343,9 +343,9 @@ class TDTestCase:
        elif isinstance(data, str):
            tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
                (sql, row, col, tdSql.queryResult[row][col], data))
-       elif isinstance(data, datetime.date):
-           tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
-               (sql, row, col, tdSql.queryResult[row][col], data))
+       # elif isinstance(data, datetime.date):
+       #     tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
+       #         (sql, row, col, tdSql.queryResult[row][col], data))
        elif isinstance(data, float):
            tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
                (sql, row, col, tdSql.queryResult[row][col], data))

@@ -390,15 +390,15 @@ class TDTestCase:
            # append rows of stablename when dnode stop

            tbname = "sub_{}_{}".format(stablename , 0)
-           tdLog.info(" ==== begin append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
+           tdLog.notice(" ==== begin append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
            self.append_rows_of_exists_tables(db_name ,stablename , tbname , 100 )
-           tdLog.info(" ==== check append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
+           tdLog.notice(" ==== check append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
            self.check_insert_rows(db_name ,stablename ,tb_nums=10 , row_nums= 10 ,append_rows=100)

            # create new stables
-           tdLog.info(" ==== create new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
+           tdLog.notice(" ==== create new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
            self.create_stable_insert_datas(dbname = db_name , stablename = 'new_stb1' , tb_nums= 10 ,row_nums= 10 )
-           tdLog.info(" ==== check new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
+           tdLog.notice(" ==== check new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
            self.check_insert_rows(db_name ,'new_stb1' ,tb_nums=10 , row_nums= 10 ,append_rows=0)

            # begin start dnode

@@ -410,9 +410,9 @@ class TDTestCase:
                tdLog.exit(" ==== restart dnode {} cost too much time , please check ====".format(self.stop_dnode_id))

            # create new stables again
-           tdLog.info(" ==== create new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
+           tdLog.notice(" ==== create new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
            self.create_stable_insert_datas(dbname = db_name , stablename = 'new_stb2' , tb_nums= 10 ,row_nums= 10 )
-           tdLog.info(" ==== check new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
+           tdLog.notice(" ==== check new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
            self.check_insert_rows(db_name ,'new_stb2' ,tb_nums=10 , row_nums= 10 ,append_rows=0)

    def unsync_run_case(self):

@@ -453,7 +453,7 @@ class TDTestCase:
            self.create_database(dbname = db_name ,replica_num= self.replica , vgroup_nums= 1)
            self.create_stable_insert_datas(dbname = db_name , stablename = stablename , tb_nums= 10 ,row_nums= 10 )

-           tdLog.info(" ===== restart dnode of database {} in an unsync threading ===== ".format(db_name))
+           tdLog.notice(" ===== restart dnode of database {} in an unsync threading ===== ".format(db_name))

            # create sync threading and start it
            self.current_thread = _create_threading(db_name)

@@ -463,21 +463,21 @@ class TDTestCase:
            self.check_insert_rows(db_name ,stablename ,tb_nums=10 , row_nums= 10 ,append_rows=0)

            tbname = "sub_{}_{}".format(stablename , 0)
-           tdLog.info(" ==== begin append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
+           tdLog.notice(" ==== begin append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
            self.append_rows_of_exists_tables(db_name ,stablename , tbname , 100 )
-           tdLog.info(" ==== check append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
+           tdLog.notice(" ==== check append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
            self.check_insert_rows(db_name ,stablename ,tb_nums=10 , row_nums= 10 ,append_rows=100)

            # create new stables
-           tdLog.info(" ==== create new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
+           tdLog.notice(" ==== create new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
            self.create_stable_insert_datas(dbname = db_name , stablename = 'new_stb1' , tb_nums= 10 ,row_nums= 10 )
-           tdLog.info(" ==== check new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
+           tdLog.notice(" ==== check new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
            self.check_insert_rows(db_name ,'new_stb1' ,tb_nums=10 , row_nums= 10 ,append_rows=0)

            # create new stables again
-           tdLog.info(" ==== create new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
+           tdLog.notice(" ==== create new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
            self.create_stable_insert_datas(dbname = db_name , stablename = 'new_stb2' , tb_nums= 10 ,row_nums= 10 )
-           tdLog.info(" ==== check new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
+           tdLog.notice(" ==== check new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
            self.check_insert_rows(db_name ,'new_stb2' ,tb_nums=10 , row_nums= 10 ,append_rows=0)

            self.current_thread.join()

@@ -493,7 +493,7 @@ class TDTestCase:
            else:
                continue
        if port:
-           tdLog.info(" ==== dnode {} will be force stop by kill -9 ====".format(dnode_id))
+           tdLog.notice(" ==== dnode {} will be force stop by kill -9 ====".format(dnode_id))
            psCmd = '''netstat -anp|grep -w LISTEN|grep -w %s |grep -o "LISTEN.*"|awk '{print $2}'|cut -d/ -f1|head -n1''' %(port)
            processID = subprocess.check_output(
                psCmd, shell=True).decode("utf-8")
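One substantive change in the hunks above: the `datetime.date` branch of the result checker is commented out. If it is ever restored, note that `datetime.datetime` subclasses `datetime.date`, so an `isinstance(data, datetime.date)` branch placed before a `datetime.datetime` branch would also capture full timestamps. A small illustration:

```python
import datetime

now = datetime.datetime.now()
print(isinstance(now, datetime.date))      # True: datetime subclasses date
print(isinstance(now, datetime.datetime))  # True
# ordering of isinstance branches matters when both types must be handled
```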
@@ -114,9 +114,9 @@ class TDTestCase:
        elif isinstance(data, str):
            tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
                (sql, row, col, tdSql.queryResult[row][col], data))
-       elif isinstance(data, datetime.date):
-           tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
-               (sql, row, col, tdSql.queryResult[row][col], data))
+       # elif isinstance(data, datetime.date):
+       #     tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
+       #         (sql, row, col, tdSql.queryResult[row][col], data))
        elif isinstance(data, float):
            tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
                (sql, row, col, tdSql.queryResult[row][col], data))

@@ -163,146 +163,20 @@ class TDTestCase:
                is_leader=True

        if count==1 and is_leader:
-           tdLog.info("===== depoly cluster success with 1 mnode as leader =====")
+           tdLog.notice("===== depoly cluster success with 1 mnode as leader =====")
        else:
            tdLog.exit("===== depoly cluster fail with 1 mnode as leader =====")

        for k ,v in self.dnode_list.items():
            if k == mnode_name:
                if v[3]==0:
-                   tdLog.info("===== depoly cluster mnode only success at {} , support_vnodes is {} ".format(mnode_name,v[3]))
+                   tdLog.notice("===== depoly cluster mnode only success at {} , support_vnodes is {} ".format(mnode_name,v[3]))
                else:
                    tdLog.exit("===== depoly cluster mnode only fail at {} , support_vnodes is {} ".format(mnode_name,v[3]))
            else:
                continue

-   def create_db_check_vgroups(self):
-
-       tdSql.execute("drop database if exists test")
-       tdSql.execute("create database if not exists test replica 1 duration 300")
-       tdSql.execute("use test")
-       tdSql.execute(
-           '''create table stb1
-           (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
-           tags (t1 int)
-           '''
-       )
-       tdSql.execute(
-           '''
-           create table t1
-           (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
-           '''
-       )
-
-       for i in range(5):
-           tdSql.execute("create table sub_tb_{} using stb1 tags({})".format(i,i))
-       tdSql.query("show stables")
-       tdSql.checkRows(1)
-       tdSql.query("show tables")
-       tdSql.checkRows(6)
-
-       tdSql.query("show test.vgroups;")
-       vgroups_infos = {}  # key is id: value is info list
-       for vgroup_info in tdSql.queryResult:
-           vgroup_id = vgroup_info[0]
-           tmp_list = []
-           for role in vgroup_info[3:-4]:
-               if role in ['leader','follower']:
-                   tmp_list.append(role)
-           vgroups_infos[vgroup_id]=tmp_list
-
-       for k , v in vgroups_infos.items():
-           if len(v) ==1 and v[0]=="leader":
-               tdLog.info(" === create database replica only 1 role leader check success of vgroup_id {} ======".format(k))
-           else:
-               tdLog.exit(" === create database replica only 1 role leader check fail of vgroup_id {} ======".format(k))
-
-   def create_database(self, dbname, replica_num ,vgroup_nums ):
-       drop_db_sql = "drop database if exists {}".format(dbname)
-       create_db_sql = "create database {} replica {} vgroups {}".format(dbname,replica_num,vgroup_nums)
-
-       tdLog.info(" ==== create database {} and insert rows begin =====".format(dbname))
-       tdSql.execute(drop_db_sql)
-       tdSql.execute(create_db_sql)
-       tdSql.execute("use {}".format(dbname))
-
-   def create_stable_insert_datas(self,dbname ,stablename , tb_nums , row_nums):
-       tdSql.execute("use {}".format(dbname))
-       tdSql.execute(
-           '''create table {}
-           (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(32),c9 nchar(32), c10 timestamp)
-           tags (t1 int)
-           '''.format(stablename)
-       )
-
-       for i in range(tb_nums):
-           sub_tbname = "sub_{}_{}".format(stablename,i)
-           tdSql.execute("create table {} using {} tags({})".format(sub_tbname, stablename ,i))
-           # insert datas about new database
-
-           for row_num in range(row_nums):
-               ts = self.ts + self.ts_step*row_num
-               tdSql.execute(f"insert into {sub_tbname} values ({ts}, {row_num} ,{row_num}, 10 ,1 ,{row_num} ,{row_num},true,'bin_{row_num}','nchar_{row_num}',now) ")
-
-       tdLog.info(" ==== stable {} insert rows execute end =====".format(stablename))
-
-   def append_rows_of_exists_tables(self,dbname ,stablename , tbname , append_nums ):
-
-       tdSql.execute("use {}".format(dbname))
-
-       for row_num in range(append_nums):
-           tdSql.execute(f"insert into {tbname} values (now, {row_num} ,{row_num}, 10 ,1 ,{row_num} ,{row_num},true,'bin_{row_num}','nchar_{row_num}',now) ")
-           # print(f"insert into {tbname} values (now, {row_num} ,{row_num}, 10 ,1 ,{row_num} ,{row_num},true,'bin_{row_num}','nchar_{row_num}',now) ")
-       tdLog.info(" ==== append new rows of table {} belongs to stable {} execute end =====".format(tbname,stablename))
-       os.system("taos -s 'select count(*) from {}.{}';".format(dbname,stablename))
-
-   def check_insert_rows(self, dbname, stablename , tb_nums , row_nums, append_rows):
-
-       tdSql.execute("use {}".format(dbname))
-
-       tdSql.query("select count(*) from {}.{}".format(dbname,stablename))
-
-       while not tdSql.queryResult:
-           time.sleep(0.1)
-           tdSql.query("select count(*) from {}.{}".format(dbname,stablename))
-
-       status_OK = self.mycheckData("select count(*) from {}.{}".format(dbname,stablename) ,0 , 0 , tb_nums*row_nums+append_rows)
-
-       count = 0
-       while not status_OK :
-           if count > self.try_check_times:
-               os.system("taos -s ' show {}.vgroups; '".format(dbname))
-               tdLog.exit(" ==== check insert rows failed after {} try check {} times of database {}".format(count , self.try_check_times ,dbname))
-               break
-           time.sleep(0.1)
-           tdSql.query("select count(*) from {}.{}".format(dbname,stablename))
-           while not tdSql.queryResult:
-               time.sleep(0.1)
-               tdSql.query("select count(*) from {}.{}".format(dbname,stablename))
-           status_OK = self.mycheckData("select count(*) from {}.{}".format(dbname,stablename) ,0 , 0 , tb_nums*row_nums+append_rows)
-           tdLog.info(" ==== check insert rows first failed , this is {}_th retry check rows of database {}".format(count , dbname))
-           count += 1
-
-       tdSql.query("select distinct tbname from {}.{}".format(dbname,stablename))
-       while not tdSql.queryResult:
-           time.sleep(0.1)
-           tdSql.query("select distinct tbname from {}.{}".format(dbname,stablename))
-       status_OK = self.mycheckRows("select distinct tbname from {}.{}".format(dbname,stablename) ,tb_nums)
-       count = 0
-       while not status_OK :
-           if count > self.try_check_times:
-               os.system("taos -s ' show {}.vgroups;'".format(dbname))
-               tdLog.exit(" ==== check insert rows failed after {} try check {} times of database {}".format(count , self.try_check_times ,dbname))
-               break
-           time.sleep(0.1)
-           tdSql.query("select distinct tbname from {}.{}".format(dbname,stablename))
-           while not tdSql.queryResult:
-               time.sleep(0.1)
-               tdSql.query("select distinct tbname from {}.{}".format(dbname,stablename))
-           status_OK = self.mycheckRows("select distinct tbname from {}.{}".format(dbname,stablename) ,tb_nums)
-           tdLog.info(" ==== check insert tbnames first failed , this is {}_th retry check tbnames of database {}".format(count , dbname))
-           count += 1

    def _get_stop_dnode_id(self,dbname):
        newTdSql=tdCom.newTdSql()
        newTdSql.query("show {}.vgroups".format(dbname))

@@ -339,8 +213,8 @@ class TDTestCase:
        while status !="offline":
            time.sleep(0.1)
            status = _get_status()
-           # tdLog.info("==== stop dnode has not been stopped , endpoint is {}".format(self.stop_dnode))
+           # tdLog.notice("==== stop dnode has not been stopped , endpoint is {}".format(self.stop_dnode))
-       tdLog.info("==== stop_dnode has stopped , id is {}".format(self.stop_dnode_id))
+       tdLog.notice("==== stop_dnode has stopped , id is {}".format(self.stop_dnode_id))

    def wait_start_dnode_OK(self):

@@ -361,8 +235,8 @@ class TDTestCase:
        while status !="ready":
            time.sleep(0.1)
            status = _get_status()
-           # tdLog.info("==== stop dnode has not been stopped , endpoint is {}".format(self.stop_dnode))
+           # tdLog.notice("==== stop dnode has not been stopped , endpoint is {}".format(self.stop_dnode))
-       tdLog.info("==== stop_dnode has restart , id is {}".format(self.stop_dnode_id))
+       tdLog.notice("==== stop_dnode has restart , id is {}".format(self.stop_dnode_id))

    def get_leader_infos(self ,dbname):
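The deleted `create_db_check_vgroups` above and the surviving `get_leader_infos`/`_get_stop_dnode_id` all read role columns out of `show <db>.vgroups` rows with the slice `vgroup_info[3:-4]`. A minimal sketch of that row-parsing idea (the row layout is an assumption inferred from the slice this diff uses):

```python
def leader_roles(vgroup_rows):
    """Collect 'leader'/'follower' roles per vgroup from `show db.vgroups` rows.

    Assumes role strings sit between the fixed leading columns and the
    trailing bookkeeping columns, i.e. row[3:-4], as this test suite does.
    """
    roles = {}
    for row in vgroup_rows:
        vgroup_id = row[0]
        roles[vgroup_id] = [c for c in row[3:-4] if c in ('leader', 'follower')]
    return roles
```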
@@ -389,166 +263,96 @@ class TDTestCase:
                    if role==self.stop_dnode_id:

                        if vgroup_info[ind+1] =="offline" and "leader" in vgroup_info:
-                           tdLog.info(" === revote leader ok , leader is {} now ====".format(vgroup_info[list(vgroup_info).index("leader")-1]))
+                           tdLog.notice(" === revote leader ok , leader is {} now ====".format(vgroup_info[list(vgroup_info).index("leader")-1]))
                            check_status = True
                        elif vgroup_info[ind+1] !="offline":
-                           tdLog.info(" === dnode {} should be offline ".format(self.stop_dnode_id))
+                           tdLog.notice(" === dnode {} should be offline ".format(self.stop_dnode_id))
                        else:
                            continue
                    break
        return check_status

-   def sync_run_case(self):
+   def start_benchmark_inserts(self,dbname , json_file):
+       benchmark_build_path = self.getBuildPath() + '/build/bin/taosBenchmark'
+       tdLog.notice("==== start taosBenchmark insert datas of database {} ==== ".format(dbname))
+       os.system(" {} -f {} >>/dev/null 2>&1 ".format(benchmark_build_path , json_file))
+
+   def stop_leader_when_Benchmark_inserts(self,dbname , total_rows , json_file ):
        # stop follower and insert datas , update tables and create new stables
        tdDnodes=cluster.dnodes
-       for loop in range(self.loop_restart_times):
-           db_name = "sync_db_{}".format(loop)
-           stablename = 'stable_{}'.format(loop)
-           self.create_database(dbname = db_name ,replica_num= self.replica , vgroup_nums= 1)
-           self.create_stable_insert_datas(dbname = db_name , stablename = stablename , tb_nums= 10 ,row_nums= 10 )
-           self.stop_dnode_id = self._get_stop_dnode_id(db_name)
-
-           # check rows of datas
-           self.check_insert_rows(db_name ,stablename ,tb_nums=10 , row_nums= 10 ,append_rows=0)
-
-           # get leader info before stop
-           before_leader_infos = self.get_leader_infos(db_name)
-
-           # begin stop dnode
-           tdDnodes[self.stop_dnode_id-1].stoptaosd()
-           self.wait_stop_dnode_OK()
-
-           # vote leaders check
-           # get leader info after stop
-           after_leader_infos = self.get_leader_infos(db_name)
-           revote_status = self.check_revote_leader_success(db_name ,before_leader_infos , after_leader_infos)
-
-           # append rows of stablename when dnode stop make sure revote leaders
-           while not revote_status:
-               after_leader_infos = self.get_leader_infos(db_name)
-               revote_status = self.check_revote_leader_success(db_name ,before_leader_infos , after_leader_infos)
-
-           if revote_status:
-               tbname = "sub_{}_{}".format(stablename , 0)
-               tdLog.info(" ==== begin append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
-               self.append_rows_of_exists_tables(db_name ,stablename , tbname , 100 )
-               tdLog.info(" ==== check append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
-               self.check_insert_rows(db_name ,stablename ,tb_nums=10 , row_nums= 10 ,append_rows=100)
-
-               # create new stables
-               tdLog.info(" ==== create new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
-               self.create_stable_insert_datas(dbname = db_name , stablename = 'new_stb1' , tb_nums= 10 ,row_nums= 10 )
-               tdLog.info(" ==== check new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
-               self.check_insert_rows(db_name ,'new_stb1' ,tb_nums=10 , row_nums= 10 ,append_rows=0)
-           else:
-               tdLog.info("===== leader of database {} is not ok , append rows fail =====".format(db_name))
-
-           # begin start dnode
-           start = time.time()
-           tdDnodes[self.stop_dnode_id-1].starttaosd()
-           self.wait_start_dnode_OK()
-           end = time.time()
-           time_cost = int(end -start)
-           if time_cost > self.max_restart_time:
-               tdLog.exit(" ==== restart dnode {} cost too much time , please check ====".format(self.stop_dnode_id))
-
-           # create new stables again
-           tdLog.info(" ==== create new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
-           self.create_stable_insert_datas(dbname = db_name , stablename = 'new_stb2' , tb_nums= 10 ,row_nums= 10 )
-           tdLog.info(" ==== check new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
-           self.check_insert_rows(db_name ,'new_stb2' ,tb_nums=10 , row_nums= 10 ,append_rows=0)
-
-   def unsync_run_case(self):
-
-       def _restart_dnode_of_db_unsync(dbname):
-           tdDnodes=cluster.dnodes
-           self.stop_dnode_id = self._get_stop_dnode_id(dbname)
-           # begin restart dnode
-           tdDnodes[self.stop_dnode_id-1].stoptaosd()
-
-           tbname = "sub_{}_{}".format(stablename , 0)
-           tdLog.info(" ==== begin append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
-           self.append_rows_of_exists_tables(db_name ,stablename , tbname , 100 )
-           tdLog.info(" ==== check append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
-           self.check_insert_rows(db_name ,stablename ,tb_nums=10 , row_nums= 10 ,append_rows=100)
-
-           # create new stables
-           tdLog.info(" ==== create new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
-           self.create_stable_insert_datas(dbname = db_name , stablename = 'new_stb1' , tb_nums= 10 ,row_nums= 10 )
-           tdLog.info(" ==== check new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
-           self.check_insert_rows(db_name ,'new_stb1' ,tb_nums=10 , row_nums= 10 ,append_rows=0)
-
-           # create new stables again
-           tdLog.info(" ==== create new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
-           self.create_stable_insert_datas(dbname = db_name , stablename = 'new_stb2' , tb_nums= 10 ,row_nums= 10 )
-           tdLog.info(" ==== check new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
-           self.check_insert_rows(db_name ,'new_stb2' ,tb_nums=10 , row_nums= 10 ,append_rows=0)
-
-           # # get leader info before stop
-           # before_leader_infos = self.get_leader_infos(db_name)
-           # self.wait_stop_dnode_OK()
-
-           # check revote leader when restart servers
-           # # get leader info after stop
-           # after_leader_infos = self.get_leader_infos(db_name)
-           # revote_status = self.check_revote_leader_success(db_name ,before_leader_infos , after_leader_infos)
-           # # append rows of stablename when dnode stop make sure revote leaders
-           # while not revote_status:
-           #     after_leader_infos = self.get_leader_infos(db_name)
-           #     revote_status = self.check_revote_leader_success(db_name ,before_leader_infos , after_leader_infos)
-
-           tdDnodes[self.stop_dnode_id-1].starttaosd()
-           start = time.time()
-           self.wait_start_dnode_OK()
-           end = time.time()
-           time_cost = int(end-start)
-
-           if time_cost > self.max_restart_time:
-               tdLog.exit(" ==== restart dnode {} cost too much time , please check ====".format(self.stop_dnode_id))
-
-       def _create_threading(dbname):
-           self.current_thread = threading.Thread(target=_restart_dnode_of_db_unsync, args=(dbname,))
-           return self.current_thread
-
-       '''
-       in this mode , it will be extra threading control start or stop dnode , insert will always going with not care follower online or alive
-       '''
-       for loop in range(self.loop_restart_times):
-           db_name = "unsync_db_{}".format(loop)
-           stablename = 'stable_{}'.format(loop)
-           self.create_database(dbname = db_name ,replica_num= self.replica , vgroup_nums= 1)
-           self.create_stable_insert_datas(dbname = db_name , stablename = stablename , tb_nums= 10 ,row_nums= 10 )
-
-           tdLog.info(" ===== restart dnode of database {} in an unsync threading ===== ".format(db_name))
-
-           # create sync threading and start it
-           self.current_thread = _create_threading(db_name)
-           self.current_thread.start()
-
-           # check rows of datas
-           self.check_insert_rows(db_name ,stablename ,tb_nums=10 , row_nums= 10 ,append_rows=0)
-
-           self.current_thread.join()
+       tdSql.execute(" drop database if exists {} ".format(dbname))
+       tdSql.execute(" create database {} replica {} vgroups {}".format(dbname , self.replica , self.vgroups))
+
+       # start insert datas using taosBenchmark ,expect insert 10000 rows
+       self.current_thread = threading.Thread(target=self.start_benchmark_inserts, args=(dbname,json_file))
+       self.current_thread.start()
+       tdSql.query(" show databases ")
+
+       # make sure create database ok
+       while (tdSql.queryRows!=3):
+           time.sleep(0.5)
+           tdSql.query(" show databases ")
+
+       # # make sure create stable ok
+       tdSql.query(" show {}.stables ".format(dbname))
+       while (tdSql.queryRows!=1):
+           time.sleep(0.5)
+           tdSql.query(" show {}.stables ".format(dbname))
+
+       # stop leader of database when insert 10% rows
+       # os.system("taos -s 'show databases';")
+       tdSql.query(" select count(*) from {}.{} ".format(dbname,"stb1"))
+
+       while not tdSql.queryResult:
+           tdSql.query(" select count(*) from {}.{} ".format(dbname,"stb1"))
+       tdLog.debug(" === current insert {} rows in database {} === ".format(tdSql.queryResult[0][0] , dbname))
+
+       while (tdSql.queryResult[0][0] < total_rows/10):
+           if tdSql.queryResult:
+               tdLog.debug(" === current insert {} rows in database {} === ".format(tdSql.queryResult[0][0] , dbname))
+           time.sleep(0.01)
+           tdSql.query(" select count(*) from {}.{} ".format(dbname,"stb1"))
+
+       tdLog.debug(" === database {} has write {} rows at least ====".format(dbname,total_rows/10))
+
+       self.stop_dnode_id = self._get_stop_dnode_id(dbname)
+
+       # prepare stop leader of database
+       before_leader_infos = self.get_leader_infos(dbname)
+       tdDnodes[self.stop_dnode_id-1].stoptaosd()
+       # self.current_thread.join()
+       after_leader_infos = self.get_leader_infos(dbname)
+
+       start = time.time()
+       revote_status = self.check_revote_leader_success(dbname ,before_leader_infos , after_leader_infos)
+       while not revote_status:
+           after_leader_infos = self.get_leader_infos(dbname)
+           revote_status = self.check_revote_leader_success(dbname ,before_leader_infos , after_leader_infos)
+       end = time.time()
+       time_cost = end - start
+       tdLog.debug(" ==== revote leader of database {} cost time {} ====".format(dbname , time_cost))
+
+       self.current_thread.join()
+
+       tdDnodes[self.stop_dnode_id-1].starttaosd()
+       self.wait_start_dnode_OK()
+
+       tdSql.query(" select count(*) from {}.{} ".format(dbname,"stb1"))
+       tdLog.debug(" ==== expected insert {} rows of database {} , really is {}".format(total_rows, dbname , tdSql.queryResult[0][0]))
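The new `stop_leader_when_Benchmark_inserts` drives writes from a background thread while the main thread kills the leader, times the re-election, and only joins the writer afterwards. A stripped-down sketch of that shape (hypothetical command string and names, not the test's exact code):

```python
import subprocess
import threading
import time

def run_writer(cmd):
    """Run an external write workload; result checking is left to the caller."""
    subprocess.run(cmd, shell=True)

writer = threading.Thread(target=run_writer, args=("taosBenchmark -f insert.json",))
writer.start()            # writes proceed concurrently with the fault injection
t0 = time.time()
# ... stop the leader dnode here and poll until a new leader is elected ...
elapsed = time.time() - t0  # re-election cost, as the test logs via tdLog.debug
writer.join()             # only then compare the final row count to expectations
```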
    def run(self):

        # basic insert and check of cluster
-       self.check_setup_cluster_status()
+       # self.check_setup_cluster_status()
-       self.create_db_check_vgroups()
+       json = os.path.dirname(__file__) + '/insert_10W_rows.json'
-       self.sync_run_case()
+       self.stop_leader_when_Benchmark_inserts('db_1' , 100000 ,json)
-       # self.unsync_run_case()
+       # tdLog.notice( " ===== start insert 100W rows ==== ")
+       # json = os.path.dirname(__file__) + '/insert_100W_rows.json'
+       # self.stop_leader_when_Benchmark_inserts('db_2' , 1000000 ,json)

    def stop(self):
        tdSql.close()
@@ -114,9 +114,9 @@ class TDTestCase:
        elif isinstance(data, str):
            tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
                (sql, row, col, tdSql.queryResult[row][col], data))
-       elif isinstance(data, datetime.date):
-           tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
-               (sql, row, col, tdSql.queryResult[row][col], data))
+       # elif isinstance(data, datetime.date):
+       #     tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
+       #         (sql, row, col, tdSql.queryResult[row][col], data))
        elif isinstance(data, float):
            tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
                (sql, row, col, tdSql.queryResult[row][col], data))

@@ -163,14 +163,14 @@ class TDTestCase:
                is_leader=True

        if count==1 and is_leader:
-           tdLog.info("===== depoly cluster success with 1 mnode as leader =====")
+           tdLog.notice("===== depoly cluster success with 1 mnode as leader =====")
        else:
            tdLog.exit("===== depoly cluster fail with 1 mnode as leader =====")

        for k ,v in self.dnode_list.items():
            if k == mnode_name:
                if v[3]==0:
-                   tdLog.info("===== depoly cluster mnode only success at {} , support_vnodes is {} ".format(mnode_name,v[3]))
+                   tdLog.notice("===== depoly cluster mnode only success at {} , support_vnodes is {} ".format(mnode_name,v[3]))
                else:
                    tdLog.exit("===== depoly cluster mnode only fail at {} , support_vnodes is {} ".format(mnode_name,v[3]))
            else:

@@ -213,7 +213,7 @@ class TDTestCase:

        for k , v in vgroups_infos.items():
            if len(v) ==1 and v[0]=="leader":
-               tdLog.info(" === create database replica only 1 role leader check success of vgroup_id {} ======".format(k))
+               tdLog.notice(" === create database replica only 1 role leader check success of vgroup_id {} ======".format(k))
            else:
                tdLog.exit(" === create database replica only 1 role leader check fail of vgroup_id {} ======".format(k))

@@ -221,7 +221,7 @@ class TDTestCase:
        drop_db_sql = "drop database if exists {}".format(dbname)
        create_db_sql = "create database {} replica {} vgroups {}".format(dbname,replica_num,vgroup_nums)

-       tdLog.info(" ==== create database {} and insert rows begin =====".format(dbname))
+       tdLog.notice(" ==== create database {} and insert rows begin =====".format(dbname))
        tdSql.execute(drop_db_sql)
        tdSql.execute(create_db_sql)
        tdSql.execute("use {}".format(dbname))

@@ -244,7 +244,7 @@ class TDTestCase:
                ts = self.ts + self.ts_step*row_num
                tdSql.execute(f"insert into {sub_tbname} values ({ts}, {row_num} ,{row_num}, 10 ,1 ,{row_num} ,{row_num},true,'bin_{row_num}','nchar_{row_num}',now) ")

-       tdLog.info(" ==== stable {} insert rows execute end =====".format(stablename))
+       tdLog.notice(" ==== stable {} insert rows execute end =====".format(stablename))

    def append_rows_of_exists_tables(self,dbname ,stablename , tbname , append_nums ):

@@ -253,7 +253,7 @@ class TDTestCase:
        for row_num in range(append_nums):
            tdSql.execute(f"insert into {tbname} values (now, {row_num} ,{row_num}, 10 ,1 ,{row_num} ,{row_num},true,'bin_{row_num}','nchar_{row_num}',now) ")
            # print(f"insert into {tbname} values (now, {row_num} ,{row_num}, 10 ,1 ,{row_num} ,{row_num},true,'bin_{row_num}','nchar_{row_num}',now) ")
-       tdLog.info(" ==== append new rows of table {} belongs to stable {} execute end =====".format(tbname,stablename))
+       tdLog.notice(" ==== append new rows of table {} belongs to stable {} execute end =====".format(tbname,stablename))
        os.system("taos -s 'select count(*) from {}.{}';".format(dbname,stablename))

    def check_insert_rows(self, dbname, stablename , tb_nums , row_nums, append_rows):

@@ -280,7 +280,7 @@ class TDTestCase:
            time.sleep(0.1)
            tdSql.query("select count(*) from {}.{}".format(dbname,stablename))
            status_OK = self.mycheckData("select count(*) from {}.{}".format(dbname,stablename) ,0 , 0 , tb_nums*row_nums+append_rows)
-           tdLog.info(" ==== check insert rows first failed , this is {}_th retry check rows of database {}".format(count , dbname))
+           tdLog.notice(" ==== check insert rows first failed , this is {}_th retry check rows of database {} ====".format(count , dbname))
            count += 1

@@ -301,7 +301,7 @@ class TDTestCase:
            time.sleep(0.1)
            tdSql.query("select distinct tbname from {}.{}".format(dbname,stablename))
            status_OK = self.mycheckRows("select distinct tbname from {}.{}".format(dbname,stablename) ,tb_nums)
-           tdLog.info(" ==== check insert tbnames first failed , this is {}_th retry check tbnames of database {}".format(count , dbname))
+           tdLog.notice(" ==== check insert tbnames first failed , this is {}_th retry check tbnames of database {}".format(count , dbname))
            count += 1

    def _get_stop_dnode_id(self,dbname):

@@ -340,8 +340,8 @@ class TDTestCase:
        while status !="offline":
            time.sleep(0.1)
            status = _get_status()
-           # tdLog.info("==== stop dnode has not been stopped , endpoint is {}".format(self.stop_dnode))
+           # tdLog.notice("==== stop dnode has not been stopped , endpoint is {}".format(self.stop_dnode))
-       tdLog.info("==== stop_dnode has stopped , id is {}".format(self.stop_dnode_id))
+       tdLog.notice("==== stop_dnode has stopped , id is {}".format(self.stop_dnode_id))

    def wait_start_dnode_OK(self):

@@ -362,8 +362,8 @@ class TDTestCase:
        while status !="ready":
            time.sleep(0.1)
            status = _get_status()
-           # tdLog.info("==== stop dnode has not been stopped , endpoint is {}".format(self.stop_dnode))
+           # tdLog.notice("==== stop dnode has not been stopped , endpoint is {}".format(self.stop_dnode))
-       tdLog.info("==== stop_dnode has restart , id is {}".format(self.stop_dnode_id))
+       tdLog.notice("==== stop_dnode has restart , id is {}".format(self.stop_dnode_id))

    def get_leader_infos(self ,dbname):

@@ -390,10 +390,10 @@ class TDTestCase:
                    if role==self.stop_dnode_id:

                        if vgroup_info[ind+1] =="offline" and "leader" in vgroup_info:
-                           tdLog.info(" === revote leader ok , leader is {} now ====".format(vgroup_info[list(vgroup_info).index("leader")-1]))
+                           tdLog.notice(" === revote leader ok , leader is {} now ====".format(vgroup_info[list(vgroup_info).index("leader")-1]))
                            check_status = True
                        elif vgroup_info[ind+1] !="offline":
-                           tdLog.info(" === dnode {} should be offline ".format(self.stop_dnode_id))
+                           tdLog.notice(" === dnode {} should be offline ".format(self.stop_dnode_id))
                        else:
                            continue
                    break

@@ -410,7 +410,7 @@ class TDTestCase:
            else:
                continue
        if port:
-           tdLog.info(" ==== dnode {} will be force stop by kill -9 ====".format(dnode_id))
+           tdLog.notice(" ==== dnode {} will be force stop by kill -9 ====".format(dnode_id))
            psCmd = '''netstat -anp|grep -w LISTEN|grep -w %s |grep -o "LISTEN.*"|awk '{print $2}'|cut -d/ -f1|head -n1''' %(port)
            processID = subprocess.check_output(
                psCmd, shell=True).decode("utf-8")
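Several of these files force-stop a dnode by resolving its listening port to a PID with the `netstat | grep | awk` pipeline above and then sending `kill -9`. A hedged sketch of the same lookup in plain Python (assumes a Linux host with `netstat` installed, as the test does; helper name is hypothetical):

```python
import os
import signal
import subprocess

def pid_listening_on(port: int):
    """Return the PID listening on `port`, or None; mirrors the netstat pipeline."""
    cmd = ("netstat -anp | grep -w LISTEN | grep -w %s | grep -o 'LISTEN.*' "
           "| awk '{print $2}' | cut -d/ -f1 | head -n1") % port
    out = subprocess.check_output(cmd, shell=True).decode("utf-8").strip()
    return int(out) if out.isdigit() else None

pid = pid_listening_on(6030)
if pid:
    os.kill(pid, signal.SIGKILL)  # the equivalent of `kill -9 <pid>`
```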
@@ -457,18 +457,18 @@ class TDTestCase:

            if revote_status:
                tbname = "sub_{}_{}".format(stablename , 0)
-               tdLog.info(" ==== begin append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
+               tdLog.notice(" ==== begin append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
                self.append_rows_of_exists_tables(db_name ,stablename , tbname , 100 )
-               tdLog.info(" ==== check append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
+               tdLog.notice(" ==== check append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
                self.check_insert_rows(db_name ,stablename ,tb_nums=10 , row_nums= 10 ,append_rows=100)

                # create new stables
-               tdLog.info(" ==== create new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
+               tdLog.notice(" ==== create new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
                self.create_stable_insert_datas(dbname = db_name , stablename = 'new_stb1' , tb_nums= 10 ,row_nums= 10 )
-               tdLog.info(" ==== check new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
+               tdLog.notice(" ==== check new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
                self.check_insert_rows(db_name ,'new_stb1' ,tb_nums=10 , row_nums= 10 ,append_rows=0)
            else:
-               tdLog.info("===== leader of database {} is not ok , append rows fail =====".format(db_name))
+               tdLog.notice("===== leader of database {} is not ok , append rows fail =====".format(db_name))

            # begin start dnode
            start = time.time()

@@ -480,9 +480,9 @@ class TDTestCase:
                tdLog.exit(" ==== restart dnode {} cost too much time , please check ====".format(self.stop_dnode_id))

            # create new stables again
-           tdLog.info(" ==== create new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
+           tdLog.notice(" ==== create new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
            self.create_stable_insert_datas(dbname = db_name , stablename = 'new_stb2' , tb_nums= 10 ,row_nums= 10 )
-           tdLog.info(" ==== check new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
+           tdLog.notice(" ==== check new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
            self.check_insert_rows(db_name ,'new_stb2' ,tb_nums=10 , row_nums= 10 ,append_rows=0)

    def unsync_run_case(self):

@@ -509,21 +509,21 @@ class TDTestCase:
                revote_status = self.check_revote_leader_success(db_name ,before_leader_infos , after_leader_infos)

            tbname = "sub_{}_{}".format(stablename , 0)
-           tdLog.info(" ==== begin append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
+           tdLog.notice(" ==== begin append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
            self.append_rows_of_exists_tables(db_name ,stablename , tbname , 100 )
-           tdLog.info(" ==== check append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
+           tdLog.notice(" ==== check append rows of exists table {} when dnode {} offline ====".format(tbname , self.stop_dnode_id))
            self.check_insert_rows(db_name ,stablename ,tb_nums=10 , row_nums= 10 ,append_rows=100)

            # create new stables
-           tdLog.info(" ==== create new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
+           tdLog.notice(" ==== create new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
            self.create_stable_insert_datas(dbname = db_name , stablename = 'new_stb1' , tb_nums= 10 ,row_nums= 10 )
-           tdLog.info(" ==== check new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
+           tdLog.notice(" ==== check new stable {} when dnode {} offline ====".format('new_stb1' , self.stop_dnode_id))
            self.check_insert_rows(db_name ,'new_stb1' ,tb_nums=10 , row_nums= 10 ,append_rows=0)

            # create new stables again
-           tdLog.info(" ==== create new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
+           tdLog.notice(" ==== create new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
            self.create_stable_insert_datas(dbname = db_name , stablename = 'new_stb2' , tb_nums= 10 ,row_nums= 10 )
-           tdLog.info(" ==== check new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
+           tdLog.notice(" ==== check new stable {} when dnode {} restart ====".format('new_stb2' , self.stop_dnode_id))
            self.check_insert_rows(db_name ,'new_stb2' ,tb_nums=10 , row_nums= 10 ,append_rows=0)

@@ -551,7 +551,7 @@ class TDTestCase:
            self.create_database(dbname = db_name ,replica_num= self.replica , vgroup_nums= 1)
            self.create_stable_insert_datas(dbname = db_name , stablename = stablename , tb_nums= 10 ,row_nums= 10 )

-           tdLog.info(" ===== restart dnode of database {} in an unsync threading ===== ".format(db_name))
+           tdLog.notice(" ===== restart dnode of database {} in an unsync threading ===== ".format(db_name))

            # create sync threading and start it
            self.current_thread = _create_threading(db_name)
@@ -0,0 +1,416 @@
+# author : wenzhouwww
+from ssl import ALERT_DESCRIPTION_CERTIFICATE_UNOBTAINABLE
+import taos
+import sys
+import time
+import os
+
+from util.log import *
+from util.sql import *
+from util.cases import *
+from util.dnodes import TDDnodes
+from util.dnodes import TDDnode
+from util.cluster import *
+
+import datetime
+import inspect
+import time
+import socket
+import subprocess
+import threading
+sys.path.append(os.path.dirname(__file__))
+
+class TDTestCase:
+    def init(self,conn ,logSql):
+        tdLog.debug(f"start to excute {__file__}")
+        tdSql.init(conn.cursor())
+        self.host = socket.gethostname()
+        self.mnode_list = {}
+        self.dnode_list = {}
+        self.ts = 1483200000000
+        self.ts_step =1000
+        self.db_name ='testdb'
+        self.replica = 3
+        self.vgroups = 1
+        self.tb_nums = 10
+        self.row_nums = 100
+        self.stop_dnode_id = None
+        self.loop_restart_times = 5
+        self.thread_list = []
+        self.max_restart_time = 10
+        self.try_check_times = 10
+        self.query_times = 100
+
+    def getBuildPath(self):
+        selfPath = os.path.dirname(os.path.realpath(__file__))
+        if ("community" in selfPath):
+            projPath = selfPath[:selfPath.find("community")]
+        else:
+            projPath = selfPath[:selfPath.find("tests")]
+
+        for root, dirs, files in os.walk(projPath):
+            if ("taosd" in files):
+                rootRealPath = os.path.dirname(os.path.realpath(root))
+                if ("packaging" not in rootRealPath):
+                    buildPath = root[:len(root) - len("/build/bin")]
+                    break
+        return buildPath
+
+    def check_setup_cluster_status(self):
+        tdSql.query("show mnodes")
+        for mnode in tdSql.queryResult:
+            name = mnode[1]
+            info = mnode
+            self.mnode_list[name] = info
+
+        tdSql.query("show dnodes")
+        for dnode in tdSql.queryResult:
+            name = dnode[1]
+            info = dnode
+            self.dnode_list[name] = info
+
+        count = 0
+        is_leader = False
+        mnode_name = ''
+        for k,v in self.mnode_list.items():
+            count +=1
+            # only for 1 mnode
+            mnode_name = k
+
+            if v[2] =='leader':
+                is_leader=True
+
+        if count==1 and is_leader:
+            tdLog.notice("===== depoly cluster success with 1 mnode as leader =====")
+        else:
+            tdLog.exit("===== depoly cluster fail with 1 mnode as leader =====")
+
+        for k ,v in self.dnode_list.items():
+            if k == mnode_name:
+                if v[3]==0:
+                    tdLog.notice("===== depoly cluster mnode only success at {} , support_vnodes is {} ".format(mnode_name,v[3]))
+                else:
+                    tdLog.exit("===== depoly cluster mnode only fail at {} , support_vnodes is {} ".format(mnode_name,v[3]))
+            else:
+                continue
+
+    def create_database(self, dbname, replica_num ,vgroup_nums ):
+        drop_db_sql = "drop database if exists {}".format(dbname)
+        create_db_sql = "create database {} replica {} vgroups {}".format(dbname,replica_num,vgroup_nums)
+
+        tdLog.notice(" ==== create database {} and insert rows begin =====".format(dbname))
+        tdSql.execute(drop_db_sql)
+        tdSql.execute(create_db_sql)
+        tdSql.execute("use {}".format(dbname))
+
+    def create_stable_insert_datas(self,dbname ,stablename , tb_nums , row_nums):
+        tdSql.execute("use {}".format(dbname))
+        tdSql.execute(
+            '''create table {}
+            (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(32),c9 nchar(32), c10 timestamp)
+            tags (t1 int)
+            '''.format(stablename)
+        )
+
+        for i in range(tb_nums):
+            sub_tbname = "sub_{}_{}".format(stablename,i)
+            tdSql.execute("create table {} using {} tags({})".format(sub_tbname, stablename ,i))
+            # insert datas about new database
+
+            for row_num in range(row_nums):
+                ts = self.ts + self.ts_step*row_num
+                tdSql.execute(f"insert into {sub_tbname} values ({ts}, {row_num} ,{row_num}, 10 ,1 ,{row_num} ,{row_num},true,'bin_{row_num}','nchar_{row_num}',now) ")
+
+        tdLog.notice(" ==== stable {} insert rows execute end =====".format(stablename))
+
+    def append_rows_of_exists_tables(self,dbname ,stablename , tbname , append_nums ):
+        tdSql.execute("use {}".format(dbname))
+
+        for row_num in range(append_nums):
+            tdSql.execute(f"insert into {tbname} values (now, {row_num} ,{row_num}, 10 ,1 ,{row_num} ,{row_num},true,'bin_{row_num}','nchar_{row_num}',now) ")
+            # print(f"insert into {tbname} values (now, {row_num} ,{row_num}, 10 ,1 ,{row_num} ,{row_num},true,'bin_{row_num}','nchar_{row_num}',now) ")
+        tdLog.notice(" ==== append new rows of table {} belongs to stable {} execute end =====".format(tbname,stablename))
+        os.system("taos -s 'select count(*) from {}.{}';".format(dbname,stablename))
+
+    def check_insert_rows(self, dbname, stablename , tb_nums , row_nums, append_rows):
+        tdSql.execute("use {}".format(dbname))
+
+        tdSql.query("select count(*) from {}.{}".format(dbname,stablename))
+
+        while not tdSql.queryResult:
+            time.sleep(0.1)
+            tdSql.query("select count(*) from {}.{}".format(dbname,stablename))
+
+        status_OK = self.mycheckData("select count(*) from {}.{}".format(dbname,stablename) ,0 , 0 , tb_nums*row_nums+append_rows)
+
+        count = 0
+        while not status_OK :
+            if count > self.try_check_times:
+                os.system("taos -s ' show {}.vgroups; '".format(dbname))
+                tdLog.exit(" ==== check insert rows failed after {} try check {} times of database {}".format(count , self.try_check_times ,dbname))
+                break
+            time.sleep(0.1)
+            tdSql.query("select count(*) from {}.{}".format(dbname,stablename))
+            while not tdSql.queryResult:
+                time.sleep(0.1)
+                tdSql.query("select count(*) from {}.{}".format(dbname,stablename))
+            status_OK = self.mycheckData("select count(*) from {}.{}".format(dbname,stablename) ,0 , 0 , tb_nums*row_nums+append_rows)
+            tdLog.notice(" ==== check insert rows first failed , this is {}_th retry check rows of database {}".format(count , dbname))
+            count += 1
+
+        tdSql.query("select distinct tbname from {}.{}".format(dbname,stablename))
+        while not tdSql.queryResult:
+            time.sleep(0.1)
+            tdSql.query("select distinct tbname from {}.{}".format(dbname,stablename))
+        status_OK = self.mycheckRows("select distinct tbname from {}.{}".format(dbname,stablename) ,tb_nums)
+        count = 0
+        while not status_OK :
+            if count > self.try_check_times:
+                os.system("taos -s ' show {}.vgroups;'".format(dbname))
+                tdLog.exit(" ==== check insert rows failed after {} try check {} times of database {}".format(count , self.try_check_times ,dbname))
+                break
+            time.sleep(0.1)
+            tdSql.query("select distinct tbname from {}.{}".format(dbname,stablename))
+            while not tdSql.queryResult:
+                time.sleep(0.1)
+                tdSql.query("select distinct tbname from {}.{}".format(dbname,stablename))
+            status_OK = self.mycheckRows("select distinct tbname from {}.{}".format(dbname,stablename) ,tb_nums)
+            tdLog.notice(" ==== check insert tbnames first failed , this is {}_th retry check tbnames of database {}".format(count , dbname))
+            count += 1
+
+    def _get_stop_dnode_id(self,dbname):
+        tdSql.query("show {}.vgroups".format(dbname))
+        vgroup_infos = tdSql.queryResult
+        for vgroup_info in vgroup_infos:
+            leader_infos = vgroup_info[3:-4]
+            # print(vgroup_info)
+            for ind ,role in enumerate(leader_infos):
+                if role =='follower':
+                    # print(ind,leader_infos)
+                    self.stop_dnode_id = leader_infos[ind-1]
+                    break
+
+        return self.stop_dnode_id
+
+    def wait_stop_dnode_OK(self):
+
+        def _get_status():
+            newTdSql=tdCom.newTdSql()
+
+            status = ""
+            newTdSql.query("show dnodes")
+            dnode_infos = newTdSql.queryResult
+            for dnode_info in dnode_infos:
+                id = dnode_info[0]
+                dnode_status = dnode_info[4]
+                if id == self.stop_dnode_id:
+                    status = dnode_status
+                    break
+            return status
+
+        status = _get_status()
+        while status !="offline":
+            time.sleep(0.1)
+            status = _get_status()
+            # tdLog.notice("==== stop dnode has not been stopped , endpoint is {}".format(self.stop_dnode))
+        tdLog.notice("==== stop_dnode has stopped , id is {}".format(self.stop_dnode_id))
+
+    def wait_start_dnode_OK(self):
+
+        def _get_status():
+            newTdSql=tdCom.newTdSql()
+            status = ""
+            newTdSql.query("show dnodes")
+            dnode_infos = newTdSql.queryResult
+            for dnode_info in dnode_infos:
+                id = dnode_info[0]
+                dnode_status = dnode_info[4]
+                if id == self.stop_dnode_id:
+                    status = dnode_status
+                    break
+            return status
+
+        status = _get_status()
+        while status !="ready":
+            time.sleep(0.1)
+            status = _get_status()
+            # tdLog.notice("==== stop dnode has not been stopped , endpoint is {}".format(self.stop_dnode))
+        tdLog.notice("==== stop_dnode has restart , id is {}".format(self.stop_dnode_id))
+
+    def _parse_datetime(self,timestr):
+        try:
+            return datetime.datetime.strptime(timestr, '%Y-%m-%d %H:%M:%S.%f')
|
||||||
|
except ValueError:
|
||||||
|
pass
|
||||||
|
try:
|
||||||
|
return datetime.datetime.strptime(timestr, '%Y-%m-%d %H:%M:%S')
|
||||||
|
except ValueError:
|
||||||
|
pass
|
||||||
|
|
||||||
|
def mycheckRowCol(self, sql, row, col):
|
||||||
|
caller = inspect.getframeinfo(inspect.stack()[2][0])
|
||||||
|
if row < 0:
|
||||||
|
args = (caller.filename, caller.lineno, sql, row)
|
||||||
|
tdLog.exit("%s(%d) failed: sql:%s, row:%d is smaller than zero" % args)
|
||||||
|
if col < 0:
|
||||||
|
args = (caller.filename, caller.lineno, sql, row)
|
||||||
|
tdLog.exit("%s(%d) failed: sql:%s, col:%d is smaller than zero" % args)
|
||||||
|
if row > tdSql.queryRows:
|
||||||
|
args = (caller.filename, caller.lineno, sql, row, tdSql.queryRows)
|
||||||
|
tdLog.exit("%s(%d) failed: sql:%s, row:%d is larger than queryRows:%d" % args)
|
||||||
|
if col > tdSql.queryCols:
|
||||||
|
args = (caller.filename, caller.lineno, sql, col, tdSql.queryCols)
|
||||||
|
tdLog.exit("%s(%d) failed: sql:%s, col:%d is larger than queryCols:%d" % args)
|
||||||
|
|
||||||
|
def mycheckData(self, sql ,row, col, data):
|
||||||
|
check_status = True
|
||||||
|
self.mycheckRowCol(sql ,row, col)
|
||||||
|
if tdSql.queryResult[row][col] != data:
|
||||||
|
if tdSql.cursor.istype(col, "TIMESTAMP"):
|
||||||
|
# suppose user want to check nanosecond timestamp if a longer data passed
|
||||||
|
if (len(data) >= 28):
|
||||||
|
if pd.to_datetime(tdSql.queryResult[row][col]) == pd.to_datetime(data):
|
||||||
|
tdLog.info("sql:%s, row:%d col:%d data:%d == expect:%s" %
|
||||||
|
(sql, row, col, tdSql.queryResult[row][col], data))
|
||||||
|
else:
|
||||||
|
if tdSql.queryResult[row][col] == self._parse_datetime(data):
|
||||||
|
tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
|
||||||
|
(sql, row, col, tdSql.queryResult[row][col], data))
|
||||||
|
return
|
||||||
|
|
||||||
|
if str(tdSql.queryResult[row][col]) == str(data):
|
||||||
|
tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
|
||||||
|
(sql, row, col, tdSql.queryResult[row][col], data))
|
||||||
|
return
|
||||||
|
elif isinstance(data, float) and abs(tdSql.queryResult[row][col] - data) <= 0.000001:
|
||||||
|
tdLog.info("sql:%s, row:%d col:%d data:%f == expect:%f" %
|
||||||
|
(sql, row, col, tdSql.queryResult[row][col], data))
|
||||||
|
return
|
||||||
|
else:
|
||||||
|
caller = inspect.getframeinfo(inspect.stack()[1][0])
|
||||||
|
args = (caller.filename, caller.lineno, sql, row, col, tdSql.queryResult[row][col], data)
|
||||||
|
tdLog.info("%s(%d) failed: sql:%s row:%d col:%d data:%s != expect:%s" % args)
|
||||||
|
|
||||||
|
check_status = False
|
||||||
|
|
||||||
|
if data is None:
|
||||||
|
tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
|
||||||
|
(sql, row, col, tdSql.queryResult[row][col], data))
|
||||||
|
elif isinstance(data, str):
|
||||||
|
tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
|
||||||
|
(sql, row, col, tdSql.queryResult[row][col], data))
|
||||||
|
# elif isinstance(data, datetime.date):
|
||||||
|
# tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
|
||||||
|
# (sql, row, col, tdSql.queryResult[row][col], data))
|
||||||
|
elif isinstance(data, float):
|
||||||
|
tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
|
||||||
|
(sql, row, col, tdSql.queryResult[row][col], data))
|
||||||
|
else:
|
||||||
|
tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%d" %
|
||||||
|
(sql, row, col, tdSql.queryResult[row][col], data))
|
||||||
|
|
||||||
|
return check_status
|
||||||
|
|
||||||
|
def mycheckRows(self, sql, expectRows):
|
||||||
|
check_status = True
|
||||||
|
if len(tdSql.queryResult) == expectRows:
|
||||||
|
tdLog.info("sql:%s, queryRows:%d == expect:%d" % (sql, len(tdSql.queryResult), expectRows))
|
||||||
|
return True
|
||||||
|
else:
|
||||||
|
caller = inspect.getframeinfo(inspect.stack()[1][0])
|
||||||
|
args = (caller.filename, caller.lineno, sql, len(tdSql.queryResult), expectRows)
|
||||||
|
tdLog.info("%s(%d) failed: sql:%s, queryRows:%d != expect:%d" % args)
|
||||||
|
check_status = False
|
||||||
|
return check_status
|
||||||
|
|
||||||
|
|
||||||
|
def force_stop_dnode(self, dnode_id ):
|
||||||
|
|
||||||
|
tdSql.query("show dnodes")
|
||||||
|
port = None
|
||||||
|
for dnode_info in tdSql.queryResult:
|
||||||
|
if dnode_id == dnode_info[0]:
|
||||||
|
port = dnode_info[1].split(":")[-1]
|
||||||
|
break
|
||||||
|
else:
|
||||||
|
continue
|
||||||
|
if port:
|
||||||
|
tdLog.notice(" ==== dnode {} will be force stop by kill -9 ====".format(dnode_id))
|
||||||
|
psCmd = '''netstat -anp|grep -w LISTEN|grep -w %s |grep -o "LISTEN.*"|awk '{print $2}'|cut -d/ -f1|head -n1''' %(port)
|
||||||
|
processID = subprocess.check_output(
|
||||||
|
psCmd, shell=True).decode("utf-8")
|
||||||
|
ps_kill_taosd = ''' kill -9 {} '''.format(processID)
|
||||||
|
# print(ps_kill_taosd)
|
||||||
|
os.system(ps_kill_taosd)
|
||||||
|
|
||||||
|
def basic_query_task(self,dbname ,stablename):
|
||||||
|
|
||||||
|
sql = "select * from {}.{} ;".format(dbname , stablename)
|
||||||
|
|
||||||
|
count = 0
|
||||||
|
while count < self.query_times:
|
||||||
|
os.system(''' taos -s '{}' >>/dev/null '''.format(sql))
|
||||||
|
count += 1
|
||||||
|
|
||||||
|
def multi_thread_query_task(self, thread_nums ,dbname , stablename ):
|
||||||
|
|
||||||
|
for i in range(thread_nums):
|
||||||
|
task = threading.Thread(target = self.basic_query_task, args=(dbname ,stablename))
|
||||||
|
self.thread_list.append(task)
|
||||||
|
|
||||||
|
for thread in self.thread_list:
|
||||||
|
|
||||||
|
thread.start()
|
||||||
|
return self.thread_list
|
||||||
|
|
||||||
|
|
||||||
|
def stop_follower_when_query_going(self):
|
||||||
|
|
||||||
|
tdDnodes = cluster.dnodes
|
||||||
|
self.create_database(dbname = self.db_name ,replica_num= self.replica , vgroup_nums= 1)
|
||||||
|
self.create_stable_insert_datas(dbname = self.db_name , stablename = "stb1" , tb_nums= self.tb_nums ,row_nums= self.row_nums)
|
||||||
|
|
||||||
|
# let query task start
|
||||||
|
self.thread_list = self.multi_thread_query_task(10 ,self.db_name ,'stb1' )
|
||||||
|
|
||||||
|
# force stop follower
|
||||||
|
for loop in range(self.loop_restart_times):
|
||||||
|
tdLog.debug(" ==== this is {}_th restart follower of database {} ==== ".format(loop ,self.db_name))
|
||||||
|
self.stop_dnode_id = self._get_stop_dnode_id(self.db_name)
|
||||||
|
tdDnodes[self.stop_dnode_id-1].stoptaosd()
|
||||||
|
self.wait_stop_dnode_OK()
|
||||||
|
|
||||||
|
start = time.time()
|
||||||
|
tdDnodes[self.stop_dnode_id-1].starttaosd()
|
||||||
|
self.wait_start_dnode_OK()
|
||||||
|
end = time.time()
|
||||||
|
time_cost = int(end-start)
|
||||||
|
|
||||||
|
if time_cost > self.max_restart_time:
|
||||||
|
tdLog.exit(" ==== restart dnode {} cost too much time , please check ====".format(self.stop_dnode_id))
|
||||||
|
|
||||||
|
for thread in self.thread_list:
|
||||||
|
thread.join()
|
||||||
|
|
||||||
|
|
||||||
|
def run(self):
|
||||||
|
|
||||||
|
# basic check of cluster
|
||||||
|
self.check_setup_cluster_status()
|
||||||
|
self.stop_follower_when_query_going()
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
def stop(self):
|
||||||
|
tdSql.close()
|
||||||
|
tdLog.success(f"{__file__} successfully executed")
|
||||||
|
|
||||||
|
tdCases.addLinux(__file__, TDTestCase())
|
||||||
|
tdCases.addWindows(__file__, TDTestCase())
|
|
@ -0,0 +1,416 @@
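# Note: this case mirrors the previous test file, except that the follower
# dnode is stopped with kill -9 via force_stop_dnode() rather than a graceful
# stoptaosd(), so it exercises query recovery from an unclean shutdown.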
# author : wenzhouwww
import taos
import sys
import time
import os

from util.log import *
from util.sql import *
from util.cases import *
from util.dnodes import TDDnodes
from util.dnodes import TDDnode
from util.cluster import *

import datetime
import inspect
import socket
import subprocess
import threading
import pandas as pd  # needed by mycheckData for timestamp comparison
sys.path.append(os.path.dirname(__file__))

class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug(f"start to execute {__file__}")
        tdSql.init(conn.cursor())
        self.host = socket.gethostname()
        self.mnode_list = {}
        self.dnode_list = {}
        self.ts = 1483200000000
        self.ts_step = 1000
        self.db_name = 'testdb'
        self.replica = 3
        self.vgroups = 1
        self.tb_nums = 10
        self.row_nums = 100
        self.stop_dnode_id = None
        self.loop_restart_times = 5
        self.thread_list = []
        self.max_restart_time = 10
        self.try_check_times = 10
        self.query_times = 100

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))
        buildPath = ""
        if ("community" in selfPath):
            projPath = selfPath[:selfPath.find("community")]
        else:
            projPath = selfPath[:selfPath.find("tests")]

        for root, dirs, files in os.walk(projPath):
            if ("taosd" in files):
                rootRealPath = os.path.dirname(os.path.realpath(root))
                if ("packaging" not in rootRealPath):
                    buildPath = root[:len(root) - len("/build/bin")]
                    break
        return buildPath

    def check_setup_cluster_status(self):
        tdSql.query("show mnodes")
        for mnode in tdSql.queryResult:
            name = mnode[1]
            info = mnode
            self.mnode_list[name] = info

        tdSql.query("show dnodes")
        for dnode in tdSql.queryResult:
            name = dnode[1]
            info = dnode
            self.dnode_list[name] = info

        count = 0
        is_leader = False
        mnode_name = ''
        for k, v in self.mnode_list.items():
            count += 1
            # only for 1 mnode
            mnode_name = k

            if v[2] == 'leader':
                is_leader = True

        if count == 1 and is_leader:
            tdLog.notice("===== deploy cluster success with 1 mnode as leader =====")
        else:
            tdLog.exit("===== deploy cluster fail with 1 mnode as leader =====")

        for k, v in self.dnode_list.items():
            if k == mnode_name:
                if v[3] == 0:
                    tdLog.notice("===== deploy cluster mnode only success at {} , support_vnodes is {} ".format(mnode_name, v[3]))
                else:
                    tdLog.exit("===== deploy cluster mnode only fail at {} , support_vnodes is {} ".format(mnode_name, v[3]))
            else:
                continue

    def create_database(self, dbname, replica_num, vgroup_nums):
        drop_db_sql = "drop database if exists {}".format(dbname)
        create_db_sql = "create database {} replica {} vgroups {}".format(dbname, replica_num, vgroup_nums)

        tdLog.notice(" ==== create database {} and insert rows begin =====".format(dbname))
        tdSql.execute(drop_db_sql)
        tdSql.execute(create_db_sql)
        tdSql.execute("use {}".format(dbname))

    def create_stable_insert_datas(self, dbname, stablename, tb_nums, row_nums):
        tdSql.execute("use {}".format(dbname))
        tdSql.execute(
            '''create table {}
            (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(32), c9 nchar(32), c10 timestamp)
            tags (t1 int)
            '''.format(stablename)
        )

        for i in range(tb_nums):
            sub_tbname = "sub_{}_{}".format(stablename, i)
            tdSql.execute("create table {} using {} tags({})".format(sub_tbname, stablename, i))
            # insert rows into the new sub table
            for row_num in range(row_nums):
                ts = self.ts + self.ts_step * row_num
                tdSql.execute(f"insert into {sub_tbname} values ({ts}, {row_num}, {row_num}, 10, 1, {row_num}, {row_num}, true, 'bin_{row_num}', 'nchar_{row_num}', now) ")

        tdLog.notice(" ==== stable {} insert rows execute end =====".format(stablename))

    def append_rows_of_exists_tables(self, dbname, stablename, tbname, append_nums):
        tdSql.execute("use {}".format(dbname))

        for row_num in range(append_nums):
            tdSql.execute(f"insert into {tbname} values (now, {row_num}, {row_num}, 10, 1, {row_num}, {row_num}, true, 'bin_{row_num}', 'nchar_{row_num}', now) ")
            # print(f"insert into {tbname} values (now, {row_num}, {row_num}, 10, 1, {row_num}, {row_num}, true, 'bin_{row_num}', 'nchar_{row_num}', now) ")
        tdLog.notice(" ==== append new rows of table {} belongs to stable {} execute end =====".format(tbname, stablename))
        os.system("taos -s 'select count(*) from {}.{}';".format(dbname, stablename))

    def check_insert_rows(self, dbname, stablename, tb_nums, row_nums, append_rows):
        tdSql.execute("use {}".format(dbname))

        tdSql.query("select count(*) from {}.{}".format(dbname, stablename))
        while not tdSql.queryResult:
            time.sleep(0.1)
            tdSql.query("select count(*) from {}.{}".format(dbname, stablename))

        status_OK = self.mycheckData("select count(*) from {}.{}".format(dbname, stablename), 0, 0, tb_nums * row_nums + append_rows)

        count = 0
        while not status_OK:
            if count > self.try_check_times:
                os.system("taos -s ' show {}.vgroups; '".format(dbname))
                tdLog.exit(" ==== check insert rows failed after {} retries (limit {}) for database {}".format(count, self.try_check_times, dbname))
                break
            time.sleep(0.1)
            tdSql.query("select count(*) from {}.{}".format(dbname, stablename))
            while not tdSql.queryResult:
                time.sleep(0.1)
                tdSql.query("select count(*) from {}.{}".format(dbname, stablename))
            status_OK = self.mycheckData("select count(*) from {}.{}".format(dbname, stablename), 0, 0, tb_nums * row_nums + append_rows)
            tdLog.notice(" ==== check insert rows failed , this is the {}-th retry for database {}".format(count, dbname))
            count += 1

        tdSql.query("select distinct tbname from {}.{}".format(dbname, stablename))
        while not tdSql.queryResult:
            time.sleep(0.1)
            tdSql.query("select distinct tbname from {}.{}".format(dbname, stablename))
        status_OK = self.mycheckRows("select distinct tbname from {}.{}".format(dbname, stablename), tb_nums)
        count = 0
        while not status_OK:
            if count > self.try_check_times:
                os.system("taos -s ' show {}.vgroups;'".format(dbname))
                tdLog.exit(" ==== check insert tbnames failed after {} retries (limit {}) for database {}".format(count, self.try_check_times, dbname))
                break
            time.sleep(0.1)
            tdSql.query("select distinct tbname from {}.{}".format(dbname, stablename))
            while not tdSql.queryResult:
                time.sleep(0.1)
                tdSql.query("select distinct tbname from {}.{}".format(dbname, stablename))
            status_OK = self.mycheckRows("select distinct tbname from {}.{}".format(dbname, stablename), tb_nums)
            tdLog.notice(" ==== check insert tbnames failed , this is the {}-th retry for database {}".format(count, dbname))
            count += 1

    def _get_stop_dnode_id(self, dbname):
        # pick the dnode that currently acts as a follower of a vgroup
        tdSql.query("show {}.vgroups".format(dbname))
        vgroup_infos = tdSql.queryResult
        for vgroup_info in vgroup_infos:
            leader_infos = vgroup_info[3:-4]
            # print(vgroup_info)
            for ind, role in enumerate(leader_infos):
                if role == 'follower':
                    # print(ind, leader_infos)
                    self.stop_dnode_id = leader_infos[ind - 1]
                    break

        return self.stop_dnode_id

    def wait_stop_dnode_OK(self):

        def _get_status():
            newTdSql = tdCom.newTdSql()
            status = ""
            newTdSql.query("show dnodes")
            dnode_infos = newTdSql.queryResult
            for dnode_info in dnode_infos:
                id = dnode_info[0]
                dnode_status = dnode_info[4]
                if id == self.stop_dnode_id:
                    status = dnode_status
                    break
            return status

        status = _get_status()
        while status != "offline":
            time.sleep(0.1)
            status = _get_status()
            # tdLog.notice("==== stop dnode has not been stopped , endpoint is {}".format(self.stop_dnode))
        tdLog.notice("==== stop_dnode has stopped , id is {}".format(self.stop_dnode_id))

    def wait_start_dnode_OK(self):

        def _get_status():
            newTdSql = tdCom.newTdSql()
            status = ""
            newTdSql.query("show dnodes")
            dnode_infos = newTdSql.queryResult
            for dnode_info in dnode_infos:
                id = dnode_info[0]
                dnode_status = dnode_info[4]
                if id == self.stop_dnode_id:
                    status = dnode_status
                    break
            return status

        status = _get_status()
        while status != "ready":
            time.sleep(0.1)
            status = _get_status()
            # tdLog.notice("==== stop dnode has not been stopped , endpoint is {}".format(self.stop_dnode))
        tdLog.notice("==== stop_dnode has restarted , id is {}".format(self.stop_dnode_id))

    def _parse_datetime(self, timestr):
        try:
            return datetime.datetime.strptime(timestr, '%Y-%m-%d %H:%M:%S.%f')
        except ValueError:
            pass
        try:
            return datetime.datetime.strptime(timestr, '%Y-%m-%d %H:%M:%S')
        except ValueError:
            pass

    def mycheckRowCol(self, sql, row, col):
        caller = inspect.getframeinfo(inspect.stack()[2][0])
        if row < 0:
            args = (caller.filename, caller.lineno, sql, row)
            tdLog.exit("%s(%d) failed: sql:%s, row:%d is smaller than zero" % args)
        if col < 0:
            args = (caller.filename, caller.lineno, sql, col)
            tdLog.exit("%s(%d) failed: sql:%s, col:%d is smaller than zero" % args)
        if row > tdSql.queryRows:
            args = (caller.filename, caller.lineno, sql, row, tdSql.queryRows)
            tdLog.exit("%s(%d) failed: sql:%s, row:%d is larger than queryRows:%d" % args)
        if col > tdSql.queryCols:
            args = (caller.filename, caller.lineno, sql, col, tdSql.queryCols)
            tdLog.exit("%s(%d) failed: sql:%s, col:%d is larger than queryCols:%d" % args)

    def mycheckData(self, sql, row, col, data):
        # like tdSql.checkData, but returns a status instead of exiting on mismatch
        check_status = True
        self.mycheckRowCol(sql, row, col)
        if tdSql.queryResult[row][col] != data:
            if tdSql.cursor.istype(col, "TIMESTAMP"):
                # suppose the user wants to check a nanosecond timestamp if longer data is passed
                if (len(data) >= 28):
                    if pd.to_datetime(tdSql.queryResult[row][col]) == pd.to_datetime(data):
                        tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
                                   (sql, row, col, tdSql.queryResult[row][col], data))
                    else:
                        check_status = False
                else:
                    if tdSql.queryResult[row][col] == self._parse_datetime(data):
                        tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
                                   (sql, row, col, tdSql.queryResult[row][col], data))
                    else:
                        check_status = False
                return check_status

            if str(tdSql.queryResult[row][col]) == str(data):
                tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
                           (sql, row, col, tdSql.queryResult[row][col], data))
                return check_status
            elif isinstance(data, float) and abs(tdSql.queryResult[row][col] - data) <= 0.000001:
                tdLog.info("sql:%s, row:%d col:%d data:%f == expect:%f" %
                           (sql, row, col, tdSql.queryResult[row][col], data))
                return check_status
            else:
                caller = inspect.getframeinfo(inspect.stack()[1][0])
                args = (caller.filename, caller.lineno, sql, row, col, tdSql.queryResult[row][col], data)
                tdLog.info("%s(%d) failed: sql:%s row:%d col:%d data:%s != expect:%s" % args)
                check_status = False

        if data is None:
            tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
                       (sql, row, col, tdSql.queryResult[row][col], data))
        elif isinstance(data, str):
            tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
                       (sql, row, col, tdSql.queryResult[row][col], data))
        # elif isinstance(data, datetime.date):
        #     tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
        #                (sql, row, col, tdSql.queryResult[row][col], data))
        elif isinstance(data, float):
            tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
                       (sql, row, col, tdSql.queryResult[row][col], data))
        else:
            tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%d" %
                       (sql, row, col, tdSql.queryResult[row][col], data))

        return check_status

    def mycheckRows(self, sql, expectRows):
        # like tdSql.checkRows, but returns a status instead of exiting on mismatch
        check_status = True
        if len(tdSql.queryResult) == expectRows:
            tdLog.info("sql:%s, queryRows:%d == expect:%d" % (sql, len(tdSql.queryResult), expectRows))
            return True
        else:
            caller = inspect.getframeinfo(inspect.stack()[1][0])
            args = (caller.filename, caller.lineno, sql, len(tdSql.queryResult), expectRows)
            tdLog.info("%s(%d) failed: sql:%s, queryRows:%d != expect:%d" % args)
            check_status = False
        return check_status

    def force_stop_dnode(self, dnode_id):
        # find the port of the target dnode, then kill -9 the taosd listening on it
        tdSql.query("show dnodes")
        port = None
        for dnode_info in tdSql.queryResult:
            if dnode_id == dnode_info[0]:
                port = dnode_info[1].split(":")[-1]
                break
        if port:
            tdLog.notice(" ==== dnode {} will be force stopped by kill -9 ====".format(dnode_id))
            psCmd = '''netstat -anp|grep -w LISTEN|grep -w %s |grep -o "LISTEN.*"|awk '{print $2}'|cut -d/ -f1|head -n1''' % (port)
            processID = subprocess.check_output(psCmd, shell=True).decode("utf-8")
            ps_kill_taosd = ''' kill -9 {} '''.format(processID)
            # print(ps_kill_taosd)
            os.system(ps_kill_taosd)

    def basic_query_task(self, dbname, stablename):
        sql = "select * from {}.{} ;".format(dbname, stablename)

        count = 0
        while count < self.query_times:
            os.system(''' taos -s '{}' >>/dev/null '''.format(sql))
            count += 1

    def multi_thread_query_task(self, thread_nums, dbname, stablename):
        for i in range(thread_nums):
            task = threading.Thread(target=self.basic_query_task, args=(dbname, stablename))
            self.thread_list.append(task)

        for thread in self.thread_list:
            thread.start()
        return self.thread_list

    def stop_follower_when_query_going(self):
        tdDnodes = cluster.dnodes
        self.create_database(dbname=self.db_name, replica_num=self.replica, vgroup_nums=1)
        self.create_stable_insert_datas(dbname=self.db_name, stablename="stb1", tb_nums=self.tb_nums, row_nums=self.row_nums)

        # let the query tasks start
        self.thread_list = self.multi_thread_query_task(10, self.db_name, 'stb1')

        # force stop a follower with kill -9 on every loop, then restart it
        for loop in range(self.loop_restart_times):
            tdLog.debug(" ==== this is the {}-th restart of a follower of database {} ==== ".format(loop, self.db_name))
            self.stop_dnode_id = self._get_stop_dnode_id(self.db_name)
            self.force_stop_dnode(self.stop_dnode_id)
            self.wait_stop_dnode_OK()

            start = time.time()
            tdDnodes[self.stop_dnode_id - 1].starttaosd()
            self.wait_start_dnode_OK()
            end = time.time()
            time_cost = int(end - start)

            if time_cost > self.max_restart_time:
                tdLog.exit(" ==== restart dnode {} cost too much time , please check ====".format(self.stop_dnode_id))

        for thread in self.thread_list:
            thread.join()

    def run(self):
        # basic check of cluster
        self.check_setup_cluster_status()
        self.stop_follower_when_query_going()

    def stop(self):
        tdSql.close()
        tdLog.success(f"{__file__} successfully executed")

tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())
@ -0,0 +1,470 @@
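# Note: this case stops the *leader* dnode of the vgroup while queries run;
# get_leader_infos() snapshots the vgroup roles before and after the stop and
# check_revote_leader_success() waits until a new leader has been elected
# before the stopped dnode is restarted.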
# author : wenzhouwww
import taos
import sys
import time
import os

from util.log import *
from util.sql import *
from util.cases import *
from util.dnodes import TDDnodes
from util.dnodes import TDDnode
from util.cluster import *

import datetime
import inspect
import socket
import subprocess
import threading
import pandas as pd  # needed by mycheckData for timestamp comparison
sys.path.append(os.path.dirname(__file__))

class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug(f"start to execute {__file__}")
        tdSql.init(conn.cursor())
        self.host = socket.gethostname()
        self.mnode_list = {}
        self.dnode_list = {}
        self.ts = 1483200000000
        self.ts_step = 1000
        self.db_name = 'testdb'
        self.replica = 3
        self.vgroups = 1
        self.tb_nums = 10
        self.row_nums = 100
        self.stop_dnode_id = None
        self.loop_restart_times = 5
        self.thread_list = []
        self.max_restart_time = 10
        self.try_check_times = 10
        self.query_times = 100

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))
        buildPath = ""
        if ("community" in selfPath):
            projPath = selfPath[:selfPath.find("community")]
        else:
            projPath = selfPath[:selfPath.find("tests")]

        for root, dirs, files in os.walk(projPath):
            if ("taosd" in files):
                rootRealPath = os.path.dirname(os.path.realpath(root))
                if ("packaging" not in rootRealPath):
                    buildPath = root[:len(root) - len("/build/bin")]
                    break
        return buildPath

    def check_setup_cluster_status(self):
        tdSql.query("show mnodes")
        for mnode in tdSql.queryResult:
            name = mnode[1]
            info = mnode
            self.mnode_list[name] = info

        tdSql.query("show dnodes")
        for dnode in tdSql.queryResult:
            name = dnode[1]
            info = dnode
            self.dnode_list[name] = info

        count = 0
        is_leader = False
        mnode_name = ''
        for k, v in self.mnode_list.items():
            count += 1
            # only for 1 mnode
            mnode_name = k

            if v[2] == 'leader':
                is_leader = True

        if count == 1 and is_leader:
            tdLog.notice("===== deploy cluster success with 1 mnode as leader =====")
        else:
            tdLog.exit("===== deploy cluster fail with 1 mnode as leader =====")

        for k, v in self.dnode_list.items():
            if k == mnode_name:
                if v[3] == 0:
                    tdLog.notice("===== deploy cluster mnode only success at {} , support_vnodes is {} ".format(mnode_name, v[3]))
                else:
                    tdLog.exit("===== deploy cluster mnode only fail at {} , support_vnodes is {} ".format(mnode_name, v[3]))
            else:
                continue

    def create_database(self, dbname, replica_num, vgroup_nums):
        drop_db_sql = "drop database if exists {}".format(dbname)
        create_db_sql = "create database {} replica {} vgroups {}".format(dbname, replica_num, vgroup_nums)

        tdLog.notice(" ==== create database {} and insert rows begin =====".format(dbname))
        tdSql.execute(drop_db_sql)
        tdSql.execute(create_db_sql)
        tdSql.execute("use {}".format(dbname))

    def create_stable_insert_datas(self, dbname, stablename, tb_nums, row_nums):
        tdSql.execute("use {}".format(dbname))
        tdSql.execute(
            '''create table {}
            (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(32), c9 nchar(32), c10 timestamp)
            tags (t1 int)
            '''.format(stablename)
        )

        for i in range(tb_nums):
            sub_tbname = "sub_{}_{}".format(stablename, i)
            tdSql.execute("create table {} using {} tags({})".format(sub_tbname, stablename, i))
            # insert rows into the new sub table
            for row_num in range(row_nums):
                ts = self.ts + self.ts_step * row_num
                tdSql.execute(f"insert into {sub_tbname} values ({ts}, {row_num}, {row_num}, 10, 1, {row_num}, {row_num}, true, 'bin_{row_num}', 'nchar_{row_num}', now) ")

        tdLog.notice(" ==== stable {} insert rows execute end =====".format(stablename))

    def append_rows_of_exists_tables(self, dbname, stablename, tbname, append_nums):
        tdSql.execute("use {}".format(dbname))

        for row_num in range(append_nums):
            tdSql.execute(f"insert into {tbname} values (now, {row_num}, {row_num}, 10, 1, {row_num}, {row_num}, true, 'bin_{row_num}', 'nchar_{row_num}', now) ")
            # print(f"insert into {tbname} values (now, {row_num}, {row_num}, 10, 1, {row_num}, {row_num}, true, 'bin_{row_num}', 'nchar_{row_num}', now) ")
        tdLog.notice(" ==== append new rows of table {} belongs to stable {} execute end =====".format(tbname, stablename))
        os.system("taos -s 'select count(*) from {}.{}';".format(dbname, stablename))

    def check_insert_rows(self, dbname, stablename, tb_nums, row_nums, append_rows):
        tdSql.execute("use {}".format(dbname))

        tdSql.query("select count(*) from {}.{}".format(dbname, stablename))
        while not tdSql.queryResult:
            time.sleep(0.1)
            tdSql.query("select count(*) from {}.{}".format(dbname, stablename))

        status_OK = self.mycheckData("select count(*) from {}.{}".format(dbname, stablename), 0, 0, tb_nums * row_nums + append_rows)

        count = 0
        while not status_OK:
            if count > self.try_check_times:
                os.system("taos -s ' show {}.vgroups; '".format(dbname))
                tdLog.exit(" ==== check insert rows failed after {} retries (limit {}) for database {}".format(count, self.try_check_times, dbname))
                break
            time.sleep(0.1)
            tdSql.query("select count(*) from {}.{}".format(dbname, stablename))
            while not tdSql.queryResult:
                time.sleep(0.1)
                tdSql.query("select count(*) from {}.{}".format(dbname, stablename))
            status_OK = self.mycheckData("select count(*) from {}.{}".format(dbname, stablename), 0, 0, tb_nums * row_nums + append_rows)
            tdLog.notice(" ==== check insert rows failed , this is the {}-th retry for database {}".format(count, dbname))
            count += 1

        tdSql.query("select distinct tbname from {}.{}".format(dbname, stablename))
        while not tdSql.queryResult:
            time.sleep(0.1)
            tdSql.query("select distinct tbname from {}.{}".format(dbname, stablename))
        status_OK = self.mycheckRows("select distinct tbname from {}.{}".format(dbname, stablename), tb_nums)
        count = 0
        while not status_OK:
            if count > self.try_check_times:
                os.system("taos -s ' show {}.vgroups;'".format(dbname))
                tdLog.exit(" ==== check insert tbnames failed after {} retries (limit {}) for database {}".format(count, self.try_check_times, dbname))
                break
            time.sleep(0.1)
            tdSql.query("select distinct tbname from {}.{}".format(dbname, stablename))
            while not tdSql.queryResult:
                time.sleep(0.1)
                tdSql.query("select distinct tbname from {}.{}".format(dbname, stablename))
            status_OK = self.mycheckRows("select distinct tbname from {}.{}".format(dbname, stablename), tb_nums)
            tdLog.notice(" ==== check insert tbnames failed , this is the {}-th retry for database {}".format(count, dbname))
            count += 1

    def _get_stop_dnode_id(self, dbname):
        # pick the dnode that currently acts as the leader of a vgroup
        tdSql.query("show {}.vgroups".format(dbname))
        vgroup_infos = tdSql.queryResult
        for vgroup_info in vgroup_infos:
            leader_infos = vgroup_info[3:-4]
            # print(vgroup_info)
            for ind, role in enumerate(leader_infos):
                if role == 'leader':
                    # print(ind, leader_infos)
                    self.stop_dnode_id = leader_infos[ind - 1]
                    break

        return self.stop_dnode_id

    def wait_stop_dnode_OK(self):

        def _get_status():
            newTdSql = tdCom.newTdSql()
            status = ""
            newTdSql.query("show dnodes")
            dnode_infos = newTdSql.queryResult
            for dnode_info in dnode_infos:
                id = dnode_info[0]
                dnode_status = dnode_info[4]
                if id == self.stop_dnode_id:
                    status = dnode_status
                    break
            return status

        status = _get_status()
        while status != "offline":
            time.sleep(0.1)
            status = _get_status()
            # tdLog.notice("==== stop dnode has not been stopped , endpoint is {}".format(self.stop_dnode))
        tdLog.notice("==== stop_dnode has stopped , id is {}".format(self.stop_dnode_id))

    def check_revote_leader_success(self, dbname, before_leader_infos, after_leader_infos):
        check_status = False
        # vgroups whose role tuple changed after the stop
        vote_act = set(set(after_leader_infos) - set(before_leader_infos))
        if not vote_act:
            print("=======before_revote_leader_infos ======\n", before_leader_infos)
            print("=======after_revote_leader_infos ======\n", after_leader_infos)
            tdLog.info(" === maybe revote did not occur , no dnode is offline ====")
        else:
            for vgroup_info in vote_act:
                for ind, role in enumerate(vgroup_info):
                    if role == self.stop_dnode_id:
                        if vgroup_info[ind + 1] == "offline" and "leader" in vgroup_info:
                            tdLog.notice(" === revote leader ok , leader is {} now ====".format(vgroup_info[list(vgroup_info).index("leader") - 1]))
                            check_status = True
                        elif vgroup_info[ind + 1] != "offline":
                            tdLog.notice(" === dnode {} should be offline ".format(self.stop_dnode_id))
                        else:
                            continue
                        break
        return check_status

    def wait_start_dnode_OK(self):

        def _get_status():
            newTdSql = tdCom.newTdSql()
            status = ""
            newTdSql.query("show dnodes")
            dnode_infos = newTdSql.queryResult
            for dnode_info in dnode_infos:
                id = dnode_info[0]
                dnode_status = dnode_info[4]
                if id == self.stop_dnode_id:
                    status = dnode_status
                    break
            return status

        status = _get_status()
        while status != "ready":
            time.sleep(0.1)
            status = _get_status()
            # tdLog.notice("==== stop dnode has not been stopped , endpoint is {}".format(self.stop_dnode))
        tdLog.notice("==== stop_dnode has restarted , id is {}".format(self.stop_dnode_id))

    def _parse_datetime(self, timestr):
        try:
            return datetime.datetime.strptime(timestr, '%Y-%m-%d %H:%M:%S.%f')
        except ValueError:
            pass
        try:
            return datetime.datetime.strptime(timestr, '%Y-%m-%d %H:%M:%S')
        except ValueError:
            pass

    def mycheckRowCol(self, sql, row, col):
        caller = inspect.getframeinfo(inspect.stack()[2][0])
        if row < 0:
            args = (caller.filename, caller.lineno, sql, row)
            tdLog.exit("%s(%d) failed: sql:%s, row:%d is smaller than zero" % args)
        if col < 0:
            args = (caller.filename, caller.lineno, sql, col)
            tdLog.exit("%s(%d) failed: sql:%s, col:%d is smaller than zero" % args)
        if row > tdSql.queryRows:
            args = (caller.filename, caller.lineno, sql, row, tdSql.queryRows)
            tdLog.exit("%s(%d) failed: sql:%s, row:%d is larger than queryRows:%d" % args)
        if col > tdSql.queryCols:
            args = (caller.filename, caller.lineno, sql, col, tdSql.queryCols)
            tdLog.exit("%s(%d) failed: sql:%s, col:%d is larger than queryCols:%d" % args)

    def mycheckData(self, sql, row, col, data):
        # like tdSql.checkData, but returns a status instead of exiting on mismatch
        check_status = True
        self.mycheckRowCol(sql, row, col)
        if tdSql.queryResult[row][col] != data:
            if tdSql.cursor.istype(col, "TIMESTAMP"):
                # suppose the user wants to check a nanosecond timestamp if longer data is passed
                if (len(data) >= 28):
                    if pd.to_datetime(tdSql.queryResult[row][col]) == pd.to_datetime(data):
                        tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
                                   (sql, row, col, tdSql.queryResult[row][col], data))
                    else:
                        check_status = False
                else:
                    if tdSql.queryResult[row][col] == self._parse_datetime(data):
                        tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
                                   (sql, row, col, tdSql.queryResult[row][col], data))
                    else:
                        check_status = False
                return check_status

            if str(tdSql.queryResult[row][col]) == str(data):
                tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
                           (sql, row, col, tdSql.queryResult[row][col], data))
                return check_status
            elif isinstance(data, float) and abs(tdSql.queryResult[row][col] - data) <= 0.000001:
                tdLog.info("sql:%s, row:%d col:%d data:%f == expect:%f" %
                           (sql, row, col, tdSql.queryResult[row][col], data))
                return check_status
            else:
                caller = inspect.getframeinfo(inspect.stack()[1][0])
                args = (caller.filename, caller.lineno, sql, row, col, tdSql.queryResult[row][col], data)
                tdLog.info("%s(%d) failed: sql:%s row:%d col:%d data:%s != expect:%s" % args)
                check_status = False

        if data is None:
            tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
                       (sql, row, col, tdSql.queryResult[row][col], data))
        elif isinstance(data, str):
            tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
                       (sql, row, col, tdSql.queryResult[row][col], data))
        # elif isinstance(data, datetime.date):
        #     tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
        #                (sql, row, col, tdSql.queryResult[row][col], data))
        elif isinstance(data, float):
            tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
                       (sql, row, col, tdSql.queryResult[row][col], data))
        else:
            tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%d" %
                       (sql, row, col, tdSql.queryResult[row][col], data))

        return check_status

    def mycheckRows(self, sql, expectRows):
        # like tdSql.checkRows, but returns a status instead of exiting on mismatch
        check_status = True
        if len(tdSql.queryResult) == expectRows:
            tdLog.info("sql:%s, queryRows:%d == expect:%d" % (sql, len(tdSql.queryResult), expectRows))
            return True
        else:
            caller = inspect.getframeinfo(inspect.stack()[1][0])
            args = (caller.filename, caller.lineno, sql, len(tdSql.queryResult), expectRows)
            tdLog.info("%s(%d) failed: sql:%s, queryRows:%d != expect:%d" % args)
            check_status = False
        return check_status

    def get_leader_infos(self, dbname):
        # snapshot the role columns of every vgroup as a set of tuples, so a
        # before/after set difference reveals a leader revote
        newTdSql = tdCom.newTdSql()
        newTdSql.query("show {}.vgroups".format(dbname))
        vgroup_infos = newTdSql.queryResult

        leader_infos = set()
        for vgroup_info in vgroup_infos:
            leader_infos.add(vgroup_info[3:-4])

        return leader_infos

    def force_stop_dnode(self, dnode_id):
        # find the port of the target dnode, then kill -9 the taosd listening on it
        tdSql.query("show dnodes")
        port = None
        for dnode_info in tdSql.queryResult:
            if dnode_id == dnode_info[0]:
                port = dnode_info[1].split(":")[-1]
                break
        if port:
            tdLog.notice(" ==== dnode {} will be force stopped by kill -9 ====".format(dnode_id))
            psCmd = '''netstat -anp|grep -w LISTEN|grep -w %s |grep -o "LISTEN.*"|awk '{print $2}'|cut -d/ -f1|head -n1''' % (port)
            processID = subprocess.check_output(psCmd, shell=True).decode("utf-8")
            ps_kill_taosd = ''' kill -9 {} '''.format(processID)
            # print(ps_kill_taosd)
            os.system(ps_kill_taosd)

    def basic_query_task(self, dbname, stablename):
        sql = "select * from {}.{} ;".format(dbname, stablename)

        count = 0
        while count < self.query_times:
            os.system(''' taos -s '{}' >>/dev/null '''.format(sql))
            count += 1

    def multi_thread_query_task(self, thread_nums, dbname, stablename):
        for i in range(thread_nums):
            task = threading.Thread(target=self.basic_query_task, args=(dbname, stablename))
            self.thread_list.append(task)

        for thread in self.thread_list:
            thread.start()
        return self.thread_list

    def stop_follower_when_query_going(self):
        # note: despite the method name, this variant stops the vgroup *leader*
        # (see _get_stop_dnode_id above) and waits for a new leader to be voted
        tdDnodes = cluster.dnodes
        self.create_database(dbname=self.db_name, replica_num=self.replica, vgroup_nums=1)
        self.create_stable_insert_datas(dbname=self.db_name, stablename="stb1", tb_nums=self.tb_nums, row_nums=self.row_nums)

        # let the query tasks start
        self.thread_list = self.multi_thread_query_task(10, self.db_name, 'stb1')

        for loop in range(self.loop_restart_times):
            tdLog.debug(" ==== this is the {}-th restart of the leader of database {} ==== ".format(loop, self.db_name))

            # get leader info before the stop
            before_leader_infos = self.get_leader_infos(self.db_name)

            self.stop_dnode_id = self._get_stop_dnode_id(self.db_name)
            tdDnodes[self.stop_dnode_id - 1].stoptaosd()

            start = time.time()
            # get leader info after the stop and wait for the revote to finish
            after_leader_infos = self.get_leader_infos(self.db_name)
            revote_status = self.check_revote_leader_success(self.db_name, before_leader_infos, after_leader_infos)
            while not revote_status:
                after_leader_infos = self.get_leader_infos(self.db_name)
                revote_status = self.check_revote_leader_success(self.db_name, before_leader_infos, after_leader_infos)
            end = time.time()
            time_cost = end - start
            tdLog.debug(" ==== revote leader of database {} cost time {} ====".format(self.db_name, time_cost))

            self.wait_stop_dnode_OK()

            start = time.time()
            tdDnodes[self.stop_dnode_id - 1].starttaosd()
            self.wait_start_dnode_OK()
            end = time.time()
            time_cost = int(end - start)

            if time_cost > self.max_restart_time:
                tdLog.exit(" ==== restart dnode {} cost too much time , please check ====".format(self.stop_dnode_id))

        for thread in self.thread_list:
            thread.join()

    def run(self):
        # basic check of cluster
        self.check_setup_cluster_status()
        self.stop_follower_when_query_going()

    def stop(self):
        tdSql.close()
        tdLog.success(f"{__file__} successfully executed")

tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())
@ -0,0 +1,470 @@
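# Note: another leader-stop variant built from the same helpers; it also
# selects the current vgroup leader in _get_stop_dnode_id() and checks leader
# re-election with check_revote_leader_success().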
|
||||||
|
# author : wenzhouwww
|
||||||
|
from ssl import ALERT_DESCRIPTION_CERTIFICATE_UNOBTAINABLE
|
||||||
|
import taos
|
||||||
|
import sys
|
||||||
|
import time
|
||||||
|
import os
|
||||||
|
|
||||||
|
from util.log import *
|
||||||
|
from util.sql import *
|
||||||
|
from util.cases import *
|
||||||
|
from util.dnodes import TDDnodes
|
||||||
|
from util.dnodes import TDDnode
|
||||||
|
from util.cluster import *
|
||||||
|
|
||||||
|
import datetime
|
||||||
|
import inspect
|
||||||
|
import time
|
||||||
|
import socket
|
||||||
|
import subprocess
|
||||||
|
import threading
|
||||||
|
sys.path.append(os.path.dirname(__file__))
|
||||||
|
|
||||||
|
class TDTestCase:
|
||||||
|
def init(self,conn ,logSql):
|
||||||
|
tdLog.debug(f"start to excute {__file__}")
|
||||||
|
tdSql.init(conn.cursor())
|
||||||
|
self.host = socket.gethostname()
|
||||||
|
self.mnode_list = {}
|
||||||
|
self.dnode_list = {}
|
||||||
|
self.ts = 1483200000000
|
||||||
|
self.ts_step =1000
|
||||||
|
self.db_name ='testdb'
|
||||||
|
self.replica = 3
|
||||||
|
self.vgroups = 1
|
||||||
|
self.tb_nums = 10
|
||||||
|
self.row_nums = 100
|
||||||
|
self.stop_dnode_id = None
|
||||||
|
self.loop_restart_times = 5
|
||||||
|
self.thread_list = []
|
||||||
|
self.max_restart_time = 10
|
||||||
|
self.try_check_times = 10
|
||||||
|
self.query_times = 100
|
||||||
|
|
||||||
|
|
||||||
|
def getBuildPath(self):
|
||||||
|
selfPath = os.path.dirname(os.path.realpath(__file__))
|
||||||
|
if ("community" in selfPath):
|
||||||
|
projPath = selfPath[:selfPath.find("community")]
|
||||||
|
else:
|
||||||
|
projPath = selfPath[:selfPath.find("tests")]
|
||||||
|
|
||||||
|
for root, dirs, files in os.walk(projPath):
|
||||||
|
if ("taosd" in files):
|
||||||
|
rootRealPath = os.path.dirname(os.path.realpath(root))
|
||||||
|
if ("packaging" not in rootRealPath):
|
||||||
|
buildPath = root[:len(root) - len("/build/bin")]
|
||||||
|
break
|
||||||
|
return buildPath
|
||||||
|
|
||||||
|
def check_setup_cluster_status(self):
|
||||||
|
tdSql.query("show mnodes")
|
||||||
|
for mnode in tdSql.queryResult:
|
||||||
|
name = mnode[1]
|
||||||
|
info = mnode
|
||||||
|
self.mnode_list[name] = info
|
||||||
|
|
||||||
|
tdSql.query("show dnodes")
|
||||||
|
for dnode in tdSql.queryResult:
|
||||||
|
name = dnode[1]
|
||||||
|
info = dnode
|
||||||
|
self.dnode_list[name] = info
|
||||||
|
|
||||||
|
count = 0
|
||||||
|
is_leader = False
|
||||||
|
mnode_name = ''
|
||||||
|
for k,v in self.mnode_list.items():
|
||||||
|
count +=1
|
||||||
|
# only for 1 mnode
|
||||||
|
mnode_name = k
|
||||||
|
|
||||||
|
if v[2] =='leader':
|
||||||
|
is_leader=True
|
||||||
|
|
||||||
|
if count==1 and is_leader:
|
||||||
|
tdLog.notice("===== depoly cluster success with 1 mnode as leader =====")
|
||||||
|
else:
|
||||||
|
tdLog.exit("===== depoly cluster fail with 1 mnode as leader =====")
|
||||||
|
|
||||||
|
for k ,v in self.dnode_list.items():
|
||||||
|
if k == mnode_name:
|
||||||
|
if v[3]==0:
|
||||||
|
tdLog.notice("===== depoly cluster mnode only success at {} , support_vnodes is {} ".format(mnode_name,v[3]))
|
||||||
|
else:
|
||||||
|
tdLog.exit("===== depoly cluster mnode only fail at {} , support_vnodes is {} ".format(mnode_name,v[3]))
|
||||||
|
else:
|
||||||
|
continue
|
||||||
|
|
||||||
|
def create_database(self, dbname, replica_num ,vgroup_nums ):
|
||||||
|
drop_db_sql = "drop database if exists {}".format(dbname)
|
||||||
|
create_db_sql = "create database {} replica {} vgroups {}".format(dbname,replica_num,vgroup_nums)
|
||||||
|
|
||||||
|
tdLog.notice(" ==== create database {} and insert rows begin =====".format(dbname))
|
||||||
|
tdSql.execute(drop_db_sql)
|
||||||
|
tdSql.execute(create_db_sql)
|
||||||
|
tdSql.execute("use {}".format(dbname))
|
||||||
|
|
||||||
|
def create_stable_insert_datas(self,dbname ,stablename , tb_nums , row_nums):
|
||||||
|
tdSql.execute("use {}".format(dbname))
|
||||||
|
tdSql.execute(
|
||||||
|
'''create table {}
|
||||||
|
(ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(32),c9 nchar(32), c10 timestamp)
|
||||||
|
tags (t1 int)
|
||||||
|
'''.format(stablename)
|
||||||
|
)
|
||||||
|
|
||||||
|
for i in range(tb_nums):
|
||||||
|
sub_tbname = "sub_{}_{}".format(stablename,i)
|
||||||
|
tdSql.execute("create table {} using {} tags({})".format(sub_tbname, stablename ,i))
|
||||||
|
# insert datas about new database
|
||||||
|
|
||||||
|
for row_num in range(row_nums):
|
||||||
|
ts = self.ts + self.ts_step*row_num
|
||||||
|
tdSql.execute(f"insert into {sub_tbname} values ({ts}, {row_num} ,{row_num}, 10 ,1 ,{row_num} ,{row_num},true,'bin_{row_num}','nchar_{row_num}',now) ")
|
||||||
|
|
||||||
|
tdLog.notice(" ==== stable {} insert rows execute end =====".format(stablename))
|
||||||
|
|
||||||
|
    def append_rows_of_exists_tables(self, dbname, stablename, tbname, append_nums):
        tdSql.execute("use {}".format(dbname))

        for row_num in range(append_nums):
            tdSql.execute(f"insert into {tbname} values (now, {row_num}, {row_num}, 10, 1, {row_num}, {row_num}, true, 'bin_{row_num}', 'nchar_{row_num}', now)")
        tdLog.notice(" ==== append new rows of table {} belonging to stable {} execute end =====".format(tbname, stablename))
        os.system("taos -s 'select count(*) from {}.{}';".format(dbname, stablename))

    def check_insert_rows(self, dbname, stablename, tb_nums, row_nums, append_rows):
        tdSql.execute("use {}".format(dbname))

        # wait until the count query returns a non-empty result set
        tdSql.query("select count(*) from {}.{}".format(dbname, stablename))
        while not tdSql.queryResult:
            time.sleep(0.1)
            tdSql.query("select count(*) from {}.{}".format(dbname, stablename))

        status_OK = self.mycheckData("select count(*) from {}.{}".format(dbname, stablename), 0, 0, tb_nums * row_nums + append_rows)

        count = 0
        while not status_OK:
            if count > self.try_check_times:
                os.system("taos -s 'show {}.vgroups;'".format(dbname))
                tdLog.exit(" ==== check insert rows failed after {} try check {} times of database {}".format(count, self.try_check_times, dbname))
                break
            time.sleep(0.1)
            tdSql.query("select count(*) from {}.{}".format(dbname, stablename))
            while not tdSql.queryResult:
                time.sleep(0.1)
                tdSql.query("select count(*) from {}.{}".format(dbname, stablename))
            status_OK = self.mycheckData("select count(*) from {}.{}".format(dbname, stablename), 0, 0, tb_nums * row_nums + append_rows)
            tdLog.notice(" ==== check insert rows first failed , this is {}_th retry check rows of database {}".format(count, dbname))
            count += 1

        # check that the expected number of child tables is visible
        tdSql.query("select distinct tbname from {}.{}".format(dbname, stablename))
        while not tdSql.queryResult:
            time.sleep(0.1)
            tdSql.query("select distinct tbname from {}.{}".format(dbname, stablename))
        status_OK = self.mycheckRows("select distinct tbname from {}.{}".format(dbname, stablename), tb_nums)
        count = 0
        while not status_OK:
            if count > self.try_check_times:
                os.system("taos -s 'show {}.vgroups;'".format(dbname))
                tdLog.exit(" ==== check insert rows failed after {} try check {} times of database {}".format(count, self.try_check_times, dbname))
                break
            time.sleep(0.1)
            tdSql.query("select distinct tbname from {}.{}".format(dbname, stablename))
            while not tdSql.queryResult:
                time.sleep(0.1)
                tdSql.query("select distinct tbname from {}.{}".format(dbname, stablename))
            status_OK = self.mycheckRows("select distinct tbname from {}.{}".format(dbname, stablename), tb_nums)
            tdLog.notice(" ==== check insert tbnames first failed , this is {}_th retry check tbnames of database {}".format(count, dbname))
            count += 1
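    # The two retry loops above poll until the cluster converges: the first
    # verifies the total row count, the second verifies the child-table count,
    # each giving up after self.try_check_times retries.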
    def _get_stop_dnode_id(self, dbname):
        tdSql.query("show {}.vgroups".format(dbname))
        vgroup_infos = tdSql.queryResult
        for vgroup_info in vgroup_infos:
            leader_infos = vgroup_info[3:-4]
            for ind, role in enumerate(leader_infos):
                if role == 'leader':
                    self.stop_dnode_id = leader_infos[ind - 1]
                    break

        return self.stop_dnode_id
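    # Note: the slice vgroup_info[3:-4] assumes a "show vgroups" row lays out
    # the replicas as alternating (dnode_id, role) pairs, so leader_infos[ind-1]
    # is the id of the dnode that currently holds the "leader" role.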
    def wait_stop_dnode_OK(self):

        def _get_status():
            newTdSql = tdCom.newTdSql()
            status = ""
            newTdSql.query("show dnodes")
            dnode_infos = newTdSql.queryResult
            for dnode_info in dnode_infos:
                id = dnode_info[0]
                dnode_status = dnode_info[4]
                if id == self.stop_dnode_id:
                    status = dnode_status
                    break
            return status

        status = _get_status()
        while status != "offline":
            time.sleep(0.1)
            status = _get_status()
        tdLog.notice("==== stop_dnode has stopped , id is {}".format(self.stop_dnode_id))

    def check_revote_leader_success(self, dbname, before_leader_infos, after_leader_infos):
        check_status = False
        vote_act = set(after_leader_infos) - set(before_leader_infos)
        if not vote_act:
            print("=======before_revote_leader_infos ======\n", before_leader_infos)
            print("=======after_revote_leader_infos ======\n", after_leader_infos)
            tdLog.info(" ===maybe revote not occurred , there is no dnode offline ====")
        else:
            for vgroup_info in vote_act:
                for ind, role in enumerate(vgroup_info):
                    if role == self.stop_dnode_id:
                        if vgroup_info[ind + 1] == "offline" and "leader" in vgroup_info:
                            tdLog.notice(" === revote leader ok , leader is {} now ====".format(vgroup_info[list(vgroup_info).index("leader") - 1]))
                            check_status = True
                        elif vgroup_info[ind + 1] != "offline":
                            tdLog.notice(" === dnode {} should be offline ".format(self.stop_dnode_id))
                        else:
                            continue
                        break
        return check_status
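    # A successful revote shows up as a vgroup tuple that changed after the
    # stop: the stopped dnode is marked "offline" while another replica in the
    # same tuple now carries the "leader" role.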
    def wait_start_dnode_OK(self):

        def _get_status():
            newTdSql = tdCom.newTdSql()
            status = ""
            newTdSql.query("show dnodes")
            dnode_infos = newTdSql.queryResult
            for dnode_info in dnode_infos:
                id = dnode_info[0]
                dnode_status = dnode_info[4]
                if id == self.stop_dnode_id:
                    status = dnode_status
                    break
            return status

        status = _get_status()
        while status != "ready":
            time.sleep(0.1)
            status = _get_status()
        tdLog.notice("==== stop_dnode has restarted , id is {}".format(self.stop_dnode_id))

    def _parse_datetime(self, timestr):
        try:
            return datetime.datetime.strptime(timestr, '%Y-%m-%d %H:%M:%S.%f')
        except ValueError:
            pass
        try:
            return datetime.datetime.strptime(timestr, '%Y-%m-%d %H:%M:%S')
        except ValueError:
            pass
    def mycheckRowCol(self, sql, row, col):
        caller = inspect.getframeinfo(inspect.stack()[2][0])
        if row < 0:
            args = (caller.filename, caller.lineno, sql, row)
            tdLog.exit("%s(%d) failed: sql:%s, row:%d is smaller than zero" % args)
        if col < 0:
            args = (caller.filename, caller.lineno, sql, col)
            tdLog.exit("%s(%d) failed: sql:%s, col:%d is smaller than zero" % args)
        if row > tdSql.queryRows:
            args = (caller.filename, caller.lineno, sql, row, tdSql.queryRows)
            tdLog.exit("%s(%d) failed: sql:%s, row:%d is larger than queryRows:%d" % args)
        if col > tdSql.queryCols:
            args = (caller.filename, caller.lineno, sql, col, tdSql.queryCols)
            tdLog.exit("%s(%d) failed: sql:%s, col:%d is larger than queryCols:%d" % args)

    def mycheckData(self, sql, row, col, data):
        check_status = True
        self.mycheckRowCol(sql, row, col)
        if tdSql.queryResult[row][col] != data:
            if tdSql.cursor.istype(col, "TIMESTAMP"):
                # suppose the caller wants to check a nanosecond timestamp if a longer string is passed
                if len(data) >= 28:
                    if pd.to_datetime(tdSql.queryResult[row][col]) == pd.to_datetime(data):
                        tdLog.info("sql:%s, row:%d col:%d data:%d == expect:%s" %
                                   (sql, row, col, tdSql.queryResult[row][col], data))
                else:
                    if tdSql.queryResult[row][col] == self._parse_datetime(data):
                        tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
                                   (sql, row, col, tdSql.queryResult[row][col], data))
                return

            if str(tdSql.queryResult[row][col]) == str(data):
                tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
                           (sql, row, col, tdSql.queryResult[row][col], data))
                return
            elif isinstance(data, float) and abs(tdSql.queryResult[row][col] - data) <= 0.000001:
                tdLog.info("sql:%s, row:%d col:%d data:%f == expect:%f" %
                           (sql, row, col, tdSql.queryResult[row][col], data))
                return
            else:
                caller = inspect.getframeinfo(inspect.stack()[1][0])
                args = (caller.filename, caller.lineno, sql, row, col, tdSql.queryResult[row][col], data)
                tdLog.info("%s(%d) failed: sql:%s row:%d col:%d data:%s != expect:%s" % args)
                check_status = False

        if data is None:
            tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
                       (sql, row, col, tdSql.queryResult[row][col], data))
        elif isinstance(data, str):
            tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
                       (sql, row, col, tdSql.queryResult[row][col], data))
        elif isinstance(data, float):
            tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
                       (sql, row, col, tdSql.queryResult[row][col], data))
        else:
            tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%d" %
                       (sql, row, col, tdSql.queryResult[row][col], data))

        return check_status

    def mycheckRows(self, sql, expectRows):
        check_status = True
        if len(tdSql.queryResult) == expectRows:
            tdLog.info("sql:%s, queryRows:%d == expect:%d" % (sql, len(tdSql.queryResult), expectRows))
            return True
        else:
            caller = inspect.getframeinfo(inspect.stack()[1][0])
            args = (caller.filename, caller.lineno, sql, len(tdSql.queryResult), expectRows)
            tdLog.info("%s(%d) failed: sql:%s, queryRows:%d != expect:%d" % args)
            check_status = False
        return check_status
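    # Note: float equality above is checked with an absolute tolerance of 1e-6,
    # and non-float mismatches fall back to a string comparison first.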
    def get_leader_infos(self, dbname):
        newTdSql = tdCom.newTdSql()
        newTdSql.query("show {}.vgroups".format(dbname))
        vgroup_infos = newTdSql.queryResult

        leader_infos = set()
        for vgroup_info in vgroup_infos:
            leader_infos.add(vgroup_info[3:-4])

        return leader_infos
    def force_stop_dnode(self, dnode_id):
        tdSql.query("show dnodes")
        port = None
        for dnode_info in tdSql.queryResult:
            if dnode_id == dnode_info[0]:
                port = dnode_info[1].split(":")[-1]
                break
        if port:
            tdLog.notice(" ==== dnode {} will be force stopped by kill -9 ====".format(dnode_id))
            psCmd = '''netstat -anp|grep -w LISTEN|grep -w %s |grep -o "LISTEN.*"|awk '{print $2}'|cut -d/ -f1|head -n1''' % (port)
            processID = subprocess.check_output(psCmd, shell=True).decode("utf-8")
            os.system("kill -9 {}".format(processID))
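    # For example, with port 6030 the pipeline above renders roughly as:
    #   netstat -anp | grep -w LISTEN | grep -w 6030 | grep -o "LISTEN.*" | awk '{print $2}' | cut -d/ -f1 | head -n1
    # which extracts the PID of the taosd process listening on that port.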
    def basic_query_task(self, dbname, stablename):
        sql = "select * from {}.{} ;".format(dbname, stablename)

        count = 0
        while count < self.query_times:
            os.system(''' taos -s '{}' >>/dev/null '''.format(sql))
            count += 1

    def multi_thread_query_task(self, thread_nums, dbname, stablename):
        for i in range(thread_nums):
            task = threading.Thread(target=self.basic_query_task, args=(dbname, stablename))
            self.thread_list.append(task)

        for thread in self.thread_list:
            thread.start()
        return self.thread_list
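    # The query threads started here are deliberately left running; the caller
    # (stop_follower_when_query_going below) joins them only after all
    # stop/restart loops finish, so queries keep flowing during failover.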
    def stop_follower_when_query_going(self):
        tdDnodes = cluster.dnodes
        self.create_database(dbname=self.db_name, replica_num=self.replica, vgroup_nums=1)
        self.create_stable_insert_datas(dbname=self.db_name, stablename="stb1", tb_nums=self.tb_nums, row_nums=self.row_nums)

        # let query task start
        self.thread_list = self.multi_thread_query_task(10, self.db_name, 'stb1')

        # force stop follower
        for loop in range(self.loop_restart_times):
            tdLog.debug(" ==== this is {}_th restart follower of database {} ==== ".format(loop, self.db_name))

            # get leader info before stop
            before_leader_infos = self.get_leader_infos(self.db_name)

            self.stop_dnode_id = self._get_stop_dnode_id(self.db_name)
            self.force_stop_dnode(self.stop_dnode_id)

            start = time.time()
            # get leader info after stop
            after_leader_infos = self.get_leader_infos(self.db_name)
            revote_status = self.check_revote_leader_success(self.db_name, before_leader_infos, after_leader_infos)

            while not revote_status:
                after_leader_infos = self.get_leader_infos(self.db_name)
                revote_status = self.check_revote_leader_success(self.db_name, before_leader_infos, after_leader_infos)

            end = time.time()
            time_cost = end - start
            tdLog.debug(" ==== revote leader of database {} cost time {} ====".format(self.db_name, time_cost))

            self.wait_stop_dnode_OK()

            start = time.time()
            tdDnodes[self.stop_dnode_id - 1].starttaosd()
            self.wait_start_dnode_OK()
            end = time.time()
            time_cost = int(end - start)

            if time_cost > self.max_restart_time:
                tdLog.exit(" ==== restart dnode {} cost too much time , please check ====".format(self.stop_dnode_id))

        for thread in self.thread_list:
            thread.join()
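    # Each loop iteration measures two costs: how long the remaining replicas
    # take to re-elect a leader after the kill, and how long the killed dnode
    # takes to come back to "ready" after restart (bounded by max_restart_time).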
    def run(self):
        # basic check of cluster
        self.check_setup_cluster_status()
        self.stop_follower_when_query_going()

    def stop(self):
        tdSql.close()
        tdLog.success(f"{__file__} successfully executed")


tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())
@@ -71,14 +71,14 @@ class TDTestCase:
                 is_leader=True

         if count==1 and is_leader:
-            tdLog.info("===== depoly cluster success with 1 mnode as leader =====")
+            tdLog.notice("===== depoly cluster success with 1 mnode as leader =====")
         else:
             tdLog.exit("===== depoly cluster fail with 1 mnode as leader =====")

         for k ,v in self.dnode_list.items():
             if k == mnode_name:
                 if v[3]==0:
-                    tdLog.info("===== depoly cluster mnode only success at {} , support_vnodes is {} ".format(mnode_name,v[3]))
+                    tdLog.notice("===== depoly cluster mnode only success at {} , support_vnodes is {} ".format(mnode_name,v[3]))
                 else:
                     tdLog.exit("===== depoly cluster mnode only fail at {} , support_vnodes is {} ".format(mnode_name,v[3]))
             else:
@@ -121,7 +121,7 @@ class TDTestCase:

         for k , v in vgroups_infos.items():
             if len(v) ==1 and v[0]=="leader":
-                tdLog.info(" === create database replica only 1 role leader check success of vgroup_id {} ======".format(k))
+                tdLog.notice(" === create database replica only 1 role leader check success of vgroup_id {} ======".format(k))
             else:
                 tdLog.exit(" === create database replica only 1 role leader check fail of vgroup_id {} ======".format(k))
@@ -152,10 +152,10 @@ class TDTestCase:
             time.sleep(0.1)
             status = self.check_vgroups_init_done(dbname)

-            # tdLog.info("=== database {} show vgroups vote the leader is in progress ===".format(dbname))
+            # tdLog.notice("=== database {} show vgroups vote the leader is in progress ===".format(dbname))
         end = time.time()
         cost_time = end - start
-        tdLog.info(" ==== database %s vote the leaders success , cost time is %.3f second ====="%(dbname,cost_time) )
+        tdLog.notice(" ==== database %s vote the leaders success , cost time is %.3f second ====="%(dbname,cost_time) )
         # os.system("taos -s 'show {}.vgroups;'".format(dbname))
         if cost_time >= self.max_vote_time_cost:
             tdLog.exit(" ==== database %s vote the leaders cost too large time , cost time is %.3f second ===="%(dbname,cost_time) )
@@ -165,28 +165,28 @@ class TDTestCase:

     def test_init_vgroups_time_costs(self):

-        tdLog.info(" ====start check time cost about vgroups vote leaders ==== ")
-        tdLog.info(" ==== current max time cost is set value : {} =======".format(self.max_vote_time_cost))
+        tdLog.notice(" ====start check time cost about vgroups vote leaders ==== ")
+        tdLog.notice(" ==== current max time cost is set value : {} =======".format(self.max_vote_time_cost))

         # create database replica 3 vgroups 1

         db1 = 'db_1'
         create_db_replica_3_vgroups_1 = "create database {} replica 3 vgroups 1".format(db1)
-        tdLog.info('=======database {} replica 3 vgroups 1 ======'.format(db1))
+        tdLog.notice('=======database {} replica 3 vgroups 1 ======'.format(db1))
         tdSql.execute(create_db_replica_3_vgroups_1)
         self.vote_leader_time_costs(db1)

         # create database replica 3 vgroups 10
         db2 = 'db_2'
         create_db_replica_3_vgroups_10 = "create database {} replica 3 vgroups 10".format(db2)
-        tdLog.info('=======database {} replica 3 vgroups 10 ======'.format(db2))
+        tdLog.notice('=======database {} replica 3 vgroups 10 ======'.format(db2))
         tdSql.execute(create_db_replica_3_vgroups_10)
         self.vote_leader_time_costs(db2)

         # create database replica 3 vgroups 100
         db3 = 'db_3'
         create_db_replica_3_vgroups_100 = "create database {} replica 3 vgroups 100".format(db3)
-        tdLog.info('=======database {} replica 3 vgroups 100 ======'.format(db3))
+        tdLog.notice('=======database {} replica 3 vgroups 100 ======'.format(db3))
         tdSql.execute(create_db_replica_3_vgroups_100)
         self.vote_leader_time_costs(db3)
@@ -74,14 +74,14 @@ class TDTestCase:
                 is_leader=True

         if count==1 and is_leader:
-            tdLog.info("===== depoly cluster success with 1 mnode as leader =====")
+            tdLog.notice("===== depoly cluster success with 1 mnode as leader =====")
         else:
             tdLog.exit("===== depoly cluster fail with 1 mnode as leader =====")

         for k ,v in self.dnode_list.items():
             if k == mnode_name:
                 if v[3]==0:
-                    tdLog.info("===== depoly cluster mnode only success at {} , support_vnodes is {} ".format(mnode_name,v[3]))
+                    tdLog.notice("===== depoly cluster mnode only success at {} , support_vnodes is {} ".format(mnode_name,v[3]))
                 else:
                     tdLog.exit("===== depoly cluster mnode only fail at {} , support_vnodes is {} ".format(mnode_name,v[3]))
             else:
@@ -124,7 +124,7 @@ class TDTestCase:

         for k , v in vgroups_infos.items():
             if len(v) ==1 and v[0]=="leader":
-                tdLog.info(" === create database replica only 1 role leader check success of vgroup_id {} ======".format(k))
+                tdLog.notice(" === create database replica only 1 role leader check success of vgroup_id {} ======".format(k))
             else:
                 tdLog.exit(" === create database replica only 1 role leader check fail of vgroup_id {} ======".format(k))
@@ -148,7 +148,7 @@ class TDTestCase:

         if ind%2==0:
             if role == stop_dnode_id and vgroups_leader_follower[ind+1]=="offline":
-                tdLog.info("====== dnode {} has offline , endpoint is {}".format(stop_dnode_id , self.stop_dnode))
+                tdLog.notice("====== dnode {} has offline , endpoint is {}".format(stop_dnode_id , self.stop_dnode))
             elif role == stop_dnode_id :
                 tdLog.exit("====== dnode {} has not offline , endpoint is {}".format(stop_dnode_id , self.stop_dnode))
             else:
@@ -180,8 +180,8 @@ class TDTestCase:
         while status !="offline":
             time.sleep(0.1)
             status = _get_status()
-            # tdLog.info("==== stop dnode has not been stopped , endpoint is {}".format(self.stop_dnode))
-        tdLog.info("==== stop_dnode has stopped , endpoint is {}".format(self.stop_dnode))
+            # tdLog.notice("==== stop dnode has not been stopped , endpoint is {}".format(self.stop_dnode))
+        tdLog.notice("==== stop_dnode has stopped , endpoint is {}".format(self.stop_dnode))

     def wait_start_dnode_OK(self):

@@ -202,15 +202,15 @@ class TDTestCase:
         while status !="ready":
             time.sleep(0.1)
             status = _get_status()
-            # tdLog.info("==== stop dnode has not been stopped , endpoint is {}".format(self.stop_dnode))
-        tdLog.info("==== stop_dnode has restart , endpoint is {}".format(self.stop_dnode))
+            # tdLog.notice("==== stop dnode has not been stopped , endpoint is {}".format(self.stop_dnode))
+        tdLog.notice("==== stop_dnode has restart , endpoint is {}".format(self.stop_dnode))



     def random_stop_One_dnode(self):
         self.stop_dnode = self._get_stop_dnode()
         stop_dnode_id = self.dnode_list[self.stop_dnode][0]
-        tdLog.info(" ==== dnode {} will offline ,endpoints is {} ====".format(stop_dnode_id , self.stop_dnode))
+        tdLog.notice(" ==== dnode {} will offline ,endpoints is {} ====".format(stop_dnode_id , self.stop_dnode))
         tdDnodes=cluster.dnodes
         tdDnodes[stop_dnode_id-1].stoptaosd()
         self.wait_stop_dnode_OK()
@@ -250,10 +250,10 @@ class TDTestCase:
             time.sleep(0.1)
             status = self.check_vgroups_init_done(dbname)

-            # tdLog.info("=== database {} show vgroups vote the leader is in progress ===".format(dbname))
+            # tdLog.notice("=== database {} show vgroups vote the leader is in progress ===".format(dbname))
         end = time.time()
         cost_time = end - start
-        tdLog.info(" ==== database %s vote the leaders success , cost time is %.3f second ====="%(dbname,cost_time) )
+        tdLog.notice(" ==== database %s vote the leaders success , cost time is %.3f second ====="%(dbname,cost_time) )
         # os.system("taos -s 'show {}.vgroups;'".format(dbname))
         if cost_time >= self.max_vote_time_cost:
             tdLog.exit(" ==== database %s vote the leaders cost too large time , cost time is %.3f second ===="%(dbname,cost_time) )
@@ -269,10 +269,10 @@ class TDTestCase:
             time.sleep(0.1)
             status = self.check_vgroups_revote_leader(dbname)

-            # tdLog.info("=== database {} show vgroups vote the leader is in progress ===".format(dbname))
+            # tdLog.notice("=== database {} show vgroups vote the leader is in progress ===".format(dbname))
         end = time.time()
         cost_time = end - start
-        tdLog.info(" ==== database %s revote the leaders success , cost time is %.3f second ====="%(dbname,cost_time) )
+        tdLog.notice(" ==== database %s revote the leaders success , cost time is %.3f second ====="%(dbname,cost_time) )
         # os.system("taos -s 'show {}.vgroups;'".format(dbname))
         if cost_time >= self.max_vote_time_cost:
             tdLog.exit(" ==== database %s revote the leaders cost too large time , cost time is %.3f second ===="%(dbname,cost_time) )
@@ -306,7 +306,7 @@ class TDTestCase:
         if role==self.dnode_list[self.stop_dnode][0]:

             if vgroup_info[ind+1] =="offline" and "leader" in vgroup_info:
-                tdLog.info(" === revote leader ok , leader is {} now ====".format(list(vgroup_info).index("leader")-1))
+                tdLog.notice(" === revote leader ok , leader is {} now ====".format(list(vgroup_info).index("leader")-1))
             elif vgroup_info[ind+1] !="offline":
                 tdLog.exit(" === dnode {} should be offline ".format(self.stop_dnode))
             else:
@@ -319,14 +319,14 @@ class TDTestCase:
         self.Restart_stop_dnode()

     def test_init_vgroups_time_costs(self):

-        tdLog.info(" ====start check time cost about vgroups vote leaders ==== ")
-        tdLog.info(" ==== current max time cost is set value : {} =======".format(self.max_vote_time_cost))
+        tdLog.notice(" ====start check time cost about vgroups vote leaders ==== ")
+        tdLog.notice(" ==== current max time cost is set value : {} =======".format(self.max_vote_time_cost))

         # create database replica 3 vgroups 1

         db1 = 'db_1'
         create_db_replica_3_vgroups_1 = "create database {} replica 3 vgroups 1".format(db1)
-        tdLog.info('=======database {} replica 3 vgroups 1 ======'.format(db1))
+        tdLog.notice('=======database {} replica 3 vgroups 1 ======'.format(db1))
         tdSql.execute(create_db_replica_3_vgroups_1)
         self.vote_leader_time_costs(db1)
         self.exec_revote_action(db1)
@@ -334,7 +334,7 @@ class TDTestCase:
         # create database replica 3 vgroups 10
         db2 = 'db_2'
         create_db_replica_3_vgroups_10 = "create database {} replica 3 vgroups 10".format(db2)
-        tdLog.info('=======database {} replica 3 vgroups 10 ======'.format(db2))
+        tdLog.notice('=======database {} replica 3 vgroups 10 ======'.format(db2))
         tdSql.execute(create_db_replica_3_vgroups_10)
         self.vote_leader_time_costs(db2)
         self.exec_revote_action(db2)
@@ -342,7 +342,7 @@ class TDTestCase:
         # create database replica 3 vgroups 100
         db3 = 'db_3'
         create_db_replica_3_vgroups_100 = "create database {} replica 3 vgroups 100".format(db3)
-        tdLog.info('=======database {} replica 3 vgroups 100 ======'.format(db3))
+        tdLog.notice('=======database {} replica 3 vgroups 100 ======'.format(db3))
         tdSql.execute(create_db_replica_3_vgroups_100)
         self.vote_leader_time_costs(db3)
         self.exec_revote_action(db3)
@@ -0,0 +1,118 @@
{
    "filetype": "insert",
    "cfgdir": "/etc/taos/",
    "host": "localhost",
    "port": 6030,
    "user": "root",
    "password": "taosdata",
    "thread_count": 10,
    "create_table_thread_count": 10,
    "confirm_parameter_prompt": "no",
    "insert_interval": 0,
    "interlace_rows": 1000,
    "num_of_records_per_req": 1000,
    "databases": [
        {
            "dbinfo": {
                "name": "db_2",
                "drop": "no",
                "vgroups": 1,
                "replica": 3
            },
            "super_tables": [
                {
                    "name": "stb1",
                    "childtable_count": 10,
                    "childtable_prefix": "sub_",
                    "auto_create_table": "yes",
                    "batch_create_tbl_num": 5000,
                    "data_source": "rand",
                    "insert_mode": "taosc",
                    "insert_rows": 100000,
                    "interlace_rows": 0,
                    "insert_interval": 0,
                    "max_sql_len": 1000000,
                    "disorder_ratio": 0,
                    "disorder_range": 1000,
                    "timestamp_step": 10,
                    "start_timestamp": "2015-05-01 00:00:00.000",
                    "sample_format": "csv",
                    "use_sample_ts": "no",
                    "tags_file": "",
                    "columns": [
                        { "type": "INT", "count": 1 },
                        { "type": "TINYINT", "count": 1 },
                        { "type": "SMALLINT", "count": 1 },
                        { "type": "BIGINT", "count": 1 },
                        { "type": "UINT", "count": 1 },
                        { "type": "UTINYINT", "count": 1 },
                        { "type": "USMALLINT", "count": 1 },
                        { "type": "UBIGINT", "count": 1 },
                        { "type": "DOUBLE", "count": 1 },
                        { "type": "FLOAT", "count": 1 },
                        { "type": "BINARY", "len": 40, "count": 1 },
                        { "type": "VARCHAR", "len": 200, "count": 1 },
                        { "type": "nchar", "len": 200, "count": 1 }
                    ],
                    "tags": [
                        { "type": "INT", "count": 1 },
                        { "type": "BINARY", "len": 100, "count": 1 },
                        { "type": "BOOL", "count": 1 }
                    ]
                }
            ]
        }
    ]
}
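The profile above is a taosBenchmark insert job: it fills `db_2.stb1` with 10 child tables of 100000 rows each, using 10 insert threads. A minimal sketch of how a test might drive it, assuming the file is saved as `./insertdatas_db_2.json` (the path and filename are illustrative, not taken from this diff):

```python
import os

# Hypothetical config path; the diff does not show where this JSON file lives.
cfg = "./insertdatas_db_2.json"

# taosBenchmark reads the whole job description from the JSON passed via -f
# and blocks until every child table holds its insert_rows rows.
os.system("taosBenchmark -f {}".format(cfg))
```

The second profile below differs only in scale: single-threaded table creation and insertion into `db_1`, with 10000 rows per child table.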
@@ -0,0 +1,118 @@
{
    "filetype": "insert",
    "cfgdir": "/etc/taos/",
    "host": "localhost",
    "port": 6030,
    "user": "root",
    "password": "taosdata",
    "thread_count": 1,
    "create_table_thread_count": 1,
    "confirm_parameter_prompt": "no",
    "insert_interval": 0,
    "interlace_rows": 1000,
    "num_of_records_per_req": 1000,
    "databases": [
        {
            "dbinfo": {
                "name": "db_1",
                "drop": "no",
                "vgroups": 1,
                "replica": 3
            },
            "super_tables": [
                {
                    "name": "stb1",
                    "childtable_count": 10,
                    "childtable_prefix": "sub_",
                    "auto_create_table": "yes",
                    "batch_create_tbl_num": 5000,
                    "data_source": "rand",
                    "insert_mode": "taosc",
                    "insert_rows": 10000,
                    "interlace_rows": 0,
                    "insert_interval": 0,
                    "max_sql_len": 1000000,
                    "disorder_ratio": 0,
                    "disorder_range": 1000,
                    "timestamp_step": 10,
                    "start_timestamp": "2015-05-01 00:00:00.000",
                    "sample_format": "csv",
                    "use_sample_ts": "no",
                    "tags_file": "",
                    "columns": [
                        { "type": "INT", "count": 1 },
                        { "type": "TINYINT", "count": 1 },
                        { "type": "SMALLINT", "count": 1 },
                        { "type": "BIGINT", "count": 1 },
                        { "type": "UINT", "count": 1 },
                        { "type": "UTINYINT", "count": 1 },
                        { "type": "USMALLINT", "count": 1 },
                        { "type": "UBIGINT", "count": 1 },
                        { "type": "DOUBLE", "count": 1 },
                        { "type": "FLOAT", "count": 1 },
                        { "type": "BINARY", "len": 40, "count": 1 },
                        { "type": "VARCHAR", "len": 200, "count": 1 },
                        { "type": "nchar", "len": 200, "count": 1 }
                    ],
                    "tags": [
                        { "type": "INT", "count": 1 },
                        { "type": "BINARY", "len": 100, "count": 1 },
                        { "type": "BOOL", "count": 1 }
                    ]
                }
            ]
        }
    ]
}