Merge remote-tracking branch 'origin/3.0' into fix/tsim

commit de9b4358f1
Author: Shengliang Guan
Date: 2022-07-21 21:21:13 +08:00

53 changed files with 2054 additions and 632 deletions
View File
@@ -1,6 +1,6 @@
 ---
 sidebar_label: Install Package
-title: Install and Uninstall Using Installation Packages
+title: Get Started Using Installation Packages
 ---
 import Tabs from "@theme/Tabs";
@@ -169,72 +169,128 @@ During execution, the install.sh installation script asks for configuration through an interactive command-line interface
 :::
-## Uninstall
-<Tabs>
-<TabItem label="apt-get uninstall" value="aptremove">
-Content TBD
-</TabItem>
-<TabItem label="Deb uninstall" value="debuninst">
-The uninstall command is as follows:
-```
-$ sudo dpkg -r tdengine
-(Reading database ... 137504 files and directories currently installed.)
-Removing tdengine (2.4.0.7) ...
-TDengine is removed successfully!
-```
-</TabItem>
-<TabItem label="RPM uninstall" value="rpmuninst">
-The uninstall command is as follows:
-```
-$ sudo rpm -e tdengine
-TDengine is removed successfully!
-```
-</TabItem>
-<TabItem label="tar.gz uninstall" value="taruninst">
-The uninstall command is as follows:
-```
-$ rmtaos
-Nginx for TDengine is running, stopping it...
-TDengine is removed successfully!
-taosKeeper is removed successfully!
-```
-</TabItem>
-</Tabs>
+## Start
+After installation, start the TDengine service process with the `systemctl` command.
+```bash
+systemctl start taosd
+```
+Check whether the service is working properly:
+```bash
+systemctl status taosd
+```
+If the service process is active, the status command displays information like:
+```
+Active: active (running)
+```
+If the background service process is stopped, the status command displays information like:
+```
+Active: inactive (dead)
+```
+If the TDengine service is working properly, you can access and experience TDengine through its command-line program `taos`.
+Summary of systemctl commands:
+- Start the service process: `systemctl start taosd`
+- Stop the service process: `systemctl stop taosd`
+- Restart the service process: `systemctl restart taosd`
+- Check the service status: `systemctl status taosd`
 :::info
-- TDengine provides multiple installation packages, but it is best not to use the tar.gz package together with deb or rpm packages on the same system; otherwise they interfere with each other and cause problems at use time.
-- After installing from a deb package, if part of the installation directory is accidentally deleted by hand, uninstalling or reinstalling may fail. In that case, clear the installation information of the TDengine package by running:
-```
-$ sudo rm -f /var/lib/dpkg/info/tdengine*
-```
-Then reinstall and it will work.
-- After installing from an rpm package, if part of the installation directory is accidentally deleted by hand, uninstalling or reinstalling may fail. In that case, clear the installation information of the TDengine package by running:
-```
-$ sudo rpm -e --noscripts tdengine
-```
-Then reinstall and it will work.
+- The systemctl command requires _root_ privileges. If you are not the _root_ user, prefix the command with sudo.
+- The `systemctl stop taosd` command does not stop the TDengine service immediately; it waits until necessary disk-flush work has completed. With a large volume of data this can take a long time.
+- If the system does not support `systemd`, you can also start the TDengine service by running `/usr/local/taos/bin/taosd` manually.
 :::
+## TDengine Command Line (CLI)
+To make it easy to check the status of TDengine and run ad hoc queries against databases, TDengine provides a command-line application, `taos` (hereafter the TDengine CLI). To enter the TDengine command line, simply run `taos` in a Linux terminal where TDengine is installed.
+```bash
+taos
+```
+If the connection succeeds, a welcome message and version information are printed; otherwise an error message is printed (see the [FAQ](/train-faq/faq) for help with connection failures). The TDengine CLI prompt looks like this:
+```cmd
+taos>
+```
+In the TDengine CLI, you can use SQL commands to create and drop databases and tables, and to insert into and query databases. SQL statements run in the terminal must end with a semicolon. For example:
+```sql
+create database demo;
+use demo;
+create table t (ts timestamp, speed int);
+insert into t values ('2019-07-15 00:00:00', 10);
+insert into t values ('2019-07-15 01:00:00', 20);
+select * from t;
+           ts            | speed |
+========================================
+ 2019-07-15 00:00:00.000 |    10 |
+ 2019-07-15 01:00:00.000 |    20 |
+Query OK, 2 row(s) in set (0.003128s)
+```
+Besides executing SQL statements, system administrators can use the TDengine CLI to check system status and add or remove user accounts. The TDengine CLI, together with the client driver, can also be installed and run standalone on Linux or Windows machines; see [here](../reference/taos-shell/) for details.
+## Experience Insertion Speed with taosBenchmark
+With the TDengine service started, execute `taosBenchmark` (formerly named `taosdemo`) in a Linux terminal:
+```bash
+taosBenchmark
+```
+This command automatically creates a supertable `meters` in database `test`, with 10,000 tables named "d0" through "d9999". Each table has 10,000 records, and each record has four fields (ts, current, voltage, phase), with timestamps ranging from "2017-07-14 10:40:00 000" to "2017-07-14 10:40:09 999". Each table carries the tags `location` and `groupId`: `groupId` is set to 1 through 10 and `location` is set to "California.SanFrancisco" or "California.LosAngeles".
+This command inserts 100 million records quickly. The exact time depends on the hardware, but even on an ordinary PC server it often takes only a dozen or so seconds.
+taosBenchmark provides many options for configuring the number of tables, the number of records, and more. Run `taosBenchmark --help` for the full list, and see [How to Use taosBenchmark to Test the Performance of TDengine](https://www.taosdata.com/2021/10/09/3111.html) for detailed usage.
+## Experience Query Speed with the TDengine CLI
+After inserting data with taosBenchmark as above, you can enter query commands in the TDengine CLI to experience the query speed.
+Query the total number of records under the supertable:
+```sql
+taos> select count(*) from test.meters;
+```
+Query the average, maximum, and minimum of the 100 million records:
+```sql
+taos> select avg(current), max(voltage), min(phase) from test.meters;
+```
+Query the total number of records where location = "California.SanFrancisco":
+```sql
+taos> select count(*) from test.meters where location="California.SanFrancisco";
+```
+Query the average, maximum, and minimum of all records where groupId = 10:
+```sql
+taos> select avg(current), max(voltage), min(phase) from test.meters where groupId=10;
+```
+Compute the average, maximum, and minimum for table d10 aggregated over 10-second windows:
+```sql
+taos> select avg(current), max(voltage), min(phase) from test.d10 interval(10s);
+```
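The queries above can also be issued programmatically. Below is a minimal sketch using the TDengine C client (a hypothetical standalone snippet, not part of the documented page; it assumes the `taos` client library is installed, the default root/taosdata credentials, and the `test.meters` data created by taosBenchmark):

```c
#include <inttypes.h>
#include <stdio.h>
#include <taos.h>  // TDengine C client; link with -ltaos

int main(void) {
  TAOS *conn = taos_connect("localhost", "root", "taosdata", NULL, 0);
  if (conn == NULL) {
    printf("failed to connect to the server\n");
    return 1;
  }

  // Same statement the CLI section runs interactively.
  TAOS_RES *res = taos_query(conn, "select count(*) from test.meters");
  if (taos_errno(res) != 0) {
    printf("query failed: %s\n", taos_errstr(res));
  } else {
    TAOS_ROW row = taos_fetch_row(res);  // count(*) yields a single BIGINT
    if (row != NULL && row[0] != NULL) {
      printf("total rows: %" PRId64 "\n", *(int64_t *)row[0]);
    }
  }

  taos_free_result(res);
  taos_close(conn);
  return 0;
}
```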
View File
@@ -1,135 +0,0 @@
----
-sidebar_label: Get Started
-title: Quick Experience with TDengine
----
-import Tabs from "@theme/Tabs";
-import TabItem from "@theme/TabItem";
-import PkgInstall from "./\_pkg_install.mdx";
-import AptGetInstall from "./\_apt_get_install.mdx";
-## Start
-After installation, start the TDengine service process with the `systemctl` command.
-```bash
-systemctl start taosd
-```
-Check whether the service is working properly:
-```bash
-systemctl status taosd
-```
-If the service process is active, the status command displays information like:
-```
-Active: active (running)
-```
-If the background service process is stopped, the status command displays information like:
-```
-Active: inactive (dead)
-```
-If the TDengine service is working properly, you can access and experience TDengine through its command-line program `taos`.
-Summary of systemctl commands:
-- Start the service process: `systemctl start taosd`
-- Stop the service process: `systemctl stop taosd`
-- Restart the service process: `systemctl restart taosd`
-- Check the service status: `systemctl status taosd`
-:::info
-- The systemctl command requires _root_ privileges. If you are not the _root_ user, prefix the command with sudo.
-- The `systemctl stop taosd` command does not stop the TDengine service immediately; it waits until necessary disk-flush work has completed. With a large volume of data this can take a long time.
-- If the system does not support `systemd`, you can also start the TDengine service by running `/usr/local/taos/bin/taosd` manually.
-:::
-## TDengine Command Line (CLI)
-To make it easy to check the status of TDengine and run ad hoc queries against databases, TDengine provides a command-line application, `taos` (hereafter the TDengine CLI). To enter the TDengine command line, simply run `taos` in a Linux terminal where TDengine is installed.
-```bash
-taos
-```
-If the connection succeeds, a welcome message and version information are printed; otherwise an error message is printed (see the [FAQ](/train-faq/faq) for help with connection failures). The TDengine CLI prompt looks like this:
-```cmd
-taos>
-```
-In the TDengine CLI, you can use SQL commands to create and drop databases and tables, and to insert into and query databases. SQL statements run in the terminal must end with a semicolon. For example:
-```sql
-create database demo;
-use demo;
-create table t (ts timestamp, speed int);
-insert into t values ('2019-07-15 00:00:00', 10);
-insert into t values ('2019-07-15 01:00:00', 20);
-select * from t;
-           ts            | speed |
-========================================
- 2019-07-15 00:00:00.000 |    10 |
- 2019-07-15 01:00:00.000 |    20 |
-Query OK, 2 row(s) in set (0.003128s)
-```
-Besides executing SQL statements, system administrators can use the TDengine CLI to check system status and add or remove user accounts. The TDengine CLI, together with the client driver, can also be installed and run standalone on Linux or Windows machines; see [here](../reference/taos-shell/) for details.
-## Experience Insertion Speed with taosBenchmark
-With the TDengine service started, execute `taosBenchmark` (formerly named `taosdemo`) in a Linux terminal:
-```bash
-taosBenchmark
-```
-This command automatically creates a supertable `meters` in database `test`, with 10,000 tables named "d0" through "d9999". Each table has 10,000 records, and each record has four fields (ts, current, voltage, phase), with timestamps ranging from "2017-07-14 10:40:00 000" to "2017-07-14 10:40:09 999". Each table carries the tags `location` and `groupId`: `groupId` is set to 1 through 10 and `location` is set to "California.SanFrancisco" or "California.LosAngeles".
-This command inserts 100 million records quickly. The exact time depends on the hardware, but even on an ordinary PC server it often takes only a dozen or so seconds.
-taosBenchmark provides many options for configuring the number of tables, the number of records, and more. Run `taosBenchmark --help` for the full list, and see [How to Use taosBenchmark to Test the Performance of TDengine](https://www.taosdata.com/2021/10/09/3111.html) for detailed usage.
-## Experience Query Speed with the TDengine CLI
-After inserting data with taosBenchmark as above, you can enter query commands in the TDengine CLI to experience the query speed.
-Query the total number of records under the supertable:
-```sql
-taos> select count(*) from test.meters;
-```
-Query the average, maximum, and minimum of the 100 million records:
-```sql
-taos> select avg(current), max(voltage), min(phase) from test.meters;
-```
-Query the total number of records where location = "California.SanFrancisco":
-```sql
-taos> select count(*) from test.meters where location="California.SanFrancisco";
-```
-Query the average, maximum, and minimum of all records where groupId = 10:
-```sql
-taos> select avg(current), max(voltage), min(phase) from test.meters where groupId=10;
-```
-Compute the average, maximum, and minimum for table d10 aggregated over 10-second windows:
-```sql
-taos> select avg(current), max(voltage), min(phase) from test.d10 interval(10s);
-```
View File
@@ -55,6 +55,8 @@ fqdn h1.taosdata.com
 // Configure the port of this data node; the default is 6030
 serverPort 6030
+```
+
 The parameters that must be modified are firstEp and fqdn. firstEp must be configured identically on every data node, while fqdn must be set to the value of the data node it resides on. The other parameters need not be modified at all, unless you know exactly why they should be.
 For a data node (dnode) to join the cluster, the cluster-related parameters listed in the table below must be exactly identical; otherwise it cannot join the cluster.
@@ -68,12 +70,9 @@ serverPort 6030
 ## Start the Cluster
-### Start the first data node
 Following the steps in "Get Started", start the first data node, for example h1.taosdata.com. Then run taos to start the taos shell, and execute the command "SHOW DNODES" from the shell, as shown below:
 ```
 Welcome to the TDengine shell from Linux, Client Version:3.0.0.0
 Copyright (c) 2022 by TAOS Data, Inc. All rights reserved.
@@ -85,15 +84,12 @@ id | endpoint | vnodes | support_vnodes | status | create_time | note |
 1 | h1.taosdata.com:6030 | 0 | 1024 | ready | 2022-07-16 10:50:42.673 | |
 Query OK, 1 rows affected (0.007984s)
 taos>
-
-taos>
-````
+```
 In the output above, you can see that the End Point of the just-started data node is h1.taos.com:6030, which is the firstEp of this new cluster.
-### Add data nodes
+## Add Data Nodes
 To add subsequent data nodes to the existing cluster, take the following steps:
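The same check can be scripted. A minimal sketch with the C client (hypothetical helper, assuming default credentials and the client library's `taos_print_row`) that prints the dnode list shown above:

```c
#include <stdio.h>
#include <taos.h>  // TDengine C client; link with -ltaos

// Prints each row of "show dnodes", e.g.
// "1 h1.taosdata.com:6030 0 1024 ready 2022-07-16 10:50:42.673"
static void showDnodes(const char *firstEp) {
  TAOS *conn = taos_connect(firstEp, "root", "taosdata", NULL, 0);
  if (conn == NULL) return;

  TAOS_RES *res = taos_query(conn, "show dnodes");
  if (taos_errno(res) == 0) {
    int         nFields = taos_num_fields(res);
    TAOS_FIELD *fields = taos_fetch_fields(res);
    TAOS_ROW    row;
    char        line[1024];
    while ((row = taos_fetch_row(res)) != NULL) {
      taos_print_row(line, row, fields, nFields);
      printf("%s\n", line);
    }
  }
  taos_free_result(res);
  taos_close(conn);
}
```

Until more nodes are added, running it against h1.taosdata.com should list only the single ready dnode.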
View File
@@ -8,9 +8,11 @@ import TabItem from "@theme/TabItem";
 This section goes deeper into installation and uninstallation, along with notes on upgrading.
-## Install and Uninstall
+## Install
-For installation and uninstallation, see [Install and Uninstall](../get-started/package).
+For installation, see [Get Started Using Installation Packages](../get-started/package).
 ## Installation Directory Notes
@@ -40,6 +42,76 @@ lrwxrwxrwx 1 root root 13 Feb 22 09:34 log -> /var/log/taos/
 - the dynamic library files under /usr/local/taos/driver are soft-linked into /usr/lib;
 - the header files under /usr/local/taos/include are soft-linked into /usr/include;
+## Uninstall
+<Tabs>
+<TabItem label="apt-get uninstall" value="aptremove">
+Content TBD
+</TabItem>
+<TabItem label="Deb uninstall" value="debuninst">
+The uninstall command is as follows:
+```
+$ sudo dpkg -r tdengine
+(Reading database ... 137504 files and directories currently installed.)
+Removing tdengine (2.4.0.7) ...
+TDengine is removed successfully!
+```
+</TabItem>
+<TabItem label="RPM uninstall" value="rpmuninst">
+The uninstall command is as follows:
+```
+$ sudo rpm -e tdengine
+TDengine is removed successfully!
+```
+</TabItem>
+<TabItem label="tar.gz uninstall" value="taruninst">
+The uninstall command is as follows:
+```
+$ rmtaos
+Nginx for TDengine is running, stopping it...
+TDengine is removed successfully!
+taosKeeper is removed successfully!
+```
+</TabItem>
+</Tabs>
+:::info
+- TDengine provides multiple installation packages, but it is best not to use the tar.gz package together with deb or rpm packages on the same system; otherwise they interfere with each other and cause problems at use time.
+- After installing from a deb package, if part of the installation directory is accidentally deleted by hand, uninstalling or reinstalling may fail. In that case, clear the installation information of the TDengine package by running:
+```
+$ sudo rm -f /var/lib/dpkg/info/tdengine*
+```
+Then reinstall and it will work.
+- After installing from an rpm package, if part of the installation directory is accidentally deleted by hand, uninstalling or reinstalling may fail. In that case, clear the installation information of the TDengine package by running:
+```
+$ sudo rpm -e --noscripts tdengine
+```
+Then reinstall and it will work.
+:::
 ## Notes on Uninstall and Update Files
 When the package is uninstalled, the configuration file, database files, and log files are preserved, i.e. /etc/taos/taos.cfg, /var/lib/taos, and /var/log/taos. If you are sure you do not need to keep them, you can delete them manually, but be very careful: once deleted, the data is permanently lost and cannot be recovered!
@@ -64,4 +136,4 @@ lrwxrwxrwx 1 root root 13 Feb 22 09:34 log -> /var/log/taos/
 :::warning
 TDengine does not guarantee that a lower version can be compatible with the data of a higher version, so downgrading is never recommended.
 :::
View File
@@ -103,6 +103,7 @@ typedef struct SDataBlockInfo {
   int16_t     hasVarCol;
   uint32_t    capacity;
   // TODO: optimize and remove following
+  int64_t     version;    // used for stream, and need serialization
   int32_t     childId;    // used for stream, do not serialize
   EStreamType type;       // used for stream, do not serialize
   STimeWindow calWin;     // used for stream, do not serialize
View File
@@ -438,7 +438,7 @@ static FORCE_INLINE int32_t tDecodeSSchemaWrapperEx(SDecoder* pDecoder, SSchemaW
   return 0;
 }
-STSchema* tdGetSTSChemaFromSSChema(SSchema** pSchema, int32_t nCols);
+STSchema* tdGetSTSChemaFromSSChema(SSchema* pSchema, int32_t nCols, int32_t sver);
 typedef struct {
   char name[TSDB_TABLE_FNAME_LEN];
@@ -1359,6 +1359,7 @@
   int32_t numOfCols;
   int64_t skey;
   int64_t ekey;
+  int64_t version;  // for stream
   char    data[];
 } SRetrieveTableRsp;
View File
@@ -64,7 +64,7 @@ qTaskInfo_t qCreateStreamExecTaskInfo(void* msg, SReadHandle* readers);
  * @param SReadHandle
  * @return
  */
-qTaskInfo_t qCreateQueueExecTaskInfo(void* msg, SReadHandle* readers, int32_t* numOfCols);
+qTaskInfo_t qCreateQueueExecTaskInfo(void* msg, SReadHandle* readers, int32_t* numOfCols, SSchemaWrapper** pSchemaWrapper);
 /**
  * Set the input data block for the stream scan.
View File
@@ -142,6 +142,7 @@ static FORCE_INLINE void* streamQueueNextItem(SStreamQueue* queue) {
     ASSERT(queue->qItem != NULL);
     return streamQueueCurItem(queue);
   } else {
+    queue->qItem = NULL;
     taosGetQitem(queue->qall, &queue->qItem);
     if (queue->qItem == NULL) {
       taosReadAllQitems(queue->queue, queue->qall);
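The one-line addition guards against handing out a stale pointer. A condensed sketch of the fixed path, on the assumption (suggested by the fix itself) that taosGetQitem can return without writing its output parameter when nothing is buffered:

```c
// Condensed from streamQueueNextItem after this change. Before the fix,
// queue->qItem could still point at the previously consumed item here; if
// taosGetQitem() left the output untouched, that stale pointer would be
// returned to the caller and could be processed (and freed) a second time.
static void* nextItemFixed(SStreamQueue* queue) {
  queue->qItem = NULL;  // the added line: forget the consumed item first
  taosGetQitem(queue->qall, &queue->qItem);
  if (queue->qItem == NULL) {  // now reliably means "nothing buffered"
    taosReadAllQitems(queue->queue, queue->qall);
    taosGetQitem(queue->qall, &queue->qItem);
  }
  return streamQueueCurItem(queue);
}
```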
View File
@@ -826,7 +826,7 @@ TEST(testCase, update_test) {
   TAOS* pConn = taos_connect("localhost", "root", "taosdata", NULL, 0);
   ASSERT_NE(pConn, nullptr);
-  TAOS_RES* pRes = taos_query(pConn, "create database if not exists abc1");
+  TAOS_RES* pRes = taos_query(pConn, "select cast(0 as timestamp)-1y");
   if (taos_errno(pRes) != TSDB_CODE_SUCCESS) {
     printf("failed to create database, code:%s", taos_errstr(pRes));
     taos_free_result(pRes);
View File
@@ -1163,9 +1163,15 @@ static int32_t doEnsureCapacity(SColumnInfoData* pColumn, const SDataBlockInfo*
 void colInfoDataCleanup(SColumnInfoData* pColumn, uint32_t numOfRows) {
   if (IS_VAR_DATA_TYPE(pColumn->info.type)) {
     pColumn->varmeta.length = 0;
+    if (pColumn->varmeta.offset > 0) {
+      memset(pColumn->varmeta.offset, 0, sizeof(int32_t) * numOfRows);
+    }
   } else {
     if (pColumn->nullbitmap != NULL) {
       memset(pColumn->nullbitmap, 0, BitmapLen(numOfRows));
+      if (pColumn->pData != NULL) {
+        memset(pColumn->pData, 0, pColumn->info.bytes * numOfRows);
+      }
     }
   }
 }
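The extra memset calls matter when a block is recycled across batches. A hypothetical reuse loop they protect (sketch only, assuming the internal TDengine headers that declare SSDataBlock and SColumnInfoData):

```c
// One SSDataBlock serves many batches. Without zeroing varmeta.offset
// (var-length columns) and pData (fixed-length columns), cells that the
// next batch never writes could still expose values from the previous one.
static void recycleBlock(SSDataBlock* pBlock, int32_t numOfBatches) {
  for (int32_t batch = 0; batch < numOfBatches; ++batch) {
    int32_t numOfCols = taosArrayGetSize(pBlock->pDataBlock);
    for (int32_t i = 0; i < numOfCols; ++i) {
      SColumnInfoData* pCol = taosArrayGet(pBlock->pDataBlock, i);
      colInfoDataCleanup(pCol, pBlock->info.capacity);  // now scrubs old cells
    }
    pBlock->info.rows = 0;
    // ... append this batch's rows ...
  }
}
```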
View File
@@ -4941,14 +4941,14 @@ int tDecodeSVCreateStbReq(SDecoder *pCoder, SVCreateStbReq *pReq) {
   return 0;
 }
-STSchema *tdGetSTSChemaFromSSChema(SSchema **pSchema, int32_t nCols) {
+STSchema *tdGetSTSChemaFromSSChema(SSchema *pSchema, int32_t nCols, int32_t sver) {
   STSchemaBuilder schemaBuilder = {0};
-  if (tdInitTSchemaBuilder(&schemaBuilder, 1) < 0) {
+  if (tdInitTSchemaBuilder(&schemaBuilder, sver) < 0) {
     return NULL;
   }
   for (int i = 0; i < nCols; i++) {
-    SSchema *schema = *pSchema + i;
+    SSchema *schema = pSchema + i;
     if (tdAddColToSchema(&schemaBuilder, schema->type, schema->flags, schema->colId, schema->bytes) < 0) {
       tdDestroyTSchemaBuilder(&schemaBuilder);
       return NULL;
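The added sver parameter stamps the produced STSchema with a version, which is what lets callers cache the schema and invalidate it when the table is altered. A small sketch of that pattern (hypothetical cache, mirroring the version check that tdBlockRowMerge performs later in this commit; assumes the internal headers):

```c
// Hypothetical cache keyed by schema version. tdBlockRowMerge in this same
// commit performs an equivalent pSchema->version vs. sversion comparison.
static STSchema *cachedSchema = NULL;

static STSchema *getRowSchema(SSchema *pCols, int32_t nCols, int32_t sver) {
  if (cachedSchema != NULL && cachedSchema->version != sver) {
    taosMemoryFreeClear(cachedSchema);  // table was altered: rebuild
  }
  if (cachedSchema == NULL) {
    // New signature: schema array passed directly, plus its version.
    cachedSchema = tdGetSTSChemaFromSSChema(pCols, nCols, sver);
  }
  return cachedSchema;
}
```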
View File
@@ -568,6 +568,7 @@ int32_t tdSTSRowNew(SArray *pArray, STSchema *pTSchema, STSRow **ppRow) {
   int32_t maxVarDataLen = 0;
   int32_t iColVal = 0;
   void   *varBuf = NULL;
+  bool    isAlloc = false;
   ASSERT(nColVal > 1);
@@ -610,8 +611,11 @@ int32_t tdSTSRowNew(SArray *pArray, STSchema *pTSchema, STSRow **ppRow) {
     ++iColVal;
   }
-  *ppRow = (STSRow *)taosMemoryCalloc(
-      1, sizeof(STSRow) + pTSchema->flen + varDataLen + TD_BITMAP_BYTES(pTSchema->numOfCols - 1));
+  if (!(*ppRow)) {
+    *ppRow = (STSRow *)taosMemoryCalloc(
+        1, sizeof(STSRow) + pTSchema->flen + varDataLen + TD_BITMAP_BYTES(pTSchema->numOfCols - 1));
+    isAlloc = true;
+  }
   if (!(*ppRow)) {
     terrno = TSDB_CODE_OUT_OF_MEMORY;
@@ -621,7 +625,9 @@ int32_t tdSTSRowNew(SArray *pArray, STSchema *pTSchema, STSRow **ppRow) {
   if (maxVarDataLen > 0) {
     varBuf = taosMemoryMalloc(maxVarDataLen);
     if (!varBuf) {
-      taosMemoryFreeClear(*ppRow);
+      if (isAlloc) {
+        taosMemoryFreeClear(*ppRow);
+      }
       terrno = TSDB_CODE_OUT_OF_MEMORY;
       return -1;
     }
@@ -1323,12 +1329,11 @@ void tTSRowGetVal(STSRow *pRow, STSchema *pTSchema, int16_t iCol, SColVal *pColV
   SCellVal cv;
   SValue   value;
-  ASSERT(iCol > 0);
+  ASSERT((pTColumn->colId == PRIMARYKEY_TIMESTAMP_COL_ID) || (iCol > 0));
   if (TD_IS_TP_ROW(pRow)) {
     tdSTpRowGetVal(pRow, pTColumn->colId, pTColumn->type, pTSchema->flen, pTColumn->offset, iCol - 1, &cv);
   } else if (TD_IS_KV_ROW(pRow)) {
-    ASSERT(iCol > 0);
     tdSKvRowGetVal(pRow, pTColumn->colId, iCol - 1, &cv);
   } else {
     ASSERT(0);
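With this change tdSTSRowNew only allocates when *ppRow is NULL, so callers can hand in a buffer to reuse. A sketch of the new contract (hypothetical caller; assumes the internal headers and that the reused buffer is large enough for the second row):

```c
// Hypothetical caller showing the reuse contract of tdSTSRowNew.
static int32_t buildTwoRows(SArray *pColVals1, SArray *pColVals2, STSchema *pTSchema) {
  STSRow *pRow = NULL;

  // First call: *ppRow is NULL, so the function calloc()s the row itself
  // and, because isAlloc is true, frees it on its own error paths.
  if (tdSTSRowNew(pColVals1, pTSchema, &pRow) < 0) return -1;

  // Second call reuses the same buffer: no allocation happens, and on
  // failure the caller-owned buffer is left intact (isAlloc is false).
  if (tdSTSRowNew(pColVals2, pTSchema, &pRow) < 0) {
    taosMemoryFree(pRow);
    return -1;
  }

  taosMemoryFree(pRow);  // caller frees exactly once
  return 0;
}
```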
View File
@@ -116,7 +116,7 @@ STSchema *genSTSchema(int16_t nCols) {
   }
   STSchema *pResult = NULL;
-  pResult = tdGetSTSChemaFromSSChema(&pSchema, nCols);
+  pResult = tdGetSTSChemaFromSSChema(pSchema, nCols, 1);
   taosMemoryFree(pSchema);
   return pResult;
View File
@@ -868,7 +868,10 @@ int32_t mndDropSubByTopic(SMnode *pMnode, STrans *pTrans, const char *topicName)
   }
   // iter all vnode to delete handle
-  ASSERT(taosHashGetSize(pSub->consumerHash) == 0);
+  if (taosHashGetSize(pSub->consumerHash) != 0) {
+    sdbRelease(pSdb, pSub);
+    return -1;
+  }
   int32_t sz = taosArrayGetSize(pSub->unassignedVgs);
   for (int32_t i = 0; i < sz; i++) {
     SMqVgEp *pVgEp = taosArrayGetP(pSub->unassignedVgs, i);
View File
@@ -583,6 +583,7 @@ static int32_t mndProcessDropTopicReq(SRpcMsg *pReq) {
   mndTransSetDbName(pTrans, pTopic->db, NULL);
   if (pTrans == NULL) {
     mError("topic:%s, failed to drop since %s", pTopic->name, terrstr());
+    mndReleaseTopic(pMnode, pTopic);
     return -1;
   }
@@ -590,11 +591,17 @@ static int32_t mndProcessDropTopicReq(SRpcMsg *pReq) {
   if (mndDropOffsetByTopic(pMnode, pTrans, dropReq.name) < 0) {
     ASSERT(0);
+    mndTransDrop(pTrans);
+    mndReleaseTopic(pMnode, pTopic);
     return -1;
   }
+  // TODO check if rebalancing
   if (mndDropSubByTopic(pMnode, pTrans, dropReq.name) < 0) {
-    ASSERT(0);
+    /*ASSERT(0);*/
+    mError("topic:%s, failed to drop since %s", pTopic->name, terrstr());
+    mndTransDrop(pTrans);
+    mndReleaseTopic(pMnode, pTopic);
     return -1;
   }
View File
@@ -88,7 +88,8 @@ typedef struct {
     STqExecTb execTb;
     STqExecDb execDb;
   };
   int32_t         numOfCols;       // number of output columns, temporarily used
+  SSchemaWrapper* pSchemaWrapper;  // columns that are involved in query
 } STqExecHandle;
 typedef struct {
View File
@@ -579,9 +579,11 @@ static int32_t tdRSmaFetchAndSubmitResult(SRSmaInfoItem *pItem, STSchema *pTSche
   while (1) {
     SSDataBlock *output = NULL;
     uint64_t     ts;
-    if (qExecTask(pItem->taskInfo, &output, &ts) < 0) {
+
+    int32_t code = qExecTask(pItem->taskInfo, &output, &ts);
+    if (code < 0) {
       smaError("vgId:%d, qExecTask for rsma table %" PRIi64 " level %" PRIi8 " failed since %s", SMA_VID(pSma), suid,
-               pItem->level, terrstr());
+               pItem->level, terrstr(code));
       goto _err;
     }
     if (!output) {
@@ -597,35 +599,36 @@ static int32_t tdRSmaFetchAndSubmitResult(SRSmaInfoItem *pItem, STSchema *pTSche
     }
     taosArrayPush(pResult, output);
-
-    if (taosArrayGetSize(pResult) > 0) {
-#if 1
-      char flag[10] = {0};
-      snprintf(flag, 10, "level %" PRIi8, pItem->level);
-      blockDebugShowDataBlocks(pResult, flag);
-#endif
-      STsdb      *sinkTsdb = (pItem->level == TSDB_RETENTION_L1 ? pSma->pRSmaTsdb[0] : pSma->pRSmaTsdb[1]);
-      SSubmitReq *pReq = NULL;
-      // TODO: the schema update should be handled
-      if (buildSubmitReqFromDataBlock(&pReq, pResult, pTSchema, SMA_VID(pSma), suid) < 0) {
-        smaError("vgId:%d, build submit req for rsma table %" PRIi64 " level %" PRIi8 " failed since %s", SMA_VID(pSma),
-                 suid, pItem->level, terrstr());
-        goto _err;
-      }
-      if (pReq && tdProcessSubmitReq(sinkTsdb, atomic_add_fetch_64(&pStat->submitVer, 1), pReq) < 0) {
-        taosMemoryFreeClear(pReq);
-        smaError("vgId:%d, process submit req for rsma table %" PRIi64 " level %" PRIi8 " failed since %s", SMA_VID(pSma),
-                 suid, pItem->level, terrstr());
-        goto _err;
-      }
-      taosMemoryFreeClear(pReq);
-    } else if (terrno == 0) {
-      smaDebug("vgId:%d, no rsma %" PRIi8 " data fetched yet", SMA_VID(pSma), pItem->level);
-    } else {
-      smaDebug("vgId:%d, no rsma %" PRIi8 " data fetched since %s", SMA_VID(pSma), pItem->level, tstrerror(terrno));
-    }
+  }
+
+  if (taosArrayGetSize(pResult) > 0) {
+#if 1
+    char flag[10] = {0};
+    snprintf(flag, 10, "level %" PRIi8, pItem->level);
+    blockDebugShowDataBlocks(pResult, flag);
+#endif
+    STsdb      *sinkTsdb = (pItem->level == TSDB_RETENTION_L1 ? pSma->pRSmaTsdb[0] : pSma->pRSmaTsdb[1]);
+    SSubmitReq *pReq = NULL;
+    // TODO: the schema update should be handled
+    if (buildSubmitReqFromDataBlock(&pReq, pResult, pTSchema, SMA_VID(pSma), suid) < 0) {
+      smaError("vgId:%d, build submit req for rsma table %" PRIi64 " level %" PRIi8 " failed since %s", SMA_VID(pSma),
+               suid, pItem->level, terrstr());
+      goto _err;
+    }
+    if (pReq && tdProcessSubmitReq(sinkTsdb, atomic_add_fetch_64(&pStat->submitVer, 1), pReq) < 0) {
+      taosMemoryFreeClear(pReq);
+      smaError("vgId:%d, process submit req for rsma table %" PRIi64 " level %" PRIi8 " failed since %s",
+               SMA_VID(pSma), suid, pItem->level, terrstr());
+      goto _err;
+    }
+    taosMemoryFreeClear(pReq);
+    taosArrayClear(pResult);
+  } else if (terrno == 0) {
+    smaDebug("vgId:%d, no rsma %" PRIi8 " data fetched yet", SMA_VID(pSma), pItem->level);
+  } else {
+    smaDebug("vgId:%d, no rsma %" PRIi8 " data fetched since %s", SMA_VID(pSma), pItem->level, tstrerror(terrno));
   }
   tdDestroySDataBlockArray(pResult);
@@ -1364,4 +1367,4 @@ static void tdRSmaFetchTrigger(void *param, void *tmrId) {
 _end:
   tdReleaseSmaRef(smaMgmt.rsetId, pItem->refId, __func__, __LINE__);
 }
View File
@@ -526,8 +526,8 @@ int32_t tqProcessVgChangeReq(STQ* pTq, char* msg, int32_t msgLen) {
       .initTqReader = true,
       .version = ver,
     };
-    pHandle->execHandle.execCol.task[i] =
-        qCreateQueueExecTaskInfo(pHandle->execHandle.execCol.qmsg, &handle, &pHandle->execHandle.numOfCols);
+    pHandle->execHandle.execCol.task[i] = qCreateQueueExecTaskInfo(pHandle->execHandle.execCol.qmsg, &handle,
+        &pHandle->execHandle.numOfCols, &pHandle->execHandle.pSchemaWrapper);
     ASSERT(pHandle->execHandle.execCol.task[i]);
     void* scanner = NULL;
     qExtractStreamScanner(pHandle->execHandle.execCol.task[i], &scanner);
@@ -634,7 +634,7 @@ int32_t tqProcessTaskDeployReq(STQ* pTq, char* msg, int32_t msgLen) {
     ASSERT(pTask->tbSink.pSchemaWrapper->pSchema);
     pTask->tbSink.pTSchema =
-        tdGetSTSChemaFromSSChema(&pTask->tbSink.pSchemaWrapper->pSchema, pTask->tbSink.pSchemaWrapper->nCols);
+        tdGetSTSChemaFromSSChema(pTask->tbSink.pSchemaWrapper->pSchema, pTask->tbSink.pSchemaWrapper->nCols, 1);
     ASSERT(pTask->tbSink.pTSchema);
   }
View File
@@ -93,7 +93,8 @@ int32_t tqMetaOpen(STQ* pTq) {
         .version = handle.snapshotVer,
       };
-      handle.execHandle.execCol.task[i] = qCreateQueueExecTaskInfo(handle.execHandle.execCol.qmsg, &reader, &handle.execHandle.numOfCols);
+      handle.execHandle.execCol.task[i] = qCreateQueueExecTaskInfo(handle.execHandle.execCol.qmsg, &reader,
+          &handle.execHandle.numOfCols, &handle.execHandle.pSchemaWrapper);
       ASSERT(handle.execHandle.execCol.task[i]);
       void* scanner = NULL;
       qExtractStreamScanner(handle.execHandle.execCol.task[i], &scanner);
View File
@@ -249,6 +249,8 @@ int tqPushMsg(STQ* pTq, void* msg, int32_t msgLen, tmsg_t msgType, int64_t ver)
       return -1;
     }
     memcpy(data, msg, msgLen);
+    SSubmitReq* pReq = (SSubmitReq*)data;
+    pReq->version = ver;
     tqProcessStreamTrigger(pTq, data);
   }
View File
@@ -314,6 +314,7 @@ int32_t tqRetrieveDataBlock(SSDataBlock* pBlock, STqReader* pReader) {
   pBlock->info.uid = pReader->msgIter.uid;
   pBlock->info.rows = pReader->msgIter.numOfRows;
+  pBlock->info.version = pReader->pMsg->version;
   while ((row = tGetSubmitBlkNext(&pReader->blkIter)) != NULL) {
     tdSTSRowIterReset(&iter, row);
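Together with the tqPushMsg and SRetrieveTableRsp hunks elsewhere in this commit, this line completes the path that threads the WAL version through the stream pipeline. The three stages, condensed from the diff (surrounding code elided):

```c
// 1. Ingest (tqPushMsg): stamp the WAL version onto the submit message.
SSubmitReq* pReq = (SSubmitReq*)data;
pReq->version = ver;

// 2. Scan (tqRetrieveDataBlock): copy it onto the in-memory data block.
pBlock->info.version = pReader->pMsg->version;

// 3. Dispatch (streamDispatchReqToData): the version travels big-endian in
//    SRetrieveTableRsp and is restored on the receiving side.
pDataBlock->info.version = be64toh(pRetrieve->version);
```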
View File
@@ -1943,17 +1943,20 @@ int32_t initDelSkylineIterator(STableBlockScanInfo* pBlockScanInfo, STsdbReader*
   if (pDelFile) {
     SDelFReader* pDelFReader = NULL;
     code = tsdbDelFReaderOpen(&pDelFReader, pDelFile, pTsdb, NULL);
-    if (code) {
+    if (code != TSDB_CODE_SUCCESS) {
       goto _err;
     }
     SArray* aDelIdx = taosArrayInit(4, sizeof(SDelIdx));
     if (aDelIdx == NULL) {
+      tsdbDelFReaderClose(&pDelFReader);
       goto _err;
     }
     code = tsdbReadDelIdx(pDelFReader, aDelIdx, NULL);
-    if (code) {
+    if (code != TSDB_CODE_SUCCESS) {
+      taosArrayDestroy(aDelIdx);
+      tsdbDelFReaderClose(&pDelFReader);
       goto _err;
     }
@@ -1962,9 +1965,13 @@ int32_t initDelSkylineIterator(STableBlockScanInfo* pBlockScanInfo, STsdbReader*
     if (pIdx != NULL) {
       code = tsdbReadDelData(pDelFReader, pIdx, pDelData, NULL);
-      if (code != TSDB_CODE_SUCCESS) {
-        goto _err;
-      }
+    }
+
+    taosArrayDestroy(aDelIdx);
+    tsdbDelFReaderClose(&pDelFReader);
+
+    if (code != TSDB_CODE_SUCCESS) {
+      goto _err;
     }
   }
@@ -2532,8 +2539,7 @@ static int32_t checkForNeighborFileBlock(STsdbReader* pReader, STableBlockScanIn
     pDumpInfo->rowIndex =
         doMergeRowsInFileBlockImpl(pBlockData, pDumpInfo->rowIndex, key, pMerger, &pReader->verRange, step);
-    if (pDumpInfo->rowIndex >= pDumpInfo->totalRows) {
+    if (pDumpInfo->rowIndex >= pBlock->nRow) {
       *state = CHECK_FILEBLOCK_CONT;
     }
   }
View File
@@ -164,6 +164,7 @@ typedef struct {
   char*           dbname;
   int32_t         tversion;
   SSchemaWrapper* sw;
+  SSchemaWrapper* qsw;
 } SSchemaInfo;
 typedef struct SExecTaskInfo {
@@ -868,7 +869,7 @@ SOperatorInfo* createDataBlockInfoScanOperator(void* dataReader, SReadHandle* re
                                                SExecTaskInfo* pTaskInfo);
 SOperatorInfo* createStreamScanOperatorInfo(SReadHandle* pHandle, STableScanPhysiNode* pTableScanNode, SNode* pTagCond,
-                                            SExecTaskInfo* pTaskInfo, STimeWindowAggSupp* pTwSup);
+                                            SExecTaskInfo* pTaskInfo);
 SOperatorInfo* createFillOperatorInfo(SOperatorInfo* downstream, SFillPhysiNode* pPhyFillNode, SExecTaskInfo* pTaskInfo);
View File
@@ -120,7 +120,7 @@ int32_t qSetMultiStreamInput(qTaskInfo_t tinfo, const void* pBlocks, size_t numO
   return code;
 }
-qTaskInfo_t qCreateQueueExecTaskInfo(void* msg, SReadHandle* readers, int32_t* numOfCols) {
+qTaskInfo_t qCreateQueueExecTaskInfo(void* msg, SReadHandle* readers, int32_t* numOfCols, SSchemaWrapper** pSchemaWrapper) {
   if (msg == NULL) {
     // TODO create raw scan
     return NULL;
@@ -154,6 +154,7 @@ qTaskInfo_t qCreateQueueExecTaskInfo(void* msg, SReadHandle* readers, int32_t* n
     }
   }
+  *pSchemaWrapper = tCloneSSchemaWrapper(((SExecTaskInfo*)pTaskInfo)->schemaInfo.qsw);
   return pTaskInfo;
 }
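The queue task now hands its queried-column schema back to the caller as a clone. A minimal caller sketch under the new signature (hypothetical names, mirroring the tq call sites in this commit and assuming the internal executor headers):

```c
// The new fourth argument receives a clone of the task's queried-column
// schema; the caller owns it and must release it with tDeleteSSchemaWrapper.
static void runQueueTask(void* qmsg, SReadHandle* pReader) {
  int32_t         numOfCols = 0;
  SSchemaWrapper* pSchemaWrapper = NULL;

  qTaskInfo_t task = qCreateQueueExecTaskInfo(qmsg, pReader, &numOfCols, &pSchemaWrapper);
  if (task == NULL) {
    return;  // nothing allocated on failure
  }

  // ... execute the task, using pSchemaWrapper to interpret output columns ...

  qDestroyTask(task);                     // now tolerates NULL handles too
  tDeleteSSchemaWrapper(pSchemaWrapper);  // release the clone when done
}
```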
View File
@@ -199,6 +199,10 @@ int32_t qAsyncKillTask(qTaskInfo_t qinfo) {
 void qDestroyTask(qTaskInfo_t qTaskHandle) {
   SExecTaskInfo* pTaskInfo = (SExecTaskInfo*)qTaskHandle;
+  if (pTaskInfo == NULL) {
+    return;
+  }
   qDebug("%s execTask completed, numOfRows:%" PRId64, GET_TASKID(pTaskInfo), pTaskInfo->pRoot->resultInfo.totalRows);
   queryCostStatis(pTaskInfo);  // print the query cost summary
View File
@@ -1647,11 +1647,6 @@ static int32_t compressQueryColData(SColumnInfoData* pColRes, int32_t numOfRows,
                            colSize + COMP_OVERFLOW_BYTES, compressed, NULL, 0);
 }
-int32_t doFillTimeIntervalGapsInResults(struct SFillInfo* pFillInfo, SSDataBlock* pBlock, int32_t capacity) {
-  int32_t numOfRows = (int32_t)taosFillResultDataBlock(pFillInfo, pBlock, capacity - pBlock->info.rows);
-  return pBlock->info.rows;
-}
-
 void queryCostStatis(SExecTaskInfo* pTaskInfo) {
   STaskCostInfo* pSummary = &pTaskInfo->cost;
@@ -4147,35 +4142,62 @@ static STsdbReader* doCreateDataReader(STableScanPhysiNode* pTableScanNode, SRea
 static SArray* extractColumnInfo(SNodeList* pNodeList);
-int32_t extractTableSchemaInfo(SReadHandle* pHandle, uint64_t uid, SExecTaskInfo* pTaskInfo) {
+SSchemaWrapper* extractQueriedColumnSchema(SScanPhysiNode* pScanNode);
+
+int32_t extractTableSchemaInfo(SReadHandle* pHandle, SScanPhysiNode* pScanNode, SExecTaskInfo* pTaskInfo) {
   SMetaReader mr = {0};
   metaReaderInit(&mr, pHandle->meta, 0);
-  int32_t code = metaGetTableEntryByUid(&mr, uid);
+  int32_t code = metaGetTableEntryByUid(&mr, pScanNode->uid);
   if (code != TSDB_CODE_SUCCESS) {
+    qError("failed to get the table meta, uid:0x%" PRIx64 ", suid:0x%" PRIx64 ", %s", pScanNode->uid, pScanNode->suid,
+           GET_TASKID(pTaskInfo));
     metaReaderClear(&mr);
     return terrno;
   }
-  pTaskInfo->schemaInfo.tablename = strdup(mr.me.name);
+  SSchemaInfo* pSchemaInfo = &pTaskInfo->schemaInfo;
+  pSchemaInfo->tablename = strdup(mr.me.name);
   if (mr.me.type == TSDB_SUPER_TABLE) {
-    pTaskInfo->schemaInfo.sw = tCloneSSchemaWrapper(&mr.me.stbEntry.schemaRow);
-    pTaskInfo->schemaInfo.tversion = mr.me.stbEntry.schemaTag.version;
+    pSchemaInfo->sw = tCloneSSchemaWrapper(&mr.me.stbEntry.schemaRow);
+    pSchemaInfo->tversion = mr.me.stbEntry.schemaTag.version;
   } else if (mr.me.type == TSDB_CHILD_TABLE) {
     tDecoderClear(&mr.coder);
     tb_uid_t suid = mr.me.ctbEntry.suid;
     metaGetTableEntryByUid(&mr, suid);
-    pTaskInfo->schemaInfo.sw = tCloneSSchemaWrapper(&mr.me.stbEntry.schemaRow);
-    pTaskInfo->schemaInfo.tversion = mr.me.stbEntry.schemaTag.version;
+    pSchemaInfo->sw = tCloneSSchemaWrapper(&mr.me.stbEntry.schemaRow);
+    pSchemaInfo->tversion = mr.me.stbEntry.schemaTag.version;
   } else {
-    pTaskInfo->schemaInfo.sw = tCloneSSchemaWrapper(&mr.me.ntbEntry.schemaRow);
+    pSchemaInfo->sw = tCloneSSchemaWrapper(&mr.me.ntbEntry.schemaRow);
   }
   metaReaderClear(&mr);
+
+  pSchemaInfo->qsw = extractQueriedColumnSchema(pScanNode);
   return TSDB_CODE_SUCCESS;
 }
+
+SSchemaWrapper* extractQueriedColumnSchema(SScanPhysiNode* pScanNode) {
+  int32_t numOfCols = LIST_LENGTH(pScanNode->pScanCols);
+
+  SSchemaWrapper* pqSw = taosMemoryCalloc(1, sizeof(SSchemaWrapper));
+  pqSw->pSchema = taosMemoryCalloc(numOfCols, sizeof(SSchema));
+
+  for (int32_t i = 0; i < numOfCols; ++i) {
+    STargetNode* pNode = (STargetNode*)nodesListGetNode(pScanNode->pScanCols, i);
+    SColumnNode* pColNode = (SColumnNode*)pNode->pExpr;
+
+    SSchema* pSchema = &pqSw->pSchema[pqSw->nCols++];
+    pSchema->colId = pColNode->colId;
+    pSchema->type = pColNode->node.resType.type;
+    pSchema->bytes = pColNode->node.resType.bytes;
+    strncpy(pSchema->name, pColNode->colName, tListLen(pSchema->name));
+  }
+
+  return pqSw;
+}
+
 static void cleanupTableSchemaInfo(SSchemaInfo* pSchemaInfo) {
   taosMemoryFreeClear(pSchemaInfo->dbname);
   if (pSchemaInfo->sw == NULL) {
@@ -4183,8 +4205,8 @@ static void cleanupTableSchemaInfo(SSchemaInfo* pSchemaInfo) {
   }
   taosMemoryFree(pSchemaInfo->tablename);
-  taosMemoryFree(pSchemaInfo->sw->pSchema);
-  taosMemoryFree(pSchemaInfo->sw);
+  tDeleteSSchemaWrapper(pSchemaInfo->sw);
+  tDeleteSSchemaWrapper(pSchemaInfo->qsw);
 }
 static int32_t sortTableGroup(STableListInfo* pTableListInfo, int32_t groupNum) {
@@ -4385,7 +4407,7 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo
       return NULL;
     }
-    code = extractTableSchemaInfo(pHandle, pTableScanNode->scan.uid, pTaskInfo);
+    code = extractTableSchemaInfo(pHandle, &pTableScanNode->scan, pTaskInfo);
     if (code) {
       pTaskInfo->code = terrno;
       return NULL;
@@ -4405,7 +4427,7 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo
       return NULL;
     }
-    code = extractTableSchemaInfo(pHandle, pTableScanNode->scan.uid, pTaskInfo);
+    code = extractTableSchemaInfo(pHandle, &pTableScanNode->scan, pTaskInfo);
     if (code) {
       pTaskInfo->code = terrno;
       return NULL;
@@ -4422,11 +4444,6 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo
     return createExchangeOperatorInfo(pHandle->pMsgCb->clientRpc, (SExchangePhysiNode*)pPhyNode, pTaskInfo);
   } else if (QUERY_NODE_PHYSICAL_PLAN_STREAM_SCAN == type) {
     STableScanPhysiNode* pTableScanNode = (STableScanPhysiNode*)pPhyNode;
-    STimeWindowAggSupp twSup = {
-        .waterMark = pTableScanNode->watermark,
-        .calTrigger = pTableScanNode->triggerType,
-        .maxTs = INT64_MIN,
-    };
     if (pHandle->vnode) {
       int32_t code = createScanTableListInfo(&pTableScanNode->scan, pTableScanNode->pGroupTags,
           pTableScanNode->groupSort, pHandle, pTableListInfo, pTagCond, pTagIndexCond, GET_TASKID(pTaskInfo));
@@ -4436,7 +4453,8 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo
       }
     }
-    SOperatorInfo* pOperator = createStreamScanOperatorInfo(pHandle, pTableScanNode, pTagCond, pTaskInfo, &twSup);
+    pTaskInfo->schemaInfo.qsw = extractQueriedColumnSchema(&pTableScanNode->scan);
+    SOperatorInfo* pOperator = createStreamScanOperatorInfo(pHandle, pTableScanNode, pTagCond, pTaskInfo);
     return pOperator;
   } else if (QUERY_NODE_PHYSICAL_PLAN_SYSTABLE_SCAN == type) {
@@ -4487,7 +4505,7 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo
       return NULL;
     }
-    code = extractTableSchemaInfo(pHandle, pScanNode->scan.uid, pTaskInfo);
+    code = extractTableSchemaInfo(pHandle, &pScanNode->scan, pTaskInfo);
     if (code != TSDB_CODE_SUCCESS) {
       pTaskInfo->code = code;
       return NULL;
View File
@@ -1525,7 +1525,7 @@ static void destroyStreamScanOperatorInfo(void* param, int32_t numOfOutput) {
 }
 SOperatorInfo* createStreamScanOperatorInfo(SReadHandle* pHandle, STableScanPhysiNode* pTableScanNode, SNode* pTagCond,
-                                            SExecTaskInfo* pTaskInfo, STimeWindowAggSupp* pTwSup) {
+                                            SExecTaskInfo* pTaskInfo) {
   SStreamScanInfo* pInfo = taosMemoryCalloc(1, sizeof(SStreamScanInfo));
   SOperatorInfo*   pOperator = taosMemoryCalloc(1, sizeof(SOperatorInfo));
@@ -1539,6 +1539,12 @@ SOperatorInfo* createStreamScanOperatorInfo(SReadHandle* pHandle, STableScanPhys
   pInfo->pTagCond = pTagCond;
+  pInfo->twAggSup = (STimeWindowAggSupp){
+      .waterMark = pTableScanNode->watermark,
+      .calTrigger = pTableScanNode->triggerType,
+      .maxTs = INT64_MIN,
+  };
   int32_t numOfCols = 0;
   pInfo->pColMatchInfo = extractColMatchInfo(pScanPhyNode->pScanCols, pDescNode, &numOfCols, COL_MATCH_FROM_COL_ID);
@@ -1591,7 +1597,7 @@ SOperatorInfo* createStreamScanOperatorInfo(SReadHandle* pHandle, STableScanPhys
   }
   if (pTSInfo->interval.interval > 0) {
-    pInfo->pUpdateInfo = updateInfoInitP(&pTSInfo->interval, pTwSup->waterMark);
+    pInfo->pUpdateInfo = updateInfoInitP(&pTSInfo->interval, pInfo->twAggSup.waterMark);
   } else {
     pInfo->pUpdateInfo = NULL;
   }
@@ -1631,7 +1637,6 @@ SOperatorInfo* createStreamScanOperatorInfo(SReadHandle* pHandle, STableScanPhys
   pInfo->deleteDataIndex = 0;
   pInfo->pDeleteDataRes = createPullDataBlock();
   pInfo->updateWin = (STimeWindow){.skey = INT64_MAX, .ekey = INT64_MAX};
-  pInfo->twAggSup = *pTwSup;
   pOperator->name = "StreamScanOperator";
   pOperator->operatorType = QUERY_NODE_PHYSICAL_PLAN_STREAM_SCAN;
View File
@@ -66,12 +66,32 @@ static void setNullRow(SSDataBlock* pBlock, int64_t ts, int32_t rowIndex) {
 static void doSetVal(SColumnInfoData* pDstColInfoData, int32_t rowIndex, const SGroupKeys* pKey);
+static void doSetUserSpecifiedValue(SColumnInfoData* pDst, SVariant* pVar, int32_t rowIndex, int64_t currentKey) {
+  if (pDst->info.type == TSDB_DATA_TYPE_FLOAT) {
+    float v = 0;
+    GET_TYPED_DATA(v, float, pVar->nType, &pVar->i);
+    colDataAppend(pDst, rowIndex, (char*)&v, false);
+  } else if (pDst->info.type == TSDB_DATA_TYPE_DOUBLE) {
+    double v = 0;
+    GET_TYPED_DATA(v, double, pVar->nType, &pVar->i);
+    colDataAppend(pDst, rowIndex, (char*)&v, false);
+  } else if (IS_SIGNED_NUMERIC_TYPE(pDst->info.type)) {
+    int64_t v = 0;
+    GET_TYPED_DATA(v, int64_t, pVar->nType, &pVar->i);
+    colDataAppend(pDst, rowIndex, (char*)&v, false);
+  } else if (pDst->info.type == TSDB_DATA_TYPE_TIMESTAMP) {
+    colDataAppend(pDst, rowIndex, (const char*)&currentKey, false);
+  } else {  // varchar/nchar data
+    colDataAppendNULL(pDst, rowIndex);
+  }
+}
+
 static void doFillOneRow(SFillInfo* pFillInfo, SSDataBlock* pBlock, SSDataBlock* pSrcBlock, int64_t ts,
                          bool outOfBound) {
   SPoint  point1, point2, point;
   int32_t step = GET_FORWARD_DIRECTION_FACTOR(pFillInfo->order);
   // set the primary timestamp column value
   int32_t index = pBlock->info.rows;
   // set the other values
@@ -160,30 +180,13 @@ static void doFillOneRow(SFillInfo* pFillInfo, SSDataBlock* pBlock, SSDataBlock*
   } else {  // fill with user specified value for each column
     for (int32_t i = 0; i < pFillInfo->numOfCols; ++i) {
       SFillColInfo* pCol = &pFillInfo->pFillCol[i];
-      if (TSDB_COL_IS_TAG(pCol->flag) /* || IS_VAR_DATA_TYPE(pCol->schema.type)*/) {
+      if (TSDB_COL_IS_TAG(pCol->flag)) {
         continue;
       }
       SVariant*        pVar = &pFillInfo->pFillCol[i].fillVal;
       SColumnInfoData* pDst = taosArrayGet(pBlock->pDataBlock, i);
-      if (pDst->info.type == TSDB_DATA_TYPE_FLOAT) {
-        float v = 0;
-        GET_TYPED_DATA(v, float, pVar->nType, &pVar->i);
-        colDataAppend(pDst, index, (char*)&v, false);
-      } else if (pDst->info.type == TSDB_DATA_TYPE_DOUBLE) {
-        double v = 0;
-        GET_TYPED_DATA(v, double, pVar->nType, &pVar->i);
-        colDataAppend(pDst, index, (char*)&v, false);
-      } else if (IS_SIGNED_NUMERIC_TYPE(pDst->info.type)) {
-        int64_t v = 0;
-        GET_TYPED_DATA(v, int64_t, pVar->nType, &pVar->i);
-        colDataAppend(pDst, index, (char*)&v, false);
-      } else if (pDst->info.type == TSDB_DATA_TYPE_TIMESTAMP) {
-        colDataAppend(pDst, index, (const char*)&pFillInfo->currentKey, false);
-      } else {  // varchar/nchar data
-        colDataAppendNULL(pDst, index);
-      }
+      doSetUserSpecifiedValue(pDst, pVar, index, pFillInfo->currentKey);
     }
   }
@@ -273,7 +276,7 @@ static int32_t fillResultImpl(SFillInfo* pFillInfo, SSDataBlock* pBlock, int32_t
       return outputRows;
     }
   } else {
-    assert(pFillInfo->currentKey == ts);
+    ASSERT(pFillInfo->currentKey == ts);
     int32_t index = pBlock->info.rows;
     if (pFillInfo->type == TSDB_FILL_NEXT && (pFillInfo->index + 1) < pFillInfo->numOfRows) {
@@ -295,27 +298,32 @@ static int32_t fillResultImpl(SFillInfo* pFillInfo, SSDataBlock* pBlock, int32_t
       SColumnInfoData* pSrc = taosArrayGet(pFillInfo->pSrcBlock->pDataBlock, srcSlotId);
       char* src = colDataGetData(pSrc, pFillInfo->index);
-      if (i == 0 || (/*pCol->functionId != FUNCTION_COUNT &&*/ !colDataIsNull_s(pSrc, pFillInfo->index)) /*||
-          (pCol->functionId == FUNCTION_COUNT && GET_INT64_VAL(src) != 0)*/) {
+      if (/*i == 0 || (*/ !colDataIsNull_s(pSrc, pFillInfo->index)) {
         bool isNull = colDataIsNull_s(pSrc, pFillInfo->index);
         colDataAppend(pDst, index, src, isNull);
         saveColData(pFillInfo->prev, i, src, isNull);
-      } else {  // i > 0 and data is null , do interpolation
-        if (pFillInfo->type == TSDB_FILL_PREV) {
-          SGroupKeys* pKey = taosArrayGet(pFillInfo->prev, i);
-          doSetVal(pDst, index, pKey);
-        } else if (pFillInfo->type == TSDB_FILL_LINEAR) {
-          bool isNull = colDataIsNull_s(pSrc, pFillInfo->index);
-          colDataAppend(pDst, index, src, isNull);
-          saveColData(pFillInfo->prev, i, src, isNull);
-        } else if (pFillInfo->type == TSDB_FILL_NULL) {
-          colDataAppendNULL(pDst, index);
-        } else if (pFillInfo->type == TSDB_FILL_NEXT) {
-          SGroupKeys* pKey = taosArrayGet(pFillInfo->next, i);
-          doSetVal(pDst, index, pKey);
-        } else {
-          SVariant* pVar = &pFillInfo->pFillCol[i].fillVal;
-          colDataAppend(pDst, index, (char*)&pVar->i, false);
+      } else {
+        if (pDst->info.type == TSDB_DATA_TYPE_TIMESTAMP) {
+          colDataAppend(pDst, index, (const char*)&pFillInfo->currentKey, false);
+        } else {  // i > 0 and data is null, do interpolation
+          if (pFillInfo->type == TSDB_FILL_PREV) {
+            SArray*     p = FILL_IS_ASC_FILL(pFillInfo) ? pFillInfo->prev : pFillInfo->next;
+            SGroupKeys* pKey = taosArrayGet(p, i);
+            doSetVal(pDst, index, pKey);
+          } else if (pFillInfo->type == TSDB_FILL_LINEAR) {
+            bool isNull = colDataIsNull_s(pSrc, pFillInfo->index);
+            colDataAppend(pDst, index, src, isNull);
+            saveColData(pFillInfo->prev, i, src, isNull);  // todo:
+          } else if (pFillInfo->type == TSDB_FILL_NULL) {
+            colDataAppendNULL(pDst, index);
+          } else if (pFillInfo->type == TSDB_FILL_NEXT) {
+            SArray*     p = FILL_IS_ASC_FILL(pFillInfo) ? pFillInfo->next : pFillInfo->prev;
+            SGroupKeys* pKey = taosArrayGet(p, i);
+            doSetVal(pDst, index, pKey);
+          } else {
+            SVariant* pVar = &pFillInfo->pFillCol[i].fillVal;
+            doSetUserSpecifiedValue(pDst, pVar, index, pFillInfo->currentKey);
+          }
         }
       }
     }
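One subtlety in this hunk: the source array for PREV and NEXT fills now depends on the scan direction. Condensed from the diff (FILL_IS_ASC_FILL is assumed, per its use here, to test the fill order):

```c
// For FILL(PREV) the value must come from the row "before" the gap in scan
// order, so a descending scan reads from pFillInfo->next instead of
// pFillInfo->prev; FILL(NEXT) is the mirror image.
SArray* pSource = NULL;
if (pFillInfo->type == TSDB_FILL_PREV) {
  pSource = FILL_IS_ASC_FILL(pFillInfo) ? pFillInfo->prev : pFillInfo->next;
} else if (pFillInfo->type == TSDB_FILL_NEXT) {
  pSource = FILL_IS_ASC_FILL(pFillInfo) ? pFillInfo->next : pFillInfo->prev;
}
SGroupKeys* pKey = taosArrayGet(pSource, i);
doSetVal(pDst, index, pKey);
```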
View File
@@ -1200,7 +1200,7 @@ static int parseOneRow(SInsertParseContext* pCxt, STableDataBlocks* pDataBlocks,
   *gotRow = true;
 #ifdef TD_DEBUG_PRINT_ROW
-  STSchema* pSTSchema = tdGetSTSChemaFromSSChema(&schema, spd->numOfCols);
+  STSchema* pSTSchema = tdGetSTSChemaFromSSChema(schema, spd->numOfCols, 1);
   tdSRowPrint(row, pSTSchema, __func__);
   taosMemoryFree(pSTSchema);
 #endif
@@ -1972,7 +1972,7 @@ int32_t qBindStmtColsValue(void* pBlock, TAOS_MULTI_BIND* bind, char* msgBuf, in
     }
   }
 #ifdef TD_DEBUG_PRINT_ROW
-  STSchema* pSTSchema = tdGetSTSChemaFromSSChema(&pSchema, spd->numOfCols);
+  STSchema* pSTSchema = tdGetSTSChemaFromSSChema(pSchema, spd->numOfCols, 1);
   tdSRowPrint(row, pSTSchema, __func__);
   taosMemoryFree(pSTSchema);
 #endif
@@ -2057,7 +2057,7 @@ int32_t qBindStmtSingleColValue(void* pBlock, TAOS_MULTI_BIND* bind, char* msgBu
 #ifdef TD_DEBUG_PRINT_ROW
   if (rowEnd) {
-    STSchema* pSTSchema = tdGetSTSChemaFromSSChema(&pSchema, spd->numOfCols);
+    STSchema* pSTSchema = tdGetSTSChemaFromSSChema(pSchema, spd->numOfCols, 1);
     tdSRowPrint(row, pSTSchema, __func__);
     taosMemoryFree(pSTSchema);
   }
View File
@ -19,6 +19,7 @@
#include "parInt.h" #include "parInt.h"
#include "parUtil.h" #include "parUtil.h"
#include "querynodes.h" #include "querynodes.h"
#include "tRealloc.h"
#define IS_RAW_PAYLOAD(t) \ #define IS_RAW_PAYLOAD(t) \
(((int)(t)) == PAYLOAD_TYPE_RAW) // 0: K-V payload for non-prepare insert, 1: rawPayload for prepare insert (((int)(t)) == PAYLOAD_TYPE_RAW) // 0: K-V payload for non-prepare insert, 1: rawPayload for prepare insert
@ -34,6 +35,32 @@ typedef struct SBlockKeyInfo {
SBlockKeyTuple* pKeyTuple; SBlockKeyTuple* pKeyTuple;
} SBlockKeyInfo; } SBlockKeyInfo;
typedef struct {
int32_t index;
SArray* rowArray; // array of merged rows(mem allocated by tRealloc/free by tFree)
STSchema* pSchema;
int64_t tbUid; // suid for child table, uid for normal table
} SBlockRowMerger;
static FORCE_INLINE void tdResetSBlockRowMerger(SBlockRowMerger* pMerger) {
if (pMerger) {
pMerger->index = -1;
}
}
static void tdFreeSBlockRowMerger(SBlockRowMerger* pMerger) {
if (pMerger) {
int32_t size = taosArrayGetSize(pMerger->rowArray);
for (int32_t i = 0; i < size; ++i) {
tFree(*(void**)taosArrayGet(pMerger->rowArray, i));
}
taosArrayDestroy(pMerger->rowArray);
taosMemoryFreeClear(pMerger->pSchema);
taosMemoryFree(pMerger);
}
}
static int32_t rowDataCompar(const void* lhs, const void* rhs) { static int32_t rowDataCompar(const void* lhs, const void* rhs) {
TSKEY left = *(TSKEY*)lhs; TSKEY left = *(TSKEY*)lhs;
TSKEY right = *(TSKEY*)rhs; TSKEY right = *(TSKEY*)rhs;
@ -328,7 +355,7 @@ void sortRemoveDataBlockDupRowsRaw(STableDataBlocks* dataBuf) {
} }
// data block is disordered, sort it in ascending order // data block is disordered, sort it in ascending order
int sortRemoveDataBlockDupRows(STableDataBlocks* dataBuf, SBlockKeyInfo* pBlkKeyInfo) { static int sortRemoveDataBlockDupRows(STableDataBlocks* dataBuf, SBlockKeyInfo* pBlkKeyInfo) {
SSubmitBlk* pBlocks = (SSubmitBlk*)dataBuf->pData; SSubmitBlk* pBlocks = (SSubmitBlk*)dataBuf->pData;
int16_t nRows = pBlocks->numOfRows; int16_t nRows = pBlocks->numOfRows;
@ -396,6 +423,201 @@ int sortRemoveDataBlockDupRows(STableDataBlocks* dataBuf, SBlockKeyInfo* pBlkKey
return 0; return 0;
} }
static void* tdGetCurRowFromBlockMerger(SBlockRowMerger* pBlkRowMerger) {
if (pBlkRowMerger && (pBlkRowMerger->index >= 0)) {
ASSERT(pBlkRowMerger->index < taosArrayGetSize(pBlkRowMerger->rowArray));
return *(void**)taosArrayGet(pBlkRowMerger->rowArray, pBlkRowMerger->index);
}
return NULL;
}
static int32_t tdBlockRowMerge(STableMeta* pTableMeta, SBlockKeyTuple* pEndKeyTp, int32_t nDupRows,
SBlockRowMerger** pBlkRowMerger, int32_t rowSize) {
ASSERT(nDupRows > 1);
SBlockKeyTuple* pStartKeyTp = pEndKeyTp - (nDupRows - 1);
ASSERT(pStartKeyTp->skey == pEndKeyTp->skey);
// TODO: optimization if end row is all normal
#if 0
STSRow* pEndRow = (STSRow*)pEndKeyTp->payloadAddr;
if(isNormal(pEndRow)) { // set the end row if it is normal and return directly
pStartKeyTp->payloadAddr = pEndKeyTp->payloadAddr;
return TSDB_CODE_SUCCESS;
}
#endif
if (!(*pBlkRowMerger)) {
(*pBlkRowMerger) = taosMemoryCalloc(1, sizeof(**pBlkRowMerger));
if (!(*pBlkRowMerger)) {
terrno = TSDB_CODE_OUT_OF_MEMORY;
return TSDB_CODE_FAILED;
}
(*pBlkRowMerger)->index = -1;
if (!(*pBlkRowMerger)->rowArray) {
(*pBlkRowMerger)->rowArray = taosArrayInit(1, sizeof(void*));
if (!(*pBlkRowMerger)->rowArray) {
terrno = TSDB_CODE_OUT_OF_MEMORY;
return TSDB_CODE_FAILED;
}
}
}
if ((*pBlkRowMerger)->pSchema) {
if ((*pBlkRowMerger)->pSchema->version != pTableMeta->sversion) {
taosMemoryFreeClear((*pBlkRowMerger)->pSchema);
} else {
if ((*pBlkRowMerger)->tbUid != (pTableMeta->suid > 0 ? pTableMeta->suid : pTableMeta->uid)) {
taosMemoryFreeClear((*pBlkRowMerger)->pSchema);
}
}
}
if (!(*pBlkRowMerger)->pSchema) {
(*pBlkRowMerger)->pSchema =
tdGetSTSChemaFromSSChema(pTableMeta->schema, pTableMeta->tableInfo.numOfColumns, pTableMeta->sversion);
if (!(*pBlkRowMerger)->pSchema) {
terrno = TSDB_CODE_OUT_OF_MEMORY;
return TSDB_CODE_FAILED;
}
(*pBlkRowMerger)->tbUid = pTableMeta->suid > 0 ? pTableMeta->suid : pTableMeta->uid;
}
void* pDestRow = NULL;
++((*pBlkRowMerger)->index);
if ((*pBlkRowMerger)->index < taosArrayGetSize((*pBlkRowMerger)->rowArray)) {
void* pAlloc = *(void**)taosArrayGet((*pBlkRowMerger)->rowArray, (*pBlkRowMerger)->index);
if (tRealloc((uint8_t**)&pAlloc, rowSize) != 0) {
return TSDB_CODE_FAILED;
}
pDestRow = pAlloc;
} else {
if (tRealloc((uint8_t**)&pDestRow, rowSize) != 0) {
return TSDB_CODE_FAILED;
}
taosArrayPush((*pBlkRowMerger)->rowArray, &pDestRow);
}
// merge rows to pDestRow
STSchema* pSchema = (*pBlkRowMerger)->pSchema;
SArray* pArray = taosArrayInit(pSchema->numOfCols, sizeof(SColVal));
for (int32_t i = 0; i < pSchema->numOfCols; ++i) {
SColVal colVal = {0};
for (int32_t j = 0; j < nDupRows; ++j) {
tTSRowGetVal((pEndKeyTp - j)->payloadAddr, pSchema, i, &colVal);
if (!colVal.isNone) {
break;
}
}
taosArrayPush(pArray, &colVal);
}
if (tdSTSRowNew(pArray, pSchema, (STSRow**)&pDestRow) < 0) {
taosArrayDestroy(pArray);
return TSDB_CODE_FAILED;
}
taosArrayDestroy(pArray);
return TSDB_CODE_SUCCESS;
}
// data block is disordered, sort it in ascending order, and merge dup rows if exists
static int sortMergeDataBlockDupRows(STableDataBlocks* dataBuf, SBlockKeyInfo* pBlkKeyInfo,
SBlockRowMerger** ppBlkRowMerger) {
SSubmitBlk* pBlocks = (SSubmitBlk*)dataBuf->pData;
STableMeta* pTableMeta = dataBuf->pTableMeta;
int16_t nRows = pBlocks->numOfRows;
// size is less than the total size, since duplicated rows may be removed.
// allocate memory
size_t nAlloc = nRows * sizeof(SBlockKeyTuple);
if (pBlkKeyInfo->pKeyTuple == NULL || pBlkKeyInfo->maxBytesAlloc < nAlloc) {
char* tmp = taosMemoryRealloc(pBlkKeyInfo->pKeyTuple, nAlloc);
if (tmp == NULL) {
return TSDB_CODE_TSC_OUT_OF_MEMORY;
}
pBlkKeyInfo->pKeyTuple = (SBlockKeyTuple*)tmp;
pBlkKeyInfo->maxBytesAlloc = (int32_t)nAlloc;
}
memset(pBlkKeyInfo->pKeyTuple, 0, nAlloc);
tdResetSBlockRowMerger(*ppBlkRowMerger);
int32_t extendedRowSize = getExtendedRowSize(dataBuf);
SBlockKeyTuple* pBlkKeyTuple = pBlkKeyInfo->pKeyTuple;
char* pBlockData = pBlocks->data + pBlocks->schemaLen;
int n = 0;
while (n < nRows) {
pBlkKeyTuple->skey = TD_ROW_KEY((STSRow*)pBlockData);
pBlkKeyTuple->payloadAddr = pBlockData;
pBlkKeyTuple->index = n;
// next loop
pBlockData += extendedRowSize;
++pBlkKeyTuple;
++n;
}
if (!dataBuf->ordered) {
pBlkKeyTuple = pBlkKeyInfo->pKeyTuple;
taosSort(pBlkKeyTuple, nRows, sizeof(SBlockKeyTuple), rowDataComparStable);
pBlkKeyTuple = pBlkKeyInfo->pKeyTuple;
bool hasDup = false;
int32_t nextPos = 0;
int32_t i = 0;
int32_t j = 1;
while (j < nRows) {
TSKEY ti = (pBlkKeyTuple + i)->skey;
TSKEY tj = (pBlkKeyTuple + j)->skey;
if (ti == tj) {
++j;
continue;
}
if ((j - i) > 1) {
if (tdBlockRowMerge(pTableMeta, (pBlkKeyTuple + j - 1), j - i, ppBlkRowMerger, extendedRowSize) < 0) {
return TSDB_CODE_FAILED;
}
(pBlkKeyTuple + nextPos)->payloadAddr = tdGetCurRowFromBlockMerger(*ppBlkRowMerger);
if (!hasDup) {
hasDup = true;
}
i = j;
} else {
if (hasDup) {
memmove(pBlkKeyTuple + nextPos, pBlkKeyTuple + i, sizeof(SBlockKeyTuple));
}
++i;
}
++nextPos;
++j;
}
if ((j - i) > 1) {
ASSERT((pBlkKeyTuple + i)->skey == (pBlkKeyTuple + j - 1)->skey);
if (tdBlockRowMerge(pTableMeta, (pBlkKeyTuple + j - 1), j - i, ppBlkRowMerger, extendedRowSize) < 0) {
return TSDB_CODE_FAILED;
}
(pBlkKeyTuple + nextPos)->payloadAddr = tdGetCurRowFromBlockMerger(*ppBlkRowMerger);
} else if (hasDup) {
memmove(pBlkKeyTuple + nextPos, pBlkKeyTuple + i, sizeof(SBlockKeyTuple));
}
dataBuf->ordered = true;
pBlocks->numOfRows = nextPos + 1;
}
dataBuf->size = sizeof(SSubmitBlk) + pBlocks->numOfRows * extendedRowSize;
dataBuf->prevTS = INT64_MIN;
return TSDB_CODE_SUCCESS;
}
// Erase the empty space reserved for binary data
static int trimDataBlock(void* pDataBlock, STableDataBlocks* pTableDataBlock, SBlockKeyTuple* blkKeyTuple,
                         bool isRawPayload) {

@@ -464,6 +686,8 @@ int32_t mergeTableDataBlocks(SHashObj* pHashObj, uint8_t payloadType, SArray** p
  STableDataBlocks** p = taosHashIterate(pHashObj, NULL);
  STableDataBlocks*  pOneTableBlock = *p;
  SBlockKeyInfo      blkKeyInfo = {0};  // shared by pOneTableBlock
+ SBlockRowMerger*   pBlkRowMerger = NULL;
+
  while (pOneTableBlock) {
    SSubmitBlk* pBlocks = (SSubmitBlk*)pOneTableBlock->pData;
    if (pBlocks->numOfRows > 0) {
@@ -473,6 +697,7 @@ int32_t mergeTableDataBlocks(SHashObj* pHashObj, uint8_t payloadType, SArray** p
          getDataBlockFromList(pVnodeDataBlockHashList, &pOneTableBlock->vgId, sizeof(pOneTableBlock->vgId), TSDB_PAYLOAD_SIZE, INSERT_HEAD_SIZE, 0,
                               pOneTableBlock->pTableMeta, &dataBuf, pVnodeDataBlockList, NULL);
      if (ret != TSDB_CODE_SUCCESS) {
+       tdFreeSBlockRowMerger(pBlkRowMerger);
        taosHashCleanup(pVnodeDataBlockHashList);
        destroyBlockArrayList(pVnodeDataBlockList);
        taosMemoryFreeClear(blkKeyInfo.pKeyTuple);
@@ -490,6 +715,7 @@ int32_t mergeTableDataBlocks(SHashObj* pHashObj, uint8_t payloadType, SArray** p
      if (tmp != NULL) {
        dataBuf->pData = tmp;
      } else {  // failed to allocate memory, free already allocated memory and return error code
+       tdFreeSBlockRowMerger(pBlkRowMerger);
        taosHashCleanup(pVnodeDataBlockHashList);
        destroyBlockArrayList(pVnodeDataBlockList);
        taosMemoryFreeClear(dataBuf->pData);
@@ -501,7 +727,8 @@ int32_t mergeTableDataBlocks(SHashObj* pHashObj, uint8_t payloadType, SArray** p
      if (isRawPayload) {
        sortRemoveDataBlockDupRowsRaw(pOneTableBlock);
      } else {
-       if ((code = sortRemoveDataBlockDupRows(pOneTableBlock, &blkKeyInfo)) != 0) {
+       if ((code = sortMergeDataBlockDupRows(pOneTableBlock, &blkKeyInfo, &pBlkRowMerger)) != 0) {
+         tdFreeSBlockRowMerger(pBlkRowMerger);
          taosHashCleanup(pVnodeDataBlockHashList);
          destroyBlockArrayList(pVnodeDataBlockList);
          taosMemoryFreeClear(dataBuf->pData);
@@ -529,6 +756,7 @@ int32_t mergeTableDataBlocks(SHashObj* pHashObj, uint8_t payloadType, SArray** p
  }

  // free the table data blocks;
+ tdFreeSBlockRowMerger(pBlkRowMerger);
  taosHashCleanup(pVnodeDataBlockHashList);
  taosMemoryFreeClear(blkKeyInfo.pKeyTuple);
  *pVgDataBlocks = pVnodeDataBlockList;
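Note that every early-return path in mergeTableDataBlocks now has to remember to call tdFreeSBlockRowMerger before the usual taosHashCleanup/destroyBlockArrayList pair. In C this kind of repeated teardown is often centralized behind a single exit label; a hedged sketch of that alternative shape, with illustrative names rather than a patch against this function:

```c
#include <stdio.h>
#include <stdlib.h>

// Hypothetical stand-ins for the merger / hash / list cleanup calls.
static void freeMerger(void* m) { free(m); }
static void freeHash(void* h)   { free(h); }
static void freeList(void* l)   { free(l); }

// Centralized-cleanup shape: every failure jumps to one exit label,
// so no path can forget a teardown call.
static int mergeBlocks(int failAtStep) {
  int   code    = 0;
  void* pMerger = NULL;
  void* pHash   = malloc(16);
  void* pList   = malloc(16);

  if (pHash == NULL || pList == NULL || failAtStep == 1) {
    code = -1;
    goto _exit;
  }

  pMerger = malloc(16);  // allocated lazily, like the row merger
  if (pMerger == NULL || failAtStep == 2) {
    code = -1;
    goto _exit;
  }

_exit:
  // written once; free(NULL) is a no-op, so partially built state is fine
  freeMerger(pMerger);
  freeHash(pHash);
  freeList(pList);
  return code;
}

int main(void) {
  printf("ok=%d fail1=%d fail2=%d\n", mergeBlocks(0), mergeBlocks(1), mergeBlocks(2));
  return 0;
}
```

The committed code repeats the cleanup calls at each return instead, which is also a valid style; the label form mainly reduces the chance that a future early return misses one of them.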


@@ -34,6 +34,7 @@ int32_t streamDispatchReqToData(const SStreamDispatchReq* pReq, SStreamDataBlock
  // TODO: refactor
  pDataBlock->info.window.skey = be64toh(pRetrieve->skey);
  pDataBlock->info.window.ekey = be64toh(pRetrieve->ekey);
+ pDataBlock->info.version = be64toh(pRetrieve->version);
  pDataBlock->info.type = pRetrieve->streamBlockType;
  pDataBlock->info.childId = pReq->upstreamChildId;
@@ -54,6 +55,7 @@ int32_t streamRetrieveReqToData(const SStreamRetrieveReq* pReq, SStreamDataBlock
  // TODO: refactor
  pDataBlock->info.window.skey = be64toh(pRetrieve->skey);
  pDataBlock->info.window.ekey = be64toh(pRetrieve->ekey);
+ pDataBlock->info.version = be64toh(pRetrieve->version);
  pDataBlock->info.type = pRetrieve->streamBlockType;
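These hunks, together with the two encode-side hunks below, carry the new block version across the wire in big-endian form: the sender applies htobe64 before dispatch and the receiver undoes it with be64toh, so the field decodes correctly regardless of host byte order. A minimal round-trip sketch using plain glibc endian.h, as an illustration rather than code from this commit:

```c
#define _DEFAULT_SOURCE
#include <endian.h>    // htobe64 / be64toh (glibc; other platforms need a shim)
#include <inttypes.h>
#include <stdio.h>

int main(void) {
  int64_t version = 42;

  // sender: convert host order to big-endian before putting it on the wire
  uint64_t wire = htobe64((uint64_t)version);

  // receiver: convert back to host order before using the value
  int64_t decoded = (int64_t)be64toh(wire);

  printf("version=%" PRId64 " decoded=%" PRId64 "\n", version, decoded);
  return decoded == version ? 0 : 1;
}
```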


@@ -108,6 +108,7 @@ int32_t streamBroadcastToChildren(SStreamTask* pTask, const SSDataBlock* pBlock)
  pRetrieve->numOfCols = htonl(numOfCols);
  pRetrieve->skey = htobe64(pBlock->info.window.skey);
  pRetrieve->ekey = htobe64(pBlock->info.window.ekey);
+ pRetrieve->version = htobe64(pBlock->info.version);
  int32_t actualLen = 0;
  blockEncode(pBlock, pRetrieve->data, &actualLen, numOfCols, false);
@@ -182,6 +183,7 @@ static int32_t streamAddBlockToDispatchMsg(const SSDataBlock* pBlock, SStreamDis
  pRetrieve->numOfRows = htonl(pBlock->info.rows);
  pRetrieve->skey = htobe64(pBlock->info.window.skey);
  pRetrieve->ekey = htobe64(pBlock->info.window.ekey);
+ pRetrieve->version = htobe64(pBlock->info.version);
  int32_t numOfCols = (int32_t)taosArrayGetSize(pBlock->pDataBlock);
  pRetrieve->numOfCols = htonl(numOfCols);
pRetrieve->numOfCols = htonl(numOfCols); pRetrieve->numOfCols = htonl(numOfCols);

View File

@@ -1550,12 +1550,12 @@ void syncNodeEventLog(const SSyncNode* pSyncNode, char* str) {
    char logBuf[256 + 256];
    if (pSyncNode != NULL && pSyncNode->pRaftCfg != NULL && pSyncNode->pRaftStore != NULL) {
      snprintf(logBuf, sizeof(logBuf),
-              "vgId:%d, sync %s %s, term:%" PRIu64 ", commit:%" PRId64 ", first:%" PRId64 ", last:%" PRId64
-              ", snapshot:%" PRId64 ", snapshot-term:%" PRIu64
-              ", standby:%d, "
-              "strategy:%d, batch:%d, "
-              "replica-num:%d, "
-              "lconfig:%" PRId64 ", changing:%d, restore:%d, %s",
+              "vgId:%d, sync %s %s, tm:%" PRIu64 ", cmt:%" PRId64 ", fst:%" PRId64 ", lst:%" PRId64 ", snap:%" PRId64
+              ", snap-tm:%" PRIu64
+              ", sby:%d, "
+              "stgy:%d, bch:%d, "
+              "r-num:%d, "
+              "lcfg:%" PRId64 ", chging:%d, rsto:%d, %s",
               pSyncNode->vgId, syncUtilState2String(pSyncNode->state), str, pSyncNode->pRaftStore->currentTerm,
               pSyncNode->commitIndex, logBeginIndex, logLastIndex, snapshot.lastApplyIndex, snapshot.lastApplyTerm,
               pSyncNode->pRaftCfg->isStandBy, pSyncNode->pRaftCfg->snapshotStrategy, pSyncNode->pRaftCfg->batchSize,
@@ -1573,12 +1573,12 @@ void syncNodeEventLog(const SSyncNode* pSyncNode, char* str) {
    char* s = (char*)taosMemoryMalloc(len);
    if (pSyncNode != NULL && pSyncNode->pRaftCfg != NULL && pSyncNode->pRaftStore != NULL) {
      snprintf(s, len,
-              "vgId:%d, sync %s %s, term:%" PRIu64 ", commit:%" PRId64 ", first:%" PRId64 ", last:%" PRId64
-              ", snapshot:%" PRId64 ", snapshot-term:%" PRIu64
-              ", standby:%d, "
-              "strategy:%d, batch:%d, "
-              "replica-num:%d, "
-              "lconfig:%" PRId64 ", changing:%d, restore:%d, %s",
+              "vgId:%d, sync %s %s, tm:%" PRIu64 ", cmt:%" PRId64 ", fst:%" PRId64 ", lst:%" PRId64 ", snap:%" PRId64
+              ", snap-tm:%" PRIu64
+              ", sby:%d, "
+              "stgy:%d, bch:%d, "
+              "r-num:%d, "
+              "lcfg:%" PRId64 ", chging:%d, rsto:%d, %s",
               pSyncNode->vgId, syncUtilState2String(pSyncNode->state), str, pSyncNode->pRaftStore->currentTerm,
               pSyncNode->commitIndex, logBeginIndex, logLastIndex, snapshot.lastApplyIndex, snapshot.lastApplyTerm,
               pSyncNode->pRaftCfg->isStandBy, pSyncNode->pRaftCfg->snapshotStrategy, pSyncNode->pRaftCfg->batchSize,
@@ -1621,12 +1621,12 @@ void syncNodeErrorLog(const SSyncNode* pSyncNode, char* str) {
    char logBuf[256 + 256];
    if (pSyncNode != NULL && pSyncNode->pRaftCfg != NULL && pSyncNode->pRaftStore != NULL) {
      snprintf(logBuf, sizeof(logBuf),
-              "vgId:%d, sync %s %s, term:%" PRIu64 ", commit:%" PRId64 ", first:%" PRId64 ", last:%" PRId64
-              ", snapshot:%" PRId64 ", snapshot-term:%" PRIu64
-              ", standby:%d, "
-              "strategy:%d, batch:%d, "
-              "replica-num:%d, "
-              "lconfig:%" PRId64 ", changing:%d, restore:%d, %s",
+              "vgId:%d, sync %s %s, tm:%" PRIu64 ", cmt:%" PRId64 ", fst:%" PRId64 ", lst:%" PRId64 ", snap:%" PRId64
+              ", snap-tm:%" PRIu64
+              ", sby:%d, "
+              "stgy:%d, bch:%d, "
+              "r-num:%d, "
+              "lcfg:%" PRId64 ", chging:%d, rsto:%d, %s",
               pSyncNode->vgId, syncUtilState2String(pSyncNode->state), str, pSyncNode->pRaftStore->currentTerm,
               pSyncNode->commitIndex, logBeginIndex, logLastIndex, snapshot.lastApplyIndex, snapshot.lastApplyTerm,
               pSyncNode->pRaftCfg->isStandBy, pSyncNode->pRaftCfg->snapshotStrategy, pSyncNode->pRaftCfg->batchSize,
@@ -1642,12 +1642,12 @@ void syncNodeErrorLog(const SSyncNode* pSyncNode, char* str) {
    char* s = (char*)taosMemoryMalloc(len);
    if (pSyncNode != NULL && pSyncNode->pRaftCfg != NULL && pSyncNode->pRaftStore != NULL) {
      snprintf(s, len,
-              "vgId:%d, sync %s %s, term:%" PRIu64 ", commit:%" PRId64 ", first:%" PRId64 ", last:%" PRId64
-              ", snapshot:%" PRId64 ", snapshot-term:%" PRIu64
-              ", standby:%d, "
-              "strategy:%d, batch:%d, "
-              "replica-num:%d, "
-              "lconfig:%" PRId64 ", changing:%d, restore:%d, %s",
+              "vgId:%d, sync %s %s, tm:%" PRIu64 ", cmt:%" PRId64 ", fst:%" PRId64 ", lst:%" PRId64 ", snap:%" PRId64
+              ", snap-tm:%" PRIu64
+              ", sby:%d, "
+              "stgy:%d, bch:%d, "
+              "r-num:%d, "
+              "lcfg:%" PRId64 ", chging:%d, rsto:%d, %s",
               pSyncNode->vgId, syncUtilState2String(pSyncNode->state), str, pSyncNode->pRaftStore->currentTerm,
               pSyncNode->commitIndex, logBeginIndex, logLastIndex, snapshot.lastApplyIndex, snapshot.lastApplyTerm,
               pSyncNode->pRaftCfg->isStandBy, pSyncNode->pRaftCfg->snapshotStrategy, pSyncNode->pRaftCfg->batchSize,
@@ -1675,11 +1675,10 @@ char* syncNode2SimpleStr(const SSyncNode* pSyncNode) {
  SyncIndex logBeginIndex = pSyncNode->pLogStore->syncLogBeginIndex(pSyncNode->pLogStore);
  snprintf(s, len,
-          "vgId:%d, sync %s, term:%" PRIu64 ", commit:%" PRId64 ", first:%" PRId64 ", last:%" PRId64
-          ", snapshot:%" PRId64
-          ", standby:%d, "
-          "replica-num:%d, "
-          "lconfig:%" PRId64 ", changing:%d, restore:%d",
+          "vgId:%d, sync %s, tm:%" PRIu64 ", cmt:%" PRId64 ", fst:%" PRId64 ", lst:%" PRId64 ", snap:%" PRId64
+          ", sby:%d, "
+          "r-num:%d, "
+          "lcfg:%" PRId64 ", chging:%d, rsto:%d",
           pSyncNode->vgId, syncUtilState2String(pSyncNode->state), pSyncNode->pRaftStore->currentTerm,
           pSyncNode->commitIndex, logBeginIndex, logLastIndex, snapshot.lastApplyIndex, pSyncNode->pRaftCfg->isStandBy,
           pSyncNode->replicaNum, pSyncNode->pRaftCfg->lastConfigIndex, pSyncNode->changing, pSyncNode->restoreFinish);
@@ -2977,7 +2976,7 @@ void syncLogSendAppendEntries(SSyncNode* pSyncNode, const SyncAppendEntries* pMs
  char logBuf[256];
  snprintf(logBuf, sizeof(logBuf),
           "send sync-append-entries to %s:%d, {term:%" PRIu64 ", pre-index:%" PRId64 ", pre-term:%" PRIu64
-          ", pterm:%" PRIu64 ", commit:%" PRId64
+          ", pterm:%" PRIu64 ", cmt:%" PRId64
           ", "
           "datalen:%d}, %s",
           host, port, pMsg->term, pMsg->prevLogIndex, pMsg->prevLogTerm, pMsg->privateTerm, pMsg->commitIndex,
@@ -2992,7 +2991,7 @@ void syncLogRecvAppendEntries(SSyncNode* pSyncNode, const SyncAppendEntries* pMs
  char logBuf[256];
  snprintf(logBuf, sizeof(logBuf),
           "recv sync-append-entries from %s:%d {term:%" PRIu64 ", pre-index:%" PRIu64 ", pre-term:%" PRIu64
-          ", commit:%" PRIu64 ", pterm:%" PRIu64
+          ", cmt:%" PRIu64 ", pterm:%" PRIu64
           ", "
           "datalen:%d}, %s",
           host, port, pMsg->term, pMsg->prevLogIndex, pMsg->prevLogTerm, pMsg->commitIndex, pMsg->privateTerm,
@@ -3007,7 +3006,7 @@ void syncLogSendAppendEntriesBatch(SSyncNode* pSyncNode, const SyncAppendEntries
  char logBuf[256];
  snprintf(logBuf, sizeof(logBuf),
           "send sync-append-entries-batch to %s:%d, {term:%" PRIu64 ", pre-index:%" PRId64 ", pre-term:%" PRIu64
-          ", pterm:%" PRIu64 ", commit:%" PRId64 ", datalen:%d, count:%d}, %s",
+          ", pterm:%" PRIu64 ", cmt:%" PRId64 ", datalen:%d, count:%d}, %s",
           host, port, pMsg->term, pMsg->prevLogIndex, pMsg->prevLogTerm, pMsg->privateTerm, pMsg->commitIndex,
           pMsg->dataLen, pMsg->dataCount, s);
  syncNodeEventLog(pSyncNode, logBuf);
@@ -3020,7 +3019,7 @@ void syncLogRecvAppendEntriesBatch(SSyncNode* pSyncNode, const SyncAppendEntries
  char logBuf[256];
  snprintf(logBuf, sizeof(logBuf),
           "recv sync-append-entries-batch from %s:%d, {term:%" PRIu64 ", pre-index:%" PRId64 ", pre-term:%" PRIu64
-          ", pterm:%" PRIu64 ", commit:%" PRId64 ", datalen:%d, count:%d}, %s",
+          ", pterm:%" PRIu64 ", cmt:%" PRId64 ", datalen:%d, count:%d}, %s",
           host, port, pMsg->term, pMsg->prevLogIndex, pMsg->prevLogTerm, pMsg->privateTerm, pMsg->commitIndex,
           pMsg->dataLen, pMsg->dataCount, s);
  syncNodeEventLog(pSyncNode, logBuf);
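The renames in these hunks (term to tm, commit to cmt, standby to sby, and so on) shrink each status line so it is less likely to overrun the fixed-size logBuf; snprintf silently truncates once the buffer fills. A small sketch of how such truncation can be detected, my own illustration rather than code from this commit:

```c
#include <stdio.h>

int main(void) {
  char buf[16];
  // snprintf returns the length the full string would have had, so a
  // return value >= sizeof(buf) means the output was truncated
  int n = snprintf(buf, sizeof(buf), "term:%d, commit:%d", 12345, 67890);
  if (n >= (int)sizeof(buf)) {
    fprintf(stderr, "log line truncated (needed %d bytes, have %zu)\n", n + 1, sizeof(buf));
  }
  printf("%s\n", buf);
  return 0;
}
```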


@@ -101,7 +101,7 @@ cJSON *syncCfg2Json(SSyncCfg *pSyncCfg) {
char *syncCfg2Str(SSyncCfg *pSyncCfg) {
  cJSON *pJson = syncCfg2Json(pSyncCfg);
- char * serialized = cJSON_Print(pJson);
+ char *serialized = cJSON_Print(pJson);
  cJSON_Delete(pJson);
  return serialized;
}
@@ -109,10 +109,10 @@ char *syncCfg2Str(SSyncCfg *pSyncCfg) {
char *syncCfg2SimpleStr(SSyncCfg *pSyncCfg) {
  if (pSyncCfg != NULL) {
    int32_t len = 512;
-   char * s = taosMemoryMalloc(len);
+   char *s = taosMemoryMalloc(len);
    memset(s, 0, len);
-   snprintf(s, len, "{replica-num:%d, my-index:%d, ", pSyncCfg->replicaNum, pSyncCfg->myIndex);
+   snprintf(s, len, "{r-num:%d, my:%d, ", pSyncCfg->replicaNum, pSyncCfg->myIndex);
    char *p = s + strlen(s);
    for (int i = 0; i < pSyncCfg->replicaNum; ++i) {
      /*
@@ -206,7 +206,7 @@ cJSON *raftCfg2Json(SRaftCfg *pRaftCfg) {
char *raftCfg2Str(SRaftCfg *pRaftCfg) {
  cJSON *pJson = raftCfg2Json(pRaftCfg);
- char * serialized = cJSON_Print(pJson);
+ char *serialized = cJSON_Print(pJson);
  cJSON_Delete(pJson);
  return serialized;
}
@@ -285,7 +285,7 @@ int32_t raftCfgFromJson(const cJSON *pRoot, SRaftCfg *pRaftCfg) {
    (pRaftCfg->configIndexArr)[i] = atoll(pIndex->valuestring);
  }
- cJSON * pJsonSyncCfg = cJSON_GetObjectItem(pJson, "SSyncCfg");
+ cJSON *pJsonSyncCfg = cJSON_GetObjectItem(pJson, "SSyncCfg");
  int32_t code = syncCfgFromJson(pJsonSyncCfg, &(pRaftCfg->cfg));
  ASSERT(code == 0);


@@ -0,0 +1,176 @@
system sh/stop_dnodes.sh
system sh/deploy.sh -n dnode1 -i 1
system sh/exec.sh -n dnode1 -s start
sql connect
print =============== create database
sql drop database if exists d0
sql create database d0 keep 365000d,365000d,365000d
sql use d0
print =============== create super table
sql create table if not exists stb (ts timestamp, c1 int unsigned, c2 double, c3 binary(10), c4 nchar(10), c5 double) tags (city binary(20),district binary(20));
sql show stables
if $rows != 1 then
return -1
endi
print =============== create child table
sql create table ct1 using stb tags("BeiJing", "ChaoYang")
sql create table ct2 using stb tags("BeiJing", "HaiDian")
sql create table ct3 using stb tags("BeiJing", "PingGu")
sql create table ct4 using stb tags("BeiJing", "YanQing")
sql show tables
if $rows != 4 then
print rows $rows != 4
return -1
endi
print =============== step 1 insert records into ct1 - taosd merge
sql insert into ct1(ts,c1,c2) values('2022-05-03 16:59:00.010', 10, 20);
sql insert into ct1(ts,c1,c2,c3,c4) values('2022-05-03 16:59:00.011', 11, NULL, 'binary', 'nchar');
sql insert into ct1 values('2022-05-03 16:59:00.016', 16, NULL, NULL, 'nchar', NULL);
sql insert into ct1 values('2022-05-03 16:59:00.016', 17, NULL, NULL, 'nchar', 170);
sql insert into ct1 values('2022-05-03 16:59:00.020', 20, NULL, NULL, 'nchar', 200);
sql insert into ct1 values('2022-05-03 16:59:00.016', 18, NULL, NULL, 'nchar', 180);
sql insert into ct1 values('2022-05-03 16:59:00.021', 21, NULL, NULL, 'nchar', 210);
sql insert into ct1 values('2022-05-03 16:59:00.022', 22, NULL, NULL, 'nchar', 220);
print =============== step 2 insert records into ct1/ct2 - taosc merge for 2022-05-03 16:59:00.010
sql insert into ct1(ts,c1,c2) values('2022-05-03 16:59:00.010', 10,10), ('2022-05-03 16:59:00.010',20,10.0), ('2022-05-03 16:59:00.010',30,NULL) ct2(ts,c1) values('2022-05-03 16:59:00.010',10), ('2022-05-03 16:59:00.010',20) ct1(ts,c2) values('2022-05-03 16:59:00.010',10), ('2022-05-03 16:59:00.010',100) ct1(ts,c3) values('2022-05-03 16:59:00.010','bin1'), ('2022-05-03 16:59:00.010','bin2') ct1(ts,c4,c5) values('2022-05-03 16:59:00.010',NULL,NULL), ('2022-05-03 16:59:00.010','nchar4',1000.01) ct2(ts,c2,c3,c4,c5) values('2022-05-03 16:59:00.010',20,'xkl','zxc',10);
print =============== step 3 insert records into ct3
sql insert into ct3(ts,c1,c5) values('2022-05-03 16:59:00.020', 10,10);
sql insert into ct3(ts,c1,c5) values('2022-05-03 16:59:00.021', 10,10), ('2022-05-03 16:59:00.021',20,20.0);
sql insert into ct3(ts,c1,c5) values('2022-05-03 16:59:00.022', 30,30), ('2022-05-03 16:59:00.022',40,40.0),('2022-05-03 16:59:00.022',50,50.0);
sql insert into ct3(ts,c1,c5) values('2022-05-03 16:59:00.023', 60,60), ('2022-05-03 16:59:00.023',70,70.0),('2022-05-03 16:59:00.023',80,80.0), ('2022-05-03 16:59:00.023',90,90.0);
sql insert into ct3(ts,c1,c5) values('2022-05-03 16:59:00.024', 100,100), ('2022-05-03 16:59:00.025',110,110.0),('2022-05-03 16:59:00.025',120,120.0), ('2022-05-03 16:59:00.025',130,130.0);
sql insert into ct3(ts,c1,c5) values('2022-05-03 16:59:00.030', 140,140), ('2022-05-03 16:59:00.030',150,150.0),('2022-05-03 16:59:00.031',160,160.0), ('2022-05-03 16:59:00.030',170,170.0), ('2022-05-03 16:59:00.031',180,180.0);
sql insert into ct3(ts,c1,c5) values('2022-05-03 16:59:00.042', 190,190), ('2022-05-03 16:59:00.041',200,200.0),('2022-05-03 16:59:00.040',210,210.0);
sql insert into ct3(ts,c1,c5) values('2022-05-03 16:59:00.050', 220,220), ('2022-05-03 16:59:00.051',230,230.0),('2022-05-03 16:59:00.052',240,240.0);
print =============== step 4 insert records into ct4
sql insert into ct4(ts,c1,c3,c4) values('2022-05-03 16:59:00.020', 10,'b0','n0');
sql insert into ct4(ts,c1,c3,c4) values('2022-05-03 16:59:00.021', 20,'b1','n1'), ('2022-05-03 16:59:00.021',30,'b2','n2');
sql insert into ct4(ts,c1,c3,c4) values('2022-05-03 16:59:00.022', 40,'b3','n3'), ('2022-05-03 16:59:00.022',40,'b4','n4'),('2022-05-03 16:59:00.022',50,'b5','n5');
sql insert into ct4(ts,c1,c3,c4) values('2022-05-03 16:59:00.023', 60,'b6','n6'), ('2022-05-03 16:59:00.024',70,'b7','n7'),('2022-05-03 16:59:00.024',80,'b8','n8'), ('2022-05-03 16:59:00.023',90,'b9','n9');
print =============== step 5 query records of ct1 from memory(taosc and taosd merge)
sql select * from ct1;
print $data00 $data01 $data02 $data03 $data04 $data05
print $data10 $data11 $data12 $data13 $data14 $data15
print $data20 $data21 $data22 $data23 $data24 $data25
print $data30 $data31 $data32 $data33 $data34 $data35
print $data40 $data41 $data42 $data43 $data44 $data45
print $data50 $data51 $data52 $data53 $data54 $data55
print =============== step 6 query records of ct2 from memory(taosc and taosd merge)
sql select * from ct2;
print $data00 $data01 $data02 $data03 $data04 $data05
if $rows != 1 then
print rows $rows != 1
return -1
endi
print =============== step 7 query records of ct3 from memory
sql select * from ct3;
print $data00 $data01 $data02 $data03 $data04 $data05
print $data10 $data11 $data12 $data13 $data14 $data15
print $data20 $data21 $data22 $data23 $data24 $data25
print $data30 $data31 $data32 $data33 $data34 $data35
print $data40 $data41 $data42 $data43 $data44 $data45
print $data50 $data51 $data52 $data53 $data54 $data55
print $data60 $data61 $data62 $data63 $data64 $data65
print $data70 $data71 $data72 $data73 $data74 $data75
print $data80 $data81 $data82 $data83 $data84 $data85
print $data90 $data91 $data92 $data93 $data94 $data95
print $data[10][0] $data[10][1] $data[10][2] $data[10][3] $data[10][4] $data[10][5]
print $data[11][0] $data[11][1] $data[11][2] $data[11][3] $data[11][4] $data[11][5]
print $data[12][0] $data[12][1] $data[12][2] $data[12][3] $data[12][4] $data[12][5]
print $data[13][0] $data[13][1] $data[13][2] $data[13][3] $data[13][4] $data[13][5]
if $rows != 14 then
print rows $rows != 14
return -1
endi
print =============== step 8 query records of ct4 from memory
sql select * from ct4;
print $data00 $data01 $data02 $data03 $data04 $data05
print $data10 $data11 $data12 $data13 $data14 $data15
print $data20 $data21 $data22 $data23 $data24 $data25
print $data30 $data31 $data32 $data33 $data34 $data35
print $data40 $data41 $data42 $data43 $data44 $data45
if $rows != 5 then
print rows $rows != 5
return -1
endi
#==================== reboot to trigger commit data to file
system sh/exec.sh -n dnode1 -s stop -x SIGINT
system sh/exec.sh -n dnode1 -s start
print =============== step 9 query records of ct1 from file
sql select * from ct1;
print $data00 $data01 $data02 $data03 $data04 $data05
print $data10 $data11 $data12 $data13 $data14 $data15
print $data20 $data21 $data22 $data23 $data24 $data25
print $data30 $data31 $data32 $data33 $data34 $data35
print $data40 $data41 $data42 $data43 $data44 $data45
print $data50 $data51 $data52 $data53 $data54 $data55
if $rows != 6 then
print rows $rows != 6
return -1
endi
print =============== step 10 query records of ct2 from file
sql select * from ct2;
print $data00 $data01 $data02 $data03 $data04 $data05
if $rows != 1 then
print rows $rows != 1
return -1
endi
print =============== step 11 query records of ct3 from file
sql select * from ct3;
print $data00 $data01 $data02 $data03 $data04 $data05
print $data10 $data11 $data12 $data13 $data14 $data15
print $data20 $data21 $data22 $data23 $data24 $data25
print $data30 $data31 $data32 $data33 $data34 $data35
print $data40 $data41 $data42 $data43 $data44 $data45
print $data50 $data51 $data52 $data53 $data54 $data55
print $data60 $data61 $data62 $data63 $data64 $data65
print $data70 $data71 $data72 $data73 $data74 $data75
print $data80 $data81 $data82 $data83 $data84 $data85
print $data90 $data91 $data92 $data93 $data94 $data95
print $data[10][0] $data[10][1] $data[10][2] $data[10][3] $data[10][4] $data[10][5]
print $data[11][0] $data[11][1] $data[11][2] $data[11][3] $data[11][4] $data[11][5]
print $data[12][0] $data[12][1] $data[12][2] $data[12][3] $data[12][4] $data[12][5]
print $data[13][0] $data[13][1] $data[13][2] $data[13][3] $data[13][4] $data[13][5]
print =============== step 12 query records of ct4 from file
sql select * from ct4;
print $data00 $data01 $data02 $data03 $data04 $data05
print $data10 $data11 $data12 $data13 $data14 $data15
print $data20 $data21 $data22 $data23 $data24 $data25
print $data30 $data31 $data32 $data33 $data34 $data35
print $data40 $data41 $data42 $data43 $data44 $data45


@@ -79,8 +79,8 @@ if $rows != 3 then
return -1
endi
-if $data01 != 103 then
-print data01 $data01 != 103
+if $data01 != 303 then
+print data01 $data01 != 303
return -1
endi
@@ -89,8 +89,8 @@ if $data11 != 80 then
return -1
endi
-if $data21 != 40 then
-print data21 $data21 != 40
+if $data21 != 60 then
+print data21 $data21 != 60
return -1
endi
@@ -138,8 +138,8 @@ if $rows != 3 then
return -1
endi
-if $data01 != 103 then
-print data01 $data01 != 103
+if $data01 != 303 then
+print data01 $data01 != 303
return -1
endi
@@ -148,8 +148,8 @@ if $data11 != 80 then
return -1
endi
-if $data21 != 40 then
-print data21 $data21 != 40
+if $data21 != 60 then
+print data21 $data21 != 60
return -1
endi
@@ -208,8 +208,8 @@ if $data01 != 10 then
return -1
endi
-if $data11 != 103 then
-print data11 $data11 != 103
+if $data11 != 303 then
+print data11 $data11 != 303
return -1
endi
@@ -218,8 +218,8 @@ if $data21 != NULL then
return -1
endi
-if $data31 != 40 then
-print data31 $data31 != 40
+if $data31 != 60 then
+print data31 $data31 != 60
return -1
endi


@@ -0,0 +1,818 @@
system sh/stop_dnodes.sh
system sh/deploy.sh -n dnode1 -i 1
system sh/exec.sh -n dnode1 -s start
sql connect
print =============== create database
sql drop database if exists d0
sql create database d0 keep 365000d,365000d,365000d
sql use d0
print =============== create super table
sql create table if not exists stb (ts timestamp, c1 int unsigned, c2 double, c3 binary(10), c4 nchar(10), c5 double) tags (city binary(20),district binary(20));
sql show stables
if $rows != 1 then
return -1
endi
print =============== create child table
sql create table ct1 using stb tags("BeiJing", "ChaoYang")
sql create table ct2 using stb tags("BeiJing", "HaiDian")
sql create table ct3 using stb tags("BeiJing", "PingGu")
sql create table ct4 using stb tags("BeiJing", "YanQing")
sql show tables
if $rows != 4 then
print rows $rows != 4
return -1
endi
print =============== step 1 insert records into ct1 - taosd merge
sql insert into ct1(ts,c1,c2) values('2022-05-03 16:59:00.010', 10, 20);
sql insert into ct1(ts,c1,c2,c3,c4) values('2022-05-03 16:59:00.011', 11, NULL, 'binary', 'nchar');
sql insert into ct1 values('2022-05-03 16:59:00.016', 16, NULL, NULL, 'nchar', NULL);
sql insert into ct1 values('2022-05-03 16:59:00.016', 17, NULL, NULL, 'nchar', 170);
sql insert into ct1 values('2022-05-03 16:59:00.020', 20, NULL, NULL, 'nchar', 200);
sql insert into ct1 values('2022-05-03 16:59:00.016', 18, NULL, NULL, 'nchar', 180);
sql insert into ct1 values('2022-05-03 16:59:00.021', 21, NULL, NULL, 'nchar', 210);
sql insert into ct1 values('2022-05-03 16:59:00.022', 22, NULL, NULL, 'nchar', 220);
print =============== step 2 insert records into ct1/ct2 - taosc merge for 2022-05-03 16:59:00.010
sql insert into ct1(ts,c1,c2) values('2022-05-03 16:59:00.010', 10,10), ('2022-05-03 16:59:00.010',20,10.0), ('2022-05-03 16:59:00.010',30,NULL) ct2(ts,c1) values('2022-05-03 16:59:00.010',10), ('2022-05-03 16:59:00.010',20) ct1(ts,c2) values('2022-05-03 16:59:00.010',10), ('2022-05-03 16:59:00.010',100) ct1(ts,c3) values('2022-05-03 16:59:00.010','bin1'), ('2022-05-03 16:59:00.010','bin2') ct1(ts,c4,c5) values('2022-05-03 16:59:00.010',NULL,NULL), ('2022-05-03 16:59:00.010','nchar4',1000.01) ct2(ts,c2,c3,c4,c5) values('2022-05-03 16:59:00.010',20,'xkl','zxc',10);
print =============== step 3 insert records into ct3
sql insert into ct3(ts,c1,c5) values('2022-05-03 16:59:00.020', 10,10);
sql insert into ct3(ts,c1,c5) values('2022-05-03 16:59:00.021', 10,10), ('2022-05-03 16:59:00.021',20,20.0);
sql insert into ct3(ts,c1,c5) values('2022-05-03 16:59:00.022', 30,30), ('2022-05-03 16:59:00.022',40,40.0),('2022-05-03 16:59:00.022',50,50.0);
sql insert into ct3(ts,c1,c5) values('2022-05-03 16:59:00.023', 60,60), ('2022-05-03 16:59:00.023',70,70.0),('2022-05-03 16:59:00.023',80,80.0), ('2022-05-03 16:59:00.023',90,90.0);
sql insert into ct3(ts,c1,c5) values('2022-05-03 16:59:00.024', 100,100), ('2022-05-03 16:59:00.025',110,110.0),('2022-05-03 16:59:00.025',120,120.0), ('2022-05-03 16:59:00.025',130,130.0);
sql insert into ct3(ts,c1,c5) values('2022-05-03 16:59:00.030', 140,140), ('2022-05-03 16:59:00.030',150,150.0),('2022-05-03 16:59:00.031',160,160.0), ('2022-05-03 16:59:00.030',170,170.0), ('2022-05-03 16:59:00.031',180,180.0);
sql insert into ct3(ts,c1,c5) values('2022-05-03 16:59:00.042', 190,190), ('2022-05-03 16:59:00.041',200,200.0),('2022-05-03 16:59:00.040',210,210.0);
sql insert into ct3(ts,c1,c5) values('2022-05-03 16:59:00.050', 220,220), ('2022-05-03 16:59:00.051',230,230.0),('2022-05-03 16:59:00.052',240,240.0);
print =============== step 4 insert records into ct4
sql insert into ct4(ts,c1,c3,c4) values('2022-05-03 16:59:00.020', 10,'b0','n0');
sql insert into ct4(ts,c1,c3,c4) values('2022-05-03 16:59:00.021', 20,'b1','n1'), ('2022-05-03 16:59:00.021',30,'b2','n2');
sql insert into ct4(ts,c1,c3,c4) values('2022-05-03 16:59:00.022', 40,'b3','n3'), ('2022-05-03 16:59:00.022',40,'b4','n4'),('2022-05-03 16:59:00.022',50,'b5','n5');
sql insert into ct4(ts,c1,c3,c4) values('2022-05-03 16:59:00.023', 60,'b6','n6'), ('2022-05-03 16:59:00.024',70,'b7','n7'),('2022-05-03 16:59:00.024',80,'b8','n8'), ('2022-05-03 16:59:00.023',90,'b9','n9');
print =============== step 5 query records of ct1 from memory(taosc and taosd merge)
sql select * from ct1;
print $data00 $data01 $data02 $data03 $data04 $data05
print $data10 $data11 $data12 $data13 $data14 $data15
print $data20 $data21 $data22 $data23 $data24 $data25
print $data30 $data31 $data32 $data33 $data34 $data35
print $data40 $data41 $data42 $data43 $data44 $data45
print $data50 $data51 $data52 $data53 $data54 $data55
if $rows != 6 then
print rows $rows != 6
return -1
endi
if $data01 != 30 then
print data01 $data01 != 30
return -1
endi
if $data02 != 100.000000000 then
print data02 $data02 != 100.000000000
return -1
endi
if $data03 != bin2 then
print data03 $data03 != bin2
return -1
endi
if $data04 != nchar4 then
print data04 $data04 != nchar4
return -1
endi
if $data05 != 1000.010000000 then
print data05 $data05 != 1000.010000000
return -1
endi
if $data11 != 11 then
print data11 $data11 != 11
return -1
endi
if $data12 != NULL then
print data12 $data12 != NULL
return -1
endi
if $data13 != binary then
print data13 $data13 != binary
return -1
endi
if $data14 != nchar then
print data14 $data14 != nchar
return -1
endi
if $data15 != NULL then
print data15 $data15 != NULL
return -1
endi
if $data51 != 22 then
print data51 $data51 != 22
return -1
endi
if $data52 != NULL then
print data52 $data52 != NULL
return -1
endi
if $data53 != NULL then
print data53 $data53 != NULL
return -1
endi
if $data54 != nchar then
print data54 $data54 != nchar
return -1
endi
if $data55 != 220.000000000 then
print data55 $data55 != 220.000000000
return -1
endi
print =============== step 6 query records of ct2 from memory(taosc and taosd merge)
sql select * from ct2;
print $data00 $data01 $data02 $data03 $data04 $data05
if $rows != 1 then
print rows $rows != 1
return -1
endi
if $data01 != 20 then
print data01 $data01 != 20
return -1
endi
if $data02 != 20.000000000 then
print data02 $data02 != 20.000000000
return -1
endi
if $data03 != xkl then
print data03 $data03 != xkl
return -1
endi
if $data04 != zxc then
print data04 $data04 != zxc
return -1
endi
if $data05 != 10.000000000 then
print data05 $data05 != 10.000000000
return -1
endi
print =============== step 7 query records of ct3 from memory
sql select * from ct3;
print $data00 $data01 $data02 $data03 $data04 $data05
print $data10 $data11 $data12 $data13 $data14 $data15
print $data20 $data21 $data22 $data23 $data24 $data25
print $data30 $data31 $data32 $data33 $data34 $data35
print $data40 $data41 $data42 $data43 $data44 $data45
print $data50 $data51 $data52 $data53 $data54 $data55
print $data60 $data61 $data62 $data63 $data64 $data65
print $data70 $data71 $data72 $data73 $data74 $data75
print $data80 $data81 $data82 $data83 $data84 $data85
print $data90 $data91 $data92 $data93 $data94 $data95
print $data[10][0] $data[10][1] $data[10][2] $data[10][3] $data[10][4] $data[10][5]
print $data[11][0] $data[11][1] $data[11][2] $data[11][3] $data[11][4] $data[11][5]
print $data[12][0] $data[12][1] $data[12][2] $data[12][3] $data[12][4] $data[12][5]
print $data[13][0] $data[13][1] $data[13][2] $data[13][3] $data[13][4] $data[13][5]
if $rows != 14 then
print rows $rows != 14
return -1
endi
if $data01 != 10 then
print data01 $data01 != 10
return -1
endi
if $data11 != 20 then
print data11 $data11 != 20
return -1
endi
if $data21 != 50 then
print data21 $data21 != 50
return -1
endi
if $data31 != 90 then
print data31 $data31 != 90
return -1
endi
if $data41 != 100 then
print data41 $data41 != 100
return -1
endi
if $data51 != 130 then
print data51 $data51 != 130
return -1
endi
if $data61 != 170 then
print data61 $data61 != 170
return -1
endi
if $data71 != 180 then
print data71 $data71 != 180
return -1
endi
if $data81 != 210 then
print data81 $data81 != 210
return -1
endi
if $data91 != 200 then
print data91 $data91 != 200
return -1
endi
if $data[10][1] != 190 then
print data[10][1] $data[10][1] != 190
return -1
endi
if $data[11][1] != 220 then
print data[11][1] $data[11][1] != 220
return -1
endi
if $data[12][1] != 230 then
print data[12][1] $data[12][1] != 230
return -1
endi
if $data[13][1] != 240 then
print data[13][1] $data[13][1] != 240
return -1
endi
if $data05 != 10.000000000 then
print data05 $data05 != 10.000000000
return -1
endi
if $data15 != 20.000000000 then
print data15 $data15 != 20.000000000
return -1
endi
if $data25 != 50.000000000 then
print data25 $data25 != 50.000000000
return -1
endi
if $data35 != 90.000000000 then
print data35 $data35 != 90.000000000
return -1
endi
if $data45 != 100.000000000 then
print data45 $data45 != 100.000000000
return -1
endi
if $data55 != 130.000000000 then
print data55 $data55 != 130.000000000
return -1
endi
if $data65 != 170.000000000 then
print data65 $data65 != 170.000000000
return -1
endi
if $data75 != 180.000000000 then
print data75 $data75 != 180.000000000
return -1
endi
if $data85 != 210.000000000 then
print data85 $data85 != 210.000000000
return -1
endi
if $data95 != 200.000000000 then
print data95 $data95 != 200.000000000
return -1
endi
if $data[10][5] != 190.000000000 then
print data[10][5] $data[10][5] != 190.000000000
return -1
endi
if $data[11][5] != 220.000000000 then
print data[11][5] $data[11][5] != 220.000000000
return -1
endi
if $data[12][5] != 230.000000000 then
print data[12][5] $data[12][5] != 230.000000000
return -1
endi
if $data[13][5] != 240.000000000 then
print data[13][5] $data[13][5] != 240.000000000
return -1
endi
print =============== step 8 query records of ct4 from memory
sql select * from ct4;
print $data00 $data01 $data02 $data03 $data04 $data05
print $data10 $data11 $data12 $data13 $data14 $data15
print $data20 $data21 $data22 $data23 $data24 $data25
print $data30 $data31 $data32 $data33 $data34 $data35
print $data40 $data41 $data42 $data43 $data44 $data45
if $rows != 5 then
print rows $rows != 5
return -1
endi
if $data01 != 10 then
print data01 $data01 != 10
return -1
endi
if $data11 != 30 then
print data11 $data11 != 30
return -1
endi
if $data21 != 50 then
print data21 $data21 != 50
return -1
endi
if $data31 != 90 then
print data31 $data31 != 90
return -1
endi
if $data41 != 80 then
print data41 $data41 != 80
return -1
endi
if $data03 != b0 then
print data03 $data03 != b0
return -1
endi
if $data13 != b2 then
print data13 $data13 != b2
return -1
endi
if $data23 != b5 then
print data23 $data23 != b5
return -1
endi
if $data33 != b9 then
print data33 $data33 != b9
return -1
endi
if $data43 != b8 then
print data43 $data43 != b8
return -1
endi
if $data04 != n0 then
print data04 $data04 != n0
return -1
endi
if $data14 != n2 then
print data14 $data14 != n2
return -1
endi
if $data24 != n5 then
print data24 $data24 != n5
return -1
endi
if $data34 != n9 then
print data34 $data34 != n9
return -1
endi
if $data44 != n8 then
print data44 $data44 != n8
return -1
endi
#==================== reboot to trigger commit data to file
system sh/exec.sh -n dnode1 -s stop -x SIGINT
system sh/exec.sh -n dnode1 -s start
print =============== step 9 query records of ct1 from file
sql select * from ct1;
print $data00 $data01 $data02 $data03 $data04 $data05
print $data10 $data11 $data12 $data13 $data14 $data15
print $data20 $data21 $data22 $data23 $data24 $data25
print $data30 $data31 $data32 $data33 $data34 $data35
print $data40 $data41 $data42 $data43 $data44 $data45
print $data50 $data51 $data52 $data53 $data54 $data55
if $rows != 6 then
print rows $rows != 6
return -1
endi
if $data01 != 30 then
print data01 $data01 != 30
return -1
endi
if $data02 != 100.000000000 then
print data02 $data02 != 100.000000000
return -1
endi
if $data03 != bin2 then
print data03 $data03 != bin2
return -1
endi
if $data04 != nchar4 then
print data04 $data04 != nchar4
return -1
endi
if $data05 != 1000.010000000 then
print data05 $data05 != 1000.010000000
return -1
endi
if $data11 != 11 then
print data11 $data11 != 11
return -1
endi
if $data12 != NULL then
print data12 $data12 != NULL
return -1
endi
if $data13 != binary then
print data13 $data13 != binary
return -1
endi
if $data14 != nchar then
print data14 $data14 != nchar
return -1
endi
if $data15 != NULL then
print data15 $data15 != NULL
return -1
endi
if $data51 != 22 then
print data51 $data51 != 22
return -1
endi
if $data52 != NULL then
print data52 $data52 != NULL
return -1
endi
if $data53 != NULL then
print data53 $data53 != NULL
return -1
endi
if $data54 != nchar then
print data54 $data54 != nchar
return -1
endi
if $data55 != 220.000000000 then
print data55 $data55 != 220.000000000
return -1
endi
print =============== step 10 query records of ct2 from file
sql select * from ct2;
print $data00 $data01 $data02 $data03 $data04 $data05
if $rows != 1 then
print rows $rows != 1
return -1
endi
if $data01 != 20 then
print data01 $data01 != 20
return -1
endi
if $data02 != 20.000000000 then
print data02 $data02 != 20.000000000
return -1
endi
if $data03 != xkl then
print data03 $data03 != xkl
return -1
endi
if $data04 != zxc then
print data04 $data04 != zxc
return -1
endi
if $data05 != 10.000000000 then
print data05 $data05 != 10.000000000
return -1
endi
print =============== step 11 query records of ct3 from file
sql select * from ct3;
print $data00 $data01 $data02 $data03 $data04 $data05
print $data10 $data11 $data12 $data13 $data14 $data15
print $data20 $data21 $data22 $data23 $data24 $data25
print $data30 $data31 $data32 $data33 $data34 $data35
print $data40 $data41 $data42 $data43 $data44 $data45
print $data50 $data51 $data52 $data53 $data54 $data55
print $data60 $data61 $data62 $data63 $data64 $data65
print $data70 $data71 $data72 $data73 $data74 $data75
print $data80 $data81 $data82 $data83 $data84 $data85
print $data90 $data91 $data92 $data93 $data94 $data95
print $data[10][0] $data[10][1] $data[10][2] $data[10][3] $data[10][4] $data[10][5]
print $data[11][0] $data[11][1] $data[11][2] $data[11][3] $data[11][4] $data[11][5]
print $data[12][0] $data[12][1] $data[12][2] $data[12][3] $data[12][4] $data[12][5]
print $data[13][0] $data[13][1] $data[13][2] $data[13][3] $data[13][4] $data[13][5]
if $rows != 14 then
print rows $rows != 14
return -1
endi
if $data01 != 10 then
print data01 $data01 != 10
return -1
endi
if $data11 != 20 then
print data11 $data11 != 20
return -1
endi
if $data21 != 50 then
print data21 $data21 != 50
return -1
endi
if $data31 != 90 then
print data31 $data31 != 90
return -1
endi
if $data41 != 100 then
print data41 $data41 != 100
return -1
endi
if $data51 != 130 then
print data51 $data51 != 130
return -1
endi
if $data61 != 170 then
print data61 $data61 != 170
return -1
endi
if $data71 != 180 then
print data71 $data71 != 180
return -1
endi
if $data81 != 210 then
print data81 $data81 != 210
return -1
endi
if $data91 != 200 then
print data91 $data91 != 200
return -1
endi
if $data[10][1] != 190 then
print data[10][1] $data[10][1] != 190
return -1
endi
if $data[11][1] != 220 then
print data[11][1] $data[11][1] != 220
return -1
endi
if $data[12][1] != 230 then
print data[12][1] $data[12][1] != 230
return -1
endi
if $data[13][1] != 240 then
print data[13][1] $data[13][1] != 240
return -1
endi
if $data05 != 10.000000000 then
print data05 $data05 != 10.000000000
return -1
endi
if $data15 != 20.000000000 then
print data15 $data15 != 20.000000000
return -1
endi
if $data25 != 50.000000000 then
print data25 $data25 != 50.000000000
return -1
endi
if $data35 != 90.000000000 then
print data35 $data35 != 90.000000000
return -1
endi
if $data45 != 100.000000000 then
print data45 $data45 != 100.000000000
return -1
endi
if $data55 != 130.000000000 then
print data55 $data55 != 130.000000000
return -1
endi
if $data65 != 170.000000000 then
print data65 $data65 != 170.000000000
return -1
endi
if $data75 != 180.000000000 then
print data75 $data75 != 180.000000000
return -1
endi
if $data85 != 210.000000000 then
print data85 $data85 != 210.000000000
return -1
endi
if $data95 != 200.000000000 then
print data95 $data95 != 200.000000000
return -1
endi
if $data[10][5] != 190.000000000 then
print data[10][5] $data[10][5] != 190.000000000
return -1
endi
if $data[11][5] != 220.000000000 then
print data[11][5] $data[11][5] != 220.000000000
return -1
endi
if $data[12][5] != 230.000000000 then
print data[12][5] $data[12][5] != 230.000000000
return -1
endi
if $data[13][5] != 240.000000000 then
print data[13][5] $data[13][5] != 240.000000000
return -1
endi
print =============== step 12 query records of ct4 from file
sql select * from ct4;
print $data00 $data01 $data02 $data03 $data04 $data05
print $data10 $data11 $data12 $data13 $data14 $data15
print $data20 $data21 $data22 $data23 $data24 $data25
print $data30 $data31 $data32 $data33 $data34 $data35
print $data40 $data41 $data42 $data43 $data44 $data45
if $rows != 5 then
print rows $rows != 5
return -1
endi
if $data01 != 10 then
print data01 $data01 != 10
return -1
endi
if $data11 != 30 then
print data11 $data11 != 30
return -1
endi
if $data21 != 50 then
print data21 $data21 != 50
return -1
endi
if $data31 != 90 then
print data31 $data31 != 90
return -1
endi
if $data41 != 80 then
print data41 $data41 != 80
return -1
endi
if $data03 != b0 then
print data03 $data03 != b0
return -1
endi
if $data13 != b2 then
print data13 $data13 != b2
return -1
endi
if $data23 != b5 then
print data23 $data23 != b5
return -1
endi
if $data33 != b9 then
print data33 $data33 != b9
return -1
endi
if $data43 != b8 then
print data43 $data43 != b8
return -1
endi
if $data04 != n0 then
print data04 $data04 != n0
return -1
endi
if $data14 != n2 then
print data14 $data14 != n2
return -1
endi
if $data24 != n5 then
print data24 $data24 != n5
return -1
endi
if $data34 != n9 then
print data34 $data34 != n9
return -1
endi
if $data44 != n8 then
print data44 $data44 != n8
return -1
endi


@@ -391,13 +391,13 @@ if $data02 != 4 then
return -1
endi
-if $data03 != 14 then
-print ======$data03
+if $data03 != 50 then
+print ======$data03 != 50
return -1
endi
-if $data04 != 4 then
-print ======$data04
+if $data04 != 20 then
+print ======$data04 != 20
return -1
endi
@@ -421,13 +421,13 @@ if $data12 != 4 then
return -1
endi
-if $data13 != 10 then
-print ======$data13
+if $data13 != 46 then
+print ======$data13 != 46
return -1
endi
-if $data14 != 3 then
-print ======$data14
+if $data14 != 20 then
+print ======$data14 != 20
return -1
endi


@@ -12,24 +12,24 @@ class TDTestCase:
        tdSql.init(conn.cursor(),True)
        self.setsql = TDSetSql()
        # name of normal table
        self.ntbname = 'ntb'
        # name of stable
        self.stbname = 'stb'
        # structure of column
        self.column_dict = {
            'ts':'timestamp',
            'c1':'int',
            'c2':'float',
            'c3':'double'
        }
        # structure of tag
        self.tag_dict = {
            't0':'int'
        }
        # number of child tables
        self.tbnum = 2
        # values of tag, the number of values should equal to tbnum
        self.tag_values = [
            f'10',
            f'100'
        ]
@@ -43,7 +43,7 @@ class TDTestCase:
        self.db_percision = ['ms','us','ns']

    def tbtype_check(self, tb_type):
        if tb_type == 'normal table' or tb_type == 'child table':
            tdSql.checkRows(len(self.values_list))
        elif tb_type == 'stable':
            tdSql.checkRows(len(self.values_list) * self.tbnum)

    def data_check(self, tbname, tb_type):
@@ -98,7 +98,7 @@ class TDTestCase:
        self.now_check_ntb()
        self.now_check_stb()

    def stop(self):
        tdSql.close()
        tdLog.success(f"{__file__} successfully executed")


@@ -2,11 +2,11 @@ from util.log import *
from util.cases import *
from util.sql import *
import numpy as np
import random


class TDTestCase:
    updatecfgDict = {'debugFlag': 143 ,"cDebugFlag":143,"uDebugFlag":143 ,"rpcDebugFlag":143 , "tmrDebugFlag":143 ,
                     "jniDebugFlag":143 ,"simDebugFlag":143,"dDebugFlag":143, "dDebugFlag":143,"vDebugFlag":143,"mDebugFlag":143,"qDebugFlag":143,
                     "wDebugFlag":143,"sDebugFlag":143,"tsdbDebugFlag":143,"tqDebugFlag":143 ,"fsDebugFlag":143 ,"udfDebugFlag":143,
                     "maxTablesPerVnode":2 ,"minTablesPerVnode":2,"tableIncStepPerVnode":2 }
@@ -18,7 +18,7 @@ class TDTestCase:
        self.ts = 1537146000000

    def prepare_datas_of_distribute(self):
        # prepare data for 20 tables distributed across different vgroups
        tdSql.execute("create database if not exists testdb keep 3650 duration 1000 vgroups 5")
        tdSql.execute(" use testdb ")
@@ -89,17 +89,17 @@ class TDTestCase:
        vgroups = tdSql.queryResult

        vnode_tables={}
        for vgroup_id in vgroups:
            vnode_tables[vgroup_id[0]]=[]

        # check the sub_tables of each vnode to make sure they are distributed
        tdSql.query("show tables like 'ct%'")
        table_names = tdSql.queryResult
        tablenames = []
        for table_name in table_names:
            vnode_tables[table_name[6]].append(table_name[0])
        self.vnode_disbutes = vnode_tables

        count = 0
@@ -108,7 +108,7 @@ class TDTestCase:
                count+=1
        if count < 2:
            tdLog.exit(" the datas of all not satisfy sub_table has been distributed ")

    def distribute_agg_query(self):
        # basic filter
        tdSql.query("select apercentile(c1 , 20) from stb1 where c1 is null")
@@ -129,12 +129,12 @@ class TDTestCase:
        tdSql.query("select apercentile(c1,20) from stb1 where t1> 4 partition by tbname")
        tdSql.checkRows(15)

        # union all
        tdSql.query("select apercentile(c1,20) from stb1 union all select apercentile(c1,20) from stb1 ")
        tdSql.checkRows(2)
        tdSql.checkData(0,0,7.389181281)

        # join
        tdSql.execute(" create database if not exists db ")
        tdSql.execute(" use db ")
@@ -142,7 +142,7 @@ class TDTestCase:
        tdSql.execute(" create table tb1 using st tags(1) ")
        tdSql.execute(" create table tb2 using st tags(2) ")

        for i in range(10):
            ts = i*10 + self.ts
            tdSql.execute(f" insert into tb1 values({ts},{i},{i}.0)")
@@ -153,7 +153,7 @@ class TDTestCase:
        tdSql.checkData(0,0,9.000000000)
        tdSql.checkData(0,0,9.000000000)

        # group by
        tdSql.execute(" use testdb ")
        tdSql.query(" select max(c1),c1 from stb1 group by t1 ")
        tdSql.checkRows(20)
@@ -189,7 +189,7 @@ class TDTestCase:
        self.check_distribute_datas()
        self.distribute_agg_query()

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


@ -7,7 +7,7 @@ import platform
class TDTestCase: class TDTestCase:
updatecfgDict = {'debugFlag': 143 ,"cDebugFlag":143,"uDebugFlag":143 ,"rpcDebugFlag":143 , "tmrDebugFlag":143 , updatecfgDict = {'debugFlag': 143 ,"cDebugFlag":143,"uDebugFlag":143 ,"rpcDebugFlag":143 , "tmrDebugFlag":143 ,
"jniDebugFlag":143 ,"simDebugFlag":143,"dDebugFlag":143, "dDebugFlag":143,"vDebugFlag":143,"mDebugFlag":143,"qDebugFlag":143, "jniDebugFlag":143 ,"simDebugFlag":143,"dDebugFlag":143, "dDebugFlag":143,"vDebugFlag":143,"mDebugFlag":143,"qDebugFlag":143,
"wDebugFlag":143,"sDebugFlag":143,"tsdbDebugFlag":143,"tqDebugFlag":143 ,"fsDebugFlag":143 ,"udfDebugFlag":143, "wDebugFlag":143,"sDebugFlag":143,"tsdbDebugFlag":143,"tqDebugFlag":143 ,"fsDebugFlag":143 ,"udfDebugFlag":143,
"maxTablesPerVnode":2 ,"minTablesPerVnode":2,"tableIncStepPerVnode":2 } "maxTablesPerVnode":2 ,"minTablesPerVnode":2,"tableIncStepPerVnode":2 }
@ -24,7 +24,7 @@ class TDTestCase:
avg_sql = f"select avg({col_name}) from {tbname};" avg_sql = f"select avg({col_name}) from {tbname};"
same_sql = f"select {col_name} from {tbname} where {col_name} is not null " same_sql = f"select {col_name} from {tbname} where {col_name} is not null "
tdSql.query(same_sql) tdSql.query(same_sql)
pre_data = np.array(tdSql.queryResult)[np.array(tdSql.queryResult) != None] pre_data = np.array(tdSql.queryResult)[np.array(tdSql.queryResult) != None]
if (platform.system().lower() == 'windows' and pre_data.dtype == 'int32'): if (platform.system().lower() == 'windows' and pre_data.dtype == 'int32'):
@ -35,7 +35,7 @@ class TDTestCase:
        tdSql.checkData(0,0,pre_avg)

    def prepare_datas_of_distribute(self):

        # prepare data for 20 tables distributed across different vgroups
        tdSql.execute("create database if not exists testdb keep 3650 duration 1000 vgroups 5")
        tdSql.execute(" use testdb ")
@ -106,17 +106,17 @@ class TDTestCase:
        vgroups = tdSql.queryResult

        vnode_tables={}

        for vgroup_id in vgroups:
            vnode_tables[vgroup_id[0]]=[]

        # check the sub_tables of each vnode to make sure the sub_tables have been distributed
        tdSql.query("show tables like 'ct%'")
        table_names = tdSql.queryResult
        tablenames = []
        for table_name in table_names:
            vnode_tables[table_name[6]].append(table_name[0])
        self.vnode_disbutes = vnode_tables

        count = 0
@ -127,14 +127,14 @@ class TDTestCase:
            tdLog.exit(" the sub_tables are not distributed across multiple vnodes as expected ")

    def check_avg_distribute_diff_vnode(self,col_name):

        vgroup_ids = []
        for k ,v in self.vnode_disbutes.items():
            if len(v)>=2:
                vgroup_ids.append(k)

        distribute_tbnames = []

        for vgroup_id in vgroup_ids:
            vnode_tables = self.vnode_disbutes[vgroup_id]
            distribute_tbnames.append(random.sample(vnode_tables,1)[0])
@ -143,7 +143,7 @@ class TDTestCase:
            tbname_ins += "'%s' ,"%tbname

        tbname_filters = tbname_ins[:-1]

        avg_sql = f"select avg({col_name}) from stb1 where tbname in ({tbname_filters});"

        same_sql = f"select {col_name} from stb1 where tbname in ({tbname_filters}) and {col_name} is not null "
@ -158,8 +158,8 @@ class TDTestCase:
        tdSql.checkData(0,0,pre_avg)

    def check_avg_status(self):
        # check avg function work status
        tdSql.query("show tables like 'ct%'")
        table_names = tdSql.queryResult

        tablenames = []
@ -168,26 +168,26 @@ class TDTestCase:
        tdSql.query("desc stb1")
        col_names = tdSql.queryResult

        colnames = []
        for col_name in col_names:
            if col_name[1] in ["INT" ,"BIGINT" ,"SMALLINT" ,"TINYINT" , "FLOAT" ,"DOUBLE"]:
                colnames.append(col_name[0])

        for tablename in tablenames:
            for colname in colnames:
                self.check_avg_functions(tablename,colname)

        # check avg function for different vnodes
        for colname in colnames:
            if colname.startswith("c"):
                self.check_avg_distribute_diff_vnode(colname)
            else:
                # self.check_avg_distribute_diff_vnode(colname) # bug for tag
                pass
    def distribute_agg_query(self):
        # basic filter
        tdSql.query(" select avg(c1) from stb1 ")
@ -211,7 +211,7 @@ class TDTestCase:
        tdSql.query("select avg(c1) from stb1 where t1> 4 partition by tbname")
        tdSql.checkRows(15)

        # union all
        tdSql.query("select avg(c1) from stb1 union all select avg(c1) from stb1 ")
        tdSql.checkRows(2)
        tdSql.checkData(0,0,14.086956522)
@ -220,7 +220,7 @@ class TDTestCase:
        tdSql.checkRows(1)
        tdSql.checkData(0,0,14.086956522)

        # join
        tdSql.execute(" create database if not exists db ")
        tdSql.execute(" use db ")
@ -228,7 +228,7 @@ class TDTestCase:
        tdSql.execute(" create table tb1 using st tags(1) ")
        tdSql.execute(" create table tb2 using st tags(2) ")

        for i in range(10):
            ts = i*10 + self.ts
            tdSql.execute(f" insert into tb1 values({ts},{i},{i}.0)")
@ -239,7 +239,7 @@ class TDTestCase:
        tdSql.checkData(0,0,4.500000000)
        tdSql.checkData(0,1,4.500000000)

        # group by
        tdSql.execute(" use testdb ")

        # partition by tbname or partition by tag
@ -270,7 +270,7 @@ class TDTestCase:
        self.check_avg_status()
        self.distribute_agg_query()

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)
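The distribution bookkeeping above keys sub-tables by the vgroup id that `show tables` reports (index 6 of each result row in these tests), then only exercises vgroups owning at least two sub-tables, which is what lets the later `tbname in (...)` queries force a multi-table scan inside a single vnode. A minimal standalone sketch of the same logic, using hypothetical rows in place of a live `tdSql` connection:

```python
import random

# Hypothetical `show tables` result rows: table name first,
# owning vgroup id at index 6, mirroring the test's indexing.
rows = [
    ("ct1", None, None, None, None, None, 4),
    ("ct2", None, None, None, None, None, 4),
    ("ct3", None, None, None, None, None, 5),
]

vnode_tables = {}
for name, *_, vgroup_id in rows:
    # group sub-table names by the vgroup (vnode) that owns them
    vnode_tables.setdefault(vgroup_id, []).append(name)

# only vgroups with at least two sub-tables qualify for the cross-vnode check
multi = {vg: tbs for vg, tbs in vnode_tables.items() if len(tbs) >= 2}

# pick one random table per qualifying vgroup, as the test does
picked = [random.choice(tbs) for tbs in multi.values()]
print(vnode_tables, picked)
```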

View File

@ -2,11 +2,11 @@ from util.log import *
from util.cases import *
from util.sql import *
import numpy as np
import random


class TDTestCase:
    updatecfgDict = {'debugFlag': 143 ,"cDebugFlag":143,"uDebugFlag":143 ,"rpcDebugFlag":143 , "tmrDebugFlag":143 ,
    "jniDebugFlag":143 ,"simDebugFlag":143,"dDebugFlag":143, "dDebugFlag":143,"vDebugFlag":143,"mDebugFlag":143,"qDebugFlag":143,
    "wDebugFlag":143,"sDebugFlag":143,"tsdbDebugFlag":143,"tqDebugFlag":143 ,"fsDebugFlag":143 ,"udfDebugFlag":143,
    "maxTablesPerVnode":2 ,"minTablesPerVnode":2,"tableIncStepPerVnode":2 }
@ -25,7 +25,7 @@ class TDTestCase:
        same_sql = f"select sum(c) from (select {col_name} ,1 as c from {tbname} where {col_name} is not null) "

        tdSql.query(max_sql)
        max_result = tdSql.queryResult

        tdSql.query(same_sql)
        same_result = tdSql.queryResult
@ -37,7 +37,7 @@ class TDTestCase:

    def prepare_datas_of_distribute(self):

        # prepare data for 20 tables distributed across different vgroups
        tdSql.execute("create database if not exists testdb keep 3650 duration 1000 vgroups 5")
        tdSql.execute(" use testdb ")
@ -108,17 +108,17 @@ class TDTestCase:
        vgroups = tdSql.queryResult

        vnode_tables={}

        for vgroup_id in vgroups:
            vnode_tables[vgroup_id[0]]=[]

        # check the sub_tables of each vnode to make sure the sub_tables have been distributed
        tdSql.query("show tables like 'ct%'")
        table_names = tdSql.queryResult
        tablenames = []
        for table_name in table_names:
            vnode_tables[table_name[6]].append(table_name[0])
        self.vnode_disbutes = vnode_tables

        count = 0
@ -129,14 +129,14 @@ class TDTestCase:
            tdLog.exit(" the sub_tables are not distributed across multiple vnodes as expected ")

    def check_count_distribute_diff_vnode(self,col_name):

        vgroup_ids = []
        for k ,v in self.vnode_disbutes.items():
            if len(v)>=2:
                vgroup_ids.append(k)

        distribute_tbnames = []

        for vgroup_id in vgroup_ids:
            vnode_tables = self.vnode_disbutes[vgroup_id]
            distribute_tbnames.append(random.sample(vnode_tables,1)[0])
@ -145,13 +145,13 @@ class TDTestCase:
            tbname_ins += "'%s' ,"%tbname

        tbname_filters = tbname_ins[:-1]

        max_sql = f"select count({col_name}) from stb1 where tbname in ({tbname_filters});"

        same_sql = f"select sum(c) from (select {col_name} ,1 as c from stb1 where tbname in ({tbname_filters}) and {col_name} is not null) "

        tdSql.query(max_sql)
        max_result = tdSql.queryResult

        tdSql.query(same_sql)
        same_result = tdSql.queryResult
@ -162,8 +162,8 @@ class TDTestCase:
            tdLog.info(" count function works as expected, sql : %s "% max_sql)

    def check_count_status(self):
        # check count function work status
        tdSql.query("show tables like 'ct%'")
        table_names = tdSql.queryResult

        tablenames = []
@ -172,26 +172,26 @@ class TDTestCase:
        tdSql.query("desc stb1")
        col_names = tdSql.queryResult

        colnames = []
        for col_name in col_names:
            if col_name[1] in ["INT" ,"BIGINT" ,"SMALLINT" ,"TINYINT" , "FLOAT" ,"DOUBLE"]:
                colnames.append(col_name[0])

        for tablename in tablenames:
            for colname in colnames:
                self.check_count_functions(tablename,colname)

        # check count function for different vnodes
        for colname in colnames:
            if colname.startswith("c"):
                self.check_count_distribute_diff_vnode(colname)
            else:
                # self.check_count_distribute_diff_vnode(colname) # bug for tag
                pass
    def distribute_agg_query(self):
        # basic filter
        tdSql.query("select count(c1) from stb1 ")
@ -212,12 +212,12 @@ class TDTestCase:
        tdSql.query("select count(c1) from stb1 where t1> 4 partition by tbname")
        tdSql.checkRows(15)

        # union all
        tdSql.query("select count(c1) from stb1 union all select count(c1) from stb1 ")
        tdSql.checkRows(2)
        tdSql.checkData(0,0,184)

        # join
        tdSql.execute(" create database if not exists db ")
        tdSql.execute(" use db ")
@ -225,7 +225,7 @@ class TDTestCase:
        tdSql.execute(" create table tb1 using st tags(1) ")
        tdSql.execute(" create table tb2 using st tags(2) ")

        for i in range(10):
            ts = i*10 + self.ts
            tdSql.execute(f" insert into tb1 values({ts},{i},{i}.0)")
@ -236,7 +236,7 @@ class TDTestCase:
        tdSql.checkData(0,0,10)
        tdSql.checkData(0,1,10)

        # group by
        tdSql.execute(" use testdb ")

        tdSql.query(" select count(*) from stb1 ")
@ -251,7 +251,7 @@ class TDTestCase:
        # partition by tbname or partition by tag
        tdSql.query("select max(c1),tbname from stb1 partition by tbname")
        query_data = tdSql.queryResult

        for row in query_data:
            tbname = row[1]
            tdSql.query(" select max(c1) from %s "%tbname)
@ -259,7 +259,7 @@ class TDTestCase:
        tdSql.query("select max(c1),tbname from stb1 partition by t1")
        query_data = tdSql.queryResult

        for row in query_data:
            tbname = row[1]
            tdSql.query(" select max(c1) from %s "%tbname)
@ -287,7 +287,7 @@ class TDTestCase:
        self.check_count_status()
        self.distribute_agg_query()

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)
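The count check above rewrites `count(col)` as a sum of ones over exactly the non-NULL rows, so the two queries must agree regardless of how the sub-tables are distributed. A sketch of how the pair of SQL strings is assembled (the column name and filter literal are placeholders in the spirit of the test):

```python
def count_check_sqls(col_name: str, tbname_filters: str):
    # count(col) ignores NULLs, so it must equal summing a literal 1
    # over the non-NULL rows of the same column
    count_sql = f"select count({col_name}) from stb1 where tbname in ({tbname_filters});"
    same_sql = (
        f"select sum(c) from (select {col_name} ,1 as c from stb1 "
        f"where tbname in ({tbname_filters}) and {col_name} is not null) "
    )
    return count_sql, same_sql

print(count_check_sqls("c1", "'ct1' ,'ct4'"))
```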

View File

@ -2,11 +2,11 @@ from util.log import *
from util.cases import *
from util.sql import *
import numpy as np
import random


class TDTestCase:
    updatecfgDict = {'debugFlag': 143 ,"cDebugFlag":143,"uDebugFlag":143 ,"rpcDebugFlag":143 , "tmrDebugFlag":143 ,
    "jniDebugFlag":143 ,"simDebugFlag":143,"dDebugFlag":143, "dDebugFlag":143,"vDebugFlag":143,"mDebugFlag":143,"qDebugFlag":143,
    "wDebugFlag":143,"sDebugFlag":143,"tsdbDebugFlag":143,"tqDebugFlag":143 ,"fsDebugFlag":143 ,"udfDebugFlag":143,
    "maxTablesPerVnode":2 ,"minTablesPerVnode":2,"tableIncStepPerVnode":2 }
@ -25,7 +25,7 @@ class TDTestCase:
        same_sql = f"select {col_name} from {tbname} order by {col_name} desc limit 1"

        tdSql.query(max_sql)
        max_result = tdSql.queryResult

        tdSql.query(same_sql)
        same_result = tdSql.queryResult
@ -37,7 +37,7 @@ class TDTestCase:

    def prepare_datas_of_distribute(self):

        # prepare data for 20 tables distributed across different vgroups
        tdSql.execute("create database if not exists testdb keep 3650 duration 1000 vgroups 5")
        tdSql.execute(" use testdb ")
@ -108,17 +108,17 @@ class TDTestCase:
        vgroups = tdSql.queryResult

        vnode_tables={}

        for vgroup_id in vgroups:
            vnode_tables[vgroup_id[0]]=[]

        # check the sub_tables of each vnode to make sure the sub_tables have been distributed
        tdSql.query("show tables like 'ct%'")
        table_names = tdSql.queryResult
        tablenames = []
        for table_name in table_names:
            vnode_tables[table_name[6]].append(table_name[0])
        self.vnode_disbutes = vnode_tables

        count = 0
@ -129,14 +129,14 @@ class TDTestCase:
            tdLog.exit(" the sub_tables are not distributed across multiple vnodes as expected ")

    def check_max_distribute_diff_vnode(self,col_name):

        vgroup_ids = []
        for k ,v in self.vnode_disbutes.items():
            if len(v)>=2:
                vgroup_ids.append(k)

        distribute_tbnames = []

        for vgroup_id in vgroup_ids:
            vnode_tables = self.vnode_disbutes[vgroup_id]
            distribute_tbnames.append(random.sample(vnode_tables,1)[0])
@ -145,13 +145,13 @@ class TDTestCase:
            tbname_ins += "'%s' ,"%tbname

        tbname_filters = tbname_ins[:-1]

        max_sql = f"select max({col_name}) from stb1 where tbname in ({tbname_filters});"

        same_sql = f"select {col_name} from stb1 where tbname in ({tbname_filters}) order by {col_name} desc limit 1"

        tdSql.query(max_sql)
        max_result = tdSql.queryResult

        tdSql.query(same_sql)
        same_result = tdSql.queryResult
@ -162,8 +162,8 @@ class TDTestCase:
            tdLog.info(" max function works as expected, sql : %s "% max_sql)

    def check_max_status(self):
        # check max function work status
        tdSql.query("show tables like 'ct%'")
        table_names = tdSql.queryResult

        tablenames = []
@ -172,26 +172,26 @@ class TDTestCase:
        tdSql.query("desc stb1")
        col_names = tdSql.queryResult

        colnames = []
        for col_name in col_names:
            if col_name[1] in ["INT" ,"BIGINT" ,"SMALLINT" ,"TINYINT" , "FLOAT" ,"DOUBLE"]:
                colnames.append(col_name[0])

        for tablename in tablenames:
            for colname in colnames:
                self.check_max_functions(tablename,colname)

        # check max function for different vnodes
        for colname in colnames:
            if colname.startswith("c"):
                self.check_max_distribute_diff_vnode(colname)
            else:
                # self.check_max_distribute_diff_vnode(colname) # bug for tag
                pass
    def distribute_agg_query(self):
        # basic filter
        tdSql.query("select max(c1) from stb1 where c1 is null")
@ -212,12 +212,12 @@ class TDTestCase:
        tdSql.query("select max(c1) from stb1 where t1> 4 partition by tbname")
        tdSql.checkRows(15)

        # union all
        tdSql.query("select max(c1) from stb1 union all select max(c1) from stb1 ")
        tdSql.checkRows(2)
        tdSql.checkData(0,0,28)

        # join
        tdSql.execute(" create database if not exists db ")
        tdSql.execute(" use db ")
@ -225,7 +225,7 @@ class TDTestCase:
        tdSql.execute(" create table tb1 using st tags(1) ")
        tdSql.execute(" create table tb2 using st tags(2) ")

        for i in range(10):
            ts = i*10 + self.ts
            tdSql.execute(f" insert into tb1 values({ts},{i},{i}.0)")
@ -236,7 +236,7 @@ class TDTestCase:
        tdSql.checkData(0,0,9)
        tdSql.checkData(0,0,9.00000)

        # group by
        tdSql.execute(" use testdb ")

        tdSql.query(" select max(c1),c1 from stb1 group by t1 ")
        tdSql.checkRows(20)
@ -263,13 +263,13 @@ class TDTestCase:
        tdSql.checkRows(1)
        tdSql.checkData(0,0,28)
        tdSql.checkData(0,1,19)
        tdSql.checkData(0,2,311110.000000000)
        tdSql.checkData(0,3,2109)

        # partition by tbname or partition by tag
        tdSql.query("select max(c1),tbname from stb1 partition by tbname")
        query_data = tdSql.queryResult

        for row in query_data:
            tbname = row[1]
            tdSql.query(" select max(c1) from %s "%tbname)
@ -277,7 +277,7 @@ class TDTestCase:
        tdSql.query("select max(c1),tbname from stb1 partition by t1")
        query_data = tdSql.queryResult

        for row in query_data:
            tbname = row[1]
            tdSql.query(" select max(c1) from %s "%tbname)
@ -305,7 +305,7 @@ class TDTestCase:
        self.check_max_status()
        self.distribute_agg_query()

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)
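Here `max()` is cross-checked against an equivalent `order by ... desc limit 1` query over the same randomly sampled tables, and the `tbname in (...)` filter is assembled with a trailing-comma trim, as the hunks above show. A standalone sketch of both steps (table names are hypothetical):

```python
def max_check_sqls(col_name, distribute_tbnames):
    tbname_ins = ""
    for tbname in distribute_tbnames:
        tbname_ins += "'%s' ," % tbname
    tbname_filters = tbname_ins[:-1]  # drop the trailing comma

    max_sql = f"select max({col_name}) from stb1 where tbname in ({tbname_filters});"
    # the largest value is also the first row when sorted descending
    same_sql = (
        f"select {col_name} from stb1 where tbname in ({tbname_filters}) "
        f"order by {col_name} desc limit 1"
    )
    return max_sql, same_sql

print(max_check_sqls("c1", ["ct1", "ct4"]))
```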

View File

@ -2,11 +2,11 @@ from util.log import *
from util.cases import *
from util.sql import *
import numpy as np
import random


class TDTestCase:
    updatecfgDict = {'debugFlag': 143 ,"cDebugFlag":143,"uDebugFlag":143 ,"rpcDebugFlag":143 , "tmrDebugFlag":143 ,
    "jniDebugFlag":143 ,"simDebugFlag":143,"dDebugFlag":143, "dDebugFlag":143,"vDebugFlag":143,"mDebugFlag":143,"qDebugFlag":143,
    "wDebugFlag":143,"sDebugFlag":143,"tsdbDebugFlag":143,"tqDebugFlag":143 ,"fsDebugFlag":143 ,"udfDebugFlag":143,
    "maxTablesPerVnode":2 ,"minTablesPerVnode":2,"tableIncStepPerVnode":2 }
@ -25,7 +25,7 @@ class TDTestCase:
        same_sql = f"select {col_name} from {tbname} where {col_name} is not null order by {col_name} asc limit 1"

        tdSql.query(min_sql)
        min_result = tdSql.queryResult

        tdSql.query(same_sql)
        same_result = tdSql.queryResult
@ -37,7 +37,7 @@ class TDTestCase:

    def prepare_datas_of_distribute(self):

        # prepare data for 20 tables distributed across different vgroups
        tdSql.execute("create database if not exists testdb keep 3650 duration 1000 vgroups 5")
        tdSql.execute(" use testdb ")
@ -108,17 +108,17 @@ class TDTestCase:
        vgroups = tdSql.queryResult

        vnode_tables={}

        for vgroup_id in vgroups:
            vnode_tables[vgroup_id[0]]=[]

        # check the sub_tables of each vnode to make sure the sub_tables have been distributed
        tdSql.query("show tables like 'ct%'")
        table_names = tdSql.queryResult
        tablenames = []
        for table_name in table_names:
            vnode_tables[table_name[6]].append(table_name[0])
        self.vnode_disbutes = vnode_tables

        count = 0
@ -129,14 +129,14 @@ class TDTestCase:
            tdLog.exit(" the sub_tables are not distributed across multiple vnodes as expected ")

    def check_min_distribute_diff_vnode(self,col_name):

        vgroup_ids = []
        for k ,v in self.vnode_disbutes.items():
            if len(v)>=2:
                vgroup_ids.append(k)

        distribute_tbnames = []

        for vgroup_id in vgroup_ids:
            vnode_tables = self.vnode_disbutes[vgroup_id]
            distribute_tbnames.append(random.sample(vnode_tables,1)[0])
@ -145,13 +145,13 @@ class TDTestCase:
            tbname_ins += "'%s' ,"%tbname

        tbname_filters = tbname_ins[:-1]

        min_sql = f"select min({col_name}) from stb1 where tbname in ({tbname_filters});"

        same_sql = f"select {col_name} from stb1 where tbname in ({tbname_filters}) and {col_name} is not null order by {col_name} asc limit 1"

        tdSql.query(min_sql)
        min_result = tdSql.queryResult

        tdSql.query(same_sql)
        same_result = tdSql.queryResult
@ -162,8 +162,8 @@ class TDTestCase:
            tdLog.info(" min function works as expected, sql : %s "% min_sql)

    def check_min_status(self):
        # check min function work status
        tdSql.query("show tables like 'ct%'")
        table_names = tdSql.queryResult

        tablenames = []
@ -172,26 +172,26 @@ class TDTestCase:
        tdSql.query("desc stb1")
        col_names = tdSql.queryResult

        colnames = []
        for col_name in col_names:
            if col_name[1] in ["INT" ,"BIGINT" ,"SMALLINT" ,"TINYINT" , "FLOAT" ,"DOUBLE"]:
                colnames.append(col_name[0])

        for tablename in tablenames:
            for colname in colnames:
                self.check_min_functions(tablename,colname)

        # check min function for different vnodes
        for colname in colnames:
            if colname.startswith("c"):
                self.check_min_distribute_diff_vnode(colname)
            else:
                # self.check_min_distribute_diff_vnode(colname) # bug for tag
                pass
    def distribute_agg_query(self):
        # basic filter
        tdSql.query("select min(c1) from stb1 where c1 is null")
@ -212,12 +212,12 @@ class TDTestCase:
        tdSql.query("select min(c1) from stb1 where t1> 4 partition by tbname")
        tdSql.checkRows(15)

        # union all
        tdSql.query("select min(c1) from stb1 union all select min(c1) from stb1 ")
        tdSql.checkRows(2)
        tdSql.checkData(0,0,0)

        # join
        tdSql.execute(" create database if not exists db ")
        tdSql.execute(" use db ")
@ -225,7 +225,7 @@ class TDTestCase:
        tdSql.execute(" create table tb1 using st tags(1) ")
        tdSql.execute(" create table tb2 using st tags(2) ")

        for i in range(10):
            ts = i*10 + self.ts
            tdSql.execute(f" insert into tb1 values({ts},{i},{i}.0)")
@ -236,7 +236,7 @@ class TDTestCase:
        tdSql.checkData(0,0,0)
        tdSql.checkData(0,0,0.00000)

        # group by
        tdSql.execute(" use testdb ")

        tdSql.query(" select min(c1),c1 from stb1 group by t1 ")
        tdSql.checkRows(20)
@ -261,12 +261,12 @@ class TDTestCase:
        tdSql.query("select min(c1),ceil(t1),pow(c2,1)+2,abs(t3) from stb1 where c1>12")
        tdSql.checkRows(1)
        tdSql.checkData(0,0,13)
        tdSql.checkData(0,2,144445.000000000)

        # partition by tbname or partition by tag
        tdSql.query("select min(c1),tbname from stb1 partition by tbname")
        query_data = tdSql.queryResult

        for row in query_data:
            tbname = row[1]
            tdSql.query(" select min(c1) from %s "%tbname)
@ -274,7 +274,7 @@ class TDTestCase:
        tdSql.query("select min(c1),tbname from stb1 partition by t1")
        query_data = tdSql.queryResult

        for row in query_data:
            tbname = row[1]
            tdSql.query(" select min(c1) from %s "%tbname)
@ -303,7 +303,7 @@ class TDTestCase:
        self.check_min_status()
        self.distribute_agg_query()

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)
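The min variant mirrors the max check but sorts ascending and additionally filters `{col_name} is not null`, presumably so NULL rows cannot surface ahead of the true minimum. The equivalence being asserted, modeled in plain Python over hypothetical values:

```python
values = [7, None, 3, 9, None, 5]  # hypothetical column values

# what min(col) is expected to return: NULLs are ignored
non_null = [v for v in values if v is not None]
agg_min = min(non_null)

# what the cross-check query computes: drop NULLs, sort ascending, take row 1
same = sorted(non_null)[0]

assert agg_min == same == 3
```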

View File

@ -2,11 +2,11 @@ from util.log import *
from util.cases import *
from util.sql import *
import numpy as np
import random


class TDTestCase:
    updatecfgDict = {'debugFlag': 143 ,"cDebugFlag":143,"uDebugFlag":143 ,"rpcDebugFlag":143 , "tmrDebugFlag":143 ,
    "jniDebugFlag":143 ,"simDebugFlag":143,"dDebugFlag":143, "dDebugFlag":143,"vDebugFlag":143,"mDebugFlag":143,"qDebugFlag":143,
    "wDebugFlag":143,"sDebugFlag":143,"tsdbDebugFlag":143,"tqDebugFlag":143 ,"fsDebugFlag":143 ,"udfDebugFlag":143,
    "maxTablesPerVnode":2 ,"minTablesPerVnode":2,"tableIncStepPerVnode":2 }
@ -25,7 +25,7 @@ class TDTestCase:
        same_sql = f"select max({col_name})-min({col_name}) from {tbname}"

        tdSql.query(spread_sql)
        spread_result = tdSql.queryResult

        tdSql.query(same_sql)
        same_result = tdSql.queryResult
@ -37,7 +37,7 @@ class TDTestCase:

    def prepare_datas_of_distribute(self):

        # prepare data for 20 tables distributed across different vgroups
        tdSql.execute("create database if not exists testdb keep 3650 duration 1000 vgroups 5")
        tdSql.execute(" use testdb ")
@ -108,17 +108,17 @@ class TDTestCase:
        vgroups = tdSql.queryResult

        vnode_tables={}

        for vgroup_id in vgroups:
            vnode_tables[vgroup_id[0]]=[]

        # check the sub_tables of each vnode to make sure the sub_tables have been distributed
        tdSql.query("show tables like 'ct%'")
        table_names = tdSql.queryResult
        tablenames = []
        for table_name in table_names:
            vnode_tables[table_name[6]].append(table_name[0])
        self.vnode_disbutes = vnode_tables

        count = 0
@ -129,14 +129,14 @@ class TDTestCase:
            tdLog.exit(" the sub_tables are not distributed across multiple vnodes as expected ")

    def check_spread_distribute_diff_vnode(self,col_name):

        vgroup_ids = []
        for k ,v in self.vnode_disbutes.items():
            if len(v)>=2:
                vgroup_ids.append(k)

        distribute_tbnames = []

        for vgroup_id in vgroup_ids:
            vnode_tables = self.vnode_disbutes[vgroup_id]
            distribute_tbnames.append(random.sample(vnode_tables,1)[0])
@ -145,13 +145,13 @@ class TDTestCase:
            tbname_ins += "'%s' ,"%tbname

        tbname_filters = tbname_ins[:-1]

        spread_sql = f"select spread({col_name}) from stb1 where tbname in ({tbname_filters})"

        same_sql = f"select max({col_name}) - min({col_name}) from stb1 where tbname in ({tbname_filters})"

        tdSql.query(spread_sql)
        spread_result = tdSql.queryResult

        tdSql.query(same_sql)
        same_result = tdSql.queryResult
@ -162,8 +162,8 @@ class TDTestCase:
            tdLog.info(" spread function works as expected, sql : %s "% spread_sql)

    def check_spread_status(self):
        # check spread function work status
        tdSql.query("show tables like 'ct%'")
        table_names = tdSql.queryResult

        tablenames = []
@ -172,26 +172,26 @@ class TDTestCase:
        tdSql.query("desc stb1")
        col_names = tdSql.queryResult

        colnames = []
        for col_name in col_names:
            if col_name[1] in ["INT" ,"BIGINT" ,"SMALLINT" ,"TINYINT" , "FLOAT" ,"DOUBLE"]:
                colnames.append(col_name[0])

        for tablename in tablenames:
            for colname in colnames:
                self.check_spread_functions(tablename,colname)

        # check spread function for different vnodes
        for colname in colnames:
            if colname.startswith("c"):
                self.check_spread_distribute_diff_vnode(colname)
            else:
                # self.check_spread_distribute_diff_vnode(colname) # bug for tag
                pass
    def distribute_agg_query(self):
        # basic filter
        tdSql.query("select spread(c1) from stb1 where c1 is null")
@ -212,12 +212,12 @@ class TDTestCase:
        tdSql.query("select spread(c1) from stb1 where t1> 4 partition by tbname")
        tdSql.checkRows(15)

        # union all
        tdSql.query("select spread(c1) from stb1 union all select max(c1)-min(c1) from stb1 ")
        tdSql.checkRows(2)
        tdSql.checkData(0,0,28.000000000)

        # join
        tdSql.execute(" create database if not exists db ")
        tdSql.execute(" use db ")
@ -225,7 +225,7 @@ class TDTestCase:
        tdSql.execute(" create table tb1 using st tags(1) ")
        tdSql.execute(" create table tb2 using st tags(2) ")

        for i in range(10):
            ts = i*10 + self.ts
            tdSql.execute(f" insert into tb1 values({ts},{i},{i}.0)")
@ -236,7 +236,7 @@ class TDTestCase:
        tdSql.checkData(0,0,9.000000000)
        tdSql.checkData(0,0,9.00000)

        # group by
        tdSql.execute(" use testdb ")

        tdSql.query(" select max(c1),c1 from stb1 group by t1 ")
        tdSql.checkRows(20)
@ -272,7 +272,7 @@ class TDTestCase:
        self.check_spread_status()
        self.distribute_agg_query()

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)
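`spread(col)` is validated against `max(col) - min(col)` over the same table set. Both sides are floating point, so a tolerance-based comparison like the following captures the intent (the exact tolerance used by the helper lies outside these hunks and is an assumption):

```python
def check_spread(spread_result, same_result, eps=1e-9):
    # both queries return a single row with a single column;
    # eps is a hypothetical tolerance, not the helper's actual value
    spread_val = spread_result[0][0]
    same_val = same_result[0][0]
    return abs(spread_val - same_val) < eps

assert check_spread([(28.0,)], [(28.000000000,)])
```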

View File

@ -7,7 +7,7 @@ import platform


class TDTestCase:
    updatecfgDict = {'debugFlag': 143 ,"cDebugFlag":143,"uDebugFlag":143 ,"rpcDebugFlag":143 , "tmrDebugFlag":143 ,
    "jniDebugFlag":143 ,"simDebugFlag":143,"dDebugFlag":143, "dDebugFlag":143,"vDebugFlag":143,"mDebugFlag":143,"qDebugFlag":143,
    "wDebugFlag":143,"sDebugFlag":143,"tsdbDebugFlag":143,"tqDebugFlag":143 ,"fsDebugFlag":143 ,"udfDebugFlag":143,
    "maxTablesPerVnode":2 ,"minTablesPerVnode":2,"tableIncStepPerVnode":2 }
@ -24,7 +24,7 @@ class TDTestCase:
        sum_sql = f"select sum({col_name}) from {tbname};"

        same_sql = f"select {col_name} from {tbname} where {col_name} is not null "

        tdSql.query(same_sql)
        pre_data = np.array(tdSql.queryResult)[np.array(tdSql.queryResult) != None]
        if (platform.system().lower() == 'windows' and pre_data.dtype == 'int32'):
@ -35,7 +35,7 @@ class TDTestCase:
        tdSql.checkData(0,0,pre_sum)

    def prepare_datas_of_distribute(self):

        # prepare data for 20 tables distributed across different vgroups
        tdSql.execute("create database if not exists testdb keep 3650 duration 1000 vgroups 5")
        tdSql.execute(" use testdb ")
@ -106,17 +106,17 @@ class TDTestCase:
        vgroups = tdSql.queryResult

        vnode_tables={}

        for vgroup_id in vgroups:
            vnode_tables[vgroup_id[0]]=[]

        # check the sub_tables of each vnode to make sure the sub_tables have been distributed
        tdSql.query("show tables like 'ct%'")
        table_names = tdSql.queryResult
        tablenames = []
        for table_name in table_names:
            vnode_tables[table_name[6]].append(table_name[0])
        self.vnode_disbutes = vnode_tables

        count = 0
@ -127,14 +127,14 @@ class TDTestCase:
            tdLog.exit(" the sub_tables are not distributed across multiple vnodes as expected ")

    def check_sum_distribute_diff_vnode(self,col_name):

        vgroup_ids = []
        for k ,v in self.vnode_disbutes.items():
            if len(v)>=2:
                vgroup_ids.append(k)

        distribute_tbnames = []

        for vgroup_id in vgroup_ids:
            vnode_tables = self.vnode_disbutes[vgroup_id]
            distribute_tbnames.append(random.sample(vnode_tables,1)[0])
@ -143,7 +143,7 @@ class TDTestCase:
            tbname_ins += "'%s' ,"%tbname

        tbname_filters = tbname_ins[:-1]

        sum_sql = f"select sum({col_name}) from stb1 where tbname in ({tbname_filters});"

        same_sql = f"select {col_name} from stb1 where tbname in ({tbname_filters}) and {col_name} is not null "
@ -158,8 +158,8 @@ class TDTestCase:
        tdSql.checkData(0,0,pre_sum)

    def check_sum_status(self):
        # check sum function work status
        tdSql.query("show tables like 'ct%'")
        table_names = tdSql.queryResult

        tablenames = []
@ -168,26 +168,26 @@ class TDTestCase:
        tdSql.query("desc stb1")
        col_names = tdSql.queryResult

        colnames = []
        for col_name in col_names:
            if col_name[1] in ["INT" ,"BIGINT" ,"SMALLINT" ,"TINYINT" , "FLOAT" ,"DOUBLE"]:
                colnames.append(col_name[0])

        for tablename in tablenames:
            for colname in colnames:
                self.check_sum_functions(tablename,colname)

        # check sum function for different vnodes
        for colname in colnames:
            if colname.startswith("c"):
                self.check_sum_distribute_diff_vnode(colname)
            else:
                # self.check_sum_distribute_diff_vnode(colname) # bug for tag
                pass
    def distribute_agg_query(self):
        # basic filter
        tdSql.query(" select sum(c1) from stb1 ")
@ -211,7 +211,7 @@ class TDTestCase:
        tdSql.query("select sum(c1) from stb1 where t1> 4 partition by tbname")
        tdSql.checkRows(15)

        # union all
        tdSql.query("select sum(c1) from stb1 union all select sum(c1) from stb1 ")
        tdSql.checkRows(2)
        tdSql.checkData(0,0,2592)
@ -220,7 +220,7 @@ class TDTestCase:
        tdSql.checkRows(1)
        tdSql.checkData(0,0,5184)

        # join
        tdSql.execute(" create database if not exists db ")
        tdSql.execute(" use db ")
@ -228,7 +228,7 @@ class TDTestCase:
        tdSql.execute(" create table tb1 using st tags(1) ")
        tdSql.execute(" create table tb2 using st tags(2) ")

        for i in range(10):
            ts = i*10 + self.ts
            tdSql.execute(f" insert into tb1 values({ts},{i},{i}.0)")
@ -239,7 +239,7 @@ class TDTestCase:
        tdSql.checkData(0,0,45)
        tdSql.checkData(0,1,45.000000000)

        # group by
        tdSql.execute(" use testdb ")

        # partition by tbname or partition by tag
@ -269,7 +269,7 @@ class TDTestCase:
        self.check_sum_status()
        self.distribute_agg_query()

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)
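The sum test filters NULLs with a numpy mask and, per the hunk at `@ -24,7`, special-cases Windows, where numpy's default integer is 32-bit and a large column sum can overflow. A sketch of the guard, assuming the hidden branch body upcasts to int64 (the original body falls outside the hunk):

```python
import platform
import numpy as np

query_result = [(1,), (None,), (1 << 20,)]   # hypothetical rows from tdSql.queryResult

arr = np.array(query_result)                 # object dtype here because of the None
pre_data = arr[arr != None]                  # keep only the non-NULL values

# With an all-integer result, np.array defaults to int32 on Windows, so the
# masked data could overflow when summed; upcasting to int64 first is one
# plausible reading of the guard (assumption: the branch body is not shown).
if platform.system().lower() == 'windows' and pre_data.dtype == 'int32':
    pre_data = pre_data.astype('int64')

pre_sum = int(np.sum(pre_data))
print(pre_sum)   # 1048577
```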

View File

@ -23,7 +23,7 @@ class TDTestCase:
        self.time_step = 1000

    def insert_datas_and_check_irate(self ,tbnums , rownums , time_step ):

        tdLog.info(" prepare data for the automatic irate function check ")

        tdSql.execute(" create database test ")
@ -48,7 +48,7 @@ class TDTestCase:
                c9 = "'nchar_val'"
                c10 = ts
                tdSql.execute(f" insert into {tbname} values ({ts},{c1},{c2},{c3},{c4},{c5},{c6},{c7},{c8},{c9},{c10})")

        tdSql.execute("use test")
        tbnames = ["stb", "sub_tb_1"]
        support_types = ["BIGINT", "SMALLINT", "TINYINT", "FLOAT", "DOUBLE", "INT"]
@ -204,11 +204,11 @@ class TDTestCase:
        # used for sub table
        tdSql.query("select irate(abs(c1+c2)) from ct1")
        tdSql.checkData(0, 0, 0.000000000)

        # mix with common col
        tdSql.error("select c1, irate(c1) from ct1")

        # mix with common functions
        tdSql.error("select irate(c1), abs(c1) from ct4 ")
@ -236,7 +236,7 @@ class TDTestCase:
            "select irate(c1+c2)/10 from stb1 where c1 = 5 partition by tbname ")
        tdSql.checkRows(2)
        tdSql.checkData(0, 0, 0.000000000)

    def irate_Arithmetic(self):
        pass
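For reference, `irate` reports an instantaneous per-second rate derived from the last two samples of a series, with a fallback when the value decreases (counter semantics), which is why the constant-valued queries above check out as `0.000000000`. A rough model under those assumptions, with reset handling simplified to "use the last value alone":

```python
def irate(points):
    """points: [(ts_ms, value), ...] sorted by timestamp (sketch)."""
    if len(points) < 2:
        return 0.0
    (t0, v0), (t1, v1) = points[-2], points[-1]
    # counter-reset handling: if the value decreased, fall back to the
    # last value alone (a simplified reading of the documented behavior)
    delta = v1 - v0 if v1 >= v0 else v1
    return delta / ((t1 - t0) / 1000.0)  # timestamps are in milliseconds

assert irate([(0, 5), (1000, 5)]) == 0.0   # constant series -> rate 0
print(irate([(0, 10), (2000, 30)]))        # 10.0 per second
```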

View File

@ -10,13 +10,13 @@ from util.cases import *
class TDTestCase:
    updatecfgDict = {'debugFlag': 143 ,"cDebugFlag":143,"uDebugFlag":143 ,"rpcDebugFlag":143 , "tmrDebugFlag":143 ,
    "jniDebugFlag":143 ,"simDebugFlag":143,"dDebugFlag":143, "dDebugFlag":143,"vDebugFlag":143,"mDebugFlag":143,"qDebugFlag":143,
    "wDebugFlag":143,"sDebugFlag":143,"tsdbDebugFlag":143,"tqDebugFlag":143 ,"fsDebugFlag":143 ,"udfDebugFlag":143}

    def init(self, conn, logSql):
        tdLog.debug(f"start to execute {__file__}")
        tdSql.init(conn.cursor())

    def prepare_datas(self):
        tdSql.execute(
            '''create table stb1
@ -24,7 +24,7 @@ class TDTestCase:
            tags (t1 int)
            '''
        )

        tdSql.execute(
            '''
            create table t1
@ -69,12 +69,12 @@ class TDTestCase:

    def check_result_auto_log(self ,origin_query , log_query):

        log_result = tdSql.getResult(log_query)
        origin_result = tdSql.getResult(origin_query)

        auto_result =[]

        for row in origin_result:
            row_check = []
            for elem in row:
@ -91,20 +91,20 @@ class TDTestCase:
        for row_index , row in enumerate(log_result):
            for col_index , elem in enumerate(row):
                if auto_result[row_index][col_index] != elem:
                    check_status = False
        if not check_status:
            tdLog.notice("log function value is not as expected , sql is \"%s\" "%log_query )
            sys.exit(1)
        else:
            tdLog.info("log value check passed , it works as expected , sql is \"%s\" "%log_query )

    def check_result_auto_log2(self ,origin_query , log_query):

        log_result = tdSql.getResult(log_query)
        origin_result = tdSql.getResult(origin_query)

        auto_result =[]

        for row in origin_result:
            row_check = []
            for elem in row:
@ -121,7 +121,7 @@ class TDTestCase:
        for row_index , row in enumerate(log_result):
            for col_index , elem in enumerate(row):
                if auto_result[row_index][col_index] != elem:
                    check_status = False
        if not check_status:
            tdLog.notice("log function value is not as expected , sql is \"%s\" "%log_query )
            sys.exit(1)
@ -133,7 +133,7 @@ class TDTestCase:
        origin_result = tdSql.getResult(origin_query)

        auto_result =[]

        for row in origin_result:
            row_check = []
            for elem in row:
@ -150,7 +150,7 @@ class TDTestCase:
        for row_index , row in enumerate(log_result):
            for col_index , elem in enumerate(row):
                if auto_result[row_index][col_index] != elem:
                    check_status = False
        if not check_status:
            tdLog.notice("log function value is not as expected , sql is \"%s\" "%log_query )
            sys.exit(1)
@ -161,7 +161,7 @@ class TDTestCase:
        origin_result = tdSql.getResult(origin_query)

        auto_result =[]

        for row in origin_result:
            row_check = []
            for elem in row:
@ -178,13 +178,13 @@ class TDTestCase:
        for row_index , row in enumerate(log_result):
            for col_index , elem in enumerate(row):
                if auto_result[row_index][col_index] != elem:
                    check_status = False
        if not check_status:
            tdLog.notice("log function value is not as expected , sql is \"%s\" "%log_query )
            sys.exit(1)
        else:
            tdLog.info("log value check passed , it works as expected , sql is \"%s\" "%log_query )
    def test_errors(self):
        error_sql_lists = [
            "select log from t1",
@ -218,42 +218,42 @@ class TDTestCase:
        ]
        for error_sql in error_sql_lists:
            tdSql.error(error_sql)

    def support_types(self):
        type_error_sql_lists = [
            "select log(ts ,2 ) from t1" ,
            "select log(c7,c2 ) from t1",
            "select log(c8,c1 ) from t1",
            "select log(c9,c2 ) from t1",

            "select log(ts,c7 ) from ct1" ,
            "select log(c7,c9 ) from ct1",
            "select log(c8,c2 ) from ct1",
            "select log(c9,c1 ) from ct1",

            "select log(ts,2 ) from ct3" ,
            "select log(c7,2 ) from ct3",
            "select log(c8,2 ) from ct3",
            "select log(c9,2 ) from ct3",

            "select log(ts,2 ) from ct4" ,
            "select log(c7,2 ) from ct4",
            "select log(c8,2 ) from ct4",
            "select log(c9,2 ) from ct4",

            "select log(ts,2 ) from stb1" ,
            "select log(c7,2 ) from stb1",
            "select log(c8,2 ) from stb1",
            "select log(c9,2 ) from stb1" ,

            "select log(ts,2 ) from stbbb1" ,
            "select log(c7,2 ) from stbbb1",

            "select log(ts,2 ) from tbname",
            "select log(c9,2 ) from tbname"
        ]

        for type_sql in type_error_sql_lists:
            tdSql.error(type_sql)

        type_sql_lists = [
            "select log(c1,2 ) from t1",
            "select log(c2,2 ) from t1",
@ -283,16 +283,16 @@ class TDTestCase:
            "select log(c5,2 ) from stb1",
            "select log(c6,2 ) from stb1",

            "select log(c6,2) as alisb from stb1",
            "select log(c6,2) alisb from stb1",
        ]
        for type_sql in type_sql_lists:
            tdSql.query(type_sql)
def basic_log_function(self): def basic_log_function(self):
# basic query # basic query
tdSql.query("select c1 from ct3") tdSql.query("select c1 from ct3")
tdSql.checkRows(0) tdSql.checkRows(0)
tdSql.query("select c1 from t1") tdSql.query("select c1 from t1")
@ -344,7 +344,7 @@ class TDTestCase:
self.check_result_auto_log2( "select c1, c2, c3 , c4, c5 from t1", "select log(c1 ,2), log(c2 ,2) ,log(c3, 2), log(c4 ,2), log(c5 ,2) from t1") self.check_result_auto_log2( "select c1, c2, c3 , c4, c5 from t1", "select log(c1 ,2), log(c2 ,2) ,log(c3, 2), log(c4 ,2), log(c5 ,2) from t1")
self.check_result_auto_log1( "select c1, c2, c3 , c4, c5 from t1", "select log(c1 ,1), log(c2 ,1) ,log(c3, 1), log(c4 ,1), log(c5 ,1) from t1") self.check_result_auto_log1( "select c1, c2, c3 , c4, c5 from t1", "select log(c1 ,1), log(c2 ,1) ,log(c3, 1), log(c4 ,1), log(c5 ,1) from t1")
self.check_result_auto_log__10( "select c1, c2, c3 , c4, c5 from t1", "select log(c1 ,-10), log(c2 ,-10) ,log(c3, -10), log(c4 ,-10), log(c5 ,-10) from t1") self.check_result_auto_log__10( "select c1, c2, c3 , c4, c5 from t1", "select log(c1 ,-10), log(c2 ,-10) ,log(c3, -10), log(c4 ,-10), log(c5 ,-10) from t1")
# used for sub table # used for sub table
tdSql.query("select c1 ,log(c1 ,3) from ct1") tdSql.query("select c1 ,log(c1 ,3) from ct1")
tdSql.checkData(0, 1, 1.892789261) tdSql.checkData(0, 1, 1.892789261)
@@ -382,18 +382,18 @@ class TDTestCase:
        tdSql.checkData(4 , 2 , None)
        tdSql.checkData(4 , 3 , None)

        # # used for stable table
        tdSql.query("select log(c1, 2) from stb1")
        tdSql.checkRows(25)

        # used for not exists table
        tdSql.error("select log(c1, 2) from stbbb1")
        tdSql.error("select log(c1, 2) from tbname")
        tdSql.error("select log(c1, 2) from ct5")

        # mix with common col
        tdSql.query("select c1, log(c1 ,2) from ct1")
        tdSql.checkData(0 , 0 ,8)
        tdSql.checkData(0 , 1 ,3.000000000)
@@ -418,7 +418,7 @@ class TDTestCase:
        tdSql.checkData(0 , 1 ,None)
        tdSql.checkData(0 , 2 ,None)
        tdSql.checkData(0 , 3 ,None)

        tdSql.checkData(3 , 0 , 6)
        tdSql.checkData(3 , 1 , 2.584962501)
        tdSql.checkData(3 , 2 ,6.66000)
@@ -439,7 +439,7 @@ class TDTestCase:
        tdSql.query("select max(c5), count(c5) from stb1")
        tdSql.query("select max(c5), count(c5) from ct1")

        # bug fix for count
        tdSql.query("select count(c1) from ct4 ")
        tdSql.checkData(0,0,9)
@@ -450,7 +450,7 @@ class TDTestCase:
        tdSql.query("select count(*) from stb1 ")
        tdSql.checkData(0,0,25)

        # # bug fix for compute
        tdSql.query("select c1, log(c1 ,2) -0 ,log(c1-4 ,2)-0 from ct4 ")
        tdSql.checkData(0, 0, None)
        tdSql.checkData(0, 1, None)
@@ -507,40 +507,40 @@ class TDTestCase:
        # base is a regular number, int or double
        tdSql.query("select c1, log(c1, 2) from ct1")
        tdSql.checkData(0, 1,3.000000000)
        tdSql.query("select c1, log(c1, 2.0) from ct1")
        tdSql.checkData(0, 1, 3.000000000)

        tdSql.query("select c1, log(1, 2.0) from ct1")
        tdSql.checkData(0, 1, 0.000000000)
        tdSql.checkRows(13)

        # # bug for compute in functions
        # tdSql.query("select c1, abs(1/0) from ct1")
        # tdSql.checkData(0, 0, 8)
        # tdSql.checkData(0, 1, 1)

        tdSql.query("select c1, log(1, 2.0) from ct1")
        tdSql.checkData(0, 1, 0.000000000)
        tdSql.checkRows(13)

        # two cols start log(x,y)
        tdSql.query("select c1,c2, log(c1,c2) from ct1")
        tdSql.checkData(0, 2, 0.182485070)
        tdSql.checkData(1, 2, 0.172791608)
        tdSql.checkData(4, 2, None)

        tdSql.query("select c1,c2, log(c2,c1) from ct1")
        tdSql.checkData(0, 2, 5.479900349)
        tdSql.checkData(1, 2, 5.787318105)
        tdSql.checkData(4, 2, None)

        tdSql.query("select c1, log(2.0 , c1) from ct1")
        tdSql.checkData(0, 1, 0.333333333)
        tdSql.checkData(1, 1, 0.356207187)
        tdSql.checkData(4, 1, None)

        tdSql.query("select c1, log(2.0 , ceil(abs(c1))) from ct1")
        tdSql.checkData(0, 1, 0.333333333)
        tdSql.checkData(1, 1, 0.356207187)
        tdSql.checkData(4, 1, None)
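The two-column expectations follow the same rule, log(x, y) = ln(x)/ln(y). A minimal sketch, assuming the first row of ct1 holds c1 = 8 and c2 = 88888 (values implied by the assertions, not stated in this hunk):

```python
import math

assert abs(math.log(8, 88888) - 0.182485070) < 1e-6  # log(c1, c2)
assert abs(math.log(88888, 8) - 5.479900349) < 1e-6  # log(c2, c1)
assert abs(math.log(2.0, 8) - 0.333333333) < 1e-6    # log(2.0, c1)
```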
@@ -580,10 +580,10 @@ class TDTestCase:
        tdSql.checkData(0,3,8.000000000)
        tdSql.checkData(0,4,7.900000000)
        tdSql.checkData(0,5,3.000000000)

    def log_Arithmetic(self):
        pass

    def check_boundary_values(self):
        tdSql.execute("drop database if exists bound_test")
@@ -612,13 +612,13 @@ class TDTestCase:
        self.check_result_auto_log( "select c1, c2, c3 , c4, c5 ,c6 from sub1_bound ", "select log(c1), log(c2) ,log(c3), log(c4), log(c5) ,log(c6) from sub1_bound")
        self.check_result_auto_log2( "select c1, c2, c3 , c4, c5 ,c6 from sub1_bound ", "select log(c1,2), log(c2,2) ,log(c3,2), log(c4,2), log(c5,2) ,log(c6,2) from sub1_bound")
        self.check_result_auto_log__10( "select c1, c2, c3 , c4, c5 ,c6 from sub1_bound ", "select log(c1,-10), log(c2,-10) ,log(c3,-10), log(c4,-10), log(c5,-10) ,log(c6,-10) from sub1_bound")

        self.check_result_auto_log2( "select c1, c2, c3 , c3, c2 ,c1 from sub1_bound ", "select log(c1,2), log(c2,2) ,log(c3,2), log(c3,2), log(c2,2) ,log(c1,2) from sub1_bound")
        self.check_result_auto_log( "select c1, c2, c3 , c3, c2 ,c1 from sub1_bound ", "select log(c1), log(c2) ,log(c3), log(c3), log(c2) ,log(c1) from sub1_bound")

        self.check_result_auto_log2("select abs(abs(abs(abs(abs(abs(abs(abs(abs(c1))))))))) nest_col_func from sub1_bound" , "select log(abs(c1) ,2) from sub1_bound" )

        # check basic elem for table per row
        tdSql.query("select log(abs(c1),2) ,log(abs(c2),2) , log(abs(c3),2) , log(abs(c4),2), log(abs(c5),2), log(abs(c6),2) from sub1_bound ")
        tdSql.checkData(0,0,math.log(2147483647,2))
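The expected value is computed inline as log base 2 of INT32_MAX, which is presumably the int-column value stored in the first boundary row of sub1_bound:

```python
import math

INT32_MAX = 2**31 - 1          # 2147483647
print(math.log(INT32_MAX, 2))  # just a hair under 31
```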
@@ -683,45 +683,45 @@ class TDTestCase:
        self.check_result_auto_log2( " select t1,c5 from stb1 where c1 > 0 order by tbname " , "select log(t1,2) ,log(c5,2) from stb1 where c1 > 0 order by tbname" )
        self.check_result_auto_log2( " select t1,c5 from stb1 where c1 > 0 order by tbname " , "select log(t1,2) , log(c5,2) from stb1 where c1 > 0 order by tbname" )
        pass

    def run(self):  # sourcery skip: extract-duplicate-method, remove-redundant-fstring
        tdSql.prepare()

        tdLog.printNoPrefix("==========step1:create table ==============")
        self.prepare_datas()

        tdLog.printNoPrefix("==========step2:test errors ==============")
        self.test_errors()

        tdLog.printNoPrefix("==========step3:support types ============")
        self.support_types()

        tdLog.printNoPrefix("==========step4: log basic query ============")
        self.basic_log_function()

        tdLog.printNoPrefix("==========step5: big number log query ============")
        self.test_big_number()

        tdLog.printNoPrefix("==========step6: base number for log query ============")
        self.log_base_test()

        tdLog.printNoPrefix("==========step7: log boundary query ============")
        self.check_boundary_values()

        tdLog.printNoPrefix("==========step8: log filter query ============")
        self.abs_func_filter()

        tdLog.printNoPrefix("==========step9: check log result of stable query ============")
        self.support_super_table_test()

    def stop(self):
        tdSql.close()


@@ -50,7 +50,7 @@ class TDTestCase:
        tb_name = tdCom.getLongName(8, "letters")
        tdSql.execute(
            f"CREATE TABLE {tb_name} (ts timestamp, c1 tinyint, c2 smallint, c3 int, c4 bigint, c5 float, c6 double, c7 binary(100), c8 nchar(200), c9 bool, c10 tinyint unsigned, c11 smallint unsigned, c12 int unsigned, c13 bigint unsigned) tags (t1 tinyint, t2 smallint, t3 int, t4 bigint, t5 float, t6 double, t7 binary(100), t8 nchar(200), t9 bool, t10 tinyint unsigned, t11 smallint unsigned, t12 int unsigned, t13 bigint unsigned)")

        for i in range(1, count+1):
            tdSql.execute(
                f'CREATE TABLE {tb_name}_sub_{i} using {tb_name} tags ({i}, {i}, {i}, {i}, {i}.{i}, {i}.{i}, "binary{i}", "nchar{i}", true, {i}, {i}, {i}, {i})')
            self.insertData(f'{tb_name}_sub_{i}')
@@ -412,7 +412,7 @@ class TDTestCase:
        query_sql = f'select c1,c2,c3,c4,c5,c6,c7,c8,c9,c10,c11,c12,c13 from {tb_name} where c1 > 2 and c1 >= 3 or c1 < 1 or c1 <= 0 or c1 =2 or c1 != 1 or c1 <> 1 and c1 is null or c1 between 2 and 3 and c1 not between 1 and 1 and c1 in (2, 3) and c1 not in (1, 2)'
        res = tdSql.query(query_sql)
        tdSql.checkRows(1)

    def queryUtinyintCol(self, tb_name, check_elm=None):
        select_elm = "*" if check_elm is None else check_elm
        # >
@@ -497,7 +497,7 @@ class TDTestCase:
        query_sql = f'select c1,c2,c3,c4,c5,c6,c7,c8,c9,c10,c11,c12,c13 from {tb_name} where c10 > 2 and c10 >= 3 or c10 < 1 or c10 <= 0 or c10 =2 or c10 != 1 or c10 <> 1 and c10 is null or c10 between 2 and 3 and c10 not between 1 and 1 and c10 in (2, 3) and c10 not in (1, 2)'
        res = tdSql.query(query_sql)
        tdSql.checkRows(10)

    def querySmallintCol(self, tb_name, check_elm=None):
        select_elm = "*" if check_elm is None else check_elm
        # >
@@ -582,7 +582,7 @@ class TDTestCase:
        query_sql = f'select c1,c2,c3,c4,c5,c6,c7,c8,c9,c10,c11,c12,c13 from {tb_name} where c2 > 0 and c2 >= 1 or c2 < 4 and c2 <= 3 and c2 != 1 and c2 <> 2 and c2 = 3 or c2 is not null and c2 between 2 and 3 and c2 not between 1 and 2 and c2 in (2,3) and c2 not in (1,2)'
        tdSql.query(query_sql)
        tdSql.checkRows(11)

    def queryUsmallintCol(self, tb_name, check_elm=None):
        select_elm = "*" if check_elm is None else check_elm
        # >
@@ -752,7 +752,7 @@ class TDTestCase:
        query_sql = f'select c1,c2,c3,c4,c5,c6,c7,c8,c9,c10,c11,c12,c13 from {tb_name} where c3 > 0 and c3 >= 1 or c3 < 5 and c3 <= 4 and c3 != 2 and c3 <> 2 and c3 = 4 or c3 is not null and c3 between 2 and 4 and c3 not between 1 and 2 and c3 in (2,4) and c3 not in (1,2)'
        tdSql.query(query_sql)
        tdSql.checkRows(11)

    def queryUintCol(self, tb_name, check_elm=None):
        select_elm = "*" if check_elm is None else check_elm
        # >
@@ -1086,7 +1086,7 @@ class TDTestCase:
        query_sql = f'select c1,c2,c3,c4,c5,c6,c7,c8,c9,c10,c11,c12,c13 from {tb_name} where c5 > 0 and c5 >= 1 or c5 < 5 and c5 <= 6.6 and c5 != 2 and c5 <> 2 and c5 = 4 or c5 is not null and c5 between 2 and 4 and c5 not between 1 and 2 and c5 in (2,4) and c5 not in (1,2)'
        tdSql.query(query_sql)
        tdSql.checkRows(11)

    def queryDoubleCol(self, tb_name, check_elm=None):
        select_elm = "*" if check_elm is None else check_elm
        # >
@@ -1711,19 +1711,19 @@ class TDTestCase:
        tdSql.checkRows(4)
        tdSql.checkEqual(self.queryLastC10(query_sql), 7)

        ## condition_A or (condition_B and condition_C) or (condition_D and condition_E) and condition_F
        query_sql = f'select * from {tb_name} where c1 != 1 or (c2 <= 1 and c3 <4) or (c3 >= 4 or c7 is not Null) and c9 <> true'
        tdSql.query(query_sql)
        tdSql.checkRows(3)
        tdSql.checkEqual(self.queryLastC10(query_sql), 10)

        ## (condition_A or (condition_B and condition_C) or (condition_D and condition_E)) and condition_F
        query_sql = f'select * from {tb_name} where (c1 != 1 or (c2 <= 2 and c3 >= 4) or (c3 >= 4 or c7 is not Null)) and c9 != false'
        tdSql.query(query_sql)
        tdSql.checkRows(9)
        tdSql.checkEqual(self.queryLastC10(query_sql), 9)

        ## (condition_A or condition_B) or (condition_C or condition_D) and (condition_E or condition_F or condition_G)
        query_sql = f'select * from {tb_name} where c1 != 1 or (c2 <= 3 and c3 > 4) and c3 <= 5 and (c7 is not Null and c9 != false)'
        tdSql.query(query_sql)
        tdSql.checkRows(2)
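These row counts hinge on standard SQL precedence: AND binds tighter than OR, while explicit parentheses override both. As an illustration (not part of the test file), the first unparenthesized predicate above groups as in this Python transliteration, reading NULL as None:

```python
def predicate(c1, c2, c3, c7, c9):
    # c1 != 1  OR  (c2 <= 1 AND c3 < 4)  OR  ((c3 >= 4 OR c7 IS NOT NULL) AND c9 <> true)
    return (c1 != 1) \
        or (c2 <= 1 and c3 < 4) \
        or ((c3 >= 4 or c7 is not None) and c9 != True)
```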
@@ -1780,17 +1780,17 @@ class TDTestCase:
        tdSql.query(query_sql)
        tdSql.checkRows(55)

        ## condition_A or (condition_B and condition_C) or (condition_D and condition_E) and condition_F
        query_sql = f'select * from {tb_name} where t1 != 1 or (t2 <= 1 and t3 <4) or (t3 >= 4 or t7 is not Null) and t9 <> true'
        tdSql.query(query_sql)
        tdSql.checkRows(55)

        ## (condition_A or (condition_B and condition_C) or (condition_D and condition_E)) and condition_F
        query_sql = f'select * from {tb_name} where (t1 != 1 or (t2 <= 2 and t3 >= 4) or (t3 >= 4 or t7 is not Null)) and t9 != false'
        tdSql.query(query_sql)
        tdSql.checkRows(55)

        ## (condition_A or condition_B) or (condition_C or condition_D) and (condition_E or condition_F or condition_G)
        query_sql = f'select * from {tb_name} where t1 != 1 or (t2 <= 3 and t3 > 4) and t3 <= 5 and (t7 is not Null and t9 != false)'
        tdSql.query(query_sql)
        tdSql.checkRows(44)
@@ -2033,7 +2033,7 @@ class TDTestCase:
            self.checkColType(tb_name, check_elm)
        else:
            self.checkColType(stb_name, check_elm)

    def checkStbTagTypeOperator(self):
        '''
            Super table full tag type and operator
@@ -2089,7 +2089,7 @@ class TDTestCase:
        tb_name = self.initStb()
        self.queryColPreCal(f'{tb_name}_sub_1')
        self.queryTagPreCal(tb_name)

    def checkMultiTb(self):
        '''
            test "or" in multi ordinary table
@@ -2110,7 +2110,7 @@ class TDTestCase:
        '''
        tb_name = self.initStb()
        self.queryMultiTbWithTag(tb_name)

    def checkMultiStbJoin(self):
        '''
            join test


@@ -0,0 +1,129 @@
import sys
import time
import socket
import os
import threading
import taos
from util.log import *
from util.sql import *
from util.cases import *
from util.dnodes import *
from util.common import *
sys.path.append("./7-tmq")
from tmqCommon import *
class TDTestCase:
    paraDict = {'dbName':     'db1',
                'dropFlag':   1,
                'event':      '',
                'vgroups':    2,
                'stbName':    'stb0',
                'colPrefix':  'c',
                'tagPrefix':  't',
                'colSchema':  [{'type': 'INT', 'count':2}, {'type': 'binary', 'len':16, 'count':1}, {'type': 'timestamp','count':1}],
                'tagSchema':  [{'type': 'INT', 'count':1}, {'type': 'binary', 'len':20, 'count':1}],
                'ctbPrefix':  'ctb',
                'ctbStartIdx': 0,
                'ctbNum':     100,
                'rowsPerTbl': 1000,
                'batchNum':   1000,
                'startTs':    1640966400000,  # 2022-01-01 00:00:00.000
                'pollDelay':  20,
                'showMsg':    1,
                'showRow':    1}
    cdbName = 'cdb'

    # some parameters passed to the consumer processor
    consumerId = 0
    expectrowcnt = 0
    topicList = ''
    ifcheckdata = 0
    ifManualCommit = 1
    groupId = 'group.id:cgrp1'
    autoCommit = 'enable.auto.commit:false'
    autoCommitInterval = 'auto.commit.interval.ms:1000'
    autoOffset = 'auto.offset.reset:earliest'

    pollDelay = 20
    showMsg = 1
    showRow = 1

    hostname = socket.gethostname()
    def init(self, conn, logSql):
        tdLog.debug(f"start to execute {__file__}")
        logSql = False
        tdSql.init(conn.cursor(), logSql)
    def tmqCase1(self):
        tdLog.printNoPrefix("======== test case 1: ")
        tdLog.info("step 1: create database, stb, ctb and insert data")

        tmqCom.initConsumerTable(self.cdbName)

        tdCom.create_database(tdSql, self.paraDict["dbName"], self.paraDict["dropFlag"])

        self.paraDict["stbName"] = 'stb1'
        tdCom.create_stable(tdSql, dbname=self.paraDict["dbName"], stbname=self.paraDict["stbName"], column_elm_list=self.paraDict["colSchema"], tag_elm_list=self.paraDict["tagSchema"], count=1, default_stbname_prefix=self.paraDict["stbName"])
        tdCom.create_ctable(tdSql, dbname=self.paraDict["dbName"], stbname=self.paraDict["stbName"], tag_elm_list=self.paraDict['tagSchema'], count=self.paraDict["ctbNum"], default_ctbname_prefix=self.paraDict["ctbPrefix"])
        tmqCom.insert_data_2(tdSql, self.paraDict["dbName"], self.paraDict["ctbPrefix"], self.paraDict["ctbNum"], self.paraDict["rowsPerTbl"], self.paraDict["batchNum"], self.paraDict["startTs"], self.paraDict["ctbStartIdx"])
        # pThread1 = tmqCom.asyncInsertData(paraDict=self.paraDict)

        self.paraDict["stbName"] = 'stb2'
        self.paraDict["ctbPrefix"] = 'newctb'
        self.paraDict["batchNum"] = 1000
        tdCom.create_stable(tdSql, dbname=self.paraDict["dbName"], stbname=self.paraDict["stbName"], column_elm_list=self.paraDict["colSchema"], tag_elm_list=self.paraDict["tagSchema"], count=1, default_stbname_prefix=self.paraDict["stbName"])
        tdCom.create_ctable(tdSql, dbname=self.paraDict["dbName"], stbname=self.paraDict["stbName"], tag_elm_list=self.paraDict['tagSchema'], count=self.paraDict["ctbNum"], default_ctbname_prefix=self.paraDict["ctbPrefix"])
        # tmqCom.insert_data_2(tdSql, self.paraDict["dbName"], self.paraDict["ctbPrefix"], self.paraDict["ctbNum"], self.paraDict["rowsPerTbl"], self.paraDict["batchNum"], self.paraDict["startTs"], self.paraDict["ctbStartIdx"])
        pThread2 = tmqCom.asyncInsertData(paraDict=self.paraDict)

        tdLog.info("create topics from db")
        topicName1 = 'UpperCasetopic_%s'%(self.paraDict['dbName'])
        tdSql.execute("create topic %s as database %s" %(topicName1, self.paraDict['dbName']))

        topicList = topicName1 + ',' + topicName1
        keyList = '%s,%s,%s,%s'%(self.groupId, self.autoCommit, self.autoCommitInterval, self.autoOffset)
        self.expectrowcnt = self.paraDict["rowsPerTbl"] * self.paraDict["ctbNum"] * 2
        tmqCom.insertConsumerInfo(self.consumerId, self.expectrowcnt, topicList, keyList, self.ifcheckdata, self.ifManualCommit)

        tdLog.info("start consume processor")
        tmqCom.startTmqSimProcess(self.pollDelay, self.paraDict["dbName"], self.showMsg, self.showRow, self.cdbName)
        tmqCom.getStartConsumeNotifyFromTmqsim()

        tdLog.info("drop one stable")
        self.paraDict["stbName"] = 'stb1'
        tdSql.execute("drop table %s.%s" %(self.paraDict['dbName'], self.paraDict['stbName']))
        tmqCom.drop_ctable(tdSql, dbname=self.paraDict['dbName'], count=self.paraDict["ctbNum"], default_ctbname_prefix=self.paraDict["ctbPrefix"])

        # pThread2.join()

        tdLog.info("wait result from consumer, then check it")
        expectRows = 1
        resultList = tmqCom.selectConsumeResult(expectRows)

        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]
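        # stb1 is dropped while the consumer is still running, so only part of its
        # rows are guaranteed to be delivered; accept anything between half of the
        # expected row count and the full count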
        if not (totalConsumeRows >= self.expectrowcnt/2 and totalConsumeRows <= self.expectrowcnt):
            tdLog.info("act consume rows: %d, expect consume rows: between %d and %d"%(totalConsumeRows, self.expectrowcnt/2, self.expectrowcnt))
            tdLog.exit("tmq consume rows error!")

        time.sleep(10)
        tdSql.query("drop topic %s"%topicName1)

        tdLog.printNoPrefix("======== test case 1 end ...... ")
    def run(self):
        tdSql.prepare()
        self.tmqCase1()

    def stop(self):
        tdSql.close()
        tdLog.success(f"{__file__} successfully executed")
event = threading.Event()
tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())


@@ -685,7 +685,7 @@ int32_t shellHorizontalPrintResult(TAOS_RES *tres, const char *sql) {
   uint64_t resShowMaxNum = UINT64_MAX;

-  if (shell.args.commands == NULL && shell.args.file[0] == 0 && !shellIsLimitQuery(sql) && !shellIsShowQuery(sql)) {
+  if (shell.args.commands == NULL && shell.args.file[0] == 0 && !shellIsLimitQuery(sql)) {
     resShowMaxNum = SHELL_DEFAULT_RES_SHOW_NUM;
   }
@@ -706,8 +706,12 @@ int32_t shellHorizontalPrintResult(TAOS_RES *tres, const char *sql) {
   } else if (showMore) {
     printf("\r\n");
     printf(" Notice: The result shows only the first %d rows.\r\n", SHELL_DEFAULT_RES_SHOW_NUM);
-    printf(" You can use the `LIMIT` clause to get fewer result to show.\r\n");
-    printf(" Or use '>>' to redirect the whole set of the result to a specified file.\r\n");
+    if (shellIsShowQuery(sql)) {
+      printf(" You can use '>>' to redirect the whole set of the result to a specified file.\r\n");
+    } else {
+      printf(" You can use the `LIMIT` clause to get fewer result to show.\r\n");
+      printf(" Or use '>>' to redirect the whole set of the result to a specified file.\r\n");
+    }
     printf("\r\n");
     printf(" You can use Ctrl+C to stop the underway fetching.\r\n");
     printf("\r\n");

@@ -1 +1 @@
-Subproject commit 69b558ccbfe54a4407fe23eeae2e67c540f59e55
+Subproject commit 0b8a3373bb7548f8106d13e7d3b0a988d3c4d48a

@@ -1 +1 @@
-Subproject commit 267a96fb09fc2ba14acfa47f7d3678def64c29c5
+Subproject commit c5fded266d3b10508e38bf3285bb7ecf798bc343