Merge remote-tracking branch 'origin/test/new' into test/new

huohong 2025-02-14 18:14:04 +08:00
commit de49403b5f
30 changed files with 11907 additions and 351 deletions


@ -242,13 +242,14 @@ The query performance test mainly outputs the QPS indicator of query request spe
``` bash
complete query with 3 threads and 10000 query delay avg: 0.002686s min: 0.001182s max: 0.012189s p90: 0.002977s p95: 0.003493s p99: 0.004645s SQL command: select ...
INFO: Total specified queries: 30000
INFO: Spend 26.9530 second completed total queries: 30000, the QPS of all threads: 1113.049
```
- The first line shows, for each of the three threads executing 10000 queries, the percentile distribution of query request latency; the SQL command is the test query statement
- The second line indicates that the total number of specified queries is 10000 * 3 = 30000
- The third line indicates that the total query time is 26.9530 seconds and the query rate per second (QPS) is 1113.049 queries/second
- If the `continue_if_fail` option is set to `yes` in the query, the last line will output the number of failed requests and the error rate, in the format "error + number of failed requests (error rate)"
- QPS = number of successful requests / time spent (in seconds)
- Error rate = number of failed requests / (number of successful requests + number of failed requests)
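
As a quick sanity check of these formulas against the sample output above, the arithmetic can be reproduced directly (a minimal sketch; `failed` is a hypothetical count, since failures are only reported when `continue_if_fail` is "yes"):

```python
# Reproduce the QPS and error-rate arithmetic from the sample log above.
succeeded = 30000        # "completed total queries: 30000"
elapsed_s = 26.9530      # "Spend 26.9530 second"
failed = 0               # hypothetical; only reported when continue_if_fail is "yes"

qps = succeeded / elapsed_s                   # ~1113.049 queries/second
error_rate = failed / (succeeded + failed)    # 0.0 in this run

print(f"QPS: {qps:.3f}, error rate: {error_rate:.4f}")
```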
#### Subscription metrics
@ -330,9 +331,9 @@ Parameters related to supertable creation are configured in the `super_tables` s
- **child_table_exists**: Whether the child table already exists, default is "no", options are "yes" or "no".
- **childtable_count**: Number of child tables, default is 10.
- **childtable_prefix**: Prefix for child table names, mandatory, no default value.
- **escape_character**: Whether the supertable and child table names contain escape characters, default is "no", options are "yes" or "no".
@ -427,11 +428,9 @@ Specify the configuration parameters for tag and data columns in `super_tables`
- **create_table_thread_count** : The number of threads for creating tables, default is 8.
- **connection_pool_size** : The number of pre-established connections with the TDengine server. If not configured, it defaults to the specified number of threads.
- **result_file** : The path to the result output file, default is ./output.txt.
- **confirm_parameter_prompt** : A toggle parameter that requires user confirmation after a prompt to continue. The value can be "yes" or "no", by default "no".
- **interlace_rows** : Enables interleaved insertion mode and specifies the number of rows to insert into each subtable at a time. Interleaved insertion mode refers to inserting the specified number of rows into each subtable in sequence and repeating this process until all subtable data has been inserted. The default value is 0, meaning data is inserted into one subtable completely before moving to the next.
This parameter can also be configured in `super_tables`; if configured, the settings in `super_tables` take higher priority and override the global settings.
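
To make the two insertion orders concrete, here is a minimal sketch (plain Python, no TDengine dependency; subtable names and row counts are invented for illustration) of how rows are grouped with `interlace_rows` disabled versus enabled:

```python
# Sketch: row grouping per subtable with interlace_rows = 0 vs. interlace_rows = 2.
subtables = ["d0", "d1", "d2"]
rows_per_table = 4

def progressive():
    # interlace_rows = 0: finish one subtable completely before starting the next
    for t in subtables:
        yield t, list(range(rows_per_table))

def interleaved(interlace_rows=2):
    # interlace_rows = 2: rotate through the subtables, two rows at a time
    for start in range(0, rows_per_table, interlace_rows):
        for t in subtables:
            yield t, list(range(start, start + interlace_rows))

print(list(progressive()))   # [('d0', [0, 1, 2, 3]), ('d1', ...), ('d2', ...)]
print(list(interleaved()))   # [('d0', [0, 1]), ('d1', [0, 1]), ('d2', [0, 1]), ('d0', [2, 3]), ...]
```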
@ -460,12 +459,12 @@ For other common parameters, see Common Configuration Parameters.
Configuration parameters for querying specified tables (can specify supertables, subtables, or regular tables) are set in `specified_table_query`.
- **mixed_query** : Query mode. "yes": `Mixed Query`; "no": `General Query`; default is "no".
  `General Query`: Each SQL in `sqls` starts `threads` threads to query this SQL; each thread exits after executing `query_times` queries, and the next SQL can only be executed after all threads executing the current SQL have completed and exited.
  The total number of queries (`General Query`) = the number of `sqls` * `query_times` * `threads`
  `Mixed Query`: All SQL statements in `sqls` are divided into `threads` groups, with each thread executing one group. Each SQL statement needs to execute `query_times` queries.
  The total number of queries (`Mixed Query`) = the number of `sqls` * `query_times`
  A worked comparison of the two totals is sketched below, after the parameter list.
- **query_interval** : Query interval, in milliseconds, default is 0.
- **threads** : Number of threads executing the SQL query, default is 1.
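
For example, with 2 statements in `sqls`, `threads = 3`, and `query_times = 10000` (illustrative values only), the two totals work out as follows:

```python
# Total query executions under the two modes described above.
num_sqls, threads, query_times = 2, 3, 10000

general_total = num_sqls * query_times * threads  # every SQL runs on every thread
mixed_total = num_sqls * query_times              # SQLs are split across the threads

print(general_total)  # 60000
print(mixed_total)    # 20000
```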
@ -487,6 +486,7 @@ The thread mode of the super table query is the same as the `Normal Query` mode
- **sqls** :
- **sql** : The SQL command to execute, required; for supertable queries, keep "xxxx" in the SQL command, the program will automatically replace it with all subtable names of the supertable.
- **result** : File to save the query results, if not specified, results are not saved.
- **Note**: At most 100 SQL statements can be configured in the `sqls` array.
### Configuration Parameters for Subscription Scenarios
@ -501,7 +501,7 @@ Configuration parameters for subscribing to specified tables (can specify supert
- **sqls** :
- **sql** : The SQL command to execute, required.
### Data Type Writing Comparison Table in Configuration File
| # | **Engine** | **taosBenchmark**
| --- | :----------------: | :---------------:


@ -4,17 +4,21 @@ sidebar_label: taos
toc_max_heading_level: 4
---
The TDengine command-line program (hereinafter TDengine CLI) is the simplest and most commonly used tool for operating and interacting with TDengine instances.

## Obtaining the Tool

TDengine CLI is a component installed by default with both the TDengine server and client installation packages and can be used right after installation; see [TDengine Installation](../../../get-started/)

## Running

To enter the TDengine CLI interactive mode, execute the following in a terminal:
```bash
taos
```
If the connection succeeds, a welcome message and version information are printed; if it fails, an error message is printed.

The TDengine CLI prompt is as follows:
@ -22,42 +26,24 @@ The TDengine CLI prompt is as follows:
taos>
```
Once inside the TDengine CLI, you can execute SQL statements, including inserts, queries, and various administrative commands.

To exit the TDengine CLI, type `q`, `quit`, or `exit` and press Enter:
```shell
taos> quit
```
## Command-Line Parameters

### Common Parameters

You can change the behavior of the TDengine CLI via command-line parameters. Commonly used ones include:

- -h HOST: FQDN of the server running the TDengine service; default: 127.0.0.1
- -P PORT: Port number used by the server; default: 6030
- -u USER: Username for the connection; default: root
- -p PASSWORD: Password for the connection; special characters such as `! & ( ) < > ; |` must be escaped with `\`; default: taosdata
- -?, --help: Print all command-line parameters
### More Parameters

- -a AUTHSTR: Authorization information for connecting to the server
- -A: Compute authorization information from the username and password
@ -79,27 +65,58 @@ taos> SET MAX_BINARY_DISPLAY_WIDTH <nn>;
- -z TIMEZONE: Specify the time zone; defaults to the local time zone
- -V: Print the current version

### Non-Interactive Execution

The `-s` parameter executes SQL non-interactively and exits afterwards, which is well suited to automation scripts.

For example, the following command connects to the server h1.taos.com and executes the SQL specified by -s:
```bash
taos -h h1.taos.com -s "use db; show tables;"
```
### taosc Configuration File

Use the `-c` parameter to change the location from which the `taosc` client loads its configuration file; for client configuration parameters, see [Client Configuration](../../components/taosc)

The following command makes the `taosc` client load the `taos.cfg` configuration file from `/root/cfg/`:
```bash
taos -c /root/cfg/
```
## Error Code Reference

Since TDengine 3.3.4.8, the TDengine CLI includes specific error codes in its error messages; you can look up the cause and solution on the TDengine website's error code page: [Error Code Reference](https://docs.taosdata.com/reference/error-code/)

## Executing SQL Scripts

Within the TDengine CLI, the `source` command runs the SQL statements contained in a script file:
```sql
taos> source <filename>;
```
## Data Import/Export

### Exporting Query Results

- You can export query results to a file with the symbol ">>"; the syntax is: sql query statement >> 'output file name'; If no path is given, the file is written to the current directory. For example, select * from d0 >> '/root/d0.csv'; writes the query results to /root/d0.csv.

### Importing Data from a File

- You can use insert into table_name file 'input file name' to import a previously exported data file into the specified table. For example, insert into d0 file '/root/d0.csv'; imports all the data exported above back into table d0.
## Setting the Display Width of Character Types

You can adjust the character display width in the TDengine CLI with the following command:
```sql
taos> SET MAX_BINARY_DISPLAY_WIDTH <nn>;
```
If displayed content ends with ..., it has been truncated; use this command to increase the display width and show the full content.
## TAB Key Auto-Completion

- Pressing TAB on an empty command line lists all commands supported by the TDengine CLI
- Pressing TAB after a space shows the first command word that can appear at that position; pressing TAB again cycles to the next one
- Pressing TAB after a string searches all command words matching that prefix and shows the first one; pressing TAB again cycles to the next
- Typing backslash `\` + TAB auto-completes to the vertical display mode command word `\G;`

## Tips

- Use the up and down arrow keys to browse previously entered commands
- In the TDengine CLI, use the `alter user` command to change a user's password; the default password is `taosdata`
@ -107,10 +124,5 @@ taos -h h1.taos.com -s "use db; show tables;"
- Execute `RESET QUERY CACHE` to clear the cache of local table schemas
- Execute SQL statements in batch: store a series of TDengine CLI commands (one SQL statement per line, each ending with a semicolon) in a file, then run `source <file-name>` in the TDengine CLI to execute all SQL statements in that file automatically


@ -6,44 +6,24 @@ toc_max_heading_level: 4
taosdump is a data backup/restore tool for open-source TDengine users. Backup files use the standard [Apache AVRO](https://avro.apache.org/) format, making it easy to exchange data with the wider ecosystem. taosdump offers a variety of backup and restore options to meet different needs; run --help to see all supported options.

## Obtaining the Tool

taosdump is a component installed by default with both the TDengine server and client installation packages and can be used right after installation; see [TDengine Installation](../../../get-started/)

## Running

taosdump runs in a command-line terminal and must be invoked with parameters indicating a backup or a restore operation, for example:
``` bash
taosdump -h dev126 -D test -o /root/test/
```
The command above backs up the `test` database on the host named `dev126` to the `/root/test/` directory.
``` bash
taosdump -h dev126 -i /root/test/
```
The command above restores the data files previously backed up under `/root/test/` to the host named `dev126`.
## Command-Line Parameters

Below is the full list of taosdump command-line parameters:
@ -119,3 +99,34 @@ for any corresponding short options.
Report bugs to <support@taosdata.com>.
```
## Common Use Cases

### Backing Up Data with taosdump

1. Back up all databases: specify the `-A` or `--all-databases` parameter;
2. Back up multiple specified databases: use the `-D db1,db2,...` parameter;
3. Back up certain supertables or regular tables in a specified database: use `dbname stbname1 stbname2 tbname1 tbname2 ...`; note that the first argument in this sequence is the database name, only one database is supported, and the second and subsequent arguments are supertable or regular table names in that database, separated by spaces;
4. Back up the system log database: a TDengine cluster usually contains a system database named `log` whose data is generated by TDengine's own operation; taosdump does not back up the log database by default. If you specifically need to back it up, use the `-a` or `--allow-sys` parameter.
5. "Tolerant" mode backup: taosdump 1.4.1 and later provide the `-n` and `-L` parameters for backing up data without escape characters and in "tolerant" mode, which can reduce backup time and backup space when table, column, and tag names use no escape characters. If you are unsure whether the conditions for `-n` and `-L` are met, back up in the default "strict" mode. See the [documentation](../../taos-sql/escape) for escape characters.
6. If backup files already exist in the directory specified by `-o`, taosdump reports an error and exits to prevent data from being overwritten; switch to another empty directory or clear the old data before backing up again.
7. taosdump does not currently support resuming an interrupted backup; once a backup is interrupted, it must start over. If a backup takes a long time, consider segmented backups using the -S and -E options to specify start/end times (see the sketch below this list).
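
For item 7, a segmented backup can be scripted; the sketch below assumes taosdump is on PATH, and the host, database, time windows, and output directories are placeholders:

```python
# Sketch: segmented taosdump backup using -S/-E time windows (placeholder values).
import subprocess

host, db = "dev126", "test"
windows = [
    ("2024-01-01 00:00:00.000", "2024-04-01 00:00:00.000"),
    ("2024-04-01 00:00:00.000", "2024-07-01 00:00:00.000"),
]

for i, (start, end) in enumerate(windows):
    outdir = f"/root/backup/part{i}/"  # each segment needs its own empty directory
    subprocess.run(
        ["taosdump", "-h", host, "-D", db, "-S", start, "-E", end, "-o", outdir],
        check=True,
    )
```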
:::tip
- taosdump 1.4.1 and later provide the `-I` parameter for parsing the schema and data of avro files; if the `-s` parameter is specified, only the schema is parsed.
- Since taosdump 1.4.2, backups use the batch size specified by the `-B` parameter, default 16384; if "Error actual dump .. batch .." occurs in some environments due to insufficient network speed or disk performance, try a smaller value via the `-B` parameter.
- taosdump export does not support resuming after an interruption, so after the process terminates unexpectedly, the correct approach is to delete all files that have been exported or generated.
- taosdump import does support resuming, but when the process restarts you may see some "table already exists" messages, which can be ignored.
:::
### Restoring Data with taosdump

- Restore data files from a specified path: use the `-i` parameter plus the path to the data files. As mentioned above, do not back up different data sets into the same directory, and do not back up the same data set multiple times into the same path; otherwise the backup data will be overwritten or duplicated.
- taosdump supports restoring data into a new database name with the -W parameter; see the command-line parameter description for details.
:::tip
taosdump internally uses the TDengine stmt binding API to write restored data, currently using a write batch of 16384 for better restore performance. If the backup data contains many columns, a "WAL size exceeds limit" error may occur; in that case, try a smaller value via the `-B` parameter.
:::
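
As a sketch of the restore path, the following assumes the -i and -W parameters behave as described in the command-line parameter list; the backup path and the old=new database name mapping are placeholders:

```python
# Sketch: restore a backup into a new database name with -W (placeholder values).
import subprocess

subprocess.run(
    ["taosdump", "-h", "dev126", "-i", "/root/test/", "-W", "test=test_restored"],
    check=True,
)
```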


@ -6,9 +6,9 @@ toc_max_heading_level: 4
taosBenchmark is the performance benchmarking tool for TDengine products; it tests TDengine's write, query, and subscription performance and outputs performance metrics.

## Obtaining the Tool

taosBenchmark is a component installed by default with both the TDengine server and client installation packages and can be used right after installation; see [TDengine Installation](../../../get-started/)

## Running
@ -87,7 +87,7 @@ taosBenchmark -f <json file>
For more JSON configuration file examples, [click here](https://github.com/taosdata/TDengine/tree/main/tools/taos-tools/example)

## Command-Line Parameters

| Command-Line Parameter | Description |
| ---------------------------- | ----------------------------------------------- |
| -f/--file \<json file> | The JSON configuration file to use; it specifies all parameters and cannot be combined with other command-line parameters. No default value |
@ -159,12 +159,13 @@ SUCC: insert delay, min: 19.6780ms, avg: 64.9390ms, p90: 94.6900ms, p95: 105.187
The query performance test mainly outputs the QPS of query requests; the output format is as follows:
``` bash
complete query with 3 threads and 10000 query delay avg: 0.002686s min: 0.001182s max: 0.012189s p90: 0.002977s p95: 0.003493s p99: 0.004645s SQL command: select ...
INFO: Total specified queries: 30000
INFO: Spend 26.9530 second completed total queries: 30000, the QPS of all threads: 1113.049
```
- The first line shows, for each of the 3 threads executing 10000 queries, the percentile distribution of query request latency; `SQL command` is the tested query statement
- The second line indicates that the total number of specified queries is 10000 * 3 = 30000
- The third line indicates that the total query time is 26.9530 seconds and the query rate per second (QPS) is 1113.049 queries/second
- If the `continue_if_fail` option is set to `yes`, the last line outputs the number of failed requests and the error rate, in the format: error + number of failed requests (error rate)
- QPS = number of successful requests / elapsed time (in seconds)
- Error rate = number of failed requests / (number of successful requests + number of failed requests)

#### Subscription Metrics
@ -182,7 +183,7 @@ INFO: Consumed total msgs: 3000, total rows: 30000000
- Lines 4 to 6 are the per-consumer overall statistics after the test: total messages consumed and total rows consumed
- Line 7 is the overall statistics across all consumers: `msgs` is the total number of messages consumed, `rows` the total number of rows consumed

## Configuration File Parameters

### General Configuration Parameters
@ -203,42 +204,26 @@ INFO: Consumed total msgs: 3000, total rows: 30000000
For the insert scenario, `filetype` must be set to `insert`; for this parameter and other general parameters, see [General Configuration Parameters](#通用配置参数)

- **keep_trying** : Number of retries after a failure; the default is no retry. Requires v3.0.9 or above.
- **trying_interval** : Interval between retries in milliseconds; effective only when retries are specified with keep_trying. Requires v3.0.9 or above.
- **childtable_from and childtable_to** : Range of child tables to write to, as the half-open interval [childtable_from, childtable_to).
- **continue_if_fail** : Lets the user define the behavior after a failure
  "continue_if_fail": "no", taosBenchmark exits automatically on failure (default behavior)
  "continue_if_fail": "yes", taosBenchmark warns the user and continues writing on failure
  "continue_if_fail": "smart", if a child table does not exist on failure, taosBenchmark creates it and continues writing
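
Pulling several of the parameters above together, a minimal insert-scenario configuration can be generated from Python as in the sketch below (field values are illustrative; check the parameter descriptions for your version before use):

```python
# Sketch: write a minimal taosBenchmark insert configuration (illustrative values).
import json

config = {
    "filetype": "insert",
    "host": "127.0.0.1",
    "port": 6030,
    "user": "root",
    "password": "taosdata",
    "thread_count": 8,
    "confirm_parameter_prompt": "no",
    "keep_trying": 3,          # retry 3 times on failure (v3.0.9+)
    "trying_interval": 1000,   # 1000 ms between retries
    "continue_if_fail": "no",
    "databases": [{
        "dbinfo": {"name": "test", "drop": "yes"},
        "super_tables": [{
            "name": "meters",
            "child_table_exists": "no",
            "childtable_count": 10,
            "childtable_prefix": "d",
            "insert_rows": 10000,
            "columns": [{"type": "FLOAT", "name": "current"}],
            "tags": [{"type": "INT", "name": "groupid"}],
        }],
    }],
}

with open("insert.json", "w") as f:
    json.dump(config, f, indent=2)
# run with: taosBenchmark -f insert.json
```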
#### Database Settings

Parameters for creating a database are configured under `dbinfo` in the JSON configuration file; the individual parameters are listed below. The remaining parameters correspond to the database parameters used in TDengine's `create database` statement; see [../../taos-sql/database]

- **name** : Database name.
- **drop** : Whether to drop the database if it already exists; options are "yes" or "no", default "yes"
#### Supertable Settings

Parameters for creating a supertable are configured under `super_tables` in the JSON configuration file; the individual parameters are as follows.
@ -246,9 +231,9 @@ INFO: Consumed total msgs: 3000, total rows: 30000000
- **child_table_exists** : Whether the child tables already exist; default "no", options "yes" or "no".
- **childtable_count** : Number of child tables; default 10.
- **childtable_prefix** : Prefix for child table names; required, no default value.
- **escape_character** : Whether supertable and child table names contain escape characters; default "no", options "yes" or "no".
@ -300,7 +285,7 @@ INFO: Consumed total msgs: 3000, total rows: 30000000
- **sqls** : Array of strings; SQL statements to execute after the supertable is created successfully. Table names in the SQL must be prefixed with the database name, otherwise an unspecified-database error is reported

#### Tag and Data Columns

The configuration parameters for a supertable's tag columns and data columns are set under `columns` and `tag` in `super_tables`, respectively.
@ -335,19 +320,17 @@ INFO: Consumed total msgs: 3000, total rows: 30000000
- **fillNull** : String; specifies whether NULL values are randomly inserted into this column; can be "true" or "false"; effective only when generate_row_rule is 2

#### Insertion Behavior

- **thread_count** : Number of threads for inserting data; default 8.
- **thread_bind_vgroup** : Whether a vgroup is bound to its writing thread during writes; binding can improve write speed. Values are "yes" or "no", default "no" (setting "no" keeps the original behavior). When set to "yes", if thread_count is greater than the number of vgroups in the target database, thread_count is automatically adjusted down to the number of vgroups; if thread_count is smaller than the number of vgroups, the number of write threads is not adjusted, and a thread writes one vgroup to completion before moving on to the next, while keeping the rule that a vgroup can only be written by one thread at a time.
- **create_table_thread_count** : Number of threads for creating tables; default 8.
- **connection_pool_size** : Number of pre-established connections to the TDengine server. If not configured, it defaults to the specified number of threads.
- **result_file** : Path of the result output file; default ./output.txt.
- **confirm_parameter_prompt** : Toggle parameter that requires the user to confirm after a prompt before continuing; can be "yes" or "no", default "no".
- **interlace_rows** : Enables interleaved insertion mode and specifies the number of rows to insert into each child table at a time. Interleaved insertion inserts the specified number of rows into each child table in turn and repeats this process until all data has been inserted. The default is 0, i.e. data is inserted into one child table completely before moving on to the next.
  This parameter can also be configured in `super_tables`; if configured there, the `super_tables` setting takes precedence and overrides the global setting.
@ -373,16 +356,20 @@ interval controls the sleep time to avoid continuous slow queries consuming CPU; the unit is
For other general parameters, see [General Configuration Parameters](#通用配置参数).

#### Executing Specified Query Statements

Parameters for querying specified tables (supertables, child tables, or regular tables can all be specified) are set in `specified_table_query`.
- **mixed_query** : Query mode
  "yes": `Mixed Query`
  "no" (default): `General Query`
  `General Query`: For each SQL in `sqls`, `threads` threads are started to query that SQL; each thread exits after completing `query_times` queries, and the next SQL can be executed only after all threads executing the current SQL have finished and exited
  Total number of queries (`General Query`) = number of `sqls` * `query_times` * `threads`
  `Mixed Query`: All SQL statements in `sqls` are divided into `threads` groups, with each thread executing one group; each SQL statement executes `query_times` queries
  Total number of queries (`Mixed Query`) = number of `sqls` * `query_times`
- **query_interval** : Query interval, in milliseconds; default 0.
- **threads** : Number of threads executing the query SQL; default 1.
@ -390,7 +377,7 @@ interval controls the sleep time to avoid continuous slow queries consuming CPU; the unit is
- **sql**: The SQL command to execute; required.
- **result**: File in which to save the query results; if not specified, results are not saved.

#### Querying Supertables

Parameters for querying supertables are set in `super_table_query`.
The thread model of a supertable query is the same as the `General Query` mode described above; the difference is that the SQL statements in `sqls` are filled in with all child tables.
@ -402,16 +389,14 @@ interval controls the sleep time to avoid continuous slow queries consuming CPU; the unit is
- **threads** : Number of threads executing the query SQL; default 1.
- **sqls** :
  - **sql** : The SQL command to execute; required. For a supertable query, the SQL command must contain "xxxx", which is replaced with the names of all child tables of the supertable before execution.
  - **result** : File in which to save the query results; if not specified, results are not saved.
  - **Limit** : At most 100 SQL statements can be configured in the sqls array
### Configuration Parameters for Subscription Scenarios

For the subscription scenario, `filetype` must be set to `subscribe`; for this parameter and other general parameters, see [General Configuration Parameters](#通用配置参数)

#### Executing Specified Subscription Statements

Parameters for subscribing to specified tables (supertables, child tables, or regular tables can all be specified) are set in `specified_table_query`.

- **threads/concurrent** : Number of threads executing the SQL; default 1.
@ -420,7 +405,7 @@ interval controls the sleep time to avoid continuous slow queries consuming CPU; the unit is
- **sql** : The SQL command to execute; required.

### Data Type Notation Comparison Table in the Configuration File

| # | **Engine** | **taosBenchmark**
| --- | :----------------: | :---------------:


@ -215,7 +215,7 @@ SHOW db_name.ALIVE;
Queries the availability status of database db_name. Return values: 0: unavailable; 1: fully available; 2: partially available (some VNODEs of the database are available while others are not)

## Viewing the Disk Space Usage of a DB
```sql
select * from INFORMATION_SCHEMA.INS_DISK_USAGE where db_name = 'db_name'


@ -310,5 +310,17 @@ The data shown in the TDinsight plugin is collected through the taosKeeper and taosAdapter services
### 34 Which is faster: querying child table data through a supertable with TAG filtering, or querying the child table directly?

Querying the child table directly is faster. Querying child table data through a supertable with TAG filtering exists for query convenience and can filter data across multiple child tables at once; if the goal is performance and the target child table is already known, querying the child table directly gives higher performance

### 35 How do I view a database's data compression ratio and disk usage metrics?

In versions before TDengine 3.3.5.0, only a per-table compression ratio is provided; database-level and overall statistics are not available. To view it, run `SHOW TABLE DISTRIBUTED table_name;` in the taos CLI, where table_name is the table to inspect and can be a supertable, regular table, or child table; see [here](https://docs.taosdata.com/reference/taos-sql/show/#show-table-distributed) for details

TDengine 3.3.5.0 and later also provide database-wide compression ratio and disk usage statistics. The command to view a database's overall data compression ratio and disk usage is `SHOW db_name.disk_info;`; the command to view the disk usage of each module of the database is `SELECT * FROM INFORMATION_SCHEMA.INS_DISK_USAGE WHERE db_name='db_name';`, where db_name is the name of the database to inspect. See [here](https://docs.taosdata.com/reference/taos-sql/database/#%E6%9F%A5%E7%9C%8B-db-%E7%9A%84%E7%A3%81%E7%9B%98%E7%A9%BA%E9%97%B4%E5%8D%A0%E7%94%A8) for details
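
As a sketch, the same disk-usage statistics can also be fetched through the Python connector (connection parameters are placeholders; assumes TDengine 3.3.5.0 or later):

```python
# Sketch: read per-module disk usage for a database via the TDengine Python connector.
import taos

conn = taos.connect(host="localhost", user="root", password="taosdata")  # placeholder credentials
cursor = conn.cursor()
cursor.execute("SELECT * FROM INFORMATION_SCHEMA.INS_DISK_USAGE WHERE db_name='db_name'")
for row in cursor.fetchall():
    print(row)
conn.close()
```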
### 36 Restarting taosd via systemd fails with the error start-limit-hit after too many restarts within a short time

Problem description:
In TDengine 3.3.5.1 and later, the StartLimitInterval parameter in the systemd unit file for taosd.service was raised from 60 seconds to 900 seconds. If the taosd service is restarted 3 times within 900 seconds, later attempts to start it via systemd fail, and `systemctl status taosd.service` shows the error: Failed with result 'start-limit-hit'.

Problem cause:
Before TDengine 3.3.5.1, StartLimitInterval was 60 seconds. If 3 restarts could not complete within 60 seconds (for example, because recovering a large amount of data from the WAL (write-ahead log) made startup slow), the restart count started over in the next 60-second window, causing the system to keep restarting the taosd service indefinitely. To avoid this infinite-restart problem, StartLimitInterval was raised from 60 to 900 seconds, so the start-limit-hit error is now more likely when taosd is started repeatedly via systemd within a short time.

Solution:
(1) To restart the taosd service via systemd: the recommended approach is to first reset the failure counter with `systemctl reset-failed taosd.service` and then restart with `systemctl restart taosd.service`. For a long-term adjustment, manually edit /etc/systemd/system/taosd.service to decrease StartLimitInterval or increase StartLimitBurst (note: reinstalling taosd resets these parameters, so they must be modified again), run `systemctl daemon-reload` to reload the configuration, and then restart. (2) Alternatively, bypass systemd and restart the taosd service directly with the taosd command, which is not subject to the StartLimitInterval and StartLimitBurst limits.


@ -385,13 +385,24 @@ static void taosReserveOldLog(char *oldName, char *keepName) {
static void taosKeepOldLog(char *oldName) {
  if (oldName[0] != 0) {
    int32_t code = 0, lino = 0;
    TdFilePtr oldFile = NULL;
    if ((oldFile = taosOpenFile(oldName, TD_FILE_READ))) {
      TAOS_CHECK_GOTO(taosLockFile(oldFile), &lino, _exit2);
      char compressFileName[PATH_MAX + 20];
      snprintf(compressFileName, PATH_MAX + 20, "%s.gz", oldName);
      TAOS_CHECK_GOTO(taosCompressFile(oldName, compressFileName), &lino, _exit1);
      TAOS_CHECK_GOTO(taosRemoveFile(oldName), &lino, _exit1);
    _exit1:
      TAOS_UNUSED(taosUnLockFile(oldFile));
    _exit2:
      TAOS_UNUSED(taosCloseFile(&oldFile));
    } else {
      code = terrno;
    }
    if (code != 0 && tsLogEmbedded == 1) {  // print error messages only in embedded log mode
      // avoid using uWarn or uError, as they may open a new log file and potentially cause a deadlock.
      fprintf(stderr, "WARN: failed at line %d to keep old log file:%s, reason:%s\n", lino, oldName, tstrerror(code));
    }
  }
}
@ -1041,7 +1052,7 @@ static void taosWriteLog(SLogBuff *pLogBuf) {
}
#define LOG_ROTATE_INTERVAL 3600
#if !defined(TD_ENTERPRISE) || defined(ASSERT_NOT_CORE) || defined(GRANTS_CFG)
#define LOG_INACTIVE_TIME 7200
#define LOG_ROTATE_BOOT 900
#else


@ -0,0 +1,262 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
import os
import threading as thd
import multiprocessing as mp
from numpy.lib.function_base import insert
import taos
from taos import *
from util.log import *
from util.cases import *
from util.sql import *
import numpy as np
import datetime as dt
from datetime import datetime
from ctypes import *
import time
# constant define
WAITS = 5 # wait seconds
class TDTestCase:
#
# --------------- main frame -------------------
def caseDescription(self):
'''
limit and offset keyword function test cases;
case1: limit offset base function test
case2: offset return valid
'''
return
def getBuildPath(self):
selfPath = os.path.dirname(os.path.realpath(__file__))
if ("community" in selfPath):
projPath = selfPath[:selfPath.find("community")]
else:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files or "taosd.exe" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root)-len("/build/bin")]
break
return buildPath
# init
def init(self, conn, logSql, replicaVar=1):
self.replicaVar = int(replicaVar)
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor())
# tdSql.prepare()
# self.create_tables();
self.ts = 1500000000000
# stop
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
# --------------- case -------------------
def newcon(self,host,cfg):
user = "root"
password = "taosdata"
port =6030
con=taos.connect(host=host, user=user, password=password, config=cfg ,port=port)
tdLog.debug(con)
return con
def stmtExe(self,conn,sql,bindStat):
queryStat=conn.statement("%s"%sql)
queryStat.bind_param(bindStat)
queryStat.execute()
result=queryStat.use_result()
rows=result.fetch_all()
return rows
def test_stmt_set_tbname_tag(self,conn):
dbname = "stmt_tag"
stablename = 'log'
try:
conn.execute("drop database if exists %s" % dbname)
conn.execute("create database if not exists %s PRECISION 'us' " % dbname)
conn.select_db(dbname)
conn.execute("create table if not exists %s(ts timestamp, bo bool, nil tinyint, ti tinyint, si smallint, ii int,\
bi bigint, tu tinyint unsigned, su smallint unsigned, iu int unsigned, bu bigint unsigned, \
ff float, dd double, bb binary(100), nn nchar(100), tt timestamp , vc varchar(100)) tags (t1 timestamp, t2 bool,\
t3 tinyint, t4 tinyint, t5 smallint, t6 int, t7 bigint, t8 tinyint unsigned, t9 smallint unsigned, \
t10 int unsigned, t11 bigint unsigned, t12 float, t13 double, t14 binary(100), t15 nchar(100), t16 timestamp)"%stablename)
stmt = conn.statement("insert into ? using log tags (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?) \
values (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)")
tags = new_bind_params(16)
tags[0].timestamp(1626861392589123, PrecisionEnum.Microseconds)
tags[1].bool(True)
tags[2].bool(False)
tags[3].tinyint(2)
tags[4].smallint(3)
tags[5].int(4)
tags[6].bigint(5)
tags[7].tinyint_unsigned(6)
tags[8].smallint_unsigned(7)
tags[9].int_unsigned(8)
tags[10].bigint_unsigned(9)
tags[11].float(10.1)
tags[12].double(10.11)
tags[13].binary("hello")
tags[14].nchar("stmt")
tags[15].timestamp(1626861392589, PrecisionEnum.Milliseconds)
stmt.set_tbname_tags("tb1", tags)
params = new_multi_binds(17)
params[0].timestamp((1626861392589111, 1626861392590111, 1626861392591111))
params[1].bool((True, None, False))
params[2].tinyint([-128, -128, None]) # -128 is tinyint null
params[3].tinyint([0, 127, None])
params[4].smallint([3, None, 2])
params[5].int([3, 4, None])
params[6].bigint([3, 4, None])
params[7].tinyint_unsigned([3, 4, None])
params[8].smallint_unsigned([3, 4, None])
params[9].int_unsigned([3, 4, None])
params[10].bigint_unsigned([3, 4, 5])
params[11].float([3, None, 1])
params[12].double([3, None, 1.2])
params[13].binary(["abc", "dddafadfadfadfadfa", None])
params[14].nchar(["涛思数据", None, "a long string with 中文?字符"])
params[15].timestamp([None, None, 1626861392591])
params[16].binary(["涛思数据16", None, None])
stmt.bind_param_batch(params)
stmt.execute()
assert stmt.affected_rows == 3
#query all
queryparam=new_bind_params(1)
queryparam[0].int(10)
rows=self.stmtExe(conn,"select * from log where bu < ?",queryparam)
tdLog.debug("assert 1st case %s"%rows)
assert str(rows[0][0]) == "2021-07-21 17:56:32.589111"
assert rows[0][10] == 3 , '1st case is failed'
assert rows[1][10] == 4 , '1st case is failed'
#query: Numeric Functions
queryparam=new_bind_params(2)
queryparam[0].int(5)
queryparam[1].int(5)
rows=self.stmtExe(conn,"select abs(?) from log where bu < ?",queryparam)
tdLog.debug("assert 2nd case %s"%rows)
assert rows[0][0] == 5 , '2nd case is failed'
assert rows[1][0] == 5 , '2nd case is failed'
#query: Numeric Functions and escapes
queryparam=new_bind_params(1)
queryparam[0].int(5)
rows=self.stmtExe(conn,"select abs(?) from log where nn= 'a? long string with 中文字符'",queryparam)
tdLog.debug("assert 3rd case %s"%rows)
assert rows == [] , '3rd case is failed'
#query: string Functions
queryparam=new_bind_params(1)
queryparam[0].binary('中文字符')
rows=self.stmtExe(conn,"select CHAR_LENGTH(?) from log ",queryparam)
tdLog.debug("assert 4th case %s"%rows)
assert rows[0][0] == 4, '4th case is failed'
assert rows[1][0] == 4, '4th case is failed'
queryparam=new_bind_params(1)
queryparam[0].binary('123')
rows=self.stmtExe(conn,"select CHAR_LENGTH(?) from log ",queryparam)
tdLog.debug("assert 4th case %s"%rows)
assert rows[0][0] == 3, '4th.1 case is failed'
assert rows[1][0] == 3, '4th.1 case is failed'
#query: conversion Functions
queryparam=new_bind_params(1)
queryparam[0].binary('1232a')
rows=self.stmtExe(conn,"select cast( ? as bigint) from log",queryparam)
tdLog.debug("assert 5th case %s"%rows)
assert rows[0][0] == 1232, '5th.1 case is failed'
assert rows[1][0] == 1232, '5th.1 case is failed'
querystmt4=conn.statement("select cast( ? as binary(10)) from log ")
queryparam=new_bind_params(1)
queryparam[0].int(123)
rows=self.stmtExe(conn,"select cast( ? as bigint) from log",queryparam)
tdLog.debug("assert 6th case %s"%rows)
assert rows[0][0] == 123, '6th.1 case is failed'
assert rows[1][0] == 123, '6th.1 case is failed'
#query: datatime Functions
queryparam=new_bind_params(1)
queryparam[0].timestamp(1626861392591112)
rows=self.stmtExe(conn,"select timediff('2021-07-21 17:56:32.590111',?,1a) from log",queryparam)
tdLog.debug("assert 7th case %s"%rows)
assert rows[0][0] == -1, '7th case is failed'
assert rows[1][0] == -1, '7th case is failed'
#query: aggregate Functions
queryparam=new_bind_params(1)
queryparam[0].int(123)
rows=self.stmtExe(conn,"select count(?) from log ",queryparam)
tdLog.debug("assert 8th case %s"%rows)
assert rows[0][0] == 3, ' 8th case is failed'
#query: selector Functions 9
queryparam=new_bind_params(1)
queryparam[0].int(2)
rows=self.stmtExe(conn,"select bottom(bu,?) from log group by bu order by bu desc ; ",queryparam)
tdLog.debug("assert 9th case %s"%rows)
assert rows[1][0] == 4, ' 9 case is failed'
assert rows[2][0] == 3, ' 9 case is failed'
# #query: time-series specific Functions 10
querystmt=conn.statement(" select twa(?) from log; ")
queryparam=new_bind_params(1)
queryparam[0].int(15)
rows=self.stmtExe(conn," select twa(?) from log; ",queryparam)
tdLog.debug("assert 10th case %s"%rows)
assert rows[0][0] == 15, ' 10th case is failed'
# conn.execute("drop database if exists %s" % dbname)
conn.close()
tdLog.success("%s successfully executed" % __file__)
except Exception as err:
# conn.execute("drop database if exists %s" % dbname)
conn.close()
raise err
def run(self):
buildPath = self.getBuildPath()
config = buildPath+ "../sim/dnode1/cfg/"
host="localhost"
connectstmt=self.newcon(host,config)
self.test_stmt_set_tbname_tag(connectstmt)
return
# add case with filename
#
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())


@ -0,0 +1,50 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
import time

from util.cases import *
from util.sql import *
from util.dnodes import *
from util.log import *
class TDTestCase:
    def init(self, conn, logSql, replicaVar=1):
        tdLog.debug(f"start to init {__file__}")
        self.replicaVar = int(replicaVar)
        tdSql.init(conn.cursor(), logSql)

    def run(self):
        tdSql.execute('CREATE DATABASE db vgroups 1 replica 2;')
        time.sleep(1)
        tdSql.query("show db.vgroups;")
        if (tdSql.queryResult[0][4] == "follower") and (tdSql.queryResult[0][6] == "leader"):
            tdLog.info("stop dnode2")
            sc.dnodeStop(2)
        if (tdSql.queryResult[0][6] == "follower") and (tdSql.queryResult[0][4] == "leader"):
            tdLog.info("stop dnode 3")
            sc.dnodeStop(3)
        tdLog.info("wait 10 seconds")
        time.sleep(10)
        tdSql.query("show db.vgroups;")
        if (tdSql.queryResult[0][4] != "assigned") and (tdSql.queryResult[0][6] != "assigned"):
            tdLog.exit("failed to set assigned")

    def stop(self):
        tdSql.close()
        tdLog.success(f"{__file__} successfully executed")


tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())


@ -0,0 +1,329 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
import random
import time
import platform
import taos
from util.log import *
from util.cases import *
from util.sql import *
class TDTestCase:
# get col value and total max min ...
def getColsValue(self, i, j):
# c1 value
if random.randint(1, 10) == 5:
c1 = None
else:
c1 = 1
# c2 value
if j % 3200 == 0:
c2 = 8764231
elif random.randint(1, 10) == 5:
c2 = None
else:
c2 = random.randint(-87654297, 98765321)
value = f"({self.ts}, "
# c1
if c1 is None:
value += "null,"
else:
self.c1Cnt += 1
value += f"{c1},"
# c2
if c2 is None:
value += "null,"
else:
value += f"{c2},"
# total count
self.c2Cnt += 1
# max
if self.c2Max is None:
self.c2Max = c2
else:
if c2 > self.c2Max:
self.c2Max = c2
# min
if self.c2Min is None:
self.c2Min = c2
else:
if c2 < self.c2Min:
self.c2Min = c2
# sum
if self.c2Sum is None:
self.c2Sum = c2
else:
self.c2Sum += c2
# c3 same with ts
value += f"{self.ts})"
# move next
self.ts += 1
return value
# insert data
def insertData(self):
tdLog.info("insert data ....")
sqls = ""
for i in range(self.childCnt):
# insert child table
values = ""
pre_insert = f"insert into t{i} values "
for j in range(self.childRow):
if values == "":
values = self.getColsValue(i, j)
else:
values += "," + self.getColsValue(i, j)
# batch insert
if j % self.batchSize == 0 and values != "":
sql = pre_insert + values
tdSql.execute(sql)
values = ""
# append last
if values != "":
sql = pre_insert + values
tdSql.execute(sql)
values = ""
sql = "flush database db;"
tdLog.info(sql)
tdSql.execute(sql)
# insert finished
tdLog.info(f"insert data successfully.\n"
f" inserted child table = {self.childCnt}\n"
f" inserted child rows = {self.childRow}\n"
f" total inserted rows = {self.childCnt*self.childRow}\n")
return
# prepareEnv
def prepareEnv(self):
# init
self.ts = 1680000000000*1000*1000
self.childCnt = 5
self.childRow = 10000
self.batchSize = 5000
# total
self.c1Cnt = 0
self.c2Cnt = 0
self.c2Max = None
self.c2Min = None
self.c2Sum = None
# create database db
sql = f"create database db vgroups 2 precision 'ns' "
tdLog.info(sql)
tdSql.execute(sql)
sql = f"use db"
tdSql.execute(sql)
# create super table st
sql = f"create table st(ts timestamp, c1 int, c2 bigint, ts1 timestamp) tags(area int)"
tdLog.info(sql)
tdSql.execute(sql)
# create child table
for i in range(self.childCnt):
sql = f"create table t{i} using st tags({i}) "
tdSql.execute(sql)
# create stream
if platform.system().lower() != 'windows':
sql = "create stream ma into sta as select count(ts) from st interval(100b)"
tdLog.info(sql)
tdSql.execute(sql)
# insert data
self.insertData()
# check data correct
def checkExpect(self, sql, expectVal):
tdSql.query(sql)
rowCnt = tdSql.getRows()
for i in range(rowCnt):
val = tdSql.getData(i,0)
if val != expectVal:
tdLog.exit(f"Not expect . query={val} expect={expectVal} i={i} sql={sql}")
return False
tdLog.info(f"check expect ok. sql={sql} expect ={expectVal} rowCnt={rowCnt}")
return True
# check time macro
def checkTimeMacro(self):
# 2 week
val = 2
nsval = -val*7*24*60*60*1000*1000*1000
expectVal = self.childCnt * self.childRow
sql = f"select count(ts) from st where timediff(ts - {val}w, ts1) = {nsval} "
self.checkExpect(sql, expectVal)
# 20 day
val = 20
nsval = -val*24*60*60*1000*1000*1000
uint = "d"
sql = f"select count(ts) from st where timediff(ts - {val}{uint}, ts1) = {nsval} "
self.checkExpect(sql, expectVal)
# 30 hour
val = 30
nsval = -val*60*60*1000*1000*1000
uint = "h"
sql = f"select count(ts) from st where timediff(ts - {val}{uint}, ts1) = {nsval} "
self.checkExpect(sql, expectVal)
# 90 minutes
val = 90
nsval = -val*60*1000*1000*1000
uint = "m"
sql = f"select count(ts) from st where timediff(ts - {val}{uint}, ts1) = {nsval} "
self.checkExpect(sql, expectVal)
# 2s
val = 2
nsval = -val*1000*1000*1000
uint = "s"
sql = f"select count(ts) from st where timediff(ts - {val}{uint}, ts1) = {nsval} "
self.checkExpect(sql, expectVal)
# 20a
val = 5
nsval = -val*1000*1000
uint = "a"
sql = f"select count(ts) from st where timediff(ts - {val}{uint}, ts1) = {nsval} "
self.checkExpect(sql, expectVal)
# 300u
val = 300
nsval = -val*1000
uint = "u"
sql = f"select count(ts) from st where timediff(ts - {val}{uint}, ts1) = {nsval} "
self.checkExpect(sql, expectVal)
# 8b
val = 8
sql = f"select timediff(ts1, ts - {val}b) from st "
self.checkExpect(sql, val)
# timetruncate check
sql = '''select ts,timetruncate(ts,1u),
timetruncate(ts,1b),
timetruncate(ts,1m),
timetruncate(ts,1h),
timetruncate(ts,1w)
from t0 order by ts desc limit 1;'''
tdSql.query(sql)
tdSql.checkData(0,1, "2023-03-28 18:40:00.000009000")
tdSql.checkData(0,2, "2023-03-28 18:40:00.000009999")
tdSql.checkData(0,3, "2023-03-28 18:40:00.000000000")
tdSql.checkData(0,4, "2023-03-28 18:00:00.000000000")
tdSql.checkData(0,5, "2023-03-23 00:00:00.000000000")
# timediff
sql = '''select ts,timediff(ts,ts+1b,1b),
timediff(ts,ts+1u,1u),
timediff(ts,ts+1a,1a),
timediff(ts,ts+1s,1s),
timediff(ts,ts+1m,1m),
timediff(ts,ts+1h,1h),
timediff(ts,ts+1d,1d),
timediff(ts,ts+1w,1w)
from t0 order by ts desc limit 1;'''
tdSql.query(sql)
tdSql.checkData(0,1, -1)
tdSql.checkData(0,2, -1)
tdSql.checkData(0,3, -1)
tdSql.checkData(0,4, -1)
tdSql.checkData(0,5, -1)
tdSql.checkData(0,6, -1)
tdSql.checkData(0,7, -1)
tdSql.checkData(0,8, -1)
# init
def init(self, conn, logSql, replicaVar=1):
seed = time.time() % 10000
random.seed(seed)
self.replicaVar = int(replicaVar)
tdLog.debug(f"start to excute {__file__}")
tdSql.init(conn.cursor(), True)
# where
def checkWhere(self):
cnt = 300
start = self.ts - cnt
sql = f"select count(ts) from st where ts >= {start} and ts <= {self.ts}"
self.checkExpect(sql, cnt)
for i in range(50):
cnt = random.randint(1,40000)
base = 2000
start = self.ts - cnt - base
end = self.ts - base
sql = f"select count(ts) from st where ts >= {start} and ts < {end}"
self.checkExpect(sql, cnt)
# stream
def checkStream(self):
allRows = self.childCnt * self.childRow
# ensure write data is expected
sql = "select count(*) from (select diff(ts) as a from (select ts from st order by ts asc)) where a=1;"
self.checkExpect(sql, allRows - 1)
# stream count is ok
sql =f"select count(*) from sta"
cnt = int(allRows / 100) - 1 # last window is not closed, so reduce by one
self.checkExpect(sql, cnt)
# check fields
sql =f"select count(*) from sta where `count(ts)` != 100"
self.checkExpect(sql, 0)
# check timestamp
sql =f"select count(*) from (select diff(`_wstart`) from sta)"
self.checkExpect(sql, cnt - 1)
sql =f"select count(*) from (select diff(`_wstart`) as a from sta) where a != 100"
self.checkExpect(sql, 0)
# run
def run(self):
# prepare env
self.prepareEnv()
# time macro like 1w 1d 1h 1m 1s 1a 1u 1b
self.checkTimeMacro()
# check where
self.checkWhere()
# check stream
if platform.system().lower() != 'windows':
self.checkStream()
# stop
def stop(self):
tdSql.close()
tdLog.success(f"{__file__} successfully executed")
tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())


@ -0,0 +1,465 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
import random
import time
import copy
import string
import platform
import taos
from util.log import *
from util.cases import *
from util.sql import *
class TDTestCase:
# random string
def random_string(self, count):
letters = string.ascii_letters
return ''.join(random.choice(letters) for i in range(count))
# get col value and total max min ...
def getColsValue(self, i, j):
# c1 value
if random.randint(1, 10) == 5:
c1 = None
else:
c1 = 1
# c2 value
if j % 3200 == 0:
c2 = 8764231
elif random.randint(1, 10) == 5:
c2 = None
else:
c2 = random.randint(-87654297, 98765321)
value = f"({self.ts}, "
# c1
if c1 is None:
value += "null,"
else:
self.c1Cnt += 1
value += f"{c1},"
# c2
if c2 is None:
value += "null,"
else:
value += f"{c2},"
# total count
self.c2Cnt += 1
# max
if self.c2Max is None:
self.c2Max = c2
else:
if c2 > self.c2Max:
self.c2Max = c2
# min
if self.c2Min is None:
self.c2Min = c2
else:
if c2 < self.c2Min:
self.c2Min = c2
# sum
if self.c2Sum is None:
self.c2Sum = c2
else:
self.c2Sum += c2
# c3 same with ts
value += f"{self.ts})"
# move next
self.ts += 1
return value
# insert data
def insertData(self):
tdLog.info("insert data ....")
sqls = ""
for i in range(self.childCnt):
# insert child table
values = ""
pre_insert = f"insert into @db_name.t{i} values "
for j in range(self.childRow):
if values == "":
values = self.getColsValue(i, j)
else:
values += "," + self.getColsValue(i, j)
# batch insert
if j % self.batchSize == 0 and values != "":
sql = pre_insert + values
self.exeDouble(sql)
values = ""
# append last
if values != "":
sql = pre_insert + values
self.exeDouble(sql)
values = ""
# insert normal table
for i in range(20):
self.ts += 1000
name = self.random_string(20)
sql = f"insert into @db_name.ta values({self.ts}, {i}, {self.ts%100000}, '{name}', false)"
self.exeDouble(sql)
# insert finished
tdLog.info(f"insert data successfully.\n"
f" inserted child table = {self.childCnt}\n"
f" inserted child rows = {self.childRow}\n"
f" total inserted rows = {self.childCnt*self.childRow}\n")
return
def exeDouble(self, sql):
# dbname replace
sql1 = sql.replace("@db_name", self.db1)
if len(sql1) > 100:
tdLog.info(sql1[:100])
else:
tdLog.info(sql1)
tdSql.execute(sql1)
sql2 = sql.replace("@db_name", self.db2)
if len(sql2) > 100:
tdLog.info(sql2[:100])
else:
tdLog.info(sql2)
tdSql.execute(sql2)
# prepareEnv
def prepareEnv(self):
# init
self.ts = 1680000000000
self.childCnt = 4
self.childRow = 10000
self.batchSize = 50000
self.vgroups1 = 1
self.vgroups2 = 1
self.db1 = "db1"
self.db2 = "db2"
# total
self.c1Cnt = 0
self.c2Cnt = 0
self.c2Max = None
self.c2Min = None
self.c2Sum = None
# create database db wal_retention_period 0
sql = f"create database @db_name vgroups {self.vgroups1} replica {self.replicaVar} wal_retention_period 0 wal_retention_size 1"
self.exeDouble(sql)
# create super table st
sql = f"create table @db_name.st(ts timestamp, c1 int, c2 bigint, ts1 timestamp) tags(area int)"
self.exeDouble(sql)
# create child table
for i in range(self.childCnt):
sql = f"create table @db_name.t{i} using @db_name.st tags({i}) "
self.exeDouble(sql)
# create normal table
sql = f"create table @db_name.ta(ts timestamp, c1 int, c2 bigint, c3 binary(32), c4 bool)"
self.exeDouble(sql)
# insert data
self.insertData()
# update
self.ts = 1680000000000 + 20000
self.childRow = 1000
# delete data
sql = "delete from @db_name.st where ts > 1680000019000 and ts < 1680000062000"
self.exeDouble(sql)
sql = "delete from @db_name.st where ts > 1680000099000 and ts < 1680000170000"
self.exeDouble(sql)
# check data correct
def checkExpect(self, sql, expectVal):
tdSql.query(sql)
rowCnt = tdSql.getRows()
for i in range(rowCnt):
val = tdSql.getData(i,0)
if val != expectVal:
tdLog.exit(f"Not expect . query={val} expect={expectVal} i={i} sql={sql}")
return False
tdLog.info(f"check expect ok. sql={sql} expect ={expectVal} rowCnt={rowCnt}")
return True
# init
def init(self, conn, logSql, replicaVar=1):
seed = time.time() % 10000
random.seed(seed)
self.replicaVar = int(replicaVar)
tdLog.debug(f"start to excute {__file__}")
tdSql.init(conn.cursor(), True)
# check query result same
def queryDouble(self, sql):
# sql
sql1 = sql.replace('@db_name', self.db1)
tdLog.info(sql1)
start1 = time.time()
rows1 = tdSql.query(sql1,queryTimes=2)
spend1 = time.time() - start1
res1 = copy.deepcopy(tdSql.queryResult)
sql2 = sql.replace('@db_name', self.db2)
tdLog.info(sql2)
start2 = time.time()
tdSql.query(sql2,queryTimes=2)
spend2 = time.time() - start2
res2 = tdSql.queryResult
rowlen1 = len(res1)
rowlen2 = len(res2)
errCnt = 0
if rowlen1 != rowlen2:
tdLog.exit(f"both row count not equal. rowlen1={rowlen1} rowlen2={rowlen2} ")
return False
for i in range(rowlen1):
row1 = res1[i]
row2 = res2[i]
collen1 = len(row1)
collen2 = len(row2)
if collen1 != collen2:
tdLog.exit(f"both col count not equal. collen1={collen1} collen2={collen2}")
return False
for j in range(collen1):
if row1[j] != row2[j]:
tdLog.info(f"error both column value not equal. row={i} col={j} col1={row1[j]} col2={row2[j]} .")
errCnt += 1
if errCnt > 0:
tdLog.exit(f" db2 column value different with db2. different count ={errCnt} ")
# warning performance
diff = (spend2 - spend1)*100/spend1
tdLog.info("spend1=%.6fs spend2=%.6fs diff=%.1f%%"%(spend1, spend2, diff))
if spend2 > spend1 and diff > 20:
tdLog.info("warning: the diff for performance after spliting is over 20%")
return True
# check result
def checkResult(self):
# check vgroupid
sql = f"select vgroup_id from information_schema.ins_vgroups where db_name='{self.db2}'"
tdSql.query(sql,queryTimes=2)
tdSql.checkRows(self.vgroups2)
# check child table count same
sql = "select table_name from information_schema.ins_tables where db_name='@db_name' order by table_name"
self.queryDouble(sql)
# check row value is ok
sql = "select * from @db_name.st order by ts, tbname"
self.queryDouble(sql)
# where
sql = "select *,tbname from @db_name.st where c1 < 1000 order by ts, tbname"
self.queryDouble(sql)
# max
sql = "select max(c1) from @db_name.st"
self.queryDouble(sql)
# min
sql = "select min(c2) from @db_name.st"
self.queryDouble(sql)
# sum
sql = "select sum(c1) from @db_name.st"
self.queryDouble(sql)
# normal table
# count
sql = "select count(*) from @db_name.ta"
self.queryDouble(sql)
# all rows
sql = "select * from @db_name.ta"
self.queryDouble(sql)
# sum
sql = "select sum(c1) from @db_name.ta"
self.queryDouble(sql)
# get vgroup list
def getVGroup(self, db_name):
vgidList = []
sql = f"select vgroup_id from information_schema.ins_vgroups where db_name='{db_name}'"
res = tdSql.getResult(sql)
rows = len(res)
for i in range(rows):
vgidList.append(res[i][0])
return vgidList
# split vgroup on db2
def splitVGroup(self, db_name):
vgids = self.getVGroup(db_name)
selid = random.choice(vgids)
sql = f"split vgroup {selid}"
tdLog.info(sql)
tdSql.execute(sql)
# wait end
seconds = 300
for i in range(seconds):
sql ="show transactions;"
rows = tdSql.query(sql)
if rows == 0:
tdLog.info("split vgroup finished.")
return True
#tdLog.info(f"i={i} wait split vgroup ...")
time.sleep(1)
tdLog.exit(f"split vgroup transaction is not finished after executing {seconds}s")
return False
# split error
def expectSplitError(self, dbName):
vgids = self.getVGroup(dbName)
selid = random.choice(vgids)
sql = f"split vgroup {selid}"
tdLog.info(sql)
tdSql.error(sql)
# expect split ok
def expectSplitOk(self, dbName):
# split vgroup
vgList1 = self.getVGroup(dbName)
self.splitVGroup(dbName)
vgList2 = self.getVGroup(dbName)
vgNum1 = len(vgList1) + 1
vgNum2 = len(vgList2)
if vgNum1 != vgNum2:
tdLog.exit(f" vglist len={vgNum1} is not same for expect {vgNum2}")
return
# split empty database
def splitEmptyDB(self):
dbName = "emptydb"
vgNum = 2
# create database
sql = f"create database {dbName} vgroups {vgNum} replica {self.replicaVar }"
tdLog.info(sql)
tdSql.execute(sql)
# split vgroup
self.expectSplitOk(dbName)
# forbid
def checkForbid(self):
# stream
if platform.system().lower() != 'windows':
tdLog.info("check forbid split having stream...")
tdSql.execute("create database streamdb;")
tdSql.execute("use streamdb;")
tdSql.execute("create table ta(ts timestamp, age int);")
tdSql.execute("create stream ma into sta as select count(*) from ta interval(1s);")
self.expectSplitError("streamdb")
tdSql.execute("drop stream ma;")
self.expectSplitOk("streamdb")
# topic
tdLog.info("check forbid split having topic...")
tdSql.execute("create database topicdb wal_retention_period 10;")
tdSql.execute("use topicdb;")
tdSql.execute("create table ta(ts timestamp, age int);")
tdSql.execute("create topic toa as select * from ta;")
#self.expectSplitError("topicdb")
tdSql.execute("drop topic toa;")
self.expectSplitOk("topicdb")
# compact and check db2
def compactAndCheck(self):
tdLog.info("compact db2 and check result ...")
# compact
tdSql.execute(f"compact database {self.db2};")
# check result
self.checkResult()
# run
def run(self):
# prepare env
self.prepareEnv()
tdLog.info("generate at least two stt files of the same fileset (e.g. v4f1944) for db2 and db1 ")
for dbname in [self.db2, self.db1]:
tdSql.execute(f'insert into {dbname}.t1 values("2023-03-28 10:40:00.010",103,103,"2023-03-28 18:41:39.999") ;')
tdSql.execute(f'flush database {dbname}')
tdSql.execute(f'insert into {dbname}.t1 values("2023-03-28 10:40:00.100",103,103,"2023-03-28 18:41:39.999") ;')
tdSql.execute(f'flush database {dbname}')
tdSql.execute(f'insert into {dbname}.t1 values("2023-03-28 10:40:00.100",103,103,"2023-03-28 18:41:39.999") ;')
tdSql.execute(f'flush database {dbname}')
tdLog.info("check db1 and db2 same after creating ...")
self.checkResult()
for i in range(3):
# split vgroup on db2
start = time.time()
self.splitVGroup(self.db2)
end = time.time()
self.vgroups2 += 1
# insert the same data per table into the split vgroups
tdLog.info("insert the same data per table into the split vgroups(3,4)")
for dbname in [self.db2, self.db1]:
for tableid in range(self.childCnt):
tdSql.execute(f'insert into {dbname}.t{tableid} values("2023-03-28 10:40:00.100",103,103,"2023-03-28 18:41:39.999") ;')
tdSql.execute(f'flush database {dbname}')
tdSql.execute(f'insert into {dbname}.ta values("2023-03-28 10:40:00.100",103,103,"2023-03-28 18:41:39.999",0);')
tdSql.execute(f'flush database {dbname}')
# check two db query result same
self.checkResult()
spend = "%.3f"%(end-start)
tdLog.info(f"split vgroup i={i} passed. spend = {spend}s")
# split empty db
self.splitEmptyDB()
# check that topics and streams forbid split
self.checkForbid()
# compact database
self.compactAndCheck()
# stop
def stop(self):
tdSql.close()
tdLog.success(f"{__file__} successfully executed")
tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())

test/query/function/sin.py (new file, 515 lines)

@ -0,0 +1,515 @@
import taos
import sys
import datetime
import inspect
import math
from util.log import *
from util.sql import *
from util.cases import *
class TDTestCase:
def init(self, conn, logSql, replicaVar=1):
self.replicaVar = int(replicaVar)
tdLog.debug(f"start to excute {__file__}")
tdSql.init(conn.cursor())
def prepare_datas(self, dbname="db"):
tdSql.execute(
f'''create table {dbname}.stb1
(ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
tags (t1 int)
'''
)
tdSql.execute(
f'''
create table {dbname}.t1
(ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
'''
)
for i in range(4):
tdSql.execute(f'create table {dbname}.ct{i+1} using {dbname}.stb1 tags ( {i+1} )')
for i in range(9):
tdSql.execute(
f"insert into {dbname}.ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
)
tdSql.execute(
f"insert into {dbname}.ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
)
tdSql.execute(f"insert into {dbname}.ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar0', now()+8a )")
tdSql.execute(f"insert into {dbname}.ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
tdSql.execute(f"insert into {dbname}.ct1 values (now()+15s, 9, -99999, -999, -99, -9.99, NULL, 1, 'binary9', 'nchar9', now()+9a )")
tdSql.execute(f"insert into {dbname}.ct1 values (now()+20s, 9, -99999, -999, NULL, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
tdSql.execute(f"insert into {dbname}.ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
tdSql.execute(f"insert into {dbname}.ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
tdSql.execute(f"insert into {dbname}.ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
tdSql.execute(
f'''insert into {dbname}.t1 values
( '2020-04-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
( '2020-10-21 01:01:01.000', 1, 11111, 111, 11, 1.11, 11.11, 1, "binary1", "nchar1", now()+1a )
( '2020-12-31 01:01:01.000', 2, 22222, 222, 22, 2.22, 22.22, 0, "binary2", "nchar2", now()+2a )
( '2021-01-01 01:01:06.000', 3, 33333, 333, 33, 3.33, 33.33, 0, "binary3", "nchar3", now()+3a )
( '2021-05-07 01:01:10.000', 4, 44444, 444, 44, 4.44, 44.44, 1, "binary4", "nchar4", now()+4a )
( '2021-07-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
( '2021-09-30 01:01:16.000', 5, 55555, 555, 55, 5.55, 55.55, 0, "binary5", "nchar5", now()+5a )
( '2022-02-01 01:01:20.000', 6, 66666, 666, 66, 6.66, 66.66, 1, "binary6", "nchar6", now()+6a )
( '2022-10-28 01:01:26.000', 7, 00000, 000, 00, 0.00, 00.00, 1, "binary7", "nchar7", "1970-01-01 08:00:00.000" )
( '2022-12-01 01:01:30.000', 8, -88888, -888, -88, -8.88, -88.88, 0, "binary8", "nchar8", "1969-01-01 01:00:00.000" )
( '2022-12-31 01:01:36.000', 9, -99999999999999999, -999, -99, -9.99, -999999999999999999999.99, 1, "binary9", "nchar9", "1900-01-01 00:00:00.000" )
( '2023-02-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
'''
)
def check_result_auto_sin(self ,origin_query , pow_query):
pow_result = tdSql.getResult(pow_query)
origin_result = tdSql.getResult(origin_query)
auto_result =[]
for row in origin_result:
row_check = []
for elem in row:
if elem is None:
elem = None
else:
elem = math.sin(elem)
row_check.append(elem)
auto_result.append(row_check)
tdSql.query(pow_query)
for row_index , row in enumerate(pow_result):
for col_index , elem in enumerate(row):
tdSql.checkData(row_index ,col_index ,auto_result[row_index][col_index])
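# malformed uses of sin() (wrong arity, stray commas, bad aliases) that the parser must reject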
def test_errors(self, dbname="db"):
error_sql_lists = [
f"select sin from {dbname}.t1",
# f"select sin(-+--+c1 ) from {dbname}.t1",
# f"select +-sin(c1) from {dbname}.t1",
# f"select ++-sin(c1) from {dbname}.t1",
# f"select ++--sin(c1) from {dbname}.t1",
# f"select - -sin(c1)*0 from {dbname}.t1",
# f"select sin(tbname+1) from {dbname}.t1 ",
f"select sin(123--123)==1 from {dbname}.t1",
f"select sin(c1) as 'd1' from {dbname}.t1",
f"select sin(c1 ,c2) from {dbname}.t1",
f"select sin(c1 ,NULL ) from {dbname}.t1",
f"select sin(,) from {dbname}.t1;",
f"select sin(sin(c1) ab from {dbname}.t1)",
f"select sin(c1 ) as int from {dbname}.t1",
f"select sin from {dbname}.stb1",
# f"select sin(-+--+c1) from {dbname}.stb1",
# f"select +-sin(c1) from {dbname}.stb1",
# f"select ++-sin(c1) from {dbname}.stb1",
# f"select ++--sin(c1) from {dbname}.stb1",
# f"select - -sin(c1)*0 from {dbname}.stb1",
# f"select sin(tbname+1) from {dbname}.stb1 ",
f"select sin(123--123)==1 from {dbname}.stb1",
f"select sin(c1) as 'd1' from {dbname}.stb1",
f"select sin(c1 ,c2 ) from {dbname}.stb1",
f"select sin(c1 ,NULL) from {dbname}.stb1",
f"select sin(,) from {dbname}.stb1;",
f"select sin(sin(c1) ab from {dbname}.stb1)",
f"select sin(c1) as int from {dbname}.stb1"
]
for error_sql in error_sql_lists:
tdSql.error(error_sql)
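# sin() must reject timestamp, bool, binary and nchar columns (and unknown tables),
# while accepting all numeric column types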
def support_types(self, dbname="db"):
type_error_sql_lists = [
f"select sin(ts) from {dbname}.t1" ,
f"select sin(c7) from {dbname}.t1",
f"select sin(c8) from {dbname}.t1",
f"select sin(c9) from {dbname}.t1",
f"select sin(ts) from {dbname}.ct1" ,
f"select sin(c7) from {dbname}.ct1",
f"select sin(c8) from {dbname}.ct1",
f"select sin(c9) from {dbname}.ct1",
f"select sin(ts) from {dbname}.ct3" ,
f"select sin(c7) from {dbname}.ct3",
f"select sin(c8) from {dbname}.ct3",
f"select sin(c9) from {dbname}.ct3",
f"select sin(ts) from {dbname}.ct4" ,
f"select sin(c7) from {dbname}.ct4",
f"select sin(c8) from {dbname}.ct4",
f"select sin(c9) from {dbname}.ct4",
f"select sin(ts) from {dbname}.stb1" ,
f"select sin(c7) from {dbname}.stb1",
f"select sin(c8) from {dbname}.stb1",
f"select sin(c9) from {dbname}.stb1" ,
f"select sin(ts) from {dbname}.stbbb1" ,
f"select sin(c7) from {dbname}.stbbb1",
f"select sin(ts) from {dbname}.tbname",
f"select sin(c9) from {dbname}.tbname"
]
for type_sql in type_error_sql_lists:
tdSql.error(type_sql)
type_sql_lists = [
f"select sin(c1) from {dbname}.t1",
f"select sin(c2) from {dbname}.t1",
f"select sin(c3) from {dbname}.t1",
f"select sin(c4) from {dbname}.t1",
f"select sin(c5) from {dbname}.t1",
f"select sin(c6) from {dbname}.t1",
f"select sin(c1) from {dbname}.ct1",
f"select sin(c2) from {dbname}.ct1",
f"select sin(c3) from {dbname}.ct1",
f"select sin(c4) from {dbname}.ct1",
f"select sin(c5) from {dbname}.ct1",
f"select sin(c6) from {dbname}.ct1",
f"select sin(c1) from {dbname}.ct3",
f"select sin(c2) from {dbname}.ct3",
f"select sin(c3) from {dbname}.ct3",
f"select sin(c4) from {dbname}.ct3",
f"select sin(c5) from {dbname}.ct3",
f"select sin(c6) from {dbname}.ct3",
f"select sin(c1) from {dbname}.stb1",
f"select sin(c2) from {dbname}.stb1",
f"select sin(c3) from {dbname}.stb1",
f"select sin(c4) from {dbname}.stb1",
f"select sin(c5) from {dbname}.stb1",
f"select sin(c6) from {dbname}.stb1",
f"select sin(c6) as alisb from {dbname}.stb1",
f"select sin(c6) alisb from {dbname}.stb1",
]
for type_sql in type_sql_lists:
tdSql.query(type_sql)
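# basic sin() behaviour: empty child table, regular table, sub tables, nested sin() calls,
# mixing with other scalar functions, and rejection when mixed with aggregates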
def basic_sin_function(self, dbname="db"):
# basic query
tdSql.query(f"select c1 from {dbname}.ct3")
tdSql.checkRows(0)
tdSql.query(f"select c1 from {dbname}.t1")
tdSql.checkRows(12)
tdSql.query(f"select c1 from {dbname}.stb1")
tdSql.checkRows(25)
# used for empty table , ct3 is empty
tdSql.query(f"select sin(c1) from {dbname}.ct3")
tdSql.checkRows(0)
tdSql.query(f"select sin(c2) from {dbname}.ct3")
tdSql.checkRows(0)
tdSql.query(f"select sin(c3) from {dbname}.ct3")
tdSql.checkRows(0)
tdSql.query(f"select sin(c4) from {dbname}.ct3")
tdSql.checkRows(0)
tdSql.query(f"select sin(c5) from {dbname}.ct3")
tdSql.checkRows(0)
tdSql.query(f"select sin(c6) from {dbname}.ct3")
tdSql.checkRows(0)
# # used for regular table
tdSql.query(f"select sin(c1) from {dbname}.t1")
tdSql.checkData(0, 0, None)
tdSql.checkData(1 , 0, 0.841470985)
tdSql.checkData(3 , 0, 0.141120008)
tdSql.checkData(5 , 0, None)
tdSql.query(f"select c1, c2, c3 , c4, c5 from {dbname}.t1")
tdSql.checkData(1, 4, 1.11000)
tdSql.checkData(3, 3, 33)
tdSql.checkData(5, 4, None)
tdSql.query(f"select ts,c1, c2, c3 , c4, c5 from {dbname}.t1")
tdSql.checkData(1, 5, 1.11000)
tdSql.checkData(3, 4, 33)
tdSql.checkData(5, 5, None)
self.check_result_auto_sin( f"select abs(c1), abs(c2), abs(c3) , abs(c4), abs(c5) from {dbname}.t1", f"select sin(abs(c1)), sin(abs(c2)) ,sin(abs(c3)), sin(abs(c4)), sin(abs(c5)) from {dbname}.t1")
# used for sub table
tdSql.query(f"select c2 ,sin(c2) from {dbname}.ct1")
tdSql.checkData(0, 1, -0.220708349)
tdSql.checkData(1 , 1, -0.556921845)
tdSql.checkData(3 , 1, -0.798311364)
tdSql.checkData(4 , 1, 0.000000000)
tdSql.query(f"select c1, c5 ,sin(c5) from {dbname}.ct4")
tdSql.checkData(0 , 2, None)
tdSql.checkData(1 , 2, 0.518228108)
tdSql.checkData(2 , 2, 0.996475613)
tdSql.checkData(3 , 2, 0.367960369)
tdSql.checkData(5 , 2, None)
self.check_result_auto_sin( f"select c1, c2, c3 , c4, c5 from {dbname}.ct1", f"select sin(c1), sin(c2) ,sin(c3), sin(c4), sin(c5) from {dbname}.ct1")
# nest query for sin functions
tdSql.query(f"select c4 , sin(c4) ,sin(sin(c4)) , sin(sin(sin(c4))) from {dbname}.ct1;")
tdSql.checkData(0 , 0 , 88)
tdSql.checkData(0 , 1 , 0.035398303)
tdSql.checkData(0 , 2 , 0.035390911)
tdSql.checkData(0 , 3 , 0.035383523)
tdSql.checkData(1 , 0 , 77)
tdSql.checkData(1 , 1 , 0.999520159)
tdSql.checkData(1 , 2 , 0.841211629)
tdSql.checkData(1 , 3 , 0.745451290)
tdSql.checkData(11 , 0 , -99)
tdSql.checkData(11 , 1 , 0.999206834)
tdSql.checkData(11 , 2 , 0.841042171)
tdSql.checkData(11 , 3 , 0.745338326)
# used for stable table
tdSql.query(f"select sin(c1) from {dbname}.stb1")
tdSql.checkRows(25)
# used for not exists table
tdSql.error(f"select sin(c1) from {dbname}.stbbb1")
tdSql.error(f"select sin(c1) from {dbname}.tbname")
tdSql.error(f"select sin(c1) from {dbname}.ct5")
# mix with common col
tdSql.query(f"select c1, sin(c1) from {dbname}.ct1")
tdSql.query(f"select c2, sin(c2) from {dbname}.ct4")
# mix with common functions
tdSql.query(f"select c1, sin(c1),sin(c1), sin(sin(c1)) from {dbname}.ct4 ")
tdSql.checkData(0 , 0 ,None)
tdSql.checkData(0 , 1 ,None)
tdSql.checkData(0 , 2 ,None)
tdSql.checkData(0 , 3 ,None)
tdSql.checkData(3 , 0 , 6)
tdSql.checkData(3 , 1 ,-0.279415498)
tdSql.checkData(3 , 2 ,-0.279415498)
tdSql.checkData(3 , 3 ,-0.275793863)
tdSql.query(f"select c1, sin(c1),c5, floor(c5) from {dbname}.stb1 ")
# # mix with agg functions , not support
tdSql.error(f"select c1, sin(c1),c5, count(c5) from {dbname}.stb1 ")
tdSql.error(f"select c1, sin(c1),c5, count(c5) from {dbname}.ct1 ")
tdSql.error(f"select sin(c1), count(c5) from {dbname}.stb1 ")
tdSql.error(f"select sin(c1), count(c5) from {dbname}.ct1 ")
tdSql.error(f"select c1, count(c5) from {dbname}.ct1 ")
tdSql.error(f"select c1, count(c5) from {dbname}.stb1 ")
# agg functions mix with agg functions
tdSql.query(f"select max(c5), count(c5) from {dbname}.stb1")
tdSql.query(f"select max(c5), count(c5) from {dbname}.ct1")
# # bug fix for compute
tdSql.query(f"select c1, sin(c1) -0 ,sin(c1-4)-0 from {dbname}.ct4 ")
tdSql.checkData(0, 0, None)
tdSql.checkData(0, 1, None)
tdSql.checkData(0, 2, None)
tdSql.checkData(1, 0, 8)
tdSql.checkData(1, 1, 0.989358247)
tdSql.checkData(1, 2, -0.756802495)
tdSql.query(f"select c1, sin(c1) -0 ,sin(c1-0.1)-0.1 from {dbname}.ct4")
tdSql.checkData(0, 0, None)
tdSql.checkData(0, 1, None)
tdSql.checkData(0, 2, None)
tdSql.checkData(1, 0, 8)
tdSql.checkData(1, 1, 0.989358247)
tdSql.checkData(1, 2, 0.898941342)
tdSql.query(f"select c1, sin(c1), c2, sin(c2), c3, sin(c3) from {dbname}.ct1")
def test_big_number(self, dbname="db"):
tdSql.query(f"select c1, sin(100000000) from {dbname}.ct1") # bigint to double data overflow
tdSql.checkData(4, 1, math.sin(100000000))
tdSql.query(f"select c1, sin(10000000000000) from {dbname}.ct1") # bigint to double data overflow
tdSql.checkData(4, 1, math.sin(10000000000000))
tdSql.query(f"select c1, sin(10000000000000000000000000) from {dbname}.ct1") # bigint to double data overflow
tdSql.query(f"select c1, sin(10000000000000000000000000.0) from {dbname}.ct1") # 10000000000000000000000000.0 is a double value
tdSql.checkData(1, 1, math.sin(10000000000000000000000000.0))
tdSql.query(f"select c1, sin(10000000000000000000000000000000000) from {dbname}.ct1") # bigint to double data overflow
tdSql.query(f"select c1, sin(10000000000000000000000000000000000.0) from {dbname}.ct1") # 10000000000000000000000000.0 is a double value
tdSql.checkData(4, 1, math.sin(10000000000000000000000000000000000.0))
tdSql.query(f"select c1, sin(10000000000000000000000000000000000000000) from {dbname}.ct1") # bigint to double data overflow
tdSql.query(f"select c1, sin(10000000000000000000000000000000000000000.0) from {dbname}.ct1") # 10000000000000000000000000.0 is a double value
tdSql.checkData(4, 1, math.sin(10000000000000000000000000000000000000000.0))
tdSql.query(f"select c1, sin(10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000) from {dbname}.ct1") # bigint to double data overflow
def abs_func_filter(self, dbname="db"):
tdSql.query(f"select c1, abs(c1) -0 ,ceil(c1-0.1)-0 ,floor(c1+0.1)-0.1 ,ceil(sin(c1)-0.5) from {dbname}.ct4 where c1>5 ")
tdSql.checkRows(3)
tdSql.checkData(0,0,8)
tdSql.checkData(0,1,8.000000000)
tdSql.checkData(0,2,8.000000000)
tdSql.checkData(0,3,7.900000000)
tdSql.checkData(0,4,1.000000000)
tdSql.query(f"select c1, abs(c1) -0 ,ceil(c1-0.1)-0 ,floor(c1+0.1)-0.1 ,ceil(sin(c1)-0.5) from {dbname}.ct4 where c1=5 ")
tdSql.checkRows(1)
tdSql.checkData(0,0,5)
tdSql.checkData(0,1,5.000000000)
tdSql.checkData(0,2,5.000000000)
tdSql.checkData(0,3,4.900000000)
tdSql.checkData(0,4,-1.000000000)
tdSql.query(f"select c1,c2 , abs(c1) -0 ,ceil(c1-0.1)-0 ,floor(c1+0.1)-0.1 ,ceil(sin(c1)-0.5) from {dbname}.ct4 where c1=sin(c1) limit 1 ")
tdSql.checkRows(1)
tdSql.checkData(0,0,0)
tdSql.checkData(0,1,0)
tdSql.checkData(0,2,0.000000000)
tdSql.checkData(0,3,0.000000000)
tdSql.checkData(0,4,-0.100000000)
tdSql.checkData(0,5,0.000000000)
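# boundary values: insert the extremes of each integer/float/double type, verify that
# one-past-the-limit rows are rejected, then check sin() over the stored extremes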
def check_boundary_values(self, dbname="testdb"):
PI = 3.1415926
tdSql.execute(f"drop database if exists {dbname}")
tdSql.execute(f"create database if not exists {dbname}")
tdSql.execute(
f"create table {dbname}.stb_bound (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(32),c9 nchar(32), c10 timestamp) tags (t1 int);"
)
tdSql.execute(f'create table {dbname}.sub1_bound using {dbname}.stb_bound tags ( 1 )')
tdSql.execute(
f"insert into {dbname}.sub1_bound values ( now()-10s, 2147483647, 9223372036854775807, 32767, 127, 3.40E+38, 1.7e+308, True, 'binary_tb1', 'nchar_tb1', now() )"
)
tdSql.execute(
f"insert into {dbname}.sub1_bound values ( now()-5s, -2147483647, -9223372036854775807, -32767, -127, -3.40E+38, -1.7e+308, True, 'binary_tb1', 'nchar_tb1', now() )"
)
tdSql.execute(
f"insert into {dbname}.sub1_bound values ( now(), 2147483646, 9223372036854775806, 32766, 126, 3.40E+38, 1.7e+308, True, 'binary_tb1', 'nchar_tb1', now() )"
)
tdSql.execute(
f"insert into {dbname}.sub1_bound values ( now()+5s, -2147483646, -9223372036854775806, -32766, -126, -3.40E+38, -1.7e+308, True, 'binary_tb1', 'nchar_tb1', now() )"
)
tdSql.error(
f"insert into {dbname}.sub1_bound values ( now()+10s, 2147483648, 9223372036854775808, 32768, 128, 3.40E+38, 1.7e+308, True, 'binary_tb1', 'nchar_tb1', now() )"
)
self.check_result_auto_sin( f"select abs(c1), abs(c2), abs(c3) , abs(c4) from {dbname}.sub1_bound ", f"select sin(abs(c1)), sin(abs(c2)) ,sin(abs(c3)), sin(abs(c4)) from {dbname}.sub1_bound")
self.check_result_auto_sin( f"select c1, c2, c3 , c3, c2 ,c1 from {dbname}.sub1_bound ", f"select sin(c1), sin(c2) ,sin(c3), sin(c3), sin(c2) ,sin(c1) from {dbname}.sub1_bound")
self.check_result_auto_sin(f"select abs(abs(abs(abs(abs(abs(abs(abs(abs(c1))))))))) nest_col_func from {dbname}.sub1_bound" , f"select sin(abs(c1)) from {dbname}.sub1_bound" )
# check basic elem for table per row
tdSql.query(f"select sin(abs(c1)) ,sin(abs(c2)) , sin(abs(c3)) , sin(abs(c4)), sin(abs(c5)), sin(abs(c6)) from {dbname}.sub1_bound ")
tdSql.checkData(0,0,math.sin(2147483647))
tdSql.checkData(0,1,math.sin(9223372036854775807))
tdSql.checkData(0,2,math.sin(32767))
tdSql.checkData(0,3,math.sin(127))
tdSql.checkData(0,4,math.sin(339999995214436424907732413799364296704.00000))
tdSql.checkData(1,0,math.sin(2147483647))
tdSql.checkData(1,1,math.sin(9223372036854775807))
tdSql.checkData(1,2,math.sin(32767))
tdSql.checkData(1,3,math.sin(127))
tdSql.checkData(1,4,math.sin(339999995214436424907732413799364296704.00000))
tdSql.checkData(3,0,math.sin(2147483646))
tdSql.checkData(3,1,math.sin(9223372036854775806))
tdSql.checkData(3,2,math.sin(32766))
tdSql.checkData(3,3,math.sin(126))
tdSql.checkData(3,4,math.sin(339999995214436424907732413799364296704.00000))
# check + - * / in functions
tdSql.query(f"select sin(abs(c1+1)) ,sin(abs(c2)) , sin(abs(c3*1)) , sin(abs(c4/2)), sin(abs(c5))/2, sin(abs(c6)) from {dbname}.sub1_bound ")
tdSql.checkData(0,0,math.sin(2147483648.000000000))
tdSql.checkData(0,1,math.sin(9223372036854775807))
tdSql.checkData(0,2,math.sin(32767.000000000))
tdSql.checkData(0,3,math.sin(63.500000000))
tdSql.execute(f"create stable {dbname}.st (ts timestamp, num1 float, num2 double) tags (t1 int);")
tdSql.execute(f'create table {dbname}.tb1 using {dbname}.st tags (1)')
tdSql.execute(f'create table {dbname}.tb2 using {dbname}.st tags (2)')
tdSql.execute(f'create table {dbname}.tb3 using {dbname}.st tags (3)')
tdSql.execute(f'insert into {dbname}.tb1 values (now()-40s, {PI/2}, {PI/2})')
tdSql.execute(f'insert into {dbname}.tb1 values (now()-30s, {PI}, {PI})')
tdSql.execute(f'insert into {dbname}.tb1 values (now()-20s, {PI*1.5}, {PI*1.5})')
tdSql.execute(f'insert into {dbname}.tb1 values (now()-10s, {PI*2}, {PI*2})')
tdSql.execute(f'insert into {dbname}.tb1 values (now(), {PI*2.5}, {PI*2.5})')
tdSql.execute(f'insert into {dbname}.tb2 values (now()-40s, {PI/2}, {PI/2})')
tdSql.execute(f'insert into {dbname}.tb2 values (now()-30s, {PI}, {PI})')
tdSql.execute(f'insert into {dbname}.tb2 values (now()-20s, {PI*1.5}, {PI*1.5})')
tdSql.execute(f'insert into {dbname}.tb2 values (now()-10s, {PI*2}, {PI*2})')
tdSql.execute(f'insert into {dbname}.tb2 values (now(), {PI*2.5}, {PI*2.5})')
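# tb3 was created but never filled, so this exercises the checker on an empty result set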
self.check_result_auto_sin(f"select num1,num2 from {dbname}.tb3;" , f"select sin(num1),sin(num2) from {dbname}.tb3")
def support_super_table_test(self, dbname="db"):
self.check_result_auto_sin( f"select c5 from {dbname}.stb1 order by ts " , f"select sin(c5) from {dbname}.stb1 order by ts" )
self.check_result_auto_sin( f"select c5 from {dbname}.stb1 order by tbname " , f"select sin(c5) from {dbname}.stb1 order by tbname" )
self.check_result_auto_sin( f"select c5 from {dbname}.stb1 where c1 > 0 order by tbname " , f"select sin(c5) from {dbname}.stb1 where c1 > 0 order by tbname" )
self.check_result_auto_sin( f"select c5 from {dbname}.stb1 where c1 > 0 order by tbname " , f"select sin(c5) from {dbname}.stb1 where c1 > 0 order by tbname" )
self.check_result_auto_sin( f"select t1,c5 from {dbname}.stb1 order by ts " , f"select sin(t1), sin(c5) from {dbname}.stb1 order by ts" )
self.check_result_auto_sin( f"select t1,c5 from {dbname}.stb1 order by tbname " , f"select sin(t1) ,sin(c5) from {dbname}.stb1 order by tbname" )
self.check_result_auto_sin( f"select t1,c5 from {dbname}.stb1 where c1 > 0 order by tbname " , f"select sin(t1) ,sin(c5) from {dbname}.stb1 where c1 > 0 order by tbname" )
self.check_result_auto_sin( f"select t1,c5 from {dbname}.stb1 where c1 > 0 order by tbname " , f"select sin(t1) , sin(c5) from {dbname}.stb1 where c1 > 0 order by tbname" )
def run(self): # sourcery skip: extract-duplicate-method, remove-redundant-fstring
tdSql.prepare()
tdLog.printNoPrefix("==========step1:create table ==============")
self.prepare_datas()
tdLog.printNoPrefix("==========step2:test errors ==============")
self.test_errors()
tdLog.printNoPrefix("==========step3:support types ============")
self.support_types()
tdLog.printNoPrefix("==========step4: sin basic query ============")
self.basic_sin_function()
tdLog.printNoPrefix("==========step5: sin filter query ============")
self.abs_func_filter()
tdLog.printNoPrefix("==========step6: big number sin query ============")
self.test_big_number()
tdLog.printNoPrefix("==========step7: sin boundary query ============")
self.check_boundary_values()
tdLog.printNoPrefix("==========step8: check sin result of stable query ============")
self.support_super_table_test()
def stop(self):
tdSql.close()
tdLog.success(f"{__file__} successfully executed")
tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())

87  test/query/hint/hint.py  Normal file

@@ -0,0 +1,87 @@
from util.log import *
from util.cases import *
from util.sql import *
class TDTestCase:
def init(self, conn, logSql, replicaVar=1):
self.replicaVar = int(replicaVar)
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor())
self.rowNum = 10
self.batchNum = 5
self.ts = 1537146000000
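# Every query below carries a /*+ ... */ hint comment (valid, empty, duplicated or malformed);
# the expectation encoded here is that unknown or unparsable hints are ignored, so each
# variant must still return the same 3 windowed rows.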
def run(self):
dbname = "db"
tdSql.prepare()
tdSql.execute(f'''create table sta(ts timestamp, col1 int, col2 bigint) tags(tg1 int, tg2 binary(20))''')
tdSql.execute(f"create table sta1 using sta tags(1, 'a')")
tdSql.execute(f"create table sta2 using sta tags(2, 'b')")
tdSql.execute(f"create table sta3 using sta tags(3, 'c')")
tdSql.execute(f"create table sta4 using sta tags(4, 'a')")
tdSql.execute(f"insert into sta1 values(1537146000001, 11, 110)")
tdSql.execute(f"insert into sta1 values(1537146000002, 12, 120)")
tdSql.execute(f"insert into sta1 values(1537146000003, 13, 130)")
tdSql.execute(f"insert into sta2 values(1537146000001, 21, 210)")
tdSql.execute(f"insert into sta2 values(1537146000002, 22, 220)")
tdSql.execute(f"insert into sta2 values(1537146000003, 23, 230)")
tdSql.execute(f"insert into sta3 values(1537146000001, 31, 310)")
tdSql.execute(f"insert into sta3 values(1537146000002, 32, 320)")
tdSql.execute(f"insert into sta3 values(1537146000003, 33, 330)")
tdSql.execute(f"insert into sta4 values(1537146000001, 41, 410)")
tdSql.execute(f"insert into sta4 values(1537146000002, 42, 420)")
tdSql.execute(f"insert into sta4 values(1537146000003, 43, 430)")
tdSql.execute(f'''create table stb(ts timestamp, col1 int, col2 bigint) tags(tg1 int, tg2 binary(20))''')
tdSql.execute(f"create table stb1 using stb tags(1, 'a')")
tdSql.execute(f"create table stb2 using stb tags(2, 'b')")
tdSql.execute(f"create table stb3 using stb tags(3, 'c')")
tdSql.execute(f"create table stb4 using stb tags(4, 'a')")
tdSql.execute(f"insert into stb1 values(1537146000001, 911, 9110)")
tdSql.execute(f"insert into stb1 values(1537146000002, 912, 9120)")
tdSql.execute(f"insert into stb1 values(1537146000003, 913, 9130)")
tdSql.execute(f"insert into stb2 values(1537146000001, 921, 9210)")
tdSql.execute(f"insert into stb2 values(1537146000002, 922, 9220)")
tdSql.execute(f"insert into stb2 values(1537146000003, 923, 9230)")
tdSql.execute(f"insert into stb3 values(1537146000001, 931, 9310)")
tdSql.execute(f"insert into stb3 values(1537146000002, 932, 9320)")
tdSql.execute(f"insert into stb3 values(1537146000003, 933, 9330)")
tdSql.execute(f"insert into stb4 values(1537146000001, 941, 9410)")
tdSql.execute(f"insert into stb4 values(1537146000002, 942, 9420)")
tdSql.execute(f"insert into stb4 values(1537146000003, 943, 9430)")
tdSql.query(f"select /*+ batch_scan() */ count(*) from sta a, stb b where a.tg1=b.tg1 and a.ts=b.ts and b.tg2 > 'a' interval(1a);")
tdSql.checkRows(3)
tdSql.query(f"select /*+ no_batch_scan() */ count(*) from sta a, stb b where a.tg1=b.tg1 and a.ts=b.ts and b.tg2 > 'a' interval(1a);")
tdSql.checkRows(3)
tdSql.query(f"select /*+ batch_scan(a) */ count(*) from sta a, stb b where a.tg1=b.tg1 and a.ts=b.ts and b.tg2 > 'a' interval(1a);")
tdSql.checkRows(3)
tdSql.query(f"select /*+ batch_scan(a,) */ count(*) from sta a, stb b where a.tg1=b.tg1 and a.ts=b.ts and b.tg2 > 'a' interval(1a);")
tdSql.checkRows(3)
tdSql.query(f"select /*+ a,a */ count(*) from sta a, stb b where a.tg1=b.tg1 and a.ts=b.ts and b.tg2 > 'a' interval(1a);")
tdSql.checkRows(3)
tdSql.query(f"select /*+*/ count(*) from sta a, stb b where a.tg1=b.tg1 and a.ts=b.ts and b.tg2 > 'a' interval(1a);")
tdSql.checkRows(3)
tdSql.query(f"select /*+ batch_scan(),no_batch_scan() */ count(*) from sta a, stb b where a.tg1=b.tg1 and a.ts=b.ts and b.tg2 > 'a' interval(1a);")
tdSql.checkRows(3)
tdSql.query(f"select /*+ no_batch_scan() batch_scan() */ count(*) from sta a, stb b where a.tg1=b.tg1 and a.ts=b.ts and b.tg2 > 'a' interval(1a);")
tdSql.checkRows(3)
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())

515  test/query/join/join.py  Normal file

@@ -0,0 +1,515 @@
import datetime
from dataclasses import dataclass, field
from typing import List, Any, Tuple
from util.log import *
from util.sql import *
from util.cases import *
from util.dnodes import *
PRIMARY_COL = "ts"
INT_COL = "c_int"
BINT_COL = "c_bint"
SINT_COL = "c_sint"
TINT_COL = "c_tint"
FLOAT_COL = "c_float"
DOUBLE_COL = "c_double"
BOOL_COL = "c_bool"
TINT_UN_COL = "c_utint"
SINT_UN_COL = "c_usint"
BINT_UN_COL = "c_ubint"
INT_UN_COL = "c_uint"
BINARY_COL = "c_binary"
NCHAR_COL = "c_nchar"
TS_COL = "c_ts"
NUM_COL = [INT_COL, BINT_COL, SINT_COL, TINT_COL, FLOAT_COL, DOUBLE_COL, ]
CHAR_COL = [BINARY_COL, NCHAR_COL, ]
BOOLEAN_COL = [BOOL_COL, ]
TS_TYPE_COL = [TS_COL, ]
INT_TAG = "t_int"
ALL_COL = [PRIMARY_COL, INT_COL, BINT_COL, SINT_COL, TINT_COL, FLOAT_COL, DOUBLE_COL, BINARY_COL, NCHAR_COL, BOOL_COL, TS_COL]
TAG_COL = [INT_TAG]
# insert data args
TIME_STEP = 10000
NOW = int(datetime.datetime.timestamp(datetime.datetime.now()) * 1000)
# init db/table
DBNAME = "db"
STBNAME = f"{DBNAME}.stb1"
CTBNAME = f"{DBNAME}.ct1"
NTBNAME = f"{DBNAME}.nt1"
@dataclass
class DataSet:
ts_data : List[int] = field(default_factory=list)
int_data : List[int] = field(default_factory=list)
bint_data : List[int] = field(default_factory=list)
sint_data : List[int] = field(default_factory=list)
tint_data : List[int] = field(default_factory=list)
int_un_data : List[int] = field(default_factory=list)
bint_un_data: List[int] = field(default_factory=list)
sint_un_data: List[int] = field(default_factory=list)
tint_un_data: List[int] = field(default_factory=list)
float_data : List[float] = field(default_factory=list)
double_data : List[float] = field(default_factory=list)
bool_data : List[int] = field(default_factory=list)
binary_data : List[str] = field(default_factory=list)
nchar_data : List[str] = field(default_factory=list)
class TDTestCase:
def init(self, conn, logSql, replicaVar=1):
self.replicaVar = int(replicaVar)
tdLog.debug(f"start to excute {__file__}")
tdSql.init(conn.cursor(), True)
def __query_condition(self,tbname):
query_condition = []
for char_col in CHAR_COL:
query_condition.extend(
(
f"{tbname}.{char_col}",
# f"upper( {tbname}.{char_col} )",
)
)
query_condition.extend( f"cast( {tbname}.{un_char_col} as binary(16) ) " for un_char_col in NUM_COL)
for num_col in NUM_COL:
query_condition.extend(
(
f"sin( {tbname}.{num_col} )",
)
)
query_condition.extend( f"{tbname}.{num_col} + {tbname}.{num_col_1} " for num_col_1 in NUM_COL )
query_condition.append(''' "test1234!@#$%^&*():'><?/.,][}{" ''')
return query_condition
def __join_condition(self, tb_list, filter=PRIMARY_COL, INNER=False, alias_tb1="tb1", alias_tb2="tb2"):
table_reference = tb_list[0]
join_condition = table_reference
join = "inner join" if INNER else "join"
for i in range(len(tb_list[1:])):
join_condition += f" as {alias_tb1} {join} {tb_list[i+1]} as {alias_tb2} on {alias_tb1}.{filter}={alias_tb2}.{filter}"
return join_condition
def __where_condition(self, col=None, tbname=None, query_condition=None):
    if query_condition and isinstance(query_condition, str):
        if query_condition.startswith("count"):
            query_condition = query_condition[6:-1]
        elif query_condition.startswith("max"):
            query_condition = query_condition[4:-1]
        elif query_condition.startswith("sum"):
            query_condition = query_condition[4:-1]
        elif query_condition.startswith("min"):
            query_condition = query_condition[4:-1]
    if query_condition:
        return f" where {query_condition} is not null"
if col in NUM_COL:
return f" where abs( {tbname}.{col} ) >= 0"
if col in CHAR_COL:
return f" where lower( {tbname}.{col} ) like 'bina%' or lower( {tbname}.{col} ) like '_cha%' "
if col in BOOLEAN_COL:
return f" where {tbname}.{col} in (false, true) "
if col in TS_TYPE_COL or col in PRIMARY_COL:
return f" where cast( {tbname}.{col} as binary(16) ) is not null "
return ""
def __group_condition(self, col, having = None):
if isinstance(col, str):
if col.startswith("count"):
col = col[6:-1]
elif col.startswith("max"):
col = col[4:-1]
elif col.startswith("sum"):
col = col[4:-1]
elif col.startswith("min"):
col = col[4:-1]
return f" group by {col} having {having}" if having else f" group by {col} "
def __gen_sql(self, select_clause, from_clause, where_condition="", group_condition=""):
if isinstance(select_clause, str) and "on" not in from_clause and select_clause.split(".")[0] != from_clause.split(".")[0]:
return
return f"select {select_clause} from {from_clause} {where_condition} {group_condition}"
@property
def __join_tblist(self, dbname=DBNAME):
return [
# ["ct1", "ct2"],
[f"{dbname}.ct1", f"{dbname}.ct4"],
[f"{dbname}.ct1", f"{dbname}.nt1"],
# ["ct2", "ct4"],
# ["ct2", "nt1"],
# ["ct4", "nt1"],
# ["ct1", "ct2", "ct4"],
# ["ct1", "ct2", "nt1"],
# ["ct1", "ct4", "nt1"],
# ["ct2", "ct4", "nt1"],
# ["ct1", "ct2", "ct4", "nt1"],
]
@property
def __sqls_list(self):
sqls = []
__join_tblist = self.__join_tblist
for join_tblist in __join_tblist:
alias_tb = "tb1"
# for join_tb in join_tblist:
select_claus_list = self.__query_condition(alias_tb)
for select_claus in select_claus_list:
group_claus = self.__group_condition( col=select_claus)
where_claus = self.__where_condition(query_condition=select_claus)
having_claus = self.__group_condition( col=select_claus, having=f"{select_claus} is not null" )
sqls.extend(
(
# self.__gen_sql(select_claus, self.__join_condition(join_tblist, alias_tb1=alias_tb), where_claus, group_claus),
self.__gen_sql(select_claus, self.__join_condition(join_tblist, alias_tb1=alias_tb), where_claus, having_claus),
# self.__gen_sql(select_claus, self.__join_condition(join_tblist, alias_tb1=alias_tb), where_claus),
# self.__gen_sql(select_claus, self.__join_condition(join_tblist, alias_tb1=alias_tb), group_claus),
# self.__gen_sql(select_claus, self.__join_condition(join_tblist, alias_tb1=alias_tb), having_claus),
# self.__gen_sql(select_claus, self.__join_condition(join_tblist, alias_tb1=alias_tb)),
# self.__gen_sql(select_claus, self.__join_condition(join_tblist, INNER=True, alias_tb1=alias_tb), where_claus, group_claus),
self.__gen_sql(select_claus, self.__join_condition(join_tblist, INNER=True, alias_tb1=alias_tb), where_claus, having_claus),
# self.__gen_sql(select_claus, self.__join_condition(join_tblist, INNER=True, alias_tb1=alias_tb), where_claus, ),
# self.__gen_sql(select_claus, self.__join_condition(join_tblist, INNER=True, alias_tb1=alias_tb), having_claus ),
# self.__gen_sql(select_claus, self.__join_condition(join_tblist, INNER=True, alias_tb1=alias_tb), group_claus ),
# self.__gen_sql(select_claus, self.__join_condition(join_tblist, INNER=True, alias_tb1=alias_tb) ),
)
)
return list(filter(None, sqls))
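# every SQL generated above must at least parse and execute; row contents are not asserted here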
def __join_check(self,):
tdLog.printNoPrefix("==========current sql condition check , must return query ok==========")
for i in range(len(self.__sqls_list)):
tdSql.query(self.__sqls_list[i])
# if i % 10 == 0 :
# tdLog.success(f"{i} sql is already executed success !")
def __join_check_old(self, tblist, checkrows, join_flag=True):
query_conditions = self.__query_condition(tblist[0])
join_condition = self.__join_condition(tb_list=tblist) if join_flag else " "
for condition in query_conditions:
where_condition = self.__where_condition(col=condition, tbname=tblist[0])
group_having = self.__group_condition(col=condition, having=f"{condition} is not null " )
group_no_having= self.__group_condition(col=condition )
groups = ["", group_having, group_no_having]
for group_condition in groups:
if where_condition:
sql = f" select {condition} from {tblist[0]},{tblist[1]} where {join_condition} and {where_condition} {group_condition} "
else:
sql = f" select {condition} from {tblist[0]},{tblist[1]} where {join_condition} {group_condition} "
if not join_flag :
tdSql.error(sql=sql)
break
if len(tblist) == 2:
if "ct1" in tblist or "nt1" in tblist:
self.__join_current(sql, checkrows)
elif where_condition or "not null" in group_condition:
self.__join_current(sql, checkrows + 2 )
elif group_condition:
self.__join_current(sql, checkrows + 3 )
else:
self.__join_current(sql, checkrows + 5 )
if len(tblist) > 2 or len(tblist) < 1:
tdSql.error(sql=sql)
def __join_current(self, sql, checkrows):
tdSql.query(sql=sql)
# tdSql.checkRows(checkrows)
def __test_error(self, dbname=DBNAME):
# sourcery skip: extract-duplicate-method, move-assign-in-block
tdLog.printNoPrefix("==========err sql condition check , must return error==========")
err_list_1 = [f"{dbname}.ct1", f"{dbname}.ct2", f"{dbname}.ct4"]
err_list_2 = [f"{dbname}.ct1", f"{dbname}.ct2", f"{dbname}.nt1"]
err_list_3 = [f"{dbname}.ct1", f"{dbname}.ct4", f"{dbname}.nt1"]
err_list_4 = [f"{dbname}.ct2", f"{dbname}.ct4", f"{dbname}.nt1"]
err_list_5 = [f"{dbname}.ct1", f"{dbname}.ct2", f"{dbname}.ct4", f"{dbname}.nt1"]
self.__join_check_old(err_list_1, -1)
tdLog.printNoPrefix(f"==========err sql condition check in {err_list_1} over==========")
self.__join_check_old(err_list_2, -1)
tdLog.printNoPrefix(f"==========err sql condition check in {err_list_2} over==========")
self.__join_check_old(err_list_3, -1)
tdLog.printNoPrefix(f"==========err sql condition check in {err_list_3} over==========")
self.__join_check_old(err_list_4, -1)
tdLog.printNoPrefix(f"==========err sql condition check in {err_list_4} over==========")
self.__join_check_old(err_list_5, -1)
tdLog.printNoPrefix(f"==========err sql condition check in {err_list_5} over==========")
self.__join_check_old(["ct2", "ct4"], -1, join_flag=False)
tdLog.printNoPrefix("==========err sql condition check in has no join condition over==========")
tdSql.error( f"select c1, c2 from {dbname}.ct2, {dbname}.ct4 where ct2.{PRIMARY_COL}=ct4.{PRIMARY_COL}" )
tdSql.error( f"select ct2.c1, ct2.c2 from {dbname}.ct2 as ct2, {dbname}.ct4 as ct4 where ct2.{INT_COL}=ct4.{INT_COL}" )
tdSql.error( f"select ct2.c1, ct2.c2 from {dbname}.ct2 as ct2, {dbname}.ct4 as ct4 where ct2.{TS_COL}=ct4.{TS_COL}" )
tdSql.error( f"select ct2.c1, ct2.c2 from {dbname}.ct2 as ct2, {dbname}.ct4 as ct4 where ct2.{PRIMARY_COL}=ct4.{TS_COL}" )
tdSql.error( f"select ct2.c1, ct1.c2 from {dbname}.ct2 as ct2, {dbname}.ct4 as ct4 where ct2.{PRIMARY_COL}=ct4.{PRIMARY_COL}" )
tdSql.error( f"select ct2.c1, ct4.c2 from {dbname}.ct2 as ct2, {dbname}.ct4 as ct4 where ct2.{PRIMARY_COL}=ct4.{PRIMARY_COL} and c1 is not null " )
tdSql.error( f"select ct2.c1, ct4.c2 from {dbname}.ct2 as ct2, {dbname}.ct4 as ct4 where ct2.{PRIMARY_COL}=ct4.{PRIMARY_COL} and ct1.c1 is not null " )
tbname = [f"{dbname}.ct1", f"{dbname}.ct2", f"{dbname}.ct4", f"{dbname}.nt1"]
# for tb in tbname:
# for errsql in self.__join_err_check(tb):
# tdSql.error(sql=errsql)
# tdLog.printNoPrefix(f"==========err sql condition check in {tb} over==========")
def all_test(self):
self.__join_check()
self.__test_error()
def __create_tb(self, stb="stb1", ctb_num=20, ntbnum=1, dbname=DBNAME):
create_stb_sql = f'''create table {dbname}.{stb}(
ts timestamp, {INT_COL} int, {BINT_COL} bigint, {SINT_COL} smallint, {TINT_COL} tinyint,
{FLOAT_COL} float, {DOUBLE_COL} double, {BOOL_COL} bool,
{BINARY_COL} binary(16), {NCHAR_COL} nchar(32), {TS_COL} timestamp,
{TINT_UN_COL} tinyint unsigned, {SINT_UN_COL} smallint unsigned,
{INT_UN_COL} int unsigned, {BINT_UN_COL} bigint unsigned
) tags ({INT_TAG} int)
'''
for i in range(ntbnum):
create_ntb_sql = f'''create table {dbname}.nt{i+1}(
ts timestamp, {INT_COL} int, {BINT_COL} bigint, {SINT_COL} smallint, {TINT_COL} tinyint,
{FLOAT_COL} float, {DOUBLE_COL} double, {BOOL_COL} bool,
{BINARY_COL} binary(16), {NCHAR_COL} nchar(32), {TS_COL} timestamp,
{TINT_UN_COL} tinyint unsigned, {SINT_UN_COL} smallint unsigned,
{INT_UN_COL} int unsigned, {BINT_UN_COL} bigint unsigned
)
'''
tdSql.execute(create_stb_sql)
tdSql.execute(create_ntb_sql)
for i in range(ctb_num):
tdSql.execute(f'create table {dbname}.ct{i+1} using {dbname}.{stb} tags ( {i+1} )')
def __data_set(self, rows):
data_set = DataSet()
for i in range(rows):
data_set.ts_data.append(NOW + 1 * (rows - i))
data_set.int_data.append(rows - i)
data_set.bint_data.append(11111 * (rows - i))
data_set.sint_data.append(111 * (rows - i) % 32767)
data_set.tint_data.append(11 * (rows - i) % 127)
data_set.int_un_data.append(rows - i)
data_set.bint_un_data.append(11111 * (rows - i))
data_set.sint_un_data.append(111 * (rows - i) % 32767)
data_set.tint_un_data.append(11 * (rows - i) % 127)
data_set.float_data.append(1.11 * (rows - i))
data_set.double_data.append(1100.0011 * (rows - i))
data_set.bool_data.append((rows - i) % 2)
data_set.binary_data.append(f'binary{(rows - i)}')
data_set.nchar_data.append(f'nchar_测试_{(rows - i)}')
return data_set
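# load the generated rows: ct1/ct4/nt1 receive the positive rows at different time steps,
# ct2 receives the negated rows, and each table also gets a few all-NULL boundary rows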
def __insert_data(self, dbname=DBNAME):
tdLog.printNoPrefix("==========step: start inser data into tables now.....")
data = self.__data_set(rows=self.rows)
# now_time = int(datetime.datetime.timestamp(datetime.datetime.now()) * 1000)
null_data = '''null, null, null, null, null, null, null, null, null, null, null, null, null, null'''
zero_data = "0, 0, 0, 0, 0, 0, 0, 'binary_0', 'nchar_0', 0, 0, 0, 0, 0"
for i in range(self.rows):
row_data = f'''
{data.int_data[i]}, {data.bint_data[i]}, {data.sint_data[i]}, {data.tint_data[i]}, {data.float_data[i]}, {data.double_data[i]},
{data.bool_data[i]}, '{data.binary_data[i]}', '{data.nchar_data[i]}', {data.ts_data[i]}, {data.tint_un_data[i]},
{data.sint_un_data[i]}, {data.int_un_data[i]}, {data.bint_un_data[i]}
'''
neg_row_data = f'''
{-1 * data.int_data[i]}, {-1 * data.bint_data[i]}, {-1 * data.sint_data[i]}, {-1 * data.tint_data[i]}, {-1 * data.float_data[i]}, {-1 * data.double_data[i]},
{data.bool_data[i]}, '{data.binary_data[i]}', '{data.nchar_data[i]}', {data.ts_data[i]}, {1 * data.tint_un_data[i]},
{1 * data.sint_un_data[i]}, {1 * data.int_un_data[i]}, {1 * data.bint_un_data[i]}
'''
tdSql.execute( f"insert into {dbname}.ct1 values ( {NOW - i * TIME_STEP}, {row_data} )" )
tdSql.execute( f"insert into {dbname}.ct2 values ( {NOW - i * int(TIME_STEP * 0.6)}, {neg_row_data} )" )
tdSql.execute( f"insert into {dbname}.ct4 values ( {NOW - i * int(TIME_STEP * 0.8) }, {row_data} )" )
tdSql.execute( f"insert into {dbname}.nt1 values ( {NOW - i * int(TIME_STEP * 1.2)}, {row_data} )" )
tdSql.execute( f"insert into {dbname}.ct2 values ( {NOW + int(TIME_STEP * 0.6)}, {null_data} )" )
tdSql.execute( f"insert into {dbname}.ct2 values ( {NOW - (self.rows + 1) * int(TIME_STEP * 0.6)}, {null_data} )" )
tdSql.execute( f"insert into {dbname}.ct2 values ( {NOW - self.rows * int(TIME_STEP * 0.29) }, {null_data} )" )
tdSql.execute( f"insert into {dbname}.ct4 values ( {NOW + int(TIME_STEP * 0.8)}, {null_data} )" )
tdSql.execute( f"insert into {dbname}.ct4 values ( {NOW - (self.rows + 1) * int(TIME_STEP * 0.8)}, {null_data} )" )
tdSql.execute( f"insert into {dbname}.ct4 values ( {NOW - self.rows * int(TIME_STEP * 0.39)}, {null_data} )" )
tdSql.execute( f"insert into {dbname}.nt1 values ( {NOW + int(TIME_STEP * 1.2)}, {null_data} )" )
tdSql.execute( f"insert into {dbname}.nt1 values ( {NOW - (self.rows + 1) * int(TIME_STEP * 1.2)}, {null_data} )" )
tdSql.execute( f"insert into {dbname}.nt1 values ( {NOW - self.rows * int(TIME_STEP * 0.59)}, {null_data} )" )
def ts5863(self, dbname=DBNAME):
tdSql.execute(f"CREATE STABLE {dbname}.`st_quality` (`ts` TIMESTAMP, `quality` INT, `val` NCHAR(64), `rts` TIMESTAMP) \
TAGS (`cx` VARCHAR(10), `gyd` VARCHAR(10), `gx` VARCHAR(10), `lx` VARCHAR(10)) SMA(`ts`,`quality`,`val`)")
tdSql.execute(f"create table {dbname}.st_q1 using {dbname}.st_quality tags ('cx', 'gyd', 'gx1', 'lx1')")
sql1 = f"select t.val as batch_no, a.tbname as sample_point_code, min(cast(a.val as double)) as `min`, \
max(cast(a.val as double)) as `max`, avg(cast(a.val as double)) as `avg` from {dbname}.st_quality t \
left join {dbname}.st_quality a on a.ts=t.ts and a.cx=t.cx and a.gyd=t.gyd \
where t.ts >= 1734574900000 and t.ts <= 1734575000000 \
and t.tbname = 'st_q1' \
and a.tbname in ('st_q2', 'st_q3') \
group by t.val, a.tbname"
tdSql.query(sql1)
tdSql.checkRows(0)
tdSql.execute(f"create table {dbname}.st_q2 using {dbname}.st_quality tags ('cx2', 'gyd2', 'gx2', 'lx2')")
tdSql.execute(f"create table {dbname}.st_q3 using {dbname}.st_quality tags ('cx', 'gyd', 'gx3', 'lx3')")
tdSql.execute(f"create table {dbname}.st_q4 using {dbname}.st_quality tags ('cx', 'gyd', 'gx4', 'lx4')")
tdSql.query(sql1)
tdSql.checkRows(0)
tdSql.execute(f"insert into {dbname}.st_q1 values (1734574900000, 1, '1', 1734574900000)")
tdSql.query(sql1)
tdSql.checkRows(0)
tdSql.execute(f"insert into {dbname}.st_q2 values (1734574900000, 1, '1', 1734574900000)")
tdSql.query(sql1)
tdSql.checkRows(0)
tdSql.execute(f"insert into {dbname}.st_q3 values (1734574900000, 1, '1', 1734574900000)")
tdSql.query(sql1)
tdSql.checkRows(1)
tdSql.checkData(0, 0, 1)
tdSql.checkData(0, 1, 'st_q3')
tdSql.checkData(0, 2, 1)
tdSql.checkData(0, 3, 1)
tdSql.checkData(0, 4, 1)
tdSql.execute(f"insert into {dbname}.st_q1 values (1734574900001, 2, '2', 1734574900000)")
tdSql.execute(f"insert into {dbname}.st_q3 values (1734574900001, 2, '2', 1734574900000)")
sql2 = f"select t.val as batch_no, a.tbname as sample_point_code, min(cast(a.val as double)) as `min`, \
max(cast(a.val as double)) as `max`, avg(cast(a.val as double)) as `avg` from {dbname}.st_quality t \
left join {dbname}.st_quality a on a.ts=t.ts and a.cx=t.cx and a.gyd=t.gyd \
where t.ts >= 1734574900000 and t.ts <= 1734575000000 \
and t.tbname = 'st_q1' \
and a.tbname in ('st_q2', 'st_q3') \
group by t.val, a.tbname order by batch_no"
tdSql.query(sql2)
tdSql.checkRows(2)
tdSql.checkData(0, 0, 1)
tdSql.checkData(0, 1, 'st_q3')
tdSql.checkData(0, 2, 1)
tdSql.checkData(0, 3, 1)
tdSql.checkData(0, 4, 1)
tdSql.checkData(1, 0, 2)
tdSql.checkData(1, 1, 'st_q3')
tdSql.checkData(1, 2, 2)
tdSql.checkData(1, 3, 2)
tdSql.checkData(1, 4, 2)
sql3 = f"select min(cast(a.val as double)) as `min` from {dbname}.st_quality t left join {dbname}.st_quality \
a on a.ts=t.ts and a.cx=t.cx where t.tbname = 'st_q3' and a.tbname in ('st_q3', 'st_q2')"
tdSql.execute(f"insert into {dbname}.st_q1 values (1734574900002, 2, '2', 1734574900000)")
tdSql.execute(f"insert into {dbname}.st_q4 values (1734574900002, 2, '2', 1734574900000)")
tdSql.execute(f"insert into {dbname}.st_q1 values (1734574900003, 3, '3', 1734574900000)")
tdSql.execute(f"insert into {dbname}.st_q3 values (1734574900003, 3, '3', 1734574900000)")
tdSql.query(sql3)
tdSql.checkRows(1)
tdSql.checkData(0, 0, 1)
sql3 = f"select min(cast(a.val as double)) as `min`, max(cast(a.val as double)) as `max`, avg(cast(a.val as double)) as `avg` \
from {dbname}.st_quality t left join {dbname}.st_quality a \
on a.ts=t.ts and a.cx=t.cx where t.tbname = 'st_q3' and a.tbname in ('st_q3', 'st_q2')"
tdSql.query(sql3)
tdSql.checkRows(1)
tdSql.checkData(0, 0, 1)
tdSql.checkData(0, 1, 3)
tdSql.checkData(0, 2, 2)
tdSql.query(sql1)
tdSql.checkRows(3)
tdSql.query(sql2)
tdSql.checkRows(3)
tdSql.checkData(0, 0, 1)
tdSql.checkData(0, 1, 'st_q3')
tdSql.checkData(0, 2, 1)
tdSql.checkData(0, 3, 1)
tdSql.checkData(0, 4, 1)
tdSql.checkData(1, 0, 2)
tdSql.checkData(1, 1, 'st_q3')
tdSql.checkData(1, 2, 2)
tdSql.checkData(1, 3, 2)
tdSql.checkData(1, 4, 2)
tdSql.checkData(2, 0, 3)
tdSql.checkData(2, 1, 'st_q3')
tdSql.checkData(2, 2, 3)
tdSql.checkData(2, 3, 3)
tdSql.checkData(2, 4, 3)
def run(self):
tdSql.prepare()
tdLog.printNoPrefix("==========step1:create table")
self.__create_tb(dbname=DBNAME)
tdLog.printNoPrefix("==========step2:insert data")
self.rows = 10
self.__insert_data(dbname=DBNAME)
tdLog.printNoPrefix("==========step3:all check")
tdSql.query(f"select count(*) from {DBNAME}.ct1")
tdSql.checkData(0, 0, self.rows)
self.all_test()
tdLog.printNoPrefix("==========step4:cross db check")
dbname1 = "db1"
tdSql.execute(f"create database {dbname1} duration 172800m")
tdSql.execute(f"use {dbname1}")
self.__create_tb(dbname=dbname1)
self.__insert_data(dbname=dbname1)
tdSql.query("select ct1.c_int from db.ct1 as ct1 join db1.ct1 as cy1 on ct1.ts=cy1.ts")
tdSql.checkRows(self.rows)
tdSql.query("select ct1.c_int from db.stb1 as ct1 join db1.ct1 as cy1 on ct1.ts=cy1.ts")
tdSql.checkRows(self.rows + int(self.rows * 0.6 //3)+ int(self.rows * 0.8 // 4))
tdSql.query("select ct1.c_int from db.nt1 as ct1 join db1.nt1 as cy1 on ct1.ts=cy1.ts")
tdSql.checkRows(self.rows + 3)
tdSql.query("select ct1.c_int from db.stb1 as ct1 join db1.stb1 as cy1 on ct1.ts=cy1.ts")
tdSql.checkRows(50)
tdSql.query("select count(*) from db.ct1")
tdSql.checkData(0, 0, self.rows)
tdSql.query("select count(*) from db1.ct1")
tdSql.checkData(0, 0, self.rows)
self.all_test()
tdSql.query("select count(*) from db.ct1")
tdSql.checkData(0, 0, self.rows)
tdSql.query("select count(*) from db1.ct1")
tdSql.checkData(0, 0, self.rows)
tdSql.execute(f"flush database {DBNAME}")
tdSql.execute(f"flush database {dbname1}")
# tdDnodes.stop(1)
# tdDnodes.start(1)
tdSql.execute("use db")
tdSql.query("select count(*) from db.ct1")
tdSql.checkData(0, 0, self.rows)
tdSql.query("select count(*) from db1.ct1")
tdSql.checkData(0, 0, self.rows)
tdLog.printNoPrefix("==========step4:after wal, all check again ")
self.all_test()
tdSql.query("select count(*) from db.ct1")
tdSql.checkData(0, 0, self.rows)
self.ts5863(dbname=dbname1)
def stop(self):
tdSql.close()
tdLog.success(f"{__file__} successfully executed")
tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())

6202  test/query/nested/nestedQuery.py  Executable file

File diff suppressed because it is too large.
474  test/query/union/union.py  Normal file

@@ -0,0 +1,474 @@
import datetime
from util.log import *
from util.sql import *
from util.cases import *
from util.dnodes import *
PRIMARY_COL = "ts"
INT_COL = "c1"
BINT_COL = "c2"
SINT_COL = "c3"
TINT_COL = "c4"
FLOAT_COL = "c5"
DOUBLE_COL = "c6"
BOOL_COL = "c7"
BINARY_COL = "c8"
NCHAR_COL = "c9"
TS_COL = "c10"
NUM_COL = [ INT_COL, BINT_COL, SINT_COL, TINT_COL, FLOAT_COL, DOUBLE_COL, ]
CHAR_COL = [ BINARY_COL, NCHAR_COL, ]
BOOLEAN_COL = [ BOOL_COL, ]
TS_TYPE_COL = [ TS_COL, ]
class TDTestCase:
def init(self, conn, logSql, replicaVar=1):
self.replicaVar = int(replicaVar)
tdLog.debug(f"start to excute {__file__}")
tdSql.init(conn.cursor())
def __query_condition(self,tbname):
query_condition = []
for char_col in CHAR_COL:
query_condition.extend(
(
f"count( {tbname}.{char_col} )",
f"cast( {tbname}.{char_col} as nchar(3) )",
)
)
for num_col in NUM_COL:
query_condition.extend(
(
f"log( {tbname}.{num_col}, {tbname}.{num_col})",
)
)
query_condition.extend(
(
''' "test12" ''',
# 1010,
)
)
return query_condition
def __join_condition(self, tb_list, filter=PRIMARY_COL, INNER=False):
table_reference = tb_list[0]
join_condition = f'{table_reference} {table_reference.split(".")[-1]}'
join = "inner join" if INNER else "join"
for i in range(len(tb_list[1:])):
join_condition += f" {join} {tb_list[i+1]} {tb_list[i+1].split('.')[-1]} on {table_reference.split('.')[-1]}.{filter}={tb_list[i+1].split('.')[-1]}.{filter}"
return join_condition
def __where_condition(self, col=None, tbname=None, query_condition=None):
    if query_condition and isinstance(query_condition, str):
        if query_condition.startswith("count"):
            query_condition = query_condition[6:-1]
        elif query_condition.startswith("max"):
            query_condition = query_condition[4:-1]
        elif query_condition.startswith("sum"):
            query_condition = query_condition[4:-1]
        elif query_condition.startswith("min"):
            query_condition = query_condition[4:-1]
    if query_condition:
        return f" where {query_condition} is not null"
if col in NUM_COL:
return f" where abs( {tbname}.{col} ) >= 0"
if col in CHAR_COL:
return f" where lower( {tbname}.{col} ) like 'bina%' or lower( {tbname}.{col} ) like '_cha%' "
if col in BOOLEAN_COL:
return f" where {tbname}.{col} in (false, true) "
if col in TS_TYPE_COL or col in PRIMARY_COL:
return f" where cast( {tbname}.{col} as binary(16) ) is not null "
return ""
def __group_condition(self, col, having = None):
if isinstance(col, str):
if col.startswith("count"):
col = col[6:-1]
elif col.startswith("max"):
col = col[4:-1]
elif col.startswith("sum"):
col = col[4:-1]
elif col.startswith("min"):
col = col[4:-1]
return f" group by {col} having {having}" if having else f" group by {col} "
def __single_sql(self, select_clause, from_clause, where_condition="", group_condition=""):
if isinstance(select_clause, str) and "on" not in from_clause and select_clause.split(".")[0] != from_clause.split(".")[0]:
return
return f"select {select_clause} from {from_clause} {where_condition} {group_condition}"
@property
def __join_tblist(self, dbname="db"):
return [
[f"{dbname}.ct1", f"{dbname}.t1"],
[f"{dbname}.ct4", f"{dbname}.t1"],
# ["ct1", "ct2", "ct4"],
# ["ct1", "ct2", "t1"],
# ["ct1", "ct4", "t1"],
# ["ct2", "ct4", "t1"],
# ["ct1", "ct2", "ct4", "t1"],
]
@property
def __tb_list(self, dbname="db"):
return [
f"{dbname}.ct1",
f"{dbname}.ct4",
]
def sql_list(self):
sqls = []
__join_tblist = self.__join_tblist
for join_tblist in __join_tblist:
for join_tb in join_tblist:
join_tb_name = join_tb.split(".")[-1]
select_claus_list = self.__query_condition(join_tb_name)
for select_claus in select_claus_list:
group_claus = self.__group_condition( col=select_claus)
where_claus = self.__where_condition(query_condition=select_claus)
having_claus = self.__group_condition( col=select_claus, having=f"{select_claus} is not null")
sqls.extend(
(
self.__single_sql(select_claus, self.__join_condition(join_tblist, INNER=True), where_claus, having_claus),
)
)
__no_join_tblist = self.__tb_list
for tb in __no_join_tblist:
tb_name = tb.split(".")[-1]
select_claus_list = self.__query_condition(tb_name)
for select_claus in select_claus_list:
group_claus = self.__group_condition(col=select_claus)
where_claus = self.__where_condition(query_condition=select_claus)
having_claus = self.__group_condition(col=select_claus, having=f"{select_claus} is not null")
sqls.extend(
(
self.__single_sql(select_claus, tb, where_claus, having_claus),
)
)
# return filter(None, sqls)
return list(filter(None, sqls))
def __get_type(self, col):
if tdSql.cursor.istype(col, "BOOL"):
return "BOOL"
if tdSql.cursor.istype(col, "INT"):
return "INT"
if tdSql.cursor.istype(col, "BIGINT"):
return "BIGINT"
if tdSql.cursor.istype(col, "TINYINT"):
return "TINYINT"
if tdSql.cursor.istype(col, "SMALLINT"):
return "SMALLINT"
if tdSql.cursor.istype(col, "FLOAT"):
return "FLOAT"
if tdSql.cursor.istype(col, "DOUBLE"):
return "DOUBLE"
if tdSql.cursor.istype(col, "BINARY"):
return "BINARY"
if tdSql.cursor.istype(col, "NCHAR"):
return "NCHAR"
if tdSql.cursor.istype(col, "TIMESTAMP"):
return "TIMESTAMP"
if tdSql.cursor.istype(col, "JSON"):
return "JSON"
if tdSql.cursor.istype(col, "TINYINT UNSIGNED"):
return "TINYINT UNSIGNED"
if tdSql.cursor.istype(col, "SMALLINT UNSIGNED"):
return "SMALLINT UNSIGNED"
if tdSql.cursor.istype(col, "INT UNSIGNED"):
return "INT UNSIGNED"
if tdSql.cursor.istype(col, "BIGINT UNSIGNED"):
return "BIGINT UNSIGNED"
def union_check(self, dbname = "db"):
sqls = self.sql_list()
for i in range(len(sqls)):
tdSql.query(sqls[i])
res1_type = self.__get_type(0)
# if i % 5 == 0:
# tdLog.success(f"{i} : sql is already executing!")
for j in range(len(sqls[i:])):
tdSql.query(sqls[j+i])
order_union_type = False
rev_order_type = False
all_union_type = False
res2_type = self.__get_type(0)
if res2_type == res1_type:
all_union_type = True
elif res1_type in ( "BIGINT" , "NCHAR" ) and res2_type in ("BIGINT" , "NCHAR"):
all_union_type = True
elif res1_type in ("BIGINT", "NCHAR"):
order_union_type = True
elif res2_type in ("BIGINT", "NCHAR"):
rev_order_type = True
elif res1_type == "TIMESAMP" and res2_type not in ("BINARY", "NCHAR"):
order_union_type = True
elif res2_type == "TIMESAMP" and res1_type not in ("BINARY", "NCHAR"):
rev_order_type = True
elif res1_type == "BINARY" and res2_type != "NCHAR":
order_union_type = True
elif res2_type == "BINARY" and res1_type != "NCHAR":
rev_order_type = True
if all_union_type:
tdSql.execute(f"{sqls[i]} union {sqls[j+i]}")
tdSql.execute(f"{sqls[j+i]} union all {sqls[i]}")
elif order_union_type:
tdSql.execute(f"{sqls[i]} union all {sqls[j+i]}")
elif rev_order_type:
tdSql.execute(f"{sqls[j+i]} union {sqls[i]}")
else:
tdSql.error(f"{sqls[i]} union {sqls[j+i]}")
# check union with timeline function
tdSql.query(f"select first(c1) from (select * from {dbname}.t1 union select * from {dbname}.t1 order by ts)")
tdSql.checkRows(1)
tdSql.checkData(0, 0, 9)
tdSql.query(f"select last(c1) from (select * from {dbname}.t1 union select * from {dbname}.t1 order by ts desc)")
tdSql.checkRows(1)
tdSql.checkData(0, 0, 2147450880)
tdSql.query(f"select irate(c1) from (select * from {dbname}.t1 union select * from {dbname}.t1 order by ts)")
tdSql.checkRows(1)
tdSql.checkData(0, 0, 9.102222222222222)
tdSql.query(f"select elapsed(ts) from (select * from {dbname}.t1 union select * from {dbname}.t1 order by ts)")
tdSql.checkRows(1)
tdSql.checkData(0, 0, 46800000.000000000000000)
tdSql.query(f"select diff(c1) from (select * from {dbname}.t1 union select * from {dbname}.t1 order by ts)")
tdSql.checkRows(14)
tdSql.query(f"select derivative(c1, 1s, 0) from (select * from {dbname}.t1 union select * from {dbname}.t1 order by ts)")
tdSql.checkRows(11)
tdSql.query(f"select count(*) from {dbname}.t1 as a join {dbname}.t1 as b on a.ts = b.ts and a.ts is null")
tdSql.checkRows(1)
tdSql.checkData(0, 0, 0)
tdSql.query(f"select first(c1) from (select * from {dbname}.t1 union select * from {dbname}.t1)")
tdSql.query(f"select last(c1) from (select * from {dbname}.t1 union select * from {dbname}.t1)")
tdSql.error(f"select irate(c1) from (select * from {dbname}.t1 union select * from {dbname}.t1)")
tdSql.error(f"select elapsed(ts) from (select * from {dbname}.t1 union select * from {dbname}.t1)")
tdSql.error(f"select diff(c1) from (select * from {dbname}.t1 union select * from {dbname}.t1)")
tdSql.error(f"select derivative(c1, 1s, 0) from (select * from {dbname}.t1 union select * from {dbname}.t1)")
def __test_error(self, dbname="db"):
tdSql.error( f"show {dbname}.tables union show {dbname}.tables" )
tdSql.error( f"create table {dbname}.errtb1 union all create table {dbname}.errtb2" )
tdSql.error( f"drop table {dbname}.ct1 union all drop table {dbname}.ct3" )
tdSql.error( f"select c1 from {dbname}.ct1 union all drop table {dbname}.ct3" )
tdSql.error( f"select c1 from {dbname}.ct1 union all '' " )
tdSql.error( f" '' union all select c1 from{dbname}. ct1 " )
def all_test(self):
self.__test_error()
self.union_check()
def __create_tb(self, dbname="db"):
tdLog.printNoPrefix("==========step1:create table")
create_stb_sql = f'''create table {dbname}.stb1(
ts timestamp, {INT_COL} int, {BINT_COL} bigint, {SINT_COL} smallint, {TINT_COL} tinyint,
{FLOAT_COL} float, {DOUBLE_COL} double, {BOOL_COL} bool,
{BINARY_COL} binary(16), {NCHAR_COL} nchar(32), {TS_COL} timestamp
) tags (tag1 int)
'''
create_ntb_sql = f'''create table {dbname}.t1(
ts timestamp, {INT_COL} int, {BINT_COL} bigint, {SINT_COL} smallint, {TINT_COL} tinyint,
{FLOAT_COL} float, {DOUBLE_COL} double, {BOOL_COL} bool,
{BINARY_COL} binary(16), {NCHAR_COL} nchar(32), {TS_COL} timestamp
)
'''
tdSql.execute(create_stb_sql)
tdSql.execute(create_ntb_sql)
for i in range(4):
tdSql.execute(f'create table {dbname}.ct{i+1} using {dbname}.stb1 tags ( {i+1} )')
def __insert_data(self, rows, dbname="db"):
now_time = int(datetime.datetime.timestamp(datetime.datetime.now()) * 1000)
for i in range(rows):
tdSql.execute(
f"insert into {dbname}.ct1 values ( { now_time - i * 1000 }, {i}, {11111 * i}, {111 * i % 32767 }, {11 * i % 127}, {1.11*i}, {1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
)
tdSql.execute(
f"insert into {dbname}.ct4 values ( { now_time - i * 7776000000 }, {i}, {11111 * i}, {111 * i % 32767 }, {11 * i % 127}, {1.11*i}, {1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
)
tdSql.execute(
f"insert into {dbname}.ct2 values ( { now_time - i * 7776000000 }, {-i}, {-11111 * i}, {-111 * i % 32767 }, {-11 * i % 127}, {-1.11*i}, {-1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
)
tdSql.execute(
f'''insert into {dbname}.ct1 values
( { now_time - rows * 5 }, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar_测试_0', { now_time + 8 } )
( { now_time + 10000 }, { rows }, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar_测试_9', { now_time + 9 } )
'''
)
tdSql.execute(
f'''insert into {dbname}.ct4 values
( { now_time - rows * 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
( { now_time - rows * 3888000000 + 10800000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
( { now_time + 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
(
{ now_time + 5184000000}, {pow(2,31)-pow(2,15)}, {pow(2,63)-pow(2,30)}, 32767, 127,
{ 3.3 * pow(10,38) }, { 1.3 * pow(10,308) }, { rows % 2 }, "binary_limit-1", "nchar_测试_limit-1", { now_time - 86400000}
)
(
{ now_time + 2592000000 }, {pow(2,31)-pow(2,16)}, {pow(2,63)-pow(2,31)}, 32766, 126,
{ 3.2 * pow(10,38) }, { 1.2 * pow(10,308) }, { (rows-1) % 2 }, "binary_limit-2", "nchar_测试_limit-2", { now_time - 172800000}
)
'''
)
tdSql.execute(
f'''insert into {dbname}.ct2 values
( { now_time - rows * 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
( { now_time - rows * 3888000000 + 10800000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
( { now_time + 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
(
{ now_time + 5184000000 }, { -1 * pow(2,31) + pow(2,15) }, { -1 * pow(2,63) + pow(2,30) }, -32766, -126,
{ -1 * 3.2 * pow(10,38) }, { -1.2 * pow(10,308) }, { rows % 2 }, "binary_limit-1", "nchar_测试_limit-1", { now_time - 86400000 }
)
(
{ now_time + 2592000000 }, { -1 * pow(2,31) + pow(2,16) }, { -1 * pow(2,63) + pow(2,31) }, -32767, -127,
{ - 3.3 * pow(10,38) }, { -1.3 * pow(10,308) }, { (rows-1) % 2 }, "binary_limit-2", "nchar_测试_limit-2", { now_time - 172800000 }
)
'''
)
for i in range(rows):
insert_data = f'''insert into {dbname}.t1 values
( { now_time - i * 3600000 }, {i}, {i * 11111}, { i % 32767 }, { i % 127}, { i * 1.11111 }, { i * 1000.1111 }, { i % 2},
"binary_{i}", "nchar_测试_{i}", { now_time - 1000 * i } )
'''
tdSql.execute(insert_data)
tdSql.execute(
f'''insert into {dbname}.t1 values
( { now_time + 10800000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
( { now_time - (( rows // 2 ) * 60 + 30) * 60000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
( { now_time - rows * 3600000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
( { now_time + 7200000 }, { pow(2,31) - pow(2,15) }, { pow(2,63) - pow(2,30) }, 32767, 127,
{ 3.3 * pow(10,38) }, { 1.3 * pow(10,308) }, { rows % 2 },
"binary_limit-1", "nchar_测试_limit-1", { now_time - 86400000 }
)
(
{ now_time + 3600000 } , { pow(2,31) - pow(2,16) }, { pow(2,63) - pow(2,31) }, 32766, 126,
{ 3.2 * pow(10,38) }, { 1.2 * pow(10,308) }, { (rows-1) % 2 },
"binary_limit-2", "nchar_测试_limit-2", { now_time - 172800000 }
)
'''
)
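# regression case for TS-5630: a wide super table where per-unit last() subqueries
# are combined with union all inside a subquery and filtered in the outer select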
def test_TS_5630(self):
sql = "CREATE DATABASE `ep_iot` BUFFER 256 CACHESIZE 20 CACHEMODEL 'both' COMP 2 DURATION 14400m WAL_FSYNC_PERIOD 3000 MAXROWS 4096 MINROWS 100 STT_TRIGGER 2 KEEP 5256000m,5256000m,5256000m PAGES 256 PAGESIZE 4 PRECISION 'ms' REPLICA 1 WAL_LEVEL 1 VGROUPS 3 SINGLE_STABLE 0 TABLE_PREFIX 0 TABLE_SUFFIX 0 TSDB_PAGESIZE 4 WAL_RETENTION_PERIOD 3600 WAL_RETENTION_SIZE 0"
tdSql.execute(sql, queryTimes=1)
tdLog.info("database ep_iot created")
sql = "CREATE STABLE `ep_iot`.`sldc_dp` (`ts` TIMESTAMP, `data_write_time` TIMESTAMP, `jz1fdgl` DOUBLE, `jz1ssfdfh` DOUBLE, `jz1fdmh` DOUBLE, `jz1gdmh` DOUBLE, `jz1qjrhl` DOUBLE, `jz1zhcydl` DOUBLE, `jz1zkby` DOUBLE, `jz1zzqyl` DOUBLE, `jz1zzqwda` DOUBLE, `jz1zzqwdb` DOUBLE, `jz1zzqll` DOUBLE, `jz1gswd` DOUBLE, `jz1gsll` DOUBLE, `jz1glxl` DOUBLE, `jz1qjrh` DOUBLE, `jz1zhrxl` DOUBLE, `jz1gmjassllfk` DOUBLE, `jz1gmjasslllj` DOUBLE, `jz1gmjbssllfk` DOUBLE, `jz1gmjbsslllj` DOUBLE, `jz1gmjcssllfk` DOUBLE, `jz1gmjcsslllj` DOUBLE, `jz1gmjdssllfk` DOUBLE, `jz1gmjdsslllj` DOUBLE, `jz1gmjessllfk` DOUBLE, `jz1gmjesslllj` DOUBLE, `jz1gmjfssllfk` DOUBLE, `jz1gmjfsslllj` DOUBLE, `jz1zrqwda` DOUBLE, `jz1zrqwdb` DOUBLE, `jz1zrzqyl` DOUBLE, `jz1mmjadl` DOUBLE, `jz1mmjbdl` DOUBLE, `jz1mmjcdl` DOUBLE, `jz1mmjddl` DOUBLE, `jz1mmjedl` DOUBLE, `jz1mmjfdl` DOUBLE, `jz1cyqckwda` DOUBLE, `jz1cyqckwdb` DOUBLE, `jz1njswd` DOUBLE, `jz1nqqxhsckawd` DOUBLE, `jz1nqqxhsckbwd` DOUBLE, `jz1nqqxhsrkawd` DOUBLE, `jz1nqqxhsrkbwd` DOUBLE, `jz1kyqackyqwdsel` DOUBLE, `jz1kyqbckyqwdsel` DOUBLE, `jz1yfjackyqwd` DOUBLE, `jz1yfjbckyqwd` DOUBLE, `jz1trkyqwd` DOUBLE, `jz1trkyqwd1` DOUBLE, `jz1trkyqwd2` DOUBLE, `jz1trkyqwd3` DOUBLE, `jz1tckjyqwd1` DOUBLE, `jz1tckjyqwd2` DOUBLE, `jz1tckyqwd1` DOUBLE, `jz1bya` DOUBLE, `jz1byb` DOUBLE, `jz1pqwda` DOUBLE, `jz1pqwdb` DOUBLE, `jz1gmjadl` DOUBLE, `jz1gmjbdl` DOUBLE, `jz1gmjcdl` DOUBLE, `jz1gmjddl` DOUBLE, `jz1gmjedl` DOUBLE, `jz1gmjfdl` DOUBLE, `jz1yfjadl` DOUBLE, `jz1yfjbdl` DOUBLE, `jz1ycfjadl` DOUBLE, `jz1ycfjbdl` DOUBLE, `jz1sfjadl` DOUBLE, `jz1sfjbdl` DOUBLE, `jz1fdjyggl` DOUBLE, `jz1fdjwggl` DOUBLE, `jz1sjzs` DOUBLE, `jz1zfl` DOUBLE, `jz1ltyl` DOUBLE, `jz1smb` DOUBLE, `jz1rll` DOUBLE, `jz1grd` DOUBLE, `jz1zjwd` DOUBLE, `jz1yl` DOUBLE, `jz1kyqckwd` DOUBLE, `jz1abmfsybrkcy` DOUBLE, `jz1bbmfsybrkcy` DOUBLE, `jz1abjcsdmfytwdzdz` DOUBLE, `jz1bbjcsdmfytwdzdz` DOUBLE, `jz2fdgl` DOUBLE, `jz2ssfdfh` DOUBLE, `jz2fdmh` DOUBLE, `jz2gdmh` DOUBLE, `jz2qjrhl` DOUBLE, `jz2zhcydl` DOUBLE, `jz2zkby` DOUBLE, `jz2zzqyl` DOUBLE, `jz2zzqwda` DOUBLE, `jz2zzqwdb` DOUBLE, `jz2zzqll` DOUBLE, `jz2gswd` DOUBLE, `jz2gsll` DOUBLE, `jz2glxl` DOUBLE, `jz2qjrh` DOUBLE, `jz2zhrxl` DOUBLE, `jz2gmjassllfk` DOUBLE, `jz2gmjasslllj` DOUBLE, `jz2gmjbssllfk` DOUBLE, `jz2gmjbsslllj` DOUBLE, `jz2gmjcssllfk` DOUBLE, `jz2gmjcsslllj` DOUBLE, `jz2gmjdssllfk` DOUBLE, `jz2gmjdsslllj` DOUBLE, `jz2gmjessllfk` DOUBLE, `jz2gmjesslllj` DOUBLE, `jz2gmjfssllfk` DOUBLE, `jz2gmjfsslllj` DOUBLE, `jz2zrqwda` DOUBLE, `jz2zrqwdb` DOUBLE, `jz2zrzqyl` DOUBLE, `jz2mmjadl` DOUBLE, `jz2mmjbdl` DOUBLE, `jz2mmjcdl` DOUBLE, `jz2mmjddl` DOUBLE, `jz2mmjedl` DOUBLE, `jz2mmjfdl` DOUBLE, `jz2cyqckwda` DOUBLE, `jz2cyqckwdb` DOUBLE, `jz2njswd` DOUBLE, `jz2nqqxhsckawd` DOUBLE, `jz2nqqxhsckbwd` DOUBLE, `jz2nqqxhsrkawd` DOUBLE, `jz2nqqxhsrkbwd` DOUBLE, `jz2kyqackyqwdsel` DOUBLE, `jz2kyqbckyqwdsel` DOUBLE, `jz2yfjackyqwd` DOUBLE, `jz2yfjbckyqwd` DOUBLE, `jz2trkyqwd` DOUBLE, `jz2trkyqwd1` DOUBLE, `jz2trkyqwd2` DOUBLE, `jz2trkyqwd3` DOUBLE, `jz2tckjyqwd1` DOUBLE, `jz2tckjyqwd2` DOUBLE, `jz2tckyqwd1` DOUBLE, `jz2bya` DOUBLE, `jz2byb` DOUBLE, `jz2pqwda` DOUBLE, `jz2pqwdb` DOUBLE, `jz2gmjadl` DOUBLE, `jz2gmjbdl` DOUBLE, `jz2gmjcdl` DOUBLE, `jz2gmjddl` DOUBLE, `jz2gmjedl` DOUBLE, `jz2gmjfdl` DOUBLE, `jz2yfjadl` DOUBLE, `jz2yfjbdl` DOUBLE, `jz2ycfjadl` DOUBLE, `jz2ycfjbdl` DOUBLE, `jz2sfjadl` DOUBLE, `jz2sfjbdl` DOUBLE, `jz2fdjyggl` DOUBLE, `jz2fdjwggl` DOUBLE, `jz2sjzs` DOUBLE, `jz2zfl` DOUBLE, `jz2ltyl` DOUBLE, `jz2smb` DOUBLE, `jz2rll` DOUBLE, `jz2grd` DOUBLE, `jz2zjwd` 
DOUBLE, `jz2yl` DOUBLE, `jz2kyqckwd` DOUBLE, `jz2abmfsybrkcy` DOUBLE, `jz2bbmfsybrkcy` DOUBLE, `jz2abjcsdmfytwdzdz` DOUBLE, `jz2bbjcsdmfytwdzdz` DOUBLE) TAGS (`iot_hub_id` VARCHAR(100), `device_group_code` VARCHAR(100), `device_code` VARCHAR(100))"
tdLog.info("stable ep_iot.sldc_dp created")
tdSql.execute(sql, queryTimes=1)
sql = "insert into ep_iot.sldc_dp_t1 using ep_iot.sldc_dp tags('a','a','a') values(now, now, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9,0,1);"
tdSql.execute(sql, queryTimes=1)
sql = "insert into ep_iot.sldc_dp_t1 using ep_iot.sldc_dp tags('b','b','b') values(now, now, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9,0,1);"
tdSql.execute(sql, queryTimes=1)
sql = "insert into ep_iot.sldc_dp_t1 using ep_iot.sldc_dp tags('c','c','c') values(now, now, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9,0,1);"
tdSql.execute(sql, queryTimes=1)
sql = "insert into ep_iot.sldc_dp_t1 using ep_iot.sldc_dp tags('d','d','d') values(now, now, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9,0,1);"
tdSql.execute(sql, queryTimes=1)
sql = "insert into ep_iot.sldc_dp_t1 using ep_iot.sldc_dp tags('e','e','e') values(now, now, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9, 0,1,2,3,4,5,6,7,8,9,0,1);"
tdSql.execute(sql, queryTimes=1)
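# Note: all five inserts above go through the same child table name sldc_dp_t1;
# after the first statement auto-creates it, the later tags('b'..'e') clauses are
# ignored, so the rows accumulate in a single child table. (Distinct child tables
# would need distinct names, e.g. sldc_dp_t2 ... sldc_dp_t5.)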
sql = "select scdw_code, scdw_name, jzmc, fdgl, jzzt from ((select '01072016' as scdw_code, '盛鲁电厂' as scdw_name, '机组1' as jzmc, last(jz1fdjyggl) as fdgl, '填报' as jzzt, last(ts) as ts from ep_iot.sldc_dp) union all ( select '01072016' as scdw_code, '盛鲁电厂' as scdw_name, '机组2' as jzmc, last(jz2fdjyggl) as fdgl, '填报' as jzzt, last(ts) as ts from ep_iot.sldc_dp) union all ( select '00103673' as scdw_code, '鲁西电厂' as scdw_name, '机组1'as jzmc, last(jz1fdjyggl) as fdgl, '填报' as jzzt, last(ts) as ts from ep_iot.sldc_dp) union all ( select '00103673' as scdw_code, '鲁西电厂' as scdw_name, '机组2'as jzmc, last(jz2fdjyggl) as fdgl, '填报' as jzzt, last(ts) as ts from ep_iot.sldc_dp) union all ( select '01061584' as scdw_code, '富源热电' as scdw_name, '机组1'as jzmc, last(jz1fdjyggl) as fdgl, '填报' as jzzt ,last(ts) as ts from ep_iot.sldc_dp) union all ( select '01061584' as scdw_code, '富源热电' as scdw_name, '机组2'as jzmc, last(jz2fdjyggl) as fdgl, '填报' as jzzt ,last(ts) as ts from ep_iot.sldc_dp)) where scdw_code like '%%';"
tdSql.query(sql, queryTimes=1)
tdSql.checkCols(5)
tdSql.checkRows(6)
sql = "select scdw_name, scdw_code, jzmc, fdgl, jzzt from ((select '01072016' as scdw_code, '盛鲁电厂' as scdw_name, '机组1' as jzmc, last(jz1fdjyggl) as fdgl, '填报' as jzzt, last(ts) as ts from ep_iot.sldc_dp) union all ( select '01072016' as scdw_code, '盛鲁电厂' as scdw_name, '机组2' as jzmc, last(jz2fdjyggl) as fdgl, '填报' as jzzt, last(ts) as ts from ep_iot.sldc_dp) union all ( select '00103673' as scdw_code, '鲁西电厂' as scdw_name, '机组1'as jzmc, last(jz1fdjyggl) as fdgl, '填报' as jzzt, last(ts) as ts from ep_iot.sldc_dp) union all ( select '00103673' as scdw_code, '鲁西电厂' as scdw_name, '机组2'as jzmc, last(jz2fdjyggl) as fdgl, '填报' as jzzt, last(ts) as ts from ep_iot.sldc_dp) union all ( select '01061584' as scdw_code, '富源热电' as scdw_name, '机组1'as jzmc, last(jz1fdjyggl) as fdgl, '填报' as jzzt ,last(ts) as ts from ep_iot.sldc_dp) union all ( select '01061584' as scdw_code, '富源热电' as scdw_name, '机组2'as jzmc, last(jz2fdjyggl) as fdgl, '填报' as jzzt ,last(ts) as ts from ep_iot.sldc_dp)) where scdw_code like '%%';"
tdSql.query(sql, queryTimes=1)
tdSql.checkCols(5)
tdSql.checkRows(6)
sql = "select scdw_name, scdw_code, jzzt from ((select '01072016' as scdw_code, '盛鲁电厂' as scdw_name, '机组1' as jzmc, last(jz1fdjyggl) as fdgl, '填报' as jzzt, last(ts) as ts from ep_iot.sldc_dp) union all ( select '01072016' as scdw_code, '盛鲁电厂' as scdw_name, '机组2' as jzmc, last(jz2fdjyggl) as fdgl, '填报' as jzzt, last(ts) as ts from ep_iot.sldc_dp) union all ( select '00103673' as scdw_code, '鲁西电厂' as scdw_name, '机组1'as jzmc, last(jz1fdjyggl) as fdgl, '填报' as jzzt, last(ts) as ts from ep_iot.sldc_dp) union all ( select '00103673' as scdw_code, '鲁西电厂' as scdw_name, '机组2'as jzmc, last(jz2fdjyggl) as fdgl, '填报' as jzzt, last(ts) as ts from ep_iot.sldc_dp) union all ( select '01061584' as scdw_code, '富源热电' as scdw_name, '机组1'as jzmc, last(jz1fdjyggl) as fdgl, '填报' as jzzt ,last(ts) as ts from ep_iot.sldc_dp) union all ( select '01061584' as scdw_code, '富源热电' as scdw_name, '机组2'as jzmc, last(jz2fdjyggl) as fdgl, '填报' as jzzt ,last(ts) as ts from ep_iot.sldc_dp)) where scdw_code like '%%';"
tdSql.query(sql, queryTimes=1)
tdSql.checkRows(6)
tdSql.checkCols(3)
sql = "select scdw_code, scdw_name, jzmc, fdgl, jzzt,ts from ((select '01072016' as scdw_code, '盛鲁电厂' as scdw_name, '机组1' as jzmc, last(jz1fdjyggl) as fdgl, '填报' as jzzt, last(ts) as ts from ep_iot.sldc_dp) union all ( select '01072016' as scdw_code, '盛鲁电厂' as scdw_name, '机组2' as jzmc, last(jz2fdjyggl) as fdgl, '填报' as jzzt, last(ts) as ts from ep_iot.sldc_dp) union all ( select '00103673' as scdw_code, '鲁西电厂' as scdw_name, '机组1'as jzmc, last(jz1fdjyggl) as fdgl, '填报' as jzzt, last(ts) as ts from ep_iot.sldc_dp) union all ( select '00103673' as scdw_code, '鲁西电厂' as scdw_name, '机组2'as jzmc, last(jz2fdjyggl) as fdgl, '填报' as jzzt, last(ts) as ts from ep_iot.sldc_dp) union all ( select '01061584' as scdw_code, '富源热电' as scdw_name, '机组1'as jzmc, last(jz1fdjyggl) as fdgl, '填报' as jzzt ,last(ts) as ts from ep_iot.sldc_dp) union all ( select '01061584' as scdw_code, '富源热电' as scdw_name, '机组2'as jzmc, last(jz2fdjyggl) as fdgl, '填报' as jzzt ,last(ts) as ts from ep_iot.sldc_dp)) where scdw_code like '%%';"
tdSql.query(sql, queryTimes=1)
tdSql.checkCols(6)
tdSql.checkRows(6)
##tdSql.execute("drop database ep_iot")
def test_case_for_nodes_match_node(self):
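# Regression case: diff() over a UNION subquery partitioned by an expression
# (case when ... then ... else ... end) is not a plannable query, so the
# statement below is expected to fail with the given error code rather than
# crash the node-matching logic in the planner.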
sql = "create table db.nt (ts timestamp, c1 int primary key, c2 int)"
tdSql.execute(sql, queryTimes=1)
sql = 'select diff (ts) from (select * from db.tt union select * from db.tt order by c1, case when ts < now - 1h then ts + 1h else ts end) partition by c1, case when ts < now - 1h then ts + 1h else ts end'
tdSql.error(sql, -2147473917)
def run(self):
tdSql.prepare()
self.test_TS_5630()
tdLog.printNoPrefix("==========step1:create table")
self.__create_tb()
tdLog.printNoPrefix("==========step2:insert data")
self.rows = 10
self.__insert_data(self.rows)
tdLog.printNoPrefix("==========step3:all check")
self.all_test()
tdSql.execute("flush database db")
tdSql.execute("use db")
tdLog.printNoPrefix("==========step4:after wal, all check again ")
self.all_test()
self.test_TD_33137()
self.test_case_for_nodes_match_node()
def test_TD_33137(self):
sql = "select 'asd' union all select 'asdasd'"
tdSql.query(sql, queryTimes=1)
tdSql.checkRows(2)
sql = "select db_name `TABLE_CAT`, '' `TABLE_SCHEM`, stable_name `TABLE_NAME`, 'TABLE' `TABLE_TYPE`, table_comment `REMARKS` from information_schema.ins_stables union all select db_name `TABLE_CAT`, '' `TABLE_SCHEM`, table_name `TABLE_NAME`, case when `type`='SYSTEM_TABLE' then 'TABLE' when `type`='NORMAL_TABLE' then 'TABLE' when `type`='CHILD_TABLE' then 'TABLE' else 'UNKNOWN' end `TABLE_TYPE`, table_comment `REMARKS` from information_schema.ins_tables union all select db_name `TABLE_CAT`, '' `TABLE_SCHEM`, view_name `TABLE_NAME`, 'VIEW' `TABLE_TYPE`, NULL `REMARKS` from information_schema.ins_views"
tdSql.query(sql, queryTimes=1)
tdSql.checkRows(49)
sql = "select null union select null"
tdSql.query(sql, queryTimes=1)
tdSql.checkRows(1)
tdSql.checkData(0, 0, None)
sql = "select null union all select null"
tdSql.query(sql, queryTimes=1)
tdSql.checkRows(2)
tdSql.checkData(0, 0, None)
tdSql.checkData(1, 0, None)
sql = "select null union select 1"
tdSql.query(sql, queryTimes=1)
tdSql.checkRows(2)
tdSql.checkData(0, 0, None)
tdSql.checkData(1, 0, 1)
sql = "select null union select 'asd'"
tdSql.query(sql, queryTimes=1)
tdSql.checkRows(2)
tdSql.checkData(0, 0, None)
tdSql.checkData(1, 0, 'asd')
def stop(self):
tdSql.close()
tdLog.success(f"{__file__} successfully executed")
tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())

View File

@ -0,0 +1,602 @@
import taos
import os
import sys
import time
from pathlib import Path
sys.path.append(os.path.dirname(Path(__file__).resolve().parent.parent.parent) + "/7-tmq")
from util.log import *
from util.sql import *
from util.cases import *
from util.dnodes import *
from util.common import *
from util.sqlset import *
from tmqCommon import *
class TDTestCase:
"""This test case is used to veirfy the tmq consume data from non marterial view
"""
def init(self, conn, logSql, replicaVar=1):
self.replicaVar = int(replicaVar)
tdLog.debug(f"start to excute {__file__}")
tdSql.init(conn.cursor())
self.setsql = TDSetSql()
# db info
self.dbname = "view_db"
self.stbname = 'stb'
self.ctbname_list = ["ct1", "ct2"]
self.stable_column_dict = {
'ts': 'timestamp',
'col1': 'float',
'col2': 'int',
}
self.tag_dict = {
'ctbname': 'binary(10)'
}
def prepare_data(self, conn=None):
"""Create the db and data for test
"""
tdLog.debug("Start to prepare the data")
if not conn:
conn = tdSql
# create database
conn.execute(f"create database {self.dbname}")
conn.execute(f"use {self.dbname}")
time.sleep(2)
# create stable
conn.execute(self.setsql.set_create_stable_sql(self.stbname, self.stable_column_dict, self.tag_dict))
tdLog.debug("Create stable {} successfully".format(self.stbname))
# create child tables
for ctname in self.ctbname_list:
conn.execute(f"create table {ctname} using {self.stbname} tags('{ctname}');")
tdLog.debug("Create child table {} successfully".format(ctname))
# insert data into child tables
conn.execute(f"insert into {ctname} values(now, 1.1, 1)(now+1s, 2.2, 2)(now+2s, 3.3, 3)(now+3s, 4.4, 4)(now+4s, 5.5, 5)(now+5s, 6.6, 6)(now+6s, 7.7, 7)(now+7s, 8.8, 8)(now+8s, 9.9, 9)(now+9s, 10.1, 10);)")
tdLog.debug(f"Insert into data to {ctname} successfully")
def prepare_tmq_data(self, para_dic):
tdLog.debug("Start to prepare the tmq data")
tmqCom.initConsumerTable()
tdCom.create_database(tdSql, para_dic["dbName"], para_dic["dropFlag"], vgroups=para_dic["vgroups"], replica=1)
tdLog.info("create stb")
tdCom.create_stable(tdSql, dbname=para_dic["dbName"], stbname=para_dic["stbName"], column_elm_list=para_dic['colSchema'], tag_elm_list=para_dic['tagSchema'])
tdLog.info("create ctb")
tdCom.create_ctable(tdSql, dbname=para_dic["dbName"], stbname=para_dic["stbName"],tag_elm_list=para_dic['tagSchema'], count=para_dic["ctbNum"], default_ctbname_prefix=para_dic['ctbPrefix'])
tdLog.info("insert data")
tmqCom.insert_data(tdSql, para_dic["dbName"], para_dic["ctbPrefix"], para_dic["ctbNum"], para_dic["rowsPerTbl"], para_dic["batchNum"], para_dic["startTs"])
tdLog.debug("Finish to prepare the tmq data")
def check_view_num(self, num):
tdSql.query("show views;")
rows = tdSql.queryRows
assert(rows == num)
tdLog.debug(f"Verify the view number successfully")
def create_user(self, username, password):
tdSql.execute(f"create user {username} pass '{password}';")
tdSql.execute(f"alter user {username} createdb 1;")
tdLog.debug("Create user {} with password {} successfully".format(username, password))
def check_permissions(self, username, db_name, permission_dict, view_name=None):
"""
:param permission_dict: {'db': ["read", "write"], 'view': ["read", "write", "alter"]}
"""
tdSql.query("select * from information_schema.ins_user_privileges;")
for item in permission_dict.keys():
if item == "db":
for permission in permission_dict[item]:
assert((username, permission, db_name, "", "", "") in tdSql.queryResult)
tdLog.debug(f"Verify the {item} {db_name} {permission} permission successfully")
elif item == "view":
for permission in permission_dict[item]:
assert((username, permission, db_name, view_name, "", "view") in tdSql.queryResult)
tdLog.debug(f"Verify the {item} {db_name} {view_name} {permission} permission successfully")
else:
raise Exception(f"Invalid permission type: {item}")
def test_create_view_from_one_database(self):
"""This test case is used to verify the create view from one database
"""
self.prepare_data()
tdSql.execute(f"create view v1 as select * from {self.stbname};")
self.check_view_num(1)
tdSql.error(f'create view v1 as select * from {self.stbname};', expectErrInfo='view already exists in db')
tdSql.error(f'create view db2.v2 as select * from {self.stbname};', expectErrInfo='Fail to get table info, error: Database not exist')
tdSql.error(f'create view v2 as select c2 from {self.stbname};', expectErrInfo='Invalid column name: c2')
tdSql.error(f'create view v2 as select ts, col1 from tt1;', expectErrInfo='Fail to get table info, error: Table does not exist')
tdSql.execute(f"drop database {self.dbname}")
tdLog.debug("Finish test case 'test_create_view_from_one_database'")
def test_create_view_from_multi_database(self):
"""This test case is used to verify the create view from multi database
"""
self.prepare_data()
tdSql.execute(f"create view v1 as select * from view_db.{self.stbname};")
self.check_view_num(1)
self.dbname = "view_db2"
self.prepare_data()
tdSql.execute(f"create view v1 as select * from view_db2.{self.stbname};")
tdSql.execute(f"create view v2 as select * from view_db.v1;")
self.check_view_num(2)
self.dbname = "view_db"
tdSql.execute(f"drop database view_db;")
tdSql.execute(f"drop database view_db2;")
tdLog.debug("Finish test case 'test_create_view_from_multi_database'")
def test_create_view_name_params(self):
"""This test case is used to verify the create view with different view name params
"""
self.prepare_data()
tdSql.execute(f"create view v1 as select * from {self.stbname};")
self.check_view_num(1)
tdSql.error(f"create view v/2 as select * from {self.stbname};", expectErrInfo='syntax error near "/2 as select * from stb;"')
tdSql.execute(f"create view v2 as select ts, col1 from {self.stbname};")
self.check_view_num(2)
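# Boundary check: a 192-character view name (the maximum identifier length)
# should be accepted, while the 193-character variant below must be rejected
# as an invalid identifier.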
view_name_192_characters = "rzuoxoIXilAGgzNjYActiQwgzZK7PZYpDuaOe1lSJMFMVYXaexh1OfMmk3LvJcQbTeXXW7uGJY8IHuweHF73VHgoZgf0waO33YpZiTKfDQbdWtN4YmR2eWjL84ZtkfjM4huCP6lCysbDMj8YNwWksTdUq70LIyNhHp2V8HhhxyYSkREYFLJ1kOE78v61MQT6"
tdSql.execute(f"create view {view_name_192_characters} as select * from {self.stbname};")
self.check_view_num(3)
tdSql.error(f"create view {view_name_192_characters}1 as select * from {self.stbname};", expectErrInfo='Invalid identifier name: rzuoxoixilaggznjyactiqwgzzk7pzypduaoe1lsjmfmvyxaexh1ofmmk3lvjcqbtexxw7ugjy8ihuwehf73vhgozgf0wao33ypzitkfdqbdwtn4ymr2ewjl84ztkfjm4hucp6lcysbdmj8ynwwkstduq70liynhhp2v8hhhxyyskreyflj1koe78v61mqt61 as select * from stb;')
tdSql.execute(f"drop database {self.dbname}")
tdLog.debug("Finish test case 'test_create_view_name_params'")
def test_create_view_query(self):
"""This test case is used to verify the create view with different data type in query
"""
self.prepare_data()
# create a table covering all supported column data types
tdSql.execute(f"create table tb (ts timestamp, c1 int, c2 int unsigned, c3 bigint, c4 bigint unsigned, c5 float, c6 double, c7 binary(16), c8 smallint, c9 smallint unsigned, c10 tinyint, c11 tinyint unsigned, c12 bool, c13 varchar(16), c14 nchar(8), c15 geometry(21), c16 varbinary(16));")
tdSql.execute(f"create view v1 as select ts, c1, c2, c3, c4, c5, c6, c7, c8, c9, c10, c11, c12, c13, c14, c15, c16 from tb;")
# check data type in create view sql
tdSql.query("desc v1;")
res = tdSql.queryResult
data_type_list = [res[index][1] for index in range(len(res))]
tdLog.debug(data_type_list)
assert('TIMESTAMP' in data_type_list and 'INT' in data_type_list and 'INT UNSIGNED' in data_type_list and 'BIGINT' in data_type_list and 'BIGINT UNSIGNED' in data_type_list and 'FLOAT' in data_type_list and 'DOUBLE' in data_type_list and 'VARCHAR' in data_type_list and 'SMALLINT' in data_type_list and 'SMALLINT UNSIGNED' in data_type_list and 'TINYINT' in data_type_list and 'TINYINT UNSIGNED' in data_type_list and 'BOOL' in data_type_list and 'VARCHAR' in data_type_list and 'NCHAR' in data_type_list and 'GEOMETRY' in data_type_list and 'VARBINARY' in data_type_list)
tdSql.execute("create view v2 as select * from tb where c1 >5 and c7 like '%ab%';")
self.check_view_num(2)
tdSql.error("create view v3 as select * from tb where c1 like '%ab%';", expectErrInfo='Invalid operation')
tdSql.execute("create view v3 as select first(ts), sum(c1) from tb group by c2 having avg(c4) > 0;")
tdSql.execute("create view v4 as select _wstart,sum(c6) from tb interval(10s);")
tdSql.execute("create view v5 as select * from tb join v2 on tb.ts = v2.ts;")
tdSql.execute("create view v6 as select * from (select ts, c1, c2 from (select * from v2));")
self.check_view_num(6)
for v in ['v1', 'v2', 'v3', 'v4', 'v5', 'v6']:
tdSql.execute(f"drop view {v};")
tdSql.execute(f"drop database {self.dbname}")
tdLog.debug("Finish test case 'test_create_view_query'")
def test_show_view(self):
"""This test case is used to verify the show view
"""
self.prepare_data()
tdSql.execute(f"create view v1 as select * from {self.ctbname_list[0]};")
# query from show sql
tdSql.query("show views;")
res = tdSql.queryResult
assert(res[0][0] == 'v1' and res[0][1] == 'view_db' and res[0][2] == 'root' and res[0][4] == 'NORMAL' and res[0][5] == 'select * from ct1;')
# show create sql
tdSql.query("show create view v1;")
res = tdSql.queryResult
assert(res[0][1] == 'CREATE VIEW `view_db`.`v1` AS select * from ct1;')
# query from desc results
tdSql.query("desc view_db.v1;")
res = tdSql.queryResult
assert(res[0][1] == 'TIMESTAMP' and res[1][1] == 'FLOAT' and res[2][1] == 'INT')
# query from system table
tdSql.query("select * from information_schema.ins_views;")
res = tdSql.queryResult
assert(res[0][0] == 'v1' and res[0][1] == 'view_db' and res[0][2] == 'root' and res[0][4] == 'NORMAL' and res[0][5] == 'select * from ct1;')
tdSql.error("show db3.views;", expectErrInfo='Database not exist')
tdSql.error("desc viewx;", expectErrInfo='Table does not exist')
tdSql.error(f"show create view {self.dbname}.viewx;", expectErrInfo='view not exists in db')
tdSql.execute(f"drop database {self.dbname}")
tdSql.error("show views;", expectErrInfo='Database not exist')
tdLog.debug("Finish test case 'test_show_view'")
def test_drop_view(self):
"""This test case is used to verify the drop view
"""
self.prepare_data()
self.dbname = "view_db2"
self.prepare_data()
tdSql.execute("create view view_db.v1 as select * from view_db.stb;")
tdSql.execute("create view view_db2.v1 as select * from view_db2.stb;")
# delete view without database name
tdSql.execute("drop view v1;")
# delete view with database name
tdSql.execute("drop view view_db.v1;")
# delete a non-existent view
tdSql.error("drop view view_db.v11;", expectErrInfo='view not exists in db')
tdSql.execute("drop database view_db")
tdSql.execute("drop database view_db2;")
self.dbname = "view_db"
tdLog.debug("Finish test case 'test_drop_view'")
def test_view_permission_db_all_view_all(self):
"""This test case is used to verify the view permission with db all and view all,
the time.sleep calls wait for the permission changes to take effect
"""
self.prepare_data()
username = "view_test"
password = "test123@#$"
self.create_user(username, password)
# grant all db permission to user
tdSql.execute("grant all on view_db.* to view_test;")
conn = taos.connect(user=username, password=password)
conn.execute(f"use {self.dbname};")
conn.execute("create view v1 as select * from stb;")
res = conn.query("show views;")
assert(len(res.fetch_all()) == 1)
tdLog.debug(f"Verify the show view permission of user '{username}' with db all and view all successfully")
self.check_permissions("view_test", "view_db", {"db": ["read", "write"], "view": ["read", "write", "alter"]}, "v1")
tdLog.debug(f"Verify the view permission from system table successfully")
time.sleep(2)
conn.execute("drop view v1;")
tdSql.execute("revoke all on view_db.* from view_test;")
tdSql.execute(f"drop database {self.dbname};")
time.sleep(1)
# prepare data by user 'view_test'
self.prepare_data(conn)
conn.execute("create view v1 as select * from stb;")
res = conn.query("show views;")
assert(len(res.fetch_all()) == 1)
tdLog.debug(f"Verify the view permission of user '{username}' with db all and view all successfully")
self.check_permissions("view_test", "view_db", {"db": ["read", "write"], "view": ["read", "write", "alter"]}, "v1")
tdLog.debug(f"Verify the view permission from system table successfully")
time.sleep(2)
conn.execute("drop view v1;")
tdSql.execute("revoke all on view_db.* from view_test;")
tdSql.execute("revoke all on view_db.v1 from view_test;")
tdSql.execute(f"drop database {self.dbname}")
tdSql.execute("drop user view_test;")
tdLog.debug("Finish test case 'test_view_permission_db_all_view_all'")
def test_view_permission_db_write_view_all(self):
"""This test case is used to verify the view permission with db write and view all
"""
username = "view_test"
password = "test123@#$"
self.create_user(username, password)
conn = taos.connect(user=username, password=password)
self.prepare_data(conn)
conn.execute("create view v1 as select * from stb;")
tdSql.execute("revoke read on view_db.* from view_test;")
self.check_permissions("view_test", "view_db", {"db": ["write"], "view": ["read", "write", "alter"]}, "v1")
# create view permission error
try:
conn.execute("create view v2 as select * from v1;")
except Exception as ex:
assert("[0x2644]: Permission denied or target object not exist" in str(ex))
# query from view permission error
try:
conn.query("select * from v1;")
except Exception as ex:
assert("[0x2644]: Permission denied or target object not exist" in str(ex))
# view query permission
res = conn.query("show views;")
assert(len(res.fetch_all()) == 1)
time.sleep(2)
conn.execute("drop view v1;")
tdSql.execute("revoke write on view_db.* from view_test;")
tdSql.execute(f"drop database {self.dbname}")
tdSql.execute("drop user view_test;")
tdLog.debug("Finish test case 'test_view_permission_db_write_view_all'")
def test_view_permission_db_write_view_read(self):
"""This test case is used to verify the view permission with db write and view read
"""
username = "view_test"
password = "test123@#$"
self.create_user(username, password)
conn = taos.connect(user=username, password=password)
self.prepare_data()
tdSql.execute("create view v1 as select * from stb;")
tdSql.execute("grant write on view_db.* to view_test;")
tdSql.execute("grant read on view_db.v1 to view_test;")
conn.execute(f"use {self.dbname};")
time.sleep(2)
res = conn.query("select * from v1;")
assert(len(res.fetch_all()) == 20)
conn.execute("create view v2 as select * from v1;")
# create view from super table of database
try:
conn.execute("create view v3 as select * from stb;")
except Exception as ex:
assert("[0x2644]: Permission denied or target object not exist" in str(ex))
time.sleep(2)
conn.execute("drop view v2;")
try:
conn.execute("drop view v1;")
except Exception as ex:
assert("[0x2644]: Permission denied or target object not exist" in str(ex))
tdSql.execute("revoke read on view_db.v1 from view_test;")
tdSql.execute("revoke write on view_db.* from view_test;")
tdSql.execute(f"drop database {self.dbname}")
tdSql.execute("drop user view_test;")
tdLog.debug("Finish test case 'test_view_permission_db_write_view_read'")
def test_view_permission_db_write_view_alter(self):
"""This test case is used to verify the view permission with db write and view alter
"""
username = "view_test"
password = "test123@#$"
self.create_user(username, password)
conn = taos.connect(user=username, password=password)
self.prepare_data()
tdSql.execute("create view v1 as select * from stb;")
tdSql.execute("grant write on view_db.* to view_test;")
tdSql.execute("grant alter on view_db.v1 to view_test;")
try:
conn.execute(f"use {self.dbname};")
conn.execute("select * from v1;")
except Exception as ex:
assert("[0x2644]: Permission denied or target object not exist" in str(ex))
time.sleep(2)
conn.execute("drop view v1;")
tdSql.execute("revoke write on view_db.* from view_test;")
tdSql.execute(f"drop database {self.dbname}")
tdSql.execute("drop user view_test;")
tdLog.debug("Finish test case 'test_view_permission_db_write_view_alter'")
def test_view_permission_db_read_view_all(self):
"""This test case is used to verify the view permission with db read and view all
"""
username = "view_test"
password = "test123@#$"
self.create_user(username, password)
conn = taos.connect(user=username, password=password)
self.prepare_data()
tdSql.execute("create view v1 as select * from stb;")
tdSql.execute("grant read on view_db.* to view_test;")
tdSql.execute("grant all on view_db.v1 to view_test;")
try:
conn.execute(f"use {self.dbname};")
conn.execute("create view v2 as select * from v1;")
except Exception as ex:
assert("[0x2644]: Permission denied or target object not exist" in str(ex))
time.sleep(2)
res = conn.query("select * from v1;")
assert(len(res.fetch_all()) == 20)
conn.execute("drop view v1;")
tdSql.execute("revoke read on view_db.* from view_test;")
tdSql.execute(f"drop database {self.dbname}")
tdSql.execute("drop user view_test;")
tdLog.debug("Finish test case 'test_view_permission_db_read_view_all'")
def test_view_permission_db_read_view_alter(self):
"""This test case is used to verify the view permission with db read and view alter
"""
username = "view_test"
password = "test123@#$"
self.create_user(username, password)
conn = taos.connect(user=username, password=password)
self.prepare_data()
tdSql.execute("create view v1 as select * from stb;")
tdSql.execute("grant read on view_db.* to view_test;")
tdSql.execute("grant alter on view_db.v1 to view_test;")
try:
conn.execute(f"use {self.dbname};")
conn.execute("select * from v1;")
except Exception as ex:
assert("[0x2644]: Permission denied or target object not exist" in str(ex))
time.sleep(2)
conn.execute("drop view v1;")
tdSql.execute("revoke read on view_db.* from view_test;")
tdSql.execute(f"drop database {self.dbname}")
tdSql.execute("drop user view_test;")
tdLog.debug("Finish test case 'test_view_permission_db_read_view_alter'")
def test_view_permission_db_read_view_read(self):
"""This test case is used to verify the view permission with db read and view read
"""
username = "view_test"
password = "test123@#$"
self.create_user(username, password)
conn = taos.connect(user=username, password=password)
self.prepare_data()
tdSql.execute("create view v1 as select * from stb;")
tdSql.execute("grant read on view_db.* to view_test;")
tdSql.execute("grant read on view_db.v1 to view_test;")
conn.execute(f"use {self.dbname};")
time.sleep(2)
res = conn.query("select * from v1;")
assert(len(res.fetch_all()) == 20)
try:
conn.execute("drop view v1;")
except Exception as ex:
assert("[0x2644]: Permission denied or target object not exist" in str(ex))
tdSql.execute("revoke read on view_db.* from view_test;")
tdSql.execute("revoke read on view_db.v1 from view_test;")
tdSql.execute(f"drop database {self.dbname}")
tdSql.execute("drop user view_test;")
tdLog.debug("Finish test case 'test_view_permission_db_read_view_read'")
def test_query_from_view(self):
"""This test case is used to verify the query from view
"""
self.prepare_data()
view_name_list = []
# common query from super table
tdSql.execute(f"create view v1 as select * from {self.stbname};")
tdSql.query(f"select * from v1;")
rows = tdSql.queryRows
assert(rows == 20)
view_name_list.append("v1")
tdLog.debug("Verify the query from super table successfully")
# common query from child table
tdSql.execute(f"create view v2 as select * from {self.ctbname_list[0]};")
tdSql.query(f"select * from v2;")
rows = tdSql.queryRows
assert(rows == 10)
view_name_list.append("v2")
tdLog.debug("Verify the query from child table successfully")
# join query
tdSql.execute(f"create view v3 as select * from {self.stbname} join {self.ctbname_list[1]} on {self.ctbname_list[1]}.ts = {self.stbname}.ts;")
tdSql.query(f"select * from v3;")
rows = tdSql.queryRows
assert(rows == 10)
view_name_list.append("v3")
tdLog.debug("Verify the join query successfully")
# group by query
tdSql.execute(f"create view v4 as select count(*) from {self.stbname} group by tbname;")
tdSql.query(f"select * from v4;")
rows = tdSql.queryRows
assert(rows == 2)
res = tdSql.queryResult
assert(res[0][0] == 10)
view_name_list.append("v4")
tdLog.debug("Verify the group by query successfully")
# partition by query
tdSql.execute(f"create view v5 as select sum(col1) from {self.stbname} where col2 > 4 partition by tbname interval(3s);")
tdSql.query(f"select * from v5;")
rows = tdSql.queryRows
assert(rows >= 4)
view_name_list.append("v5")
tdLog.debug("Verify the partition by query successfully")
# query from nested view
tdSql.execute(f"create view v6 as select * from v5;")
tdSql.query(f"select * from v6;")
rows = tdSql.queryRows
assert(rows >= 4)
view_name_list.append("v6")
tdLog.debug("Verify the query from nested view successfully")
# delete view
for view in view_name_list:
tdSql.execute(f"drop view {view};")
tdLog.debug(f"Drop view {view} successfully")
tdSql.execute(f"drop database {self.dbname}")
tdLog.debug("Finish test case 'test_query_from_view'")
def test_tmq_from_view(self):
"""This test case is used to verify the tmq consume data from view
"""
# params for db
paraDict = {'dbName': 'view_db',
'dropFlag': 1,
'event': '',
'vgroups': 4,
'stbName': 'stb',
'colPrefix': 'c',
'tagPrefix': 't',
'colSchema': [{'type': 'INT', 'count':1}, {'type': 'binary', 'len':20, 'count':1}],
'tagSchema': [{'type': 'INT', 'count':1}, {'type': 'binary', 'len':20, 'count':1}],
'ctbPrefix': 'ctb',
'ctbNum': 1,
'rowsPerTbl': 10000,
'batchNum': 10,
'startTs': 1640966400000, # 2022-01-01 00:00:00.000
'pollDelay': 10,
'showMsg': 1,
'showRow': 1}
# topic info
topic_name_list = ['topic1']
view_name_list = ['view1']
expectRowsList = []
self.prepare_tmq_data(paraDict)
# init consume info, and start tmq_sim, then check consume result
tmqCom.initConsumerTable()
queryString = "select * from %s.%s"%(paraDict['dbName'], paraDict['stbName'])
tdSql.execute(f"create view {view_name_list[0]} as {queryString}")
sqlString = "create topic %s as %s" %(topic_name_list[0], "select * from %s"%view_name_list[0])
tdLog.info("create topic sql: %s"%sqlString)
tdSql.execute(sqlString)
tdSql.query(queryString)
expectRowsList.append(tdSql.getRows())
consumerId = 1
topicList = topic_name_list[0]
expectrowcnt = paraDict["rowsPerTbl"] * paraDict["ctbNum"]
keyList = 'group.id:cgrp1, enable.auto.commit:false, auto.commit.interval.ms:6000, auto.offset.reset:earliest'
ifcheckdata = 1
ifManualCommit = 1
tmqCom.insertConsumerInfo(consumerId, expectrowcnt, topicList, keyList, ifcheckdata, ifManualCommit)
tdLog.info("start consume processor")
tmqCom.startTmqSimProcess(paraDict['pollDelay'], paraDict["dbName"], paraDict['showMsg'], paraDict['showRow'])
tdLog.info("wait the consume result")
expectRows = 1
resultList = tmqCom.selectConsumeResult(expectRows)
if expectRowsList[0] != resultList[0]:
tdLog.info("expect consume rows: %d, act consume rows: %d"%(expectRowsList[0], resultList[0]))
tdLog.exit("1 tmq consume rows error!")
tmqCom.checkFileContent(consumerId, queryString)
time.sleep(10)
for i in range(len(topic_name_list)):
tdSql.query("drop topic %s"%topic_name_list[i])
for i in range(len(view_name_list)):
tdSql.query("drop view %s"%view_name_list[i])
# drop database
tdSql.execute(f"drop database {paraDict['dbName']}")
tdSql.execute("drop database cdb;")
tdLog.debug("Finish test case 'test_tmq_from_view'")
def test_TD_33390(self):
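# Regression test for TD-33390: creating 200 views on a single normal table,
# listing them via "show views", and dropping them all should succeed.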
tdSql.execute('create database test')
tdSql.execute('create table test.nt(ts timestamp, c1 int)')
for i in range(0, 200):
tdSql.execute(f'create view test.view{i} as select * from test.nt')
tdSql.query("show test.views")
for i in range(0, 200):
tdSql.execute(f'drop view test.view{i}')
def run(self):
self.test_TD_33390()
self.test_create_view_from_one_database()
self.test_create_view_from_multi_database()
self.test_create_view_name_params()
self.test_create_view_query()
self.test_show_view()
self.test_drop_view()
self.test_view_permission_db_all_view_all()
self.test_view_permission_db_write_view_all()
self.test_view_permission_db_write_view_read()
self.test_view_permission_db_write_view_alter()
self.test_view_permission_db_read_view_all()
self.test_view_permission_db_read_view_alter()
self.test_view_permission_db_read_view_read()
self.test_query_from_view()
self.test_tmq_from_view()
def stop(self):
tdSql.close()
tdLog.success(f"{__file__} successfully executed")
tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())

View File

@ -0,0 +1,155 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import taos
from taos.tmq import *
from util.cases import *
from util.common import *
from util.log import *
from util.sql import *
from util.sqlset import *
class TDTestCase:
def init(self, conn, logSql, replicaVar=1):
self.replicaVar = int(replicaVar)
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor())
self.setsql = TDSetSql()
self.stbname = 'stb'
self.user_name = 'test'
self.binary_length = 20 # the length of binary for column_dict
self.nchar_length = 20 # the length of nchar for column_dict
self.dbnames = ['db1', 'db2']
self.column_dict = {
'ts': 'timestamp',
'col1': 'float',
'col2': 'int',
'col3': 'float',
}
self.tag_dict = {
't1': 'int',
't2': f'binary({self.binary_length})'
}
self.tag_list = [
f'1, "Beijing"',
f'2, "Shanghai"',
f'3, "Guangzhou"',
f'4, "Shenzhen"'
]
self.values_list = [
f'now, 9.1, 200, 0.3'
]
self.tbnum = 4
self.stbnum_grant = 200
def create_user(self):
tdSql.execute(f'create user {self.user_name} pass "test123@#$"')
tdSql.execute(f'grant read on {self.dbnames[0]}.{self.stbname} with t2 = "Beijing" to {self.user_name}')
tdSql.execute(f'grant write on {self.dbnames[1]}.{self.stbname} with t1 = 2 to {self.user_name}')
def prepare_data(self):
for db in self.dbnames:
tdSql.execute(f"create database {db}")
tdSql.execute(f"use {db}")
tdSql.execute(self.setsql.set_create_stable_sql(self.stbname, self.column_dict, self.tag_dict))
for i in range(self.tbnum):
tdSql.execute(f'create table {self.stbname}_{i} using {self.stbname} tags({self.tag_list[i]})')
for j in self.values_list:
tdSql.execute(f'insert into {self.stbname}_{i} values({j})')
for i in range(self.stbnum_grant):
tdSql.execute(f'create table {self.stbname}_grant_{i} (ts timestamp, c0 int) tags(t0 int)')
def user_read_privilege_check(self, dbname):
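# Connect as the restricted user and verify the tag-scoped read grant
# (t2 = 'Beijing'): only one row was inserted under that tag, so the count
# must be exactly 1 and the query must not raise a permission error.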
testconn = taos.connect(user='test', password='test123@#$')
expectErrNotOccured = False
try:
sql = f"select count(*) from {dbname}.stb where t2 = 'Beijing'"
res = testconn.query(sql)
data = res.fetch_all()
count = data[0][0]
except BaseException:
expectErrNotOccured = True
if expectErrNotOccured:
caller = inspect.getframeinfo(inspect.stack()[1][0])
tdLog.exit(f"{caller.filename}({caller.lineno}) failed: sql:{sql}, expect error not occured")
elif count != 1:
tdLog.exit(f"{sql}, expect result doesn't match")
pass
def user_write_privilege_check(self, dbname):
testconn = taos.connect(user='test', password='test123@#$')
expectErrNotOccured = False
try:
sql = f"insert into {dbname}.stb_1 values(now, 1.1, 200, 0.3)"
testconn.execute(sql)
except BaseException:
expectErrNotOccured = True
if expectErrNotOccured:
caller = inspect.getframeinfo(inspect.stack()[1][0])
tdLog.exit(f"{caller.filename}({caller.lineno}) failed: sql:{sql}, expect error not occured")
else:
pass
def user_privilege_error_check(self):
testconn = taos.connect(user='test', password='test123@#$')
expectErrNotOccured = False
sql_list = [f"alter talbe {self.dbnames[0]}.stb_1 set t2 = 'Wuhan'",
f"insert into {self.dbnames[0]}.stb_1 values(now, 1.1, 200, 0.3)",
f"drop table {self.dbnames[0]}.stb_1",
f"select count(*) from {self.dbnames[1]}.stb"]
for sql in sql_list:
try:
res = testconn.execute(sql)
except BaseException:
expectErrNotOccured = True
if expectErrNotOccured:
pass
else:
caller = inspect.getframeinfo(inspect.stack()[1][0])
tdLog.exit(f"{caller.filename}({caller.lineno}) failed: sql:{sql}, expect error not occured")
pass
def user_privilege_grant_check(self):
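# Stress the grant path: issue read and write grants for each of the 200
# pre-created stables in both databases and expect every statement to succeed.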
for db in self.dbnames:
tdSql.execute(f"use {db}")
for i in range(self.stbnum_grant):
tdSql.execute(f'grant read on {db}.{self.stbname}_grant_{i} to {self.user_name}')
tdSql.execute(f'grant write on {db}.{self.stbname}_grant_{i} to {self.user_name}')
def run(self):
self.prepare_data()
self.create_user()
self.user_read_privilege_check(self.dbnames[0])
self.user_write_privilege_check(self.dbnames[1])
self.user_privilege_error_check()
self.user_privilege_grant_check()
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())

View File

@ -0,0 +1,388 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
import time
import random
import taos
import frame
import frame.etool
from frame.log import *
from frame.cases import *
from frame.sql import *
from frame.caseBase import *
from frame import *
from frame.autogen import *
class TDTestCase(TBase):
updatecfgDict = {
"compressMsgSize" : "100",
}
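# Assumption: with compressMsgSize set to 100, the server compresses RPC
# messages larger than 100 bytes, so transport-level compression is exercised
# alongside the per-column ENCODE/COMPRESS options under test.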
# compress
compresses = ["lz4","zlib","zstd","disabled","xz"]
compressDefaultDict = {}
compressDefaultDict["BOOL"] = "zstd"
compressDefaultDict["TINYINT"] = "zlib"
compressDefaultDict["SMALLINT"] = "zlib"
compressDefaultDict["INT"] = "lz4"
compressDefaultDict["BIGINT"] = "lz4"
compressDefaultDict["FLOAT"] = "lz4"
compressDefaultDict["DOUBLE"] = "lz4"
compressDefaultDict["VARCHAR"] = "zstd"
compressDefaultDict["TIMESTAMP"] = "lz4"
compressDefaultDict["NCHAR"] = "zstd"
compressDefaultDict["TINYINT UNSIGNED"] = "zlib"
compressDefaultDict["SMALLINT UNSIGNED"] = "zlib"
compressDefaultDict["INT UNSIGNED"] = "lz4"
compressDefaultDict["BIGINT UNSIGNED"] = "lz4"
compressDefaultDict["NCHAR"] = "zstd"
compressDefaultDict["BLOB"] = "lz4"
compressDefaultDict["VARBINARY"] = "zstd"
# level
levels = ["high","medium","low"]
# default compress
defCompress = "lz4"
# default level
defLevel = "medium"
# datatype 17
dtypes = [ "tinyint","tinyint unsigned","smallint","smallint unsigned","int","int unsigned",
"bigint","bigint unsigned","timestamp","bool","float","double","binary(16)","nchar(16)",
"varchar(16)","varbinary(16)"]
# encode
encodes = [
[["tinyint","tinyint unsigned","smallint","smallint unsigned","int","int unsigned","bigint","bigint unsigned"], ["simple8B"]],
[["timestamp","bigint","bigint unsigned"], ["Delta-i"]],
[["bool"], ["Bit-packing"]],
[["float","double"], ["Delta-d"]]
]
def combineValid(self, datatype, encode, compress):
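# The lossy 'tsz' compressor only applies to float/double columns; every
# other (datatype, 'tsz') pairing is treated as invalid and skipped.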
if datatype != "float" and datatype != "double":
if compress == "tsz":
return False
return True
def genAllSqls(self, stbName, max):
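# Enumerate the valid (datatype, encode, compress, level) combinations from
# self.encodes / self.compresses / self.levels, packing at most `max` columns
# into each generated "create table ... ENCODE ... COMPRESS ... LEVEL ..."
# statement, and return the list of SQL strings.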
c = 0 # column number
t = 0 # table number
sqls = []
sql = ""
# loop append sqls
for lines in self.encodes:
for datatype in lines[0]:
for encode in lines[1]:
for compress in self.compresses:
for level in self.levels:
if sql == "":
# first
sql = f"create table {self.db}.st{t} (ts timestamp"
else:
if self.combineValid(datatype, encode, compress):
sql += f", c{c} {datatype} ENCODE '{encode}' COMPRESS '{compress}' LEVEL '{level}'"
c += 1
if c >= max:
# append sqls
sql += f") tags(groupid int) "
sqls.append(sql)
# reset
sql = ""
c = 0
t += 1
# after the loops, flush any remaining columns into a final create statement
if c > 0:
# append sqls
sql += f") tags(groupid int) "
sqls.append(sql)
return sqls
# check error create
def errorCreate(self):
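# Per the encode/compress rules above: 'tsz' pairs only with float/double,
# 'bit-packing' only with bool, and 'delta-d' only with float/double, so each
# of these create statements must be rejected.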
sqls = [
f"create table terr(ts timestamp, c0 int ENCODE 'simple8B' COMPRESS 'tsz' LEVEL 'high') ",
f"create table terr(ts timestamp, bi bigint encode 'bit-packing') tags (area int);"
f"create table terr(ts timestamp, ic int encode 'delta-d') tags (area int);"
]
tdSql.errors(sqls)
for dtype in self.dtypes:
# encode
sql = f"create table terr(ts timestamp, c0 {dtype} ENCODE 'abc') "
tdSql.error(sql)
# compress
sql = f"create table terr(ts timestamp, c0 {dtype} COMPRESS 'def') "
tdSql.error(sql)
# level
sql = f"create table terr(ts timestamp, c0 {dtype} LEVEL 'hig') "
tdSql.error(sql)
# tsz check
if dtype != "float" and dtype != "double":
sql = f"create table terr(ts timestamp, c0 {dtype} COMPRESS 'tsz') "
tdSql.error(sql)
# default value correct
def defaultCorrect(self):
# get default encode compress level
sql = f"describe {self.db}.{self.stb}"
tdSql.query(sql)
# see AutoGen.types
defEncodes = [ "delta-i","delta-i","simple8b","simple8b","simple8b","simple8b","simple8b","simple8b",
"simple8b","simple8b","delta-d","delta-d","bit-packing",
"disabled","disabled","disabled","disabled"]
count = tdSql.getRows()
for i in range(count):
node = tdSql.getData(i, 3)
if node == "TAG":
break
# check
tdLog.info(f"check default encode {tdSql.getData(i, 1)}")
#tdLog.info(f"check default encode compressDefaultDict[tdSql.getData(i, 2)]")
defaultValue = self.compressDefaultDict.get(tdSql.getData(i, 1))
if defaultValue is None:
defaultValue = self.defCompress
tdLog.info(f"check default compress {tdSql.getData(i, 1)} {defaultValue}")
tdSql.checkData(i, 5, defaultValue)
tdSql.checkData(i, 6, self.defLevel)
# geometry encode is disabled
sql = f"create table {self.db}.ta(ts timestamp, pos geometry(64)) "
tdSql.execute(sql)
sql = f"describe {self.db}.ta"
tdSql.query(sql)
tdSql.checkData(1, 4, "disabled")
tdLog.info("check default encode compress and level successfully.")
def checkDataDesc(self, tbname, row, col, value):
sql = f"describe {tbname}"
tdSql.query(sql)
tdSql.checkData(row, col, value)
def writeData(self, count):
self.autoGen.insert_data(count, True)
# alter encode compress level
def checkAlter(self):
tbname = f"{self.db}.{self.stb}"
# alter encode 4
comp = "delta-i"
sql = f"alter table {tbname} modify column c7 ENCODE '{comp}';"
tdSql.execute(sql, show=True)
self.checkDataDesc(tbname, 8, 4, comp)
self.writeData(1000)
sql = f"alter table {tbname} modify column c8 ENCODE '{comp}';"
tdSql.execute(sql, show=True)
self.checkDataDesc(tbname, 9, 4, comp)
self.writeData(1000)
# alter compress 5
comps = self.compresses[2:]
comps.append(self.compresses[0]) # add lz4
for comp in comps:
for i in range(self.colCnt - 1):
self.writeData(1000)
# alter float(c9) double(c10) to tsz
comp = "tsz"
sql = f"alter table {tbname} modify column c9 COMPRESS '{comp}';"
tdSql.execute(sql, show=True)
self.checkDataDesc(tbname, 10, 5, comp)
self.writeData(10000)
sql = f"alter table {tbname} modify column c10 COMPRESS '{comp}';"
tdSql.execute(sql, show=True)
self.checkDataDesc(tbname, 11, 5, comp)
self.writeData(10000)
# alter level 6
for level in self.levels:
for i in range(self.colCnt - 1):
col = f"c{i}"
sql = f"alter table {tbname} modify column {col} LEVEL '{level}';"
tdSql.execute(sql, show=True)
self.checkDataDesc(tbname, i + 1, 6, level)
self.writeData(1000)
# modify two combine
i = 9
encode = "delta-d"
compress = "zlib"
sql = f"alter table {tbname} modify column c{i} ENCODE '{encode}' COMPRESS '{compress}';"
tdSql.execute(sql, show=True)
self.checkDataDesc(tbname, i + 1, 4, encode)
self.checkDataDesc(tbname, i + 1, 5, compress)
i = 10
encode = "delta-d"
level = "high"
sql = f"alter table {tbname} modify column c{i} ENCODE '{encode}' LEVEL '{level}';"
tdSql.execute(sql, show=True)
self.checkDataDesc(tbname, i + 1, 4, encode)
self.checkDataDesc(tbname, i + 1, 6, level)
i = 2
compress = "zlib"
level = "high"
sql = f"alter table {tbname} modify column c{i} COMPRESS '{compress}' LEVEL '{level}';"
tdSql.execute(sql, show=True)
self.checkDataDesc(tbname, i + 1, 5, compress)
self.checkDataDesc(tbname, i + 1, 6, level)
# modify three combine
i = 7
encode = "simple8b"
compress = "zstd"
level = "medium"
sql = f"alter table {tbname} modify column c{i} ENCODE '{encode}' COMPRESS '{compress}' LEVEL '{level}';"
tdSql.execute(sql, show=True)
self.checkDataDesc(tbname, i + 1, 4, encode)
self.checkDataDesc(tbname, i + 1, 5, compress)
self.checkDataDesc(tbname, i + 1, 6, level)
# alter error
sqls = [
"alter table nodb.nostb modify column ts LEVEL 'high';",
"alter table db.stb modify column ts encode 'simple8b';",
"alter table db.stb modify column c1 compress 'errorcompress';",
"alter table db.stb modify column c2 level 'errlevel';",
"alter table db.errstb modify column c3 compress 'xz';"
]
tdSql.errors(sqls)
# add column
def checkAddColumn(self):
c = 0
tbname = f"{self.db}.tbadd"
sql = f"create table {tbname}(ts timestamp, c0 int) tags(area int);"
tdSql.execute(sql)
# loop append sqls
for lines in self.encodes:
for datatype in lines[0]:
for encode in lines[1]:
for compress in self.compresses:
for level in self.levels:
if self.combineValid(datatype, encode, compress):
sql = f"alter table {tbname} add column col{c} {datatype} ENCODE '{encode}' COMPRESS '{compress}' LEVEL '{level}';"
tdSql.execute(sql, 3, True)
c += 1
# alter error
sqls = [
f"alter table {tbname} add column a1 int ONLYOPTION",
f"alter table {tbname} add column a1 int 'simple8b';",
f"alter table {tbname} add column a1 int WRONG 'simple8b';",
f"alter table {tbname} add column a1 int 123456789 'simple8b';",
f"alter table {tbname} add column a1 int WRONGANDVERYLONG 'simple8b';",
f"alter table {tbname} add column a1 int ENCODE 'veryveryveryveryveryverylong';",
f"alter table {tbname} add column a1 int ENCODE 'simple8bAA';",
f"alter table {tbname} add column a2 int COMPRESS 'AABB';",
f"alter table {tbname} add column a3 bigint LEVEL 'high1';",
f"alter table {tbname} add column a4 BINARY(12) ENCODE 'simple8b' LEVEL 'high2';",
f"alter table {tbname} add column a5 VARCHAR(16) ENCODE 'simple8b' COMPRESS 'gzip' LEVEL 'high3';"
]
tdSql.errors(sqls)
def validCreate(self):
sqls = self.genAllSqls(self.stb, 50)
tdSql.executes(sqls, show=True)
# sql syntax
def checkSqlSyntax(self):
# create tables positive
self.validCreate()
# create table negative
self.errorCreate()
# check default values are correct
self.defaultCorrect()
# check alter and write
self.checkAlter()
# check add column
self.checkAddColumn()
def checkCorrect(self):
# check data correct
tbname = f"{self.db}.{self.stb}"
# count
sql = f"select count(*) from {tbname}"
count = tdSql.getFirstValue(sql)
step = 100000
offset = 0
while offset < count:
sql = f"select * from {tbname} limit {step} offset {offset}"
tdLog.info(sql)
tdSql.query(sql)
self.autoGen.dataCorrect(tdSql.res, tdSql.getRows(), step)
offset += step
tdLog.info(f"check data correct rows={offset}")
tdLog.info(F"check {tbname} rows {count} data correct successfully.")
# run
def run(self):
tdLog.debug(f"start to excute {__file__}")
# create db and stable
self.autoGen = AutoGen(step = 10, genDataMode = "fillts")
self.autoGen.create_db(self.db, 2, 3)
tdSql.execute(f"use {self.db}")
self.colCnt = 17
self.autoGen.create_stable(self.stb, 5, self.colCnt, 32, 32)
self.childCnt = 4
self.autoGen.create_child(self.stb, "d", self.childCnt)
self.autoGen.insert_data(1000)
# sql syntax
self.checkSqlSyntax()
# operations: write, flush, then write again
self.writeData(1000)
self.flushDb()
self.writeData(1000)
# check correctness
self.checkCorrect()
tdLog.success(f"{__file__} successfully executed")
tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())

View File

@ -0,0 +1,235 @@
import sys
import threading
from util.log import *
from util.sql import *
from util.cases import *
from util.common import *
class TDTestCase:
updatecfgDict = {'debugFlag': 135, 'asynclog': 0}
def init(self, conn, logSql, replicaVar=1):
self.replicaVar = int(replicaVar)
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql)
self.tdCom = tdCom
def at_once_interval(self, interval, partition="tbname", delete=False, fill_value=None, fill_history_value=None, case_when=None):
tdLog.info(f"*** testing stream at_once+interval: interval: {interval}, partition: {partition}, fill_history: {fill_history_value}, fill: {fill_value}, delete: {delete}, case_when: {case_when} ***")
col_value_type = "Incremental" if partition=="c1" else "random"
custom_col_index = 1 if partition=="c1" else None
self.tdCom.custom_col_val = 0
self.delete = delete
self.tdCom.case_name = sys._getframe().f_code.co_name
self.tdCom.prepare_data(interval=interval, fill_history_value=fill_history_value, custom_col_index=custom_col_index, col_value_type=col_value_type)
self.stb_name = self.tdCom.stb_name.replace(f"{self.tdCom.dbname}.", "")
self.ctb_name = self.tdCom.ctb_name.replace(f"{self.tdCom.dbname}.", "")
self.tb_name = self.tdCom.tb_name.replace(f"{self.tdCom.dbname}.", "")
self.stb_stream_des_table = f'{self.stb_name}{self.tdCom.des_table_suffix}'
self.ctb_stream_des_table = f'{self.ctb_name}{self.tdCom.des_table_suffix}'
self.tb_stream_des_table = f'{self.tb_name}{self.tdCom.des_table_suffix}'
if partition == "tbname":
if case_when:
stream_case_when_partition = case_when
else:
stream_case_when_partition = self.tdCom.partition_tbname_alias
partition_elm_alias = self.tdCom.partition_tbname_alias
elif partition == "c1":
if case_when:
stream_case_when_partition = case_when
else:
stream_case_when_partition = self.tdCom.partition_col_alias
partition_elm_alias = self.tdCom.partition_col_alias
elif partition == "abs(c1)":
partition_elm_alias = self.tdCom.partition_expression_alias
elif partition is None:
partition_elm_alias = '"no_partition"'
else:
partition_elm_alias = self.tdCom.partition_tag_alias
if partition == "tbname" or partition is None:
if case_when:
stb_subtable_value = f'concat(concat("{self.stb_name}_{self.tdCom.subtable_prefix}", {stream_case_when_partition}), "{self.tdCom.subtable_suffix}")' if self.tdCom.subtable else None
ctb_subtable_value = f'concat(concat("{self.ctb_name}_{self.tdCom.subtable_prefix}", {stream_case_when_partition}), "{self.tdCom.subtable_suffix}")' if self.tdCom.subtable else None
tb_subtable_value = f'concat(concat("{self.tb_name}_{self.tdCom.subtable_prefix}", {stream_case_when_partition}), "{self.tdCom.subtable_suffix}")' if self.tdCom.subtable else None
else:
stb_subtable_value = f'concat(concat("{self.stb_name}_{self.tdCom.subtable_prefix}", {partition_elm_alias}), "{self.tdCom.subtable_suffix}")' if self.tdCom.subtable else None
ctb_subtable_value = f'concat(concat("{self.ctb_name}_{self.tdCom.subtable_prefix}", {partition_elm_alias}), "{self.tdCom.subtable_suffix}")' if self.tdCom.subtable else None
tb_subtable_value = f'concat(concat("{self.tb_name}_{self.tdCom.subtable_prefix}", {partition_elm_alias}), "{self.tdCom.subtable_suffix}")' if self.tdCom.subtable else None
else:
stb_subtable_value = f'concat(concat("{self.stb_name}_{self.tdCom.subtable_prefix}", cast(cast(abs(cast({partition_elm_alias} as int)) as bigint) as varchar(100))), "{self.tdCom.subtable_suffix}")' if self.tdCom.subtable else None
ctb_subtable_value = f'concat(concat("{self.ctb_name}_{self.tdCom.subtable_prefix}", cast(cast(abs(cast({partition_elm_alias} as int)) as bigint) as varchar(100))), "{self.tdCom.subtable_suffix}")' if self.tdCom.subtable else None
tb_subtable_value = f'concat(concat("{self.tb_name}_{self.tdCom.subtable_prefix}", cast(cast(abs(cast({partition_elm_alias} as int)) as bigint) as varchar(100))), "{self.tdCom.subtable_suffix}")' if self.tdCom.subtable else None
if partition:
partition_elm = f'partition by {partition} {partition_elm_alias}'
else:
partition_elm = ""
if fill_value:
if "value" in fill_value.lower():
fill_value='VALUE,1,2,3,4,5,6,7,8,9,10,11,1,2,3,4,5,6,7,8,9,10,11'
self.tdCom.create_stream(stream_name=f'{self.stb_name}{self.tdCom.stream_suffix}', des_table=self.stb_stream_des_table, source_sql=f'select _wstart AS wstart, {self.tdCom.stb_source_select_str} from {self.stb_name} {partition_elm} interval({self.tdCom.dataDict["interval"]}s)', trigger_mode="at_once", subtable_value=stb_subtable_value, fill_value=fill_value, fill_history_value=fill_history_value)
self.tdCom.create_stream(stream_name=f'{self.ctb_name}{self.tdCom.stream_suffix}', des_table=self.ctb_stream_des_table, source_sql=f'select _wstart AS wstart, {self.tdCom.stb_source_select_str} from {self.ctb_name} {partition_elm} interval({self.tdCom.dataDict["interval"]}s)', trigger_mode="at_once", subtable_value=ctb_subtable_value, fill_value=fill_value, fill_history_value=fill_history_value)
if fill_value:
if "value" in fill_value.lower():
fill_value='VALUE,1,2,3,4,5,6,7,8,9,10,11'
self.tdCom.create_stream(stream_name=f'{self.tb_name}{self.tdCom.stream_suffix}', des_table=self.tb_stream_des_table, source_sql=f'select _wstart AS wstart, {self.tdCom.tb_source_select_str} from {self.tb_name} {partition_elm} interval({self.tdCom.dataDict["interval"]}s)', trigger_mode="at_once", subtable_value=tb_subtable_value, fill_value=fill_value, fill_history_value=fill_history_value)
start_time = self.tdCom.date_time
time.sleep(1)
for i in range(self.tdCom.range_count):
ts_value = str(self.tdCom.date_time+self.tdCom.dataDict["interval"])+f'+{i*10}s'
ts_cast_delete_value = self.tdCom.time_cast(ts_value)
self.tdCom.sinsert_rows(tbname=self.tdCom.ctb_name, ts_value=ts_value, custom_col_index=custom_col_index, col_value_type=col_value_type)
if i%2 == 0:
self.tdCom.sinsert_rows(tbname=self.tdCom.ctb_name, ts_value=ts_value, custom_col_index=custom_col_index, col_value_type=col_value_type)
if self.delete and i%2 != 0:
self.tdCom.sdelete_rows(tbname=self.tdCom.ctb_name, start_ts=ts_cast_delete_value)
self.tdCom.date_time += 1
self.tdCom.sinsert_rows(tbname=self.tdCom.tb_name, ts_value=ts_value, custom_col_index=custom_col_index, col_value_type=col_value_type)
if i%2 == 0:
self.tdCom.sinsert_rows(tbname=self.tdCom.tb_name, ts_value=ts_value, custom_col_index=custom_col_index, col_value_type=col_value_type)
if self.delete and i%2 != 0:
self.tdCom.sdelete_rows(tbname=self.tdCom.tb_name, start_ts=ts_cast_delete_value)
self.tdCom.date_time += 1
if partition:
partition_elm = f'partition by {partition}'
else:
partition_elm = ""
if not fill_value:
for tbname in [self.stb_name, self.ctb_name, self.tb_name]:
if tbname != self.tb_name:
self.tdCom.check_query_data(f'select wstart, {self.tdCom.stb_output_select_str} from {tbname}{self.tdCom.des_table_suffix} order by wstart', f'select _wstart AS wstart, {self.tdCom.stb_source_select_str} from {tbname} {partition_elm} interval({self.tdCom.dataDict["interval"]}s) order by wstart', sorted=True)
else:
self.tdCom.check_query_data(f'select wstart, {self.tdCom.tb_output_select_str} from {tbname}{self.tdCom.des_table_suffix} order by wstart', f'select _wstart AS wstart, {self.tdCom.tb_source_select_str} from {tbname} {partition_elm} interval({self.tdCom.dataDict["interval"]}s) order by wstart', sorted=True)
if self.tdCom.subtable:
for tname in [self.stb_name, self.ctb_name]:
group_id = self.tdCom.get_group_id_from_stb(f'{tname}_output')
tdSql.query(f'select * from {self.ctb_name}')
ptn_counter = 0
for c1_value in tdSql.queryResult:
if partition == "c1":
tbname = self.tdCom.get_subtable_wait(f'{tname}_{self.tdCom.subtable_prefix}{abs(c1_value[1])}{self.tdCom.subtable_suffix}')
tdSql.query(f'select count(*) from `{tbname}`')
elif partition is None:
tbname = self.tdCom.get_subtable_wait(f'{tname}_{self.tdCom.subtable_prefix}no_partition{self.tdCom.subtable_suffix}')
tdSql.query(f'select count(*) from `{tbname}`')
elif partition == "abs(c1)":
abs_c1_value = abs(c1_value[1])
tbname = self.tdCom.get_subtable_wait(f'{tname}_{self.tdCom.subtable_prefix}{abs_c1_value}{self.tdCom.subtable_suffix}')
tdSql.query(f'select count(*) from `{tbname}`')
elif partition == "tbname" and ptn_counter == 0:
tbname = self.tdCom.get_subtable_wait(f'{tname}_{self.tdCom.subtable_prefix}{self.ctb_name}{self.tdCom.subtable_suffix}_{tname}_output_{group_id}')
tdSql.query(f'select count(*) from `{tbname}`')
ptn_counter += 1
tdSql.checkEqual(tdSql.queryResult[0][0] > 0, True)
group_id = self.tdCom.get_group_id_from_stb(f'{self.tb_name}_output')
tdSql.query(f'select * from {self.tb_name}')
ptn_counter = 0
for c1_value in tdSql.queryResult:
if partition == "c1":
tbname = self.tdCom.get_subtable_wait(f'{self.tb_name}_{self.tdCom.subtable_prefix}{abs(c1_value[1])}{self.tdCom.subtable_suffix}')
tdSql.query(f'select count(*) from `{tbname}`')
elif partition is None:
tbname = self.tdCom.get_subtable_wait(f'{self.tb_name}_{self.tdCom.subtable_prefix}no_partition{self.tdCom.subtable_suffix}')
tdSql.query(f'select count(*) from `{tbname}`')
elif partition == "abs(c1)":
abs_c1_value = abs(c1_value[1])
tbname = self.tdCom.get_subtable_wait(f'{self.tb_name}_{self.tdCom.subtable_prefix}{abs_c1_value}{self.tdCom.subtable_suffix}')
tdSql.query(f'select count(*) from `{tbname}`')
elif partition == "tbname" and ptn_counter == 0:
tbname = self.tdCom.get_subtable_wait(f'{self.tb_name}_{self.tdCom.subtable_prefix}{self.tb_name}{self.tdCom.subtable_suffix}_{self.tb_name}_output_{group_id}')
tdSql.query(f'select count(*) from `{tbname}`')
ptn_counter += 1
tdSql.checkEqual(tdSql.queryResult[0][0] > 0, True)
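# fill verification: insert boundary rows outside the original window range on
# both sides, replay the in-range inserts, then compare the stream destination
# table against an equivalent interval(...) fill(...) query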
if fill_value:
end_date_time = self.tdCom.date_time
final_range_count = self.tdCom.range_count
history_ts = str(start_time)+f'-{self.tdCom.dataDict["interval"]*(final_range_count+2)}s'
start_ts = self.tdCom.time_cast(history_ts, "-")
future_ts = str(end_date_time)+f'+{self.tdCom.dataDict["interval"]*(final_range_count+2)}s'
end_ts = self.tdCom.time_cast(future_ts)
self.tdCom.sinsert_rows(tbname=self.ctb_name, ts_value=history_ts)
self.tdCom.sinsert_rows(tbname=self.tb_name, ts_value=history_ts)
self.tdCom.sinsert_rows(tbname=self.ctb_name, ts_value=future_ts)
self.tdCom.sinsert_rows(tbname=self.tb_name, ts_value=future_ts)
self.tdCom.date_time = start_time
# update pass: recompute the boundary timestamps and re-insert the same rows to exercise the update path
history_ts = str(start_time)+f'-{self.tdCom.dataDict["interval"]*(final_range_count+2)}s'
start_ts = self.tdCom.time_cast(history_ts, "-")
future_ts = str(end_date_time)+f'+{self.tdCom.dataDict["interval"]*(final_range_count+2)}s'
end_ts = self.tdCom.time_cast(future_ts)
self.tdCom.sinsert_rows(tbname=self.ctb_name, ts_value=history_ts)
self.tdCom.sinsert_rows(tbname=self.tb_name, ts_value=history_ts)
self.tdCom.sinsert_rows(tbname=self.ctb_name, ts_value=future_ts)
self.tdCom.sinsert_rows(tbname=self.tb_name, ts_value=future_ts)
self.tdCom.date_time = start_time
for i in range(self.tdCom.range_count):
ts_value = str(self.tdCom.date_time+self.tdCom.dataDict["interval"])+f'+{i*10}s'
ts_cast_delete_value = self.tdCom.time_cast(ts_value)
self.tdCom.sinsert_rows(tbname=self.ctb_name, ts_value=ts_value)
self.tdCom.date_time += 1
self.tdCom.sinsert_rows(tbname=self.tb_name, ts_value=ts_value)
self.tdCom.date_time += 1
if self.delete:
self.tdCom.sdelete_rows(tbname=self.ctb_name, start_ts=self.tdCom.time_cast(start_time), end_ts=ts_cast_delete_value)
self.tdCom.sdelete_rows(tbname=self.tb_name, start_ts=self.tdCom.time_cast(start_time), end_ts=ts_cast_delete_value)
for tbname in [self.stb_name, self.ctb_name, self.tb_name]:
if tbname != self.tb_name:
if "value" in fill_value.lower():
fill_value='VALUE,1,2,3,6,7,8,9,10,11,1,2,3,4,5,6,7,8,9,10,11'
if partition == "tbname":
self.tdCom.check_query_data(f'select wstart, {self.tdCom.fill_stb_output_select_str} from {tbname}{self.tdCom.des_table_suffix} order by wstart', f'select _wstart AS wstart, {self.tdCom.fill_stb_source_select_str} from {tbname} where ts >= {start_ts} and ts <= {end_ts} partition by {partition} interval({self.tdCom.dataDict["interval"]}s) fill ({fill_value}) order by wstart', fill_value=fill_value)
else:
self.tdCom.check_query_data(f'select wstart, {self.tdCom.fill_stb_output_select_str} from {tbname}{self.tdCom.des_table_suffix} where `min(c1)` is not Null order by wstart,`min(c1)`', f'select * from (select _wstart AS wstart, {self.tdCom.fill_stb_source_select_str} from {tbname} where ts >= {start_ts} and ts <= {end_ts} partition by {partition} interval({self.tdCom.dataDict["interval"]}s) fill ({fill_value}) order by wstart) where `min(c1)` is not Null order by wstart,`min(c1)`', fill_value=fill_value)
else:
if "value" in fill_value.lower():
fill_value='VALUE,1,2,3,6,7,8,9,10,11'
if partition == "tbname":
self.tdCom.check_query_data(f'select wstart, {self.tdCom.fill_tb_output_select_str} from {tbname}{self.tdCom.des_table_suffix} order by wstart', f'select _wstart AS wstart, {self.tdCom.fill_tb_source_select_str} from {tbname} where ts >= {start_ts} and ts <= {end_ts} partition by {partition} interval({self.tdCom.dataDict["interval"]}s) fill ({fill_value}) order by wstart', fill_value=fill_value)
else:
self.tdCom.check_query_data(f'select wstart, {self.tdCom.fill_tb_output_select_str} from {tbname}{self.tdCom.des_table_suffix} where `min(c1)` is not Null order by wstart,`min(c1)`', f'select * from (select _wstart AS wstart, {self.tdCom.fill_tb_source_select_str} from {tbname} where ts >= {start_ts} and ts <= {end_ts} partition by {partition} interval({self.tdCom.dataDict["interval"]}s) fill ({fill_value}) order by wstart) where `min(c1)` is not Null order by wstart,`min(c1)`', fill_value=fill_value)
if self.delete:
self.tdCom.sdelete_rows(tbname=self.ctb_name, start_ts=start_ts, end_ts=ts_cast_delete_value)
self.tdCom.sdelete_rows(tbname=self.tb_name, start_ts=start_ts, end_ts=ts_cast_delete_value)
for tbname in [self.stb_name, self.ctb_name, self.tb_name]:
if tbname != self.tb_name:
if "value" in fill_value.lower():
fill_value='VALUE,1,2,3,6,7,8,9,10,11,1,2,3,4,5,6,7,8,9,10,11'
if partition == "tbname":
self.tdCom.check_query_data(f'select wstart, {self.tdCom.fill_stb_output_select_str} from {tbname}{self.tdCom.des_table_suffix} order by wstart', f'select _wstart AS wstart, {self.tdCom.fill_stb_source_select_str} from {tbname} where ts >= {start_ts.replace("-", "+")} and ts <= {end_ts} partition by {partition} interval({self.tdCom.dataDict["interval"]}s) fill ({fill_value}) order by wstart', fill_value=fill_value)
else:
self.tdCom.check_query_data(f'select wstart, {self.tdCom.fill_stb_output_select_str} from {tbname}{self.tdCom.des_table_suffix} order by wstart,`min(c1)`', f'select * from (select _wstart AS wstart, {self.tdCom.fill_stb_source_select_str} from {tbname} where ts >= {start_ts} and ts <= {end_ts} partition by {partition} interval({self.tdCom.dataDict["interval"]}s) fill ({fill_value}) order by wstart) where `min(c1)` is not Null order by wstart,`min(c1)`', fill_value=fill_value)
else:
if "value" in fill_value.lower():
fill_value='VALUE,1,2,3,6,7,8,9,10,11'
if partition == "tbname":
self.tdCom.check_query_data(f'select wstart, {self.tdCom.fill_tb_output_select_str} from {tbname}{self.tdCom.des_table_suffix} order by wstart', f'select _wstart AS wstart, {self.tdCom.fill_tb_source_select_str} from {tbname} where ts >= {start_ts.replace("-", "+")} and ts <= {end_ts} partition by {partition} interval({self.tdCom.dataDict["interval"]}s) fill ({fill_value}) order by wstart', fill_value=fill_value)
else:
self.tdCom.check_query_data(f'select wstart, {self.tdCom.fill_tb_output_select_str} from {tbname}{self.tdCom.des_table_suffix} order by wstart,`min(c1)`', f'select * from (select _wstart AS wstart, {self.tdCom.fill_tb_source_select_str} from {tbname} where ts >= {start_ts} and ts <= {end_ts} partition by {partition} interval({self.tdCom.dataDict["interval"]}s) fill ({fill_value}) order by wstart) where `min(c1)` is not Null order by wstart,`min(c1)`', fill_value=fill_value)
def run(self):
self.at_once_interval(interval=random.randint(10, 15), partition="tbname", delete=True)
self.at_once_interval(interval=random.randint(10, 15), partition="c1", delete=True)
self.at_once_interval(interval=random.randint(10, 15), partition="abs(c1)", delete=True)
self.at_once_interval(interval=random.randint(10, 15), partition=None, delete=True)
self.at_once_interval(interval=random.randint(10, 15), partition=self.tdCom.stream_case_when_tbname, case_when=f'case when {self.tdCom.stream_case_when_tbname} = tbname then {self.tdCom.partition_tbname_alias} else tbname end')
self.at_once_interval(interval=random.randint(10, 15), partition="tbname", fill_history_value=1, fill_value="NULL")
for fill_value in ["NULL", "PREV", "NEXT", "LINEAR", "VALUE,1,2,3,4,5,6,7,8,9,10,11,1,2,3,4,5,6,7,8,9,10,11"]:
# for fill_value in ["PREV", "NEXT", "LINEAR", "VALUE,1,2,3,4,5,6,7,8,9,10,11,1,2,3,4,5,6,7,8,9,10,11"]:
self.at_once_interval(interval=random.randint(10, 15), partition="tbname", fill_value=fill_value)
self.at_once_interval(interval=random.randint(10, 15), partition="tbname", fill_value=fill_value, delete=True)
def stop(self):
tdSql.close()
tdLog.success(f"{__file__} successfully executed")
event = threading.Event()
tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())

67
test/tdgpt/test_gpt.py Normal file
View File

@ -0,0 +1,67 @@
from util.log import *
from util.cases import *
from util.sql import *
from util.common import *
import taos
class TDTestCase:
clientCfgDict = {'debugFlag': 135}
updatecfgDict = {
"debugFlag" : "135",
"queryBufferSize" : 10240,
'clientCfg' : clientCfgDict
}
def init(self, conn, logSql, replicaVal=1):
self.replicaVar = int(replicaVal)
tdLog.debug(f"start to excute {__file__}")
self.conn = conn
tdSql.init(conn.cursor(), False)
self.passwd = {'root':'taosdata',
'test':'test'}
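# register a local anode and seed db_gpt.ct1 with a short series used by the
# forecast and anomaly-window checks below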
def prepare_anode_data(self):
tdSql.execute(f"create anode '127.0.0.1:6090'")
tdSql.execute(f"create database db_gpt")
tdSql.execute(f"create table if not exists db_gpt.stb (ts timestamp, c1 int, c2 float, c3 double) tags (t1 int unsigned);")
tdSql.execute(f"create table db_gpt.ct1 using db_gpt.stb tags(1000);")
tdSql.execute(f"insert into db_gpt.ct1(ts, c1) values(now-1a, 5)(now+1a, 14)(now+2a, 15)(now+3a, 15)(now+4a, 14);")
tdSql.execute(f"insert into db_gpt.ct1(ts, c1) values(now+5a, 19)(now+6a, 17)(now+7a, 16)(now+8a, 20)(now+9a, 22);")
tdSql.execute(f"insert into db_gpt.ct1(ts, c1) values(now+10a, 8)(now+11a, 21)(now+12a, 28)(now+13a, 11)(now+14a, 9);")
tdSql.execute(f"insert into db_gpt.ct1(ts, c1) values(now+15a, 29)(now+16a, 40);")
def test_forecast(self):
"""
Test forecast
"""
tdLog.info(f"Test forecast")
tdSql.query(f"SELECT _frowts, FORECAST(c1, \"algo=arima,alpha=95,period=10,start_p=1,max_p=5,start_q=1,max_q=5,d=1\") from db_gpt.ct1 ;")
tdSql.checkRows(10)
def test_anomaly_window(self):
"""
Test anomaly window
"""
tdLog.info(f"Test anomaly window")
tdSql.query(f"SELECT _wstart, _wend, SUM(c1) FROM db_gpt.ct1 ANOMALY_WINDOW(c1, \"algo=iqr\");")
tdSql.checkData(0,2,40)
def run(self):
self.prepare_anode_data()
self.test_forecast()
self.test_anomaly_window()
def stop(self):
tdSql.close()
tdLog.success(f"{__file__} successfully executed")
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())

505
test/tmq/subscribeDb.py Normal file
View File

@ -0,0 +1,505 @@
import taos
import sys
import time
import socket
import os
import threading
import platform
from util.log import *
from util.sql import *
from util.cases import *
from util.dnodes import *
class TDTestCase:
hostname = socket.gethostname()
# rpcDebugFlagVal = '143'
#clientCfgDict = {'serverPort': '', 'firstEp': '', 'secondEp':'', 'rpcDebugFlag':'135', 'fqdn':''}
#clientCfgDict["rpcDebugFlag"] = rpcDebugFlagVal
#updatecfgDict = {'clientCfg': {}, 'serverPort': '', 'firstEp': '', 'secondEp':'', 'rpcDebugFlag':'135', 'fqdn':''}
# updatecfgDict["rpcDebugFlag"] = rpcDebugFlagVal
#print ("===================: ", updatecfgDict)
def init(self, conn, logSql, replicaVar=1):
self.replicaVar = int(replicaVar)
tdLog.debug(f"start to excute {__file__}")
tdSql.init(conn.cursor())
#tdSql.init(conn.cursor(), logSql) # output sql.txt file
def getBuildPath(self):
buildPath = ""
selfPath = os.path.dirname(os.path.realpath(__file__))
if ("community" in selfPath):
projPath = selfPath[:selfPath.find("community")]
else:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files or "taosd.exe" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root) - len("/build/bin")]
break
return buildPath
def newcur(self,cfg,host,port):
user = "root"
password = "taosdata"
con=taos.connect(host=host, user=user, password=password, config=cfg ,port=port)
cur=con.cursor()
print(cur)
return cur
def initConsumerTable(self,cdbName='cdb'):
tdLog.info("create consume database, and consume info table, and consume result table")
tdSql.query("create database if not exists %s vgroups 1 wal_retention_period 3600"%(cdbName))
tdSql.query("drop table if exists %s.consumeinfo "%(cdbName))
tdSql.query("drop table if exists %s.consumeresult "%(cdbName))
tdSql.query("create table %s.consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)"%cdbName)
tdSql.query("create table %s.consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)"%cdbName)
def insertConsumerInfo(self,consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifmanualcommit,cdbName='cdb'):
sql = "insert into %s.consumeinfo values "%cdbName
sql += "(now + %ds, %d, '%s', '%s', %d, %d, %d)"%(consumerId, consumerId, topicList, keyList, expectrowcnt, ifcheckdata, ifmanualcommit)
tdLog.info("consume info sql: %s"%sql)
tdSql.query(sql)
def selectConsumeResult(self,expectRows,cdbName='cdb'):
resultList=[]
while True:
tdSql.query("select * from %s.consumeresult"%cdbName)
#tdLog.info("row: %d, %l64d, %l64d"%(tdSql.getData(0, 1),tdSql.getData(0, 2),tdSql.getData(0, 3))
if tdSql.getRows() == expectRows:
break
else:
time.sleep(5)
for i in range(expectRows):
tdLog.info ("consume id: %d, consume msgs: %d, consume rows: %d"%(tdSql.getData(i , 1), tdSql.getData(i , 2), tdSql.getData(i , 3)))
resultList.append(tdSql.getData(i , 3))
return resultList
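# launch the tmq_sim consumer in the background: under valgrind when requested,
# via mintty on Windows, otherwise with nohup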
def startTmqSimProcess(self,buildPath,cfgPath,pollDelay,dbName,showMsg=1,showRow=1,cdbName='cdb',valgrind=0):
if valgrind == 1:
logFile = cfgPath + '/../log/valgrind-tmq.log'
shellCmd = 'nohup valgrind --log-file=' + logFile
shellCmd += ' --tool=memcheck --leak-check=full --show-reachable=no --track-origins=yes --show-leak-kinds=all --num-callers=20 -v --workaround-gcc296-bugs=yes '
if (platform.system().lower() == 'windows'):
shellCmd = 'mintty -h never -w hide ' + buildPath + '\\build\\bin\\tmq_sim.exe -c ' + cfgPath
shellCmd += " -y %d -d %s -g %d -r %d -w %s "%(pollDelay, dbName, showMsg, showRow, cdbName)
shellCmd += "> nul 2>&1 &"
else:
shellCmd = 'nohup ' + buildPath + '/build/bin/tmq_sim -c ' + cfgPath
shellCmd += " -y %d -d %s -g %d -r %d -w %s "%(pollDelay, dbName, showMsg, showRow, cdbName)
shellCmd += "> /dev/null 2>&1 &"
tdLog.info(shellCmd)
os.system(shellCmd)
def create_tables(self,tsql, dbName,vgroups,stbName,ctbNum):
tsql.execute("create database if not exists %s vgroups %d wal_retention_period 3600"%(dbName, vgroups))
tsql.execute("use %s" %dbName)
tsql.execute("create table if not exists %s (ts timestamp, c1 bigint, c2 binary(16)) tags(t1 int)"%stbName)
pre_create = "create table"
sql = pre_create
#tdLog.debug("doing create one stable %s and %d child table in %s ..." %(stbname, count ,dbname))
for i in range(ctbNum):
sql += " %s_%d using %s tags(%d)"%(stbName,i,stbName,i+1)
if (i > 0) and (i%100 == 0):
tsql.execute(sql)
sql = pre_create
if sql != pre_create:
tsql.execute(sql)
event.set()
tdLog.debug("complete to create database[%s], stable[%s] and %d child tables" %(dbName, stbName, ctbNum))
return
def insert_data(self,tsql,dbName,stbName,ctbNum,rowsPerTbl,batchNum,startTs):
tdLog.debug("start to insert data ............")
tsql.execute("use %s" %dbName)
pre_insert = "insert into "
sql = pre_insert
t = time.time()
startTs = int(round(t * 1000))
#tdLog.debug("doing insert data into stable:%s rows:%d ..."%(stbName, allRows))
for i in range(ctbNum):
sql += " %s_%d values "%(stbName,i)
for j in range(rowsPerTbl):
sql += "(%d, %d, 'tmqrow_%d') "%(startTs + j, j, j)
if (j > 0) and ((j%batchNum == 0) or (j == rowsPerTbl - 1)):
tsql.execute(sql)
if j < rowsPerTbl - 1:
sql = "insert into %s_%d values " %(stbName,i)
else:
sql = "insert into "
#end sql
if sql != pre_insert:
#print("insert sql:%s"%sql)
tsql.execute(sql)
tdLog.debug("insert data ............ [OK]")
return
def prepareEnv(self, **parameterDict):
print ("input parameters:")
print (parameterDict)
# create new connector for my thread
tsql=self.newcur(parameterDict['cfg'], 'localhost', 6030)
self.create_tables(tsql,\
parameterDict["dbName"],\
parameterDict["vgroups"],\
parameterDict["stbName"],\
parameterDict["ctbNum"])
self.insert_data(tsql,\
parameterDict["dbName"],\
parameterDict["stbName"],\
parameterDict["ctbNum"],\
parameterDict["rowsPerTbl"],\
parameterDict["batchNum"],\
parameterDict["startTs"])
return
def tmqCase1(self, cfgPath, buildPath):
tdLog.printNoPrefix("======== test case 1: Produce while one consume to subscribe one db, inclue 1 stb")
tdLog.info("step 1: create database, stb, ctb and insert data")
# create and start thread
parameterDict = {'cfg': '', \
'dbName': 'db1', \
'vgroups': 4, \
'stbName': 'stb', \
'ctbNum': 10, \
'rowsPerTbl': 5000, \
'batchNum': 100, \
'replica': self.replicaVar, \
'startTs': 1640966400000} # 2022-01-01 00:00:00.000
parameterDict['cfg'] = cfgPath
self.initConsumerTable()
tdSql.execute("create database if not exists %s vgroups %d replica %d wal_retention_period 3600" %(parameterDict['dbName'], parameterDict['vgroups'], parameterDict['replica']))
prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
prepareEnvThread.start()
tdLog.info("create topics from db")
topicName1 = 'topic_db1'
tdSql.execute("create topic %s as database %s" %(topicName1, parameterDict['dbName']))
consumerId = 0
expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
topicList = topicName1
ifcheckdata = 0
ifManualCommit = 0
keyList = 'group.id:cgrp1,\
enable.auto.commit:false,\
auto.commit.interval.ms:6000,\
auto.offset.reset:earliest'
self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
event.wait()
tdLog.info("start consume processor")
pollDelay = 100
showMsg = 1
showRow = 1
self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
# wait for data ready
prepareEnvThread.join()
tdLog.info("1-insert process end, and start to check consume result")
expectRows = 1
resultList = self.selectConsumeResult(expectRows)
totalConsumeRows = 0
for i in range(expectRows):
totalConsumeRows += resultList[i]
if totalConsumeRows != expectrowcnt:
tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
tdLog.exit("tmq consume rows error!")
tdSql.query("drop topic %s"%topicName1)
tdLog.info("creat the same topic name , and start to consume")
self.initConsumerTable()
tdLog.info("create topics from db")
topicName1 = 'topic_db1'
tdSql.execute("create topic %s as database %s" %(topicName1, parameterDict['dbName']))
consumerId = 0
expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
topicList = topicName1
ifcheckdata = 0
ifManualCommit = 0
keyList = 'group.id:cgrp1,\
enable.auto.commit:false,\
auto.commit.interval.ms:6000,\
auto.offset.reset:earliest'
self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
tdLog.info("start consume processor")
pollDelay = 20
showMsg = 1
showRow = 1
self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
expectRows = 1
resultList = self.selectConsumeResult(expectRows)
totalConsumeRows = 0
for i in range(expectRows):
totalConsumeRows += resultList[i]
if totalConsumeRows != expectrowcnt:
tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
tdLog.exit("tmq consume rows error!")
tdSql.query("drop topic %s"%topicName1)
tdLog.printNoPrefix("======== test case 1 end ...... ")
def tmqCase2(self, cfgPath, buildPath):
tdLog.printNoPrefix("======== test case 2: Produce while two consumers to subscribe one db, inclue 1 stb")
tdLog.info("step 1: create database, stb, ctb and insert data")
# create and start thread
parameterDict = {'cfg': '', \
'dbName': 'db2', \
'vgroups': 4, \
'stbName': 'stb', \
'ctbNum': 10, \
'rowsPerTbl': 5000, \
'batchNum': 100, \
'replica': self.replicaVar, \
'startTs': 1640966400000} # 2022-01-01 00:00:00.000
parameterDict['cfg'] = cfgPath
self.initConsumerTable()
tdSql.execute("create database if not exists %s vgroups %d replica %d wal_retention_period 3600" %(parameterDict['dbName'], parameterDict['vgroups'], parameterDict['replica']))
prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
prepareEnvThread.start()
tdLog.info("create topics from db")
topicName1 = 'topic_db1'
tdSql.execute("create topic %s as database %s" %(topicName1, parameterDict['dbName']))
consumerId = 0
expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
topicList = topicName1
ifcheckdata = 0
ifManualCommit = 1
keyList = 'group.id:cgrp1,\
enable.auto.commit:false,\
auto.commit.interval.ms:6000,\
auto.offset.reset:earliest'
self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
consumerId = 1
self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
event.wait()
tdLog.info("start consume processor")
pollDelay = 20
showMsg = 1
showRow = 1
self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
# wait for data ready
prepareEnvThread.join()
tdLog.info("2-insert process end, and start to check consume result")
expectRows = 2
resultList = self.selectConsumeResult(expectRows)
totalConsumeRows = 0
for i in range(expectRows):
totalConsumeRows += resultList[i]
tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
if not (totalConsumeRows >= expectrowcnt):
tdLog.exit("tmq consume rows error!")
tdSql.query("drop topic %s"%topicName1)
tdLog.printNoPrefix("======== test case 2 end ...... ")
def tmqCase2a(self, cfgPath, buildPath):
tdLog.printNoPrefix("======== test case 2a: Produce while two consumers to subscribe one db, inclue 1 stb")
tdLog.info("step 1: create database, stb, ctb and insert data")
# create and start thread
parameterDict = {'cfg': '', \
'dbName': 'db2a', \
'vgroups': 4, \
'stbName': 'stb1', \
'ctbNum': 10, \
'rowsPerTbl': 5000, \
'batchNum': 100, \
'replica': self.replicaVar, \
'startTs': 1640966400000} # 2022-01-01 00:00:00.000
parameterDict['cfg'] = cfgPath
self.initConsumerTable()
tdSql.execute("create database if not exists %s vgroups %d wal_retention_period 3600" %(parameterDict['dbName'], parameterDict['vgroups']))
tdSql.execute("create table if not exists %s.%s (ts timestamp, c1 bigint, c2 binary(16)) tags(t1 int)"%(parameterDict['dbName'], parameterDict['stbName']))
tdLog.info("create topics from db")
topicName1 = 'topic_db1'
tdSql.execute("create topic %s as database %s" %(topicName1, parameterDict['dbName']))
consumerId = 0
expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"]
topicList = topicName1
ifcheckdata = 0
ifManualCommit = 1
keyList = 'group.id:cgrp1,\
enable.auto.commit:false,\
auto.commit.interval.ms:6000,\
auto.offset.reset:earliest'
self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
consumerId = 1
keyList = 'group.id:cgrp2,\
enable.auto.commit:false,\
auto.commit.interval.ms:6000,\
auto.offset.reset:earliest'
self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
tdLog.info("start consume processor")
pollDelay = 100
showMsg = 1
showRow = 1
self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
prepareEnvThread.start()
# wait for data ready
prepareEnvThread.join()
tdLog.info("3-insert process end, and start to check consume result")
expectRows = 2
resultList = self.selectConsumeResult(expectRows)
totalConsumeRows = 0
for i in range(expectRows):
totalConsumeRows += resultList[i]
if totalConsumeRows != expectrowcnt * 2:
tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt*2))
tdLog.exit("tmq consume rows error!")
tdSql.query("drop topic %s"%topicName1)
tdLog.printNoPrefix("======== test case 2a end ...... ")
def tmqCase3(self, cfgPath, buildPath):
tdLog.printNoPrefix("======== test case 3: Produce while one consumers to subscribe one db, include 2 stb")
tdLog.info("step 1: create database, stb, ctb and insert data")
# create and start thread
parameterDict = {'cfg': '', \
'dbName': 'db3', \
'vgroups': 4, \
'stbName': 'stb', \
'ctbNum': 10, \
'rowsPerTbl': 5000, \
'batchNum': 100, \
'replica': self.replicaVar, \
'startTs': 1640966400000} # 2022-01-01 00:00:00.000
parameterDict['cfg'] = cfgPath
self.initConsumerTable()
tdSql.execute("create database if not exists %s vgroups %d replica %d wal_retention_period 3600" %(parameterDict['dbName'], parameterDict['vgroups'], parameterDict['replica']))
prepareEnvThread = threading.Thread(target=self.prepareEnv, kwargs=parameterDict)
prepareEnvThread.start()
parameterDict2 = {'cfg': '', \
'dbName': 'db3', \
'vgroups': 4, \
'stbName': 'stb2', \
'ctbNum': 10, \
'rowsPerTbl': 5000, \
'batchNum': 100, \
'startTs': 1640966400000} # 2022-01-01 00:00:00.000
parameterDict['cfg'] = cfgPath
prepareEnvThread2 = threading.Thread(target=self.prepareEnv, kwargs=parameterDict2)
prepareEnvThread2.start()
tdLog.info("create topics from db")
topicName1 = 'topic_db1'
tdSql.execute("create topic %s as database %s" %(topicName1, parameterDict['dbName']))
consumerId = 0
expectrowcnt = parameterDict["rowsPerTbl"] * parameterDict["ctbNum"] + parameterDict2["rowsPerTbl"] * parameterDict2["ctbNum"]
topicList = topicName1
ifcheckdata = 0
ifManualCommit = 0
keyList = 'group.id:cgrp1,\
enable.auto.commit:false,\
auto.commit.interval.ms:6000,\
auto.offset.reset:earliest'
self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
# consumerId = 1
# self.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)
event.wait()
tdLog.info("start consume processor")
pollDelay = 100
showMsg = 1
showRow = 1
self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
# wait for data ready
prepareEnvThread.join()
prepareEnvThread2.join()
tdLog.info("4-insert process end, and start to check consume result")
expectRows = 1
resultList = self.selectConsumeResult(expectRows)
totalConsumeRows = 0
for i in range(expectRows):
totalConsumeRows += resultList[i]
if totalConsumeRows != expectrowcnt:
tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
tdLog.exit("tmq consume rows error!")
tdSql.query("drop topic %s"%topicName1)
tdLog.printNoPrefix("======== test case 3 end ...... ")
def run(self):
tdSql.prepare()
buildPath = self.getBuildPath()
if (buildPath == ""):
tdLog.exit("taosd not found!")
else:
tdLog.info("taosd found in %s" % buildPath)
cfgPath = buildPath + "/../sim/psim/cfg"
tdLog.info("cfgPath: %s" % cfgPath)
self.tmqCase1(cfgPath, buildPath)
self.tmqCase2(cfgPath, buildPath)
self.tmqCase2a(cfgPath, buildPath)
self.tmqCase3(cfgPath, buildPath)
def stop(self):
tdSql.close()
tdLog.success(f"{__file__} successfully executed")
event = threading.Event()
tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())

703
test/udf/udf_create.py Normal file
View File

@ -0,0 +1,703 @@
from distutils.log import error
import taos
import sys
import time
import os
from util.log import *
from util.sql import *
from util.cases import *
from util.dnodes import *
import subprocess
import platform
if (platform.system().lower() == 'windows'):
import win32gui
import threading
class TDTestCase:
def init(self, conn, logSql, replicaVar=1):
self.replicaVar = int(replicaVar)
tdLog.debug(f"start to excute {__file__}")
tdSql.init(conn.cursor(), logSql)
def getBuildPath(self):
buildPath = ""
selfPath = os.path.dirname(os.path.realpath(__file__))
if ("community" in selfPath):
projPath = selfPath[:selfPath.find("community")]
else:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files or "taosd.exe" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root) - len("/build/bin")]
break
return buildPath
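# locate the compiled UDF shared libraries (udf1/udf2 and their _dup variants)
# under the project tree; on Windows the discovered .dll paths are rewritten to
# the .so names fetched from the remote dnode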
def prepare_udf_so(self):
selfPath = os.path.dirname(os.path.realpath(__file__))
if ("community" in selfPath):
projPath = selfPath[:selfPath.find("community")]
else:
projPath = selfPath[:selfPath.find("tests")]
print(projPath)
if platform.system().lower() == 'windows':
self.libudf1 = subprocess.Popen('(for /r %s %%i in ("udf1.d*") do @echo %%i)|grep lib|head -n1'%projPath , shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT).stdout.read().decode("utf-8")
self.libudf1_dup = subprocess.Popen('(for /r %s %%i in ("udf1_dup.d*") do @echo %%i)|grep lib|head -n1'%projPath , shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT).stdout.read().decode("utf-8")
self.libudf2 = subprocess.Popen('(for /r %s %%i in ("udf2.d*") do @echo %%i)|grep lib|head -n1'%projPath , shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT).stdout.read().decode("utf-8")
self.libudf2_dup = subprocess.Popen('(for /r %s %%i in ("udf2_dup.d*") do @echo %%i)|grep lib|head -n1'%projPath , shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT).stdout.read().decode("utf-8")
if tdDnodes.dnodes[0].remoteIP != "":
tdDnodes.dnodes[0].remote_conn.get(tdDnodes.dnodes[0].config["path"]+'/debug/build/lib/libudf1.so',projPath+"\\debug\\build\\lib\\")
tdDnodes.dnodes[0].remote_conn.get(tdDnodes.dnodes[0].config["path"]+'/debug/build/lib/libudf1_dup.so',projPath+"\\debug\\build\\lib\\")
tdDnodes.dnodes[0].remote_conn.get(tdDnodes.dnodes[0].config["path"]+'/debug/build/lib/libudf2.so',projPath+"\\debug\\build\\lib\\")
tdDnodes.dnodes[0].remote_conn.get(tdDnodes.dnodes[0].config["path"]+'/debug/build/lib/libudf2_dup.so',projPath+"\\debug\\build\\lib\\")
self.libudf1 = self.libudf1.replace('udf1.dll','libudf1.so')
self.libudf1_dup = self.libudf1_dup.replace('udf1_dup.dll','libudf1_dup.so')
self.libudf2 = self.libudf2.replace('udf2.dll','libudf2.so')
self.libudf2_dup = self.libudf2_dup.replace('udf2_dup.dll','libudf2_dup.so')
else:
self.libudf1 = subprocess.Popen('find %s -name "libudf1.so"|grep lib|head -n1'%projPath , shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT).stdout.read().decode("utf-8")
self.libudf1_dup = subprocess.Popen('find %s -name "libudf1_dup.so"|grep lib|head -n1'%projPath , shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT).stdout.read().decode("utf-8")
self.libudf2 = subprocess.Popen('find %s -name "libudf2.so"|grep lib|head -n1'%projPath , shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT).stdout.read().decode("utf-8")
self.libudf2_dup = subprocess.Popen('find %s -name "libudf2_dup.so"|grep lib|head -n1'%projPath , shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT).stdout.read().decode("utf-8")
self.libudf1 = self.libudf1.replace('\r','').replace('\n','')
self.libudf1_dup = self.libudf1_dup.replace('\r','').replace('\n','')
self.libudf2 = self.libudf2.replace('\r','').replace('\n','')
self.libudf2_dup = self.libudf2_dup.replace('\r','').replace('\n','')
def prepare_data(self):
tdSql.execute("drop database if exists db ")
tdSql.execute("create database if not exists db duration 100")
tdSql.execute("use db")
tdSql.execute(
'''create table stb1
(ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
tags (t1 int)
'''
)
tdSql.execute(
'''
create table t1
(ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
'''
)
for i in range(4):
tdSql.execute(f'create table ct{i+1} using stb1 tags ( {i+1} )')
for i in range(9):
tdSql.execute(
f"insert into ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
)
tdSql.execute(
f"insert into ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
)
tdSql.execute("insert into ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar0', now()+8a )")
tdSql.execute("insert into ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
tdSql.execute("insert into ct1 values (now()+15s, 9, -99999, -999, -99, -9.99, NULL, 1, 'binary9', 'nchar9', now()+9a )")
tdSql.execute("insert into ct1 values (now()+20s, 9, -99999, -999, NULL, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
tdSql.execute("insert into ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
tdSql.execute("insert into ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
tdSql.execute("insert into ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
tdSql.execute(
f'''insert into t1 values
( '2020-04-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
( '2020-10-21 01:01:01.000', 1, 11111, 111, 11, 1.11, 11.11, 1, "binary1", "nchar1", now()+1a )
( '2020-12-31 01:01:01.000', 2, 22222, 222, 22, 2.22, 22.22, 0, "binary2", "nchar2", now()+2a )
( '2021-01-01 01:01:06.000', 3, 33333, 333, 33, 3.33, 33.33, 0, "binary3", "nchar3", now()+3a )
( '2021-05-07 01:01:10.000', 4, 44444, 444, 44, 4.44, 44.44, 1, "binary4", "nchar4", now()+4a )
( '2021-07-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
( '2021-09-30 01:01:16.000', 5, 55555, 555, 55, 5.55, 55.55, 0, "binary5", "nchar5", now()+5a )
( '2022-02-01 01:01:20.000', 6, 66666, 666, 66, 6.66, 66.66, 1, "binary6", "nchar6", now()+6a )
( '2022-10-28 01:01:26.000', 7, 00000, 000, 00, 0.00, 00.00, 1, "binary7", "nchar7", "1970-01-01 08:00:00.000" )
( '2022-12-01 01:01:30.000', 8, -88888, -888, -88, -8.88, -88.88, 0, "binary8", "nchar8", "1969-01-01 01:00:00.000" )
( '2022-12-31 01:01:36.000', 9, -99999999999999999, -999, -99, -9.99, -999999999999999999999.99, 1, "binary9", "nchar9", "1900-01-01 00:00:00.000" )
( '2023-02-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
'''
)
tdSql.execute("create table tb (ts timestamp , num1 int , num2 int, num3 double , num4 binary(30))")
tdSql.execute(
f'''insert into tb values
( '2020-04-21 01:01:01.000', NULL, 1, 1, "binary1" )
( '2020-10-21 01:01:01.000', 1, 1, 1.11, "binary1" )
( '2020-12-31 01:01:01.000', 2, 22222, 22, "binary1" )
( '2021-01-01 01:01:06.000', 3, 33333, 33, "binary1" )
( '2021-05-07 01:01:10.000', 4, 44444, 44, "binary1" )
( '2021-07-21 01:01:01.000', NULL, NULL, NULL, "binary1" )
( '2021-09-30 01:01:16.000', 5, 55555, 55, "binary1" )
( '2022-02-01 01:01:20.000', 6, 66666, 66, "binary1" )
( '2022-10-28 01:01:26.000', 0, 00000, 00, "binary1" )
( '2022-12-01 01:01:30.000', 8, -88888, -88, "binary1" )
( '2022-12-31 01:01:36.000', 9, -9999999, -99, "binary1" )
( '2023-02-21 01:01:01.000', NULL, NULL, NULL, "binary1" )
'''
)
# udf functions with join
ts_start = 1652517451000
tdSql.execute("create stable st (ts timestamp , c1 int , c2 int ,c3 double ,c4 double ) tags(ind int)")
tdSql.execute("create table sub1 using st tags(1)")
tdSql.execute("create table sub2 using st tags(2)")
for i in range(10):
ts = ts_start + i *1000
tdSql.execute(" insert into sub1 values({} , {},{},{},{})".format(ts,i ,i*10,i*100.0,i*1000.0))
tdSql.execute(" insert into sub2 values({} , {},{},{},{})".format(ts,i ,i*10,i*100.0,i*1000.0))
def create_udf_function(self):
for i in range(5):
# create scalar functions
tdSql.execute("create function udf1 as '%s' outputtype int;"%self.libudf1)
# create aggregate functions
tdSql.execute("create aggregate function udf2 as '%s' outputtype double bufSize 8;"%self.libudf2)
functions = tdSql.getResult("show functions")
function_nums = len(functions)
if function_nums == 2:
tdLog.info("create two udf functions success ")
# drop functions
tdSql.execute("drop function udf1")
tdSql.execute("drop function udf2")
functions = tdSql.getResult("show functions")
for function in functions:
if "udf1" in function[0] or "udf2" in function[0]:
tdLog.info("drop udf functions failed ")
tdLog.exit("drop udf functions failed")
tdLog.info("drop two udf functions success ")
# create scalar functions
tdSql.execute("create function udf1 as '%s' outputtype int;"%self.libudf1)
tdSql.execute("create function udf1_dup as '%s' outputtype int;"%self.libudf1_dup)
# create aggregate functions
tdSql.execute("create aggregate function udf2 as '%s' outputtype double bufSize 8;"%self.libudf2)
tdSql.execute("create aggregate function udf2_dup as '%s' outputtype double bufSize 8;"%self.libudf2_dup)
functions = tdSql.getResult("show functions")
function_nums = len(functions)
if function_nums == 4:
tdLog.info("create four udf functions success ")
def basic_udf_query(self):
# create tsma of udf
tdSql.error("create tsma tsma_udf on db.tb function(udf1(num1)) interval(10m);") # DB error: Not buildin function (0.001656s)
tdSql.error("create tsma tsma_udf on db.stb1 function(udf1(c1)) interval(10m);") # DB error: Not buildin function (0.001656s)
# scalar functions
# udf1_dup
tdSql.query("select udf1(num1) ,udf1_dup(num1) from tb")
tdSql.checkData(1,0,1)
tdSql.checkData(1,1,2)
tdSql.checkData(2,0,1)
tdSql.checkData(2,1,2)
tdSql.execute("use db ")
tdSql.query("select num1 , udf1(num1) ,num2 ,udf1(num2),num3 ,udf1(num3),num4 ,udf1(num4) from tb")
tdSql.checkData(0,0,None)
tdSql.checkData(0,1,None)
tdSql.checkData(0,2,1)
tdSql.checkData(0,3,1)
tdSql.checkData(0,4,1.000000000)
tdSql.checkData(0,5,1)
tdSql.checkData(0,6,"binary1")
tdSql.checkData(0,7,1)
tdSql.checkData(3,0,3)
tdSql.checkData(3,1,1)
tdSql.checkData(3,2,33333)
tdSql.checkData(3,3,1)
tdSql.checkData(3,4,33.000000000)
tdSql.checkData(3,5,1)
tdSql.checkData(3,6,"binary1")
tdSql.checkData(3,7,1)
tdSql.checkData(11,0,None)
tdSql.checkData(11,1,None)
tdSql.checkData(11,2,None)
tdSql.checkData(11,3,None)
tdSql.checkData(11,4,None)
tdSql.checkData(11,5,None)
tdSql.checkData(11,6,"binary1")
tdSql.checkData(11,7,1)
tdSql.query("select c1 , udf1(c1) ,c2 ,udf1(c2), c3 ,udf1(c3), c4 ,udf1(c4) from stb1 order by c1")
tdSql.checkData(0,0,None)
tdSql.checkData(0,1,None)
tdSql.checkData(0,2,None)
tdSql.checkData(0,3,None)
tdSql.checkData(0,4,None)
tdSql.checkData(0,5,None)
tdSql.checkData(0,6,None)
tdSql.checkData(0,7,None)
tdSql.checkData(20,0,8)
tdSql.checkData(20,1,1)
tdSql.checkData(20,2,88888)
tdSql.checkData(20,3,1)
tdSql.checkData(20,4,888)
tdSql.checkData(20,5,1)
tdSql.checkData(20,6,88)
tdSql.checkData(20,7,1)
# aggregate functions
tdSql.query("select udf2(num1) ,udf2_dup(num1) from tb")
val = tdSql.queryResult[0][0] + 100
tdSql.checkData(0,1,val)
tdSql.query("select udf2(num1) ,udf2(num2), udf2(num3) from tb")
tdSql.checkData(0,0,15.362291496)
tdSql.checkData(0,1,10000949.553189287)
tdSql.checkData(0,2,168.633425216)
# Arithmetic compute
tdSql.query("select udf2(num1)+100 ,udf2(num2)-100, udf2(num3)*100 ,udf2(num3)/100 from tb")
tdSql.checkData(0,0,115.362291496)
tdSql.checkData(0,1,10000849.553189287)
tdSql.checkData(0,2,16863.342521576)
tdSql.checkData(0,3,1.686334252)
tdSql.query("select udf2(c1) ,udf2(c6) from stb1 ")
tdSql.checkData(0,0,25.514701644)
tdSql.checkData(0,1,265.247614504)
tdSql.query("select udf2(c1)+100 ,udf2(c6)-100 ,udf2(c1)*100 ,udf2(c6)/100 from stb1 ")
tdSql.checkData(0,0,125.514701644)
tdSql.checkData(0,1,165.247614504)
tdSql.checkData(0,2,2551.470164435)
tdSql.checkData(0,3,2.652476145)
# regression check: this query previously crashed when run on a subtable
tdSql.query("select udf2(c1+100) ,udf2(c6-100) ,udf2(c1*100) ,udf2(c6/100) from ct1")
tdSql.checkData(0,0,378.215547010)
tdSql.checkData(0,1,353.808067460)
tdSql.checkData(0,2,2114.237451187)
tdSql.checkData(0,3,2.125468151)
tdSql.query("select udf2(c1+100) ,udf2(c6-100) ,udf2(c1*100) ,udf2(c6/100) from stb1 ")
tdSql.checkData(0,0,490.358032462)
tdSql.checkData(0,1,400.460106627)
tdSql.checkData(0,2,2551.470164435)
tdSql.checkData(0,3,2.652476145)
# regular table with aggregate functions
tdSql.error("select udf1(num1) , count(num1) from tb;")
tdSql.error("select udf1(num1) , avg(num1) from tb;")
tdSql.error("select udf1(num1) , twa(num1) from tb;")
tdSql.error("select udf1(num1) , irate(num1) from tb;")
tdSql.error("select udf1(num1) , sum(num1) from tb;")
tdSql.error("select udf1(num1) , stddev(num1) from tb;")
tdSql.error("select udf1(num1) , HYPERLOGLOG(num1) from tb;")
# stable
tdSql.error("select udf1(c1) , count(c1) from stb1;")
tdSql.error("select udf1(c1) , avg(c1) from stb1;")
tdSql.error("select udf1(c1) , twa(c1) from stb1;")
tdSql.error("select udf1(c1) , irate(c1) from stb1;")
tdSql.error("select udf1(c1) , sum(c1) from stb1;")
tdSql.error("select udf1(c1) , stddev(c1) from stb1;")
tdSql.error("select udf1(c1) , HYPERLOGLOG(c1) from stb1;")
# regular table with select functions
tdSql.query("select udf1(num1) , max(num1) from tb;")
tdSql.checkRows(1)
tdSql.query("select floor(num1) , max(num1) from tb;")
tdSql.checkRows(1)
tdSql.query("select udf1(num1) , min(num1) from tb;")
tdSql.checkRows(1)
tdSql.query("select ceil(num1) , min(num1) from tb;")
tdSql.checkRows(1)
tdSql.query("select udf1(num1) , first(num1) from tb;")
tdSql.query("select abs(num1) , first(num1) from tb;")
tdSql.query("select udf1(num1) , last(num1) from tb;")
tdSql.query("select round(num1) , last(num1) from tb;")
tdSql.query("select udf1(num1) , top(num1,1) from tb;")
tdSql.checkRows(1)
tdSql.query("select udf1(num1) , bottom(num1,1) from tb;")
tdSql.checkRows(1)
# tdSql.query("select udf1(num1) , last_row(num1) from tb;")
# tdSql.checkRows(1)
# tdSql.query("select round(num1) , last_row(num1) from tb;")
# tdSql.checkRows(1)
# stable
tdSql.query("select udf1(c1) , max(c1) from stb1;")
tdSql.checkRows(1)
tdSql.query("select abs(c1) , max(c1) from stb1;")
tdSql.checkRows(1)
tdSql.query("select udf1(c1) , min(c1) from stb1;")
tdSql.checkRows(1)
tdSql.query("select floor(c1) , min(c1) from stb1;")
tdSql.checkRows(1)
tdSql.query("select udf1(c1) , first(c1) from stb1;")
tdSql.query("select udf1(c1) , last(c1) from stb1;")
tdSql.query("select udf1(c1) , top(c1 ,1) from stb1;")
tdSql.checkRows(1)
tdSql.query("select abs(c1) , top(c1 ,1) from stb1;")
tdSql.checkRows(1)
tdSql.query("select udf1(c1) , bottom(c1,1) from stb1;")
tdSql.checkRows(1)
tdSql.query("select ceil(c1) , bottom(c1,1) from stb1;")
tdSql.checkRows(1)
# tdSql.query("select udf1(c1) , last_row(c1) from stb1;")
# tdSql.checkRows(1)
# tdSql.query("select ceil(c1) , last_row(c1) from stb1;")
# tdSql.checkRows(1)
# regular table with compute functions
tdSql.query("select udf1(num1) , abs(num1) from tb;")
tdSql.checkRows(12)
tdSql.query("select floor(num1) , abs(num1) from tb;")
tdSql.checkRows(12)
# known bug, needs fix: udf combined with csum is disabled below
#tdSql.query("select udf1(num1) , csum(num1) from tb;")
#tdSql.checkRows(9)
#tdSql.query("select ceil(num1) , csum(num1) from tb;")
#tdSql.checkRows(9)
#tdSql.query("select udf1(c1) , csum(c1) from stb1;")
#tdSql.checkRows(22)
#tdSql.query("select floor(c1) , csum(c1) from stb1;")
#tdSql.checkRows(22)
# stable with compute functions
tdSql.query("select udf1(c1) , abs(c1) from stb1;")
tdSql.checkRows(25)
tdSql.query("select abs(c1) , ceil(c1) from stb1;")
tdSql.checkRows(25)
# nest query
tdSql.query("select abs(udf1(c1)) , abs(ceil(c1)) from stb1 order by ts;")
tdSql.checkRows(25)
tdSql.checkData(0,0,None)
tdSql.checkData(0,1,None)
tdSql.checkData(1,0,1)
tdSql.checkData(1,1,8)
tdSql.query("select abs(udf1(c1)) , abs(ceil(c1)) from ct1 order by ts;")
tdSql.checkRows(13)
tdSql.checkData(0,0,1)
tdSql.checkData(0,1,8)
tdSql.checkData(1,0,1)
tdSql.checkData(1,1,7)
# regression check for a crash when grouping by an expression over a udf result
for _ in range(50):
tdSql.query("select udf2(c1) from stb1 group by 1-udf1(c1)")
print(tdSql.queryResult)
# udf functions with filter
tdSql.query("select abs(udf1(c1)) , abs(ceil(c1)) from stb1 where c1 is null order by ts;")
tdSql.checkRows(3)
tdSql.checkData(0,0,None)
tdSql.checkData(0,1,None)
tdSql.query("select c1 ,udf1(c1) , c6 ,udf1(c6) from stb1 where c1 > 8 order by ts")
tdSql.checkRows(3)
tdSql.checkData(0,0,9)
tdSql.checkData(0,1,1)
tdSql.checkData(0,2,-99.990000000)
tdSql.checkData(0,3,1)
tdSql.query("select sub1.c1, sub2.c2 from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
tdSql.checkData(0,0,0)
tdSql.checkData(0,1,0)
tdSql.checkData(1,0,1)
tdSql.checkData(1,1,10)
tdSql.query("select udf1(sub1.c1), udf1(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
tdSql.checkData(0,0,1)
tdSql.checkData(0,1,1)
tdSql.checkData(1,0,1)
tdSql.checkData(1,1,1)
tdSql.query("select sub1.c1 , udf1(sub1.c1), sub2.c2 ,udf1(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
tdSql.checkData(0,0,0)
tdSql.checkData(0,1,1)
tdSql.checkData(0,2,0)
tdSql.checkData(0,3,1)
tdSql.checkData(1,0,1)
tdSql.checkData(1,1,1)
tdSql.checkData(1,2,10)
tdSql.checkData(1,3,1)
tdSql.query("select udf2(sub1.c1), udf2(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
tdSql.checkData(0,0,16.881943016)
tdSql.checkData(0,1,168.819430161)
tdSql.error("select sub1.c1 , udf2(sub1.c1), sub2.c2 ,udf2(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
# udf functions with group by
tdSql.query("select udf1(c1) from ct1 group by c1")
tdSql.checkRows(10)
tdSql.query("select udf1(c1) from stb1 group by c1")
tdSql.checkRows(11)
tdSql.query("select c1,c2, udf1(c1,c2) from ct1 group by c1,c2")
tdSql.checkRows(10)
tdSql.query("select c1,c2, udf1(c1,c2) from stb1 group by c1,c2")
tdSql.checkRows(11)
tdSql.query("select udf2(c1) from ct1 group by c1")
tdSql.checkRows(10)
tdSql.query("select udf2(c1) from stb1 group by c1")
tdSql.checkRows(11)
tdSql.query("select c1,c2, udf2(c1,c6) from ct1 group by c1,c2")
tdSql.checkRows(10)
tdSql.query("select c1,c2, udf2(c1,c6) from stb1 group by c1,c2")
tdSql.checkRows(11)
tdSql.query("select udf2(c1) from stb1 group by udf1(c1)")
tdSql.checkRows(2)
tdSql.query("select udf2(c1) from stb1 group by floor(c1)")
tdSql.checkRows(11)
# udf mix with order by
tdSql.query("select udf2(c1) from stb1 group by floor(c1) order by udf2(c1)")
tdSql.checkRows(11)
def multi_cols_udf(self):
tdSql.query("select num1,num2,num3,udf1(num1,num2,num3) from tb")
tdSql.checkData(0,0,None)
tdSql.checkData(0,1,1)
tdSql.checkData(0,2,1.000000000)
tdSql.checkData(0,3,None)
tdSql.checkData(1,0,1)
tdSql.checkData(1,1,1)
tdSql.checkData(1,2,1.110000000)
tdSql.checkData(1,3,88)
tdSql.query("select c1,c6,udf1(c1,c6) from stb1 order by ts")
tdSql.checkData(1,0,8)
tdSql.checkData(1,1,88.880000000)
tdSql.checkData(1,2,88)
tdSql.query("select abs(udf1(c1,c6,c1,c6)) , abs(ceil(c1)) from stb1 where c1 is not null order by ts;")
tdSql.checkRows(22)
tdSql.query("select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
tdSql.checkData(0,0,169.661427555)
tdSql.checkData(0,1,169.661427555)
def try_query_sql(self):
udf1_sqls = [
"select num1 , udf1(num1) ,num2 ,udf1(num2),num3 ,udf1(num3),num4 ,udf1(num4) from tb" ,
"select c1 , udf1(c1) ,c2 ,udf1(c2), c3 ,udf1(c3), c4 ,udf1(c4) from stb1 order by c1" ,
"select udf1(num1) , max(num1) from tb;" ,
"select udf1(num1) , min(num1) from tb;" ,
#"select udf1(num1) , top(num1,1) from tb;" ,
#"select udf1(num1) , bottom(num1,1) from tb;" ,
"select udf1(c1) , max(c1) from stb1;" ,
"select udf1(c1) , min(c1) from stb1;" ,
#"select udf1(c1) , top(c1 ,1) from stb1;" ,
#"select udf1(c1) , bottom(c1,1) from stb1;" ,
"select udf1(num1) , abs(num1) from tb;" ,
#"select udf1(num1) , csum(num1) from tb;" ,
#"select udf1(c1) , csum(c1) from stb1;" ,
"select udf1(c1) , abs(c1) from stb1;" ,
"select abs(udf1(c1)) , abs(ceil(c1)) from stb1 order by ts;" ,
"select abs(udf1(c1)) , abs(ceil(c1)) from ct1 order by ts;" ,
"select abs(udf1(c1)) , abs(ceil(c1)) from stb1 where c1 is null order by ts;" ,
"select c1 ,udf1(c1) , c6 ,udf1(c6) from stb1 where c1 > 8 order by ts" ,
"select udf1(sub1.c1), udf1(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
"select sub1.c1 , udf1(sub1.c1), sub2.c2 ,udf1(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
"select udf1(c1) from ct1 group by c1" ,
"select udf1(c1) from stb1 group by c1" ,
"select c1,c2, udf1(c1,c2) from ct1 group by c1,c2" ,
"select c1,c2, udf1(c1,c2) from stb1 group by c1,c2" ,
"select num1,num2,num3,udf1(num1,num2,num3) from tb" ,
"select c1,c6,udf1(c1,c6) from stb1 order by ts" ,
"select abs(udf1(c1,c6,c1,c6)) , abs(ceil(c1)) from stb1 where c1 is not null order by ts;"
]
udf2_sqls = ["select udf2(sub1.c1), udf2(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
"select udf2(c1) from stb1 group by 1-udf1(c1)" ,
"select udf2(num1) ,udf2(num2), udf2(num3) from tb" ,
"select udf2(num1)+100 ,udf2(num2)-100, udf2(num3)*100 ,udf2(num3)/100 from tb" ,
"select udf2(c1) ,udf2(c6) from stb1 " ,
"select udf2(c1)+100 ,udf2(c6)-100 ,udf2(c1)*100 ,udf2(c6)/100 from stb1 " ,
"select udf2(c1+100) ,udf2(c6-100) ,udf2(c1*100) ,udf2(c6/100) from ct1" ,
"select udf2(c1+100) ,udf2(c6-100) ,udf2(c1*100) ,udf2(c6/100) from stb1 " ,
"select udf2(c1) from ct1 group by c1" ,
"select udf2(c1) from stb1 group by c1" ,
"select c1,c2, udf2(c1,c6) from ct1 group by c1,c2" ,
"select c1,c2, udf2(c1,c6) from stb1 group by c1,c2" ,
"select udf2(c1) from stb1 group by udf1(c1)" ,
"select udf2(c1) from stb1 group by floor(c1)" ,
"select udf2(c1) from stb1 group by floor(c1) order by udf2(c1)" ,
"select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
"select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
"select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
"select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null"]
return udf1_sqls ,udf2_sqls
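# on Windows, dismiss the VC++ runtime error dialog by killing udfd so a
# crashed udfd cannot block the test run waiting for user input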
def checkRunTimeError(self):
if (platform.system().lower() == 'windows' and tdDnodes.dnodes[0].remoteIP == ""):
while True:
time.sleep(1)
hwnd = win32gui.FindWindow(None, "Microsoft Visual C++ Runtime Library")
if hwnd:
os.system("TASKKILL /F /IM udfd.exe")
def unexpected_create(self):
if (platform.system().lower() == 'windows' and tdDnodes.dnodes[0].remoteIP == ""):
checkErrorThread = threading.Thread(target=self.checkRunTimeError,daemon=True)
checkErrorThread.start()
tdLog.info(" create function with out bufsize ")
tdSql.query("drop function udf1 ")
tdSql.query("drop function udf2 ")
# create function without buffer
tdSql.execute("create function udf1 as '%s' outputtype int"%self.libudf1)
tdSql.execute("create aggregate function udf2 as '%s' outputtype double"%self.libudf2)
udf1_sqls ,udf2_sqls = self.try_query_sql()
for scalar_sql in udf1_sqls:
tdSql.query(scalar_sql)
for aggregate_sql in udf2_sqls:
tdSql.error(aggregate_sql)
# create functions with mismatched scalar/aggregate types
tdLog.info(" create functions with mismatched scalar/aggregate types ")
tdSql.query("drop function udf1 ")
tdSql.query("drop function udf2 ")
# scalar library registered as aggregate, aggregate library registered as scalar
tdSql.execute("create aggregate function udf1 as '%s' outputtype int bufSize 8 "%self.libudf1)
tdSql.execute("create function udf2 as '%s' outputtype double "%self.libudf2)
udf1_sqls ,udf2_sqls = self.try_query_sql()
for scalar_sql in udf1_sqls:
tdSql.error(scalar_sql)
for aggregate_sql in udf2_sqls:
tdSql.error(aggregate_sql)
tdSql.execute(" create function db as '%s' outputtype int "%self.libudf1)
tdSql.execute(" create aggregate function test as '%s' outputtype int bufSize 8 "%self.libudf1)
tdSql.error(" select db(c1) from stb1 ")
tdSql.error(" select db(c1,c6), db(c6) from stb1 ")
tdSql.error(" select db(num1,num2), db(num1) from tb ")
tdSql.error(" select test(c1) from stb1 ")
tdSql.error(" select test(c1,c6), test(c6) from stb1 ")
tdSql.error(" select test(num1,num2), test(num1) from tb ")
def loop_kill_udfd(self):
buildPath = self.getBuildPath()
if (buildPath == ""):
tdLog.exit("taosd not found!")
else:
tdLog.info("taosd found in %s" % buildPath)
cfgPath = buildPath + "/../sim/dnode1/cfg"
udfdPath = buildPath +'/build/bin/udfd'
for i in range(3):
tdLog.info(" loop restart udfd %d_th" % i)
tdSql.query("select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
tdSql.checkData(0,0,169.661427555)
tdSql.checkData(0,1,169.661427555)
# stop udfd cmds
get_processID = "ps -ef | grep -w udfd | grep -v grep| grep -v defunct | awk '{print $2}'"
processID = subprocess.check_output(get_processID, shell=True).decode("utf-8")
stop_udfd = " kill -9 %s" % processID
os.system(stop_udfd)
time.sleep(2)
tdSql.query("select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
tdSql.checkData(0,0,169.661427555)
tdSql.checkData(0,1,169.661427555)
# # start udfd cmds
# start_udfd = "nohup " + udfdPath +'-c' +cfgPath +" > /dev/null 2>&1 &"
# tdLog.info("start udfd : %s " % start_udfd)
def test_function_name(self):
tdLog.info(" create function name is not build_in functions ")
tdSql.execute(" drop function udf1 ")
tdSql.execute(" drop function udf2 ")
tdSql.error("create function max as '%s' outputtype int"%self.libudf1)
tdSql.error("create aggregate function sum as '%s' outputtype double bufSize 8"%self.libudf2)
tdSql.error("create function max as '%s' outputtype int"%self.libudf1)
tdSql.error("create aggregate function sum as '%s' outputtype double bufSize 8"%self.libudf2)
tdSql.error("create aggregate function tbname as '%s' outputtype double bufSize 8"%self.libudf2)
tdSql.error("create aggregate function function as '%s' outputtype double bufSize 8"%self.libudf2)
tdSql.error("create aggregate function stable as '%s' outputtype double bufSize 8"%self.libudf2)
tdSql.error("create aggregate function union as '%s' outputtype double bufSize 8"%self.libudf2)
tdSql.error("create aggregate function 123 as '%s' outputtype double bufSize 8"%self.libudf2)
tdSql.error("create aggregate function 123db as '%s' outputtype double bufSize 8"%self.libudf2)
tdSql.error("create aggregate function mnode as '%s' outputtype double bufSize 8"%self.libudf2)
def restart_taosd_query_udf(self):
self.create_udf_function()
for i in range(5):
tdLog.info(" this is %d_th restart taosd " %i)
tdSql.execute("use db ")
tdSql.query("select count(*) from stb1")
tdSql.checkRows(1)
tdSql.query("select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
tdSql.checkData(0,0,169.661427555)
tdSql.checkData(0,1,169.661427555)
tdDnodes.stop(1)
tdDnodes.start(1)
time.sleep(2)
def run(self): # sourcery skip: extract-duplicate-method, remove-redundant-fstring
print(" env is ok for all ")
self.prepare_udf_so()
self.prepare_data()
self.create_udf_function()
self.basic_udf_query()
self.unexpected_create()
def stop(self):
tdSql.close()
tdLog.success(f"{__file__} successfully executed")
tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())

View File

@ -21,16 +21,37 @@ from frame.sql import *
from frame.caseBase import *
from frame import *
class TDTestCase(TBase):
def caseDescription(self):
"""
[TD-11510] taosBenchmark test cases
"""
def checkVersion(self):
# run
outputs = etool.runBinFile("taosBenchmark", "-V")
print(outputs)
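# expected "-V" output: three lines shaped like (illustrative values):
#   taosBenchmark version: 3.x.y.z ...
#   git: <commit sha and branch>
#   build: <compiler and build time>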
if len(outputs) != 3:
tdLog.exit(f"checkVersion return lines count {len(outputs) != 3}")
# version string len
assert len(outputs[0]) > 27
assert outputs[0][:22] == "taosBenchmark version:"
# commit id
assert len(outputs[1]) > 43
assert outputs[1][:4] == "git:"
# build info
assert len(outputs[2]) > 36
assert outputs[2][:6] == "build:"
tdLog.info("check taosBenchmark version successfully.")
def run(self):
# check version
self.checkVersion()
# command line
binPath = etool.benchMarkFile()
cmd = (
"%s -F 7 -n 10 -t 2 -x -y -M -C -d newtest -l 5 -A binary,nchar\(31\) -b tinyint,binary\(23\),bool,nchar -w 29 -E -m $%%^*"

View File

@ -27,10 +27,29 @@ class TDTestCase(TBase):
case1<sdsang>: [TS-3072] taosdump dump escaped db name test
"""
def checkVersion(self):
# run
outputs = etool.runBinFile("taosdump", "-V")
print(outputs)
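# -V must print exactly three lines: "taosdump version: ...", "git: ...", "build: ..."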
if len(outputs) != 3:
tdLog.exit(f"checkVersion expected 3 output lines, got {len(outputs)}")
# version string len
assert len(outputs[0]) > 22
assert outputs[0][:17] == "taosdump version:"
# commit id
assert len(outputs[1]) > 43
assert outputs[1][:4] == "git:"
# build info
assert len(outputs[2]) > 36
assert outputs[2][:6] == "build:"
tdLog.info("check taosdump version successfully.")
def run(self):
# check version
self.checkVersion()
tdSql.prepare()
tdSql.execute("drop database if exists db")

View File

@ -47,6 +47,80 @@ class TDTestCase:
break
return buildPath
def checkLogBak(self, logPath, expectLogBak):
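# returns True when a .gz log backup exists (always True on Windows);
# raises if a .gz file appears while none is expected, or if a rotated
# log index exceeds 100 when no backup is expected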
if platform.system().lower() == 'windows':
return True
result = False
try:
for file in os.listdir(logPath):
file_path = os.path.join(logPath, file)
if os.path.isdir(file_path):
continue
if file.endswith('.gz'):
if expectLogBak:
result = True
else:
raise Exception(f"Error: Found .gz file: {file_path}")
if '.' in file:
prefix, num_part = file.split('.', 1)
logNum = 0
if num_part.isdigit():
logNum = int(num_part)
if logNum > 100:
if not expectLogBak:
raise Exception(f"Error: Found log file number > 100: {file_path}")
except Exception as e:
raise Exception(f"Error: error occurred. Reason: {e}")
return result
def checkTargetStrInFiles(self, filePaths, targetStr):
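# return True if targetStr appears in any existing file in filePaths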
result = False
for filePath in filePaths:
if not os.path.exists(filePath):
continue
try:
with open(filePath, 'r', encoding='utf-8') as file:
for line in file:
if targetStr in line:
result = True
break
except Exception:
continue
return result
def logRotateOccurred(self, logFiles, targetStr, maxRetry=15):
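# poll the log files up to maxRetry seconds for the rotation marker string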
result = False
for i in range(maxRetry):
if self.checkTargetStrInFiles(logFiles, targetStr):
result = True
break
tdLog.info(f"wait {i+1} second(s) for log rotate")
time.sleep(1)
return result
def checkLogCompress(self):
tdLog.info("Running check log compress")
dnodePath = self.buildPath + "/../sim/dnode1"
logPath = f"{dnodePath}/log"
taosdLogFiles = [f"{logPath}/taosdlog.0", f"{logPath}/taosdlog.1"]
logRotateStr="process log rotation"
logRotateResult = self.logRotateOccurred(taosdLogFiles, logRotateStr)
tdSql.checkEqual(True, logRotateResult)
tdSql.checkEqual(False, self.checkLogBak(logPath, False))
tdSql.execute("alter all dnodes 'logKeepDays 3'")
tdSql.execute("alter all dnodes 'numOfLogLines 1000'")
tdSql.execute("alter all dnodes 'debugFlag 143'")
logCompress = False
for i in range(30):
logCompress = self.checkLogBak(logPath, True)
if logCompress:
break
tdLog.info(f"wait {i+1} second(s) for log compress")
time.sleep(1)
tdSql.checkEqual(True, logCompress)
tdSql.execute("alter all dnodes 'numOfLogLines 1000000'")
tdSql.execute("alter all dnodes 'debugFlag 131'")
def prepareCfg(self, cfgPath, cfgDict):
tdLog.info("make dir %s" % cfgPath)
os.makedirs(cfgPath, exist_ok=True)
@ -338,6 +412,7 @@ class TDTestCase:
tdSql.checkEqual(True, os.path.exists(f"{dnodePath}/log/taoslog0.0"))
def run(self):
self.checkLogCompress()
self.checkLogOutput()
self.checkLogRotate()
self.closeTaosd()

View File

@ -65,18 +65,6 @@
#endif
#endif
// taosdump commit id and version fallbacks
#ifndef TAOSDUMP_COMMIT_SHA1
#define TAOSDUMP_COMMIT_SHA1 "unknown"
#endif
#ifndef TAOSDUMP_TAG
#define TAOSDUMP_TAG "0.1.0"
#endif
#ifndef TAOSDUMP_STATUS
#define TAOSDUMP_STATUS "unknown"
#endif
// use 256 as normal buffer length
#define BUFFER_LEN 256

View File

@ -36,158 +36,52 @@ ELSEIF (CMAKE_SYSTEM_PROCESSOR MATCHES "loongarch64")
MESSAGE(STATUS "Set CPUTYPE to loongarch64")
ENDIF ()
#
# collect --version information
#
MESSAGE("collect --version show info:")
# version
IF (DEFINED TD_VER_NUMBER)
ADD_DEFINITIONS(-DTD_VER_NUMBER="${TD_VER_NUMBER}")
MESSAGE(STATUS "version:${TD_VER_NUMBER}")
ELSE ()
# abort build
MESSAGE(FATAL_ERROR "build taos-tools not found TD_VER_NUMBER define.")
ENDIF ()
# commit id
FIND_PACKAGE(Git)
IF(GIT_FOUND)
IF (EXISTS "${CMAKE_CURRENT_LIST_DIR}/../VERSION")
MESSAGE("Found VERSION file")
EXECUTE_PROCESS(
COMMAND grep "^taosdump" "${CMAKE_CURRENT_LIST_DIR}/../VERSION"
RESULT_VARIABLE RESULT
OUTPUT_VARIABLE TAOSDUMP_FULLTAG
)
EXECUTE_PROCESS(
COMMAND sh -c "git --git-dir=${CMAKE_CURRENT_LIST_DIR}/../.git --work-tree=${CMAKE_CURRENT_LIST_DIR}/.. log --pretty=oneline -n 1 HEAD"
WORKING_DIRECTORY ${CMAKE_CURRENT_LIST_DIR}
RESULT_VARIABLE RESULT
OUTPUT_VARIABLE TAOSDUMP_COMMIT_SHA1
)
EXECUTE_PROCESS(
COMMAND grep "^taosbenchmark" "${CMAKE_CURRENT_LIST_DIR}/../VERSION"
RESULT_VARIABLE RESULT
OUTPUT_VARIABLE TAOSBENCHMARK_FULLTAG
)
EXECUTE_PROCESS(
COMMAND sh -c "git --git-dir=${CMAKE_CURRENT_LIST_DIR}/../.git --work-tree=${CMAKE_CURRENT_LIST_DIR}/.. log --pretty=oneline -n 1 HEAD"
WORKING_DIRECTORY ${CMAKE_CURRENT_LIST_DIR}
RESULT_VARIABLE RESULT
OUTPUT_VARIABLE TAOSBENCHMARK_COMMIT_SHA1
)
ELSE ()
MESSAGE("Use git tag")
EXECUTE_PROCESS(
COMMAND sh -c "git for-each-ref --sort=taggerdate --format '%(tag)' refs/tags|grep taosdump|tail -1"
WORKING_DIRECTORY ${CMAKE_CURRENT_LIST_DIR}
RESULT_VARIABLE TAG_RESULT
OUTPUT_VARIABLE TAOSDUMP_FULLTAG
)
EXECUTE_PROCESS(
COMMAND sh -c "git --git-dir=${CMAKE_CURRENT_LIST_DIR}/../.git --work-tree=${CMAKE_CURRENT_LIST_DIR}/.. log --pretty=oneline -n 1 ${TAOSDUMP_FULLTAG}"
WORKING_DIRECTORY ${CMAKE_CURRENT_LIST_DIR}
RESULT_VARIABLE RESULT
OUTPUT_VARIABLE TAOSDUMP_COMMIT_SHA1
)
EXECUTE_PROCESS(
COMMAND sh -c "git for-each-ref --sort=taggerdate --format '%(tag)' refs/tags|grep taosbenchmark|tail -1"
WORKING_DIRECTORY ${CMAKE_CURRENT_LIST_DIR}
RESULT_VARIABLE TAG_RESULT
OUTPUT_VARIABLE TAOSBENCHMARK_FULLTAG
)
EXECUTE_PROCESS(
COMMAND sh -c "git --git-dir=${CMAKE_CURRENT_LIST_DIR}/../.git --work-tree=${CMAKE_CURRENT_LIST_DIR}/.. log --pretty=oneline -n 1 ${TAOSBENCHMARK_FULLTAG}"
WORKING_DIRECTORY ${CMAKE_CURRENT_LIST_DIR}
RESULT_VARIABLE RESULT
OUTPUT_VARIABLE TAOSBENCHMARK_COMMIT_SHA1
)
ENDIF ()
# get modified status of taosdump.c
EXECUTE_PROCESS(
COMMAND sh -c "git --git-dir=${CMAKE_CURRENT_LIST_DIR}/../.git --work-tree=${CMAKE_CURRENT_LIST_DIR}/.. status -z -s ${CMAKE_CURRENT_LIST_DIR}/taosdump.c"
RESULT_VARIABLE RESULT
OUTPUT_VARIABLE TAOSDUMP_STATUS
ERROR_QUIET
)
COMMAND git log -1 --format=%H
WORKING_DIRECTORY ${TD_COMMUNITY_DIR}
OUTPUT_VARIABLE GIT_COMMIT_ID
)
# version
IF (DEFINED TD_VER_NUMBER)
# use tdengine version
SET(TAOSBENCHMARK_TAG ${TD_VER_NUMBER})
SET(TAOSDUMP_TAG ${TD_VER_NUMBER})
MESSAGE(STATUS "use TD_VER_NUMBER version: " ${TD_VER_NUMBER})
ELSE ()
# use internal version
EXECUTE_PROCESS(
COMMAND sh "-c" "echo '${TAOSDUMP_FULLTAG}' | awk -F '-' '{print $2}'"
RESULT_VARIABLE RESULT
OUTPUT_VARIABLE TAOSDUMP_TAG
)
MESSAGE(STATUS "taosdump use origin version: " ${TAOSDUMP_TAG})
EXECUTE_PROCESS(
COMMAND sh "-c" "echo '${TAOSBENCHMARK_FULLTAG}' | awk -F '-' '{print $2}'"
RESULT_VARIABLE RESULT
OUTPUT_VARIABLE TAOSBENCHMARK_TAG
)
MESSAGE(STATUS "taosBenchmark use origin version: " ${TAOSBENCHMARK_TAG})
ENDIF ()
STRING(SUBSTRING "${GIT_COMMIT_ID}" 0 40 TAOSBENCHMARK_COMMIT_ID)
SET(TAOSDUMP_COMMIT_ID "${TAOSBENCHMARK_COMMIT_ID}")
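# both tools are built from the same repository, so taosdump reuses the
# taosBenchmark commit id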
EXECUTE_PROCESS(
COMMAND sh -c "git --git-dir=${CMAKE_CURRENT_LIST_DIR}/../.git --work-tree=${CMAKE_CURRENT_LIST_DIR}/.. status -z -s ${CMAKE_CURRENT_LIST_DIR}/bench*.c"
RESULT_VARIABLE RESULT
OUTPUT_VARIABLE TAOSBENCHMARK_STATUS
ERROR_QUIET
)
IF ("${TAOSDUMP_COMMIT_SHA1}" STREQUAL "")
SET(TAOSDUMP_COMMIT_SHA1 "unknown")
ELSE ()
STRING(SUBSTRING "${TAOSDUMP_COMMIT_SHA1}" 0 40 TAOSDUMP_COMMIT_SHA1)
STRING(STRIP "${TAOSDUMP_COMMIT_SHA1}" TAOSDUMP_COMMIT_SHA1)
ENDIF ()
IF ("${TAOSDUMP_TAG}" STREQUAL "")
SET(TAOSDUMP_TAG "0.1.0")
ELSE ()
STRING(STRIP "${TAOSDUMP_TAG}" TAOSDUMP_TAG)
ENDIF ()
IF ("${TAOSBENCHMARK_COMMIT_SHA1}" STREQUAL "")
SET(TAOSBENCHMARK_COMMIT_SHA1 "unknown")
ELSE ()
STRING(SUBSTRING "${TAOSBENCHMARK_COMMIT_SHA1}" 0 40 TAOSBENCHMARK_COMMIT_SHA1)
STRING(STRIP "${TAOSBENCHMARK_COMMIT_SHA1}" TAOSBENCHMARK_COMMIT_SHA1)
ENDIF ()
IF ("${TAOSBENCHMARK_TAG}" STREQUAL "")
SET(TAOSBENCHMARK_TAG "0.1.0")
ELSE ()
STRING(STRIP "${TAOSBENCHMARK_TAG}" TAOSBENCHMARK_TAG)
ENDIF ()
# show
MESSAGE(STATUS "taosdump commit id: ${TAOSDUMP_COMMIT_ID}")
MESSAGE(STATUS "taosBenchmark commit id: ${TAOSBENCHMARK_COMMIT_ID}")
# define
ADD_DEFINITIONS(-DTAOSDUMP_COMMIT_ID="${TAOSDUMP_COMMIT_ID}")
ADD_DEFINITIONS(-DTAOSBENCHMARK_COMMIT_ID="${TAOSBENCHMARK_COMMIT_ID}")
ELSE()
MESSAGE("Git not found")
SET(TAOSDUMP_COMMIT_SHA1 "unknown")
SET(TAOSBENCHMARK_COMMIT_SHA1 "unknown")
SET(TAOSDUMP_TAG "0.1.0")
SET(TAOSDUMP_STATUS "unknown")
SET(TAOSBENCHMARK_STATUS "unknown")
MESSAGE(FATAL_ERROR "build taos-tools FIND_PACKAGE(Git) failed.")
ENDIF (GIT_FOUND)
STRING(STRIP "${TAOSDUMP_STATUS}" TAOSDUMP_STATUS)
STRING(STRIP "${TAOSBENCHMARK_STATUS}" TAOSBENCHMARK_STATUS)
IF (TAOSDUMP_STATUS MATCHES "M")
SET(TAOSDUMP_STATUS "modified")
ELSE()
SET(TAOSDUMP_STATUS "")
ENDIF ()
IF (TAOSBENCHMARK_STATUS MATCHES "M")
SET(TAOSBENCHMARK_STATUS "modified")
ELSE()
SET(TAOSBENCHMARK_STATUS "")
ENDIF ()
MESSAGE("")
MESSAGE("taosdump last tag: ${TAOSDUMP_TAG}")
MESSAGE("taosdump commit: ${TAOSDUMP_COMMIT_SHA1}")
MESSAGE("taosdump status: ${TAOSDUMP_STATUS}")
MESSAGE("")
MESSAGE("taosBenchmark last tag: ${TAOSBENCHMARK_TAG}")
MESSAGE("taosBenchmark commit: ${TAOSBENCHMARK_COMMIT_SHA1}")
MESSAGE("taosBenchmark status: ${TAOSBENCHMARK_STATUS}")
# build info
SET(BUILD_INFO "${TD_VER_OSTYPE}-${TD_VER_CPUTYPE} ${TD_VER_DATE}")
ADD_DEFINITIONS(-DBUILD_INFO="${BUILD_INFO}")
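# e.g. build:Linux-x64 2025-02-14 (illustrative; actual values come from
# TD_VER_OSTYPE, TD_VER_CPUTYPE and TD_VER_DATE)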
MESSAGE(STATUS "build:${BUILD_INFO}")
MESSAGE("")
ADD_DEFINITIONS(-DTAOSDUMP_TAG="${TAOSDUMP_TAG}")
ADD_DEFINITIONS(-DTAOSDUMP_COMMIT_SHA1="${TAOSDUMP_COMMIT_SHA1}")
ADD_DEFINITIONS(-DTAOSDUMP_STATUS="${TAOSDUMP_STATUS}")
ADD_DEFINITIONS(-DTAOSBENCHMARK_TAG="${TAOSBENCHMARK_TAG}")
ADD_DEFINITIONS(-DTAOSBENCHMARK_COMMIT_SHA1="${TAOSBENCHMARK_COMMIT_SHA1}")
ADD_DEFINITIONS(-DTAOSBENCHMARK_STATUS="${TAOSBENCHMARK_STATUS}")
#
# build proj
#
LINK_DIRECTORIES(${CMAKE_BINARY_DIR}/build/lib ${CMAKE_BINARY_DIR}/build/lib64)
LINK_DIRECTORIES(/usr/lib /usr/lib64)
INCLUDE_DIRECTORIES(/usr/local/taos/include)
@ -252,6 +146,7 @@ IF (${CMAKE_SYSTEM_NAME} MATCHES "Linux" OR ${CMAKE_SYSTEM_NAME} MATCHES "Darwin
ENDIF()
ENDIF ()
# websocket
IF (${WEBSOCKET})
ADD_DEFINITIONS(-DWEBSOCKET)
INCLUDE_DIRECTORIES(/usr/local/include/)
@ -277,6 +172,7 @@ IF (${CMAKE_SYSTEM_NAME} MATCHES "Linux" OR ${CMAKE_SYSTEM_NAME} MATCHES "Darwin
SET(GCC_COVERAGE_LINK_FLAGS "-lgcov --coverage")
ENDIF ()
# sanitizer
IF (${BUILD_SANITIZER})
MESSAGE("${Yellow} Enable memory sanitize by BUILD_SANITIZER ${ColourReset}")
IF (${OS_ID} MATCHES "Darwin")
@ -288,6 +184,7 @@ IF (${CMAKE_SYSTEM_NAME} MATCHES "Linux" OR ${CMAKE_SYSTEM_NAME} MATCHES "Darwin
SET(TOOLS_SANITIZE_FLAG "")
ENDIF ()
# TOOLS_BUILD_TYPE
IF (${TOOLS_BUILD_TYPE} MATCHES "Debug")
IF ((${TOOLS_SANITIZE} MATCHES "true") OR (${BUILD_SANITIZER}))
MESSAGE("${Yellow} Enable memory sanitize by TOOLS_SANITIZE ${ColourReset}")

View File

@ -16,20 +16,6 @@
extern char g_configDir[MAX_PATH_LEN];
// taosBenchmark commit id and version fallbacks
#ifndef TAOSBENCHMARK_COMMIT_SHA1
#define TAOSBENCHMARK_COMMIT_SHA1 "unknown"
#endif
#ifndef TAOSBENCHMARK_TAG
#define TAOSBENCHMARK_TAG "0.1.0"
#endif
#ifndef TAOSBENCHMARK_STATUS
#define TAOSBENCHMARK_STATUS "unknown"
#endif
char *g_aggreFuncDemo[] = {"*",
"count(*)",
"avg(current)",
@ -42,16 +28,10 @@ char *g_aggreFunc[] = {"*", "count(*)", "avg(C0)", "sum(C0)",
"max(C0)", "min(C0)", "first(C0)", "last(C0)"};
void printVersion() {
char taosBenchmark_ver[] = TAOSBENCHMARK_TAG;
char taosBenchmark_commit[] = TAOSBENCHMARK_COMMIT_SHA1;
char taosBenchmark_status[] = TAOSBENCHMARK_STATUS;
// version
printf("taosBenchmark version: %s\ngit: %s\n", taosBenchmark_ver, taosBenchmark_commit);
printf("build: %s\n", getBuildInfo());
if (strlen(taosBenchmark_status) > 0) {
printf("status: %s\n", taosBenchmark_status);
}
// version macros are defined in src/CMakeLists.txt
printf("taosBenchmark version: %s\n", TD_VER_NUMBER);
printf("git: %s\n", TAOSBENCHMARK_COMMIT_ID);
printf("build: %s\n", BUILD_INFO);
}
void parseFieldDatatype(char *dataType, BArray *fields, bool isTag) {

View File

@ -236,8 +236,6 @@ struct arguments g_args = {
1000 // retrySleepMs
};
static uint64_t getUniqueIDFromEpoch() {
struct timeval tv;
@ -256,24 +254,17 @@ static uint64_t getUniqueIDFromEpoch() {
return id;
}
// --version -V
static void printVersion(FILE *file) {
char taostools_longver[] = TAOSDUMP_TAG;
char taosdump_status[] = TAOSDUMP_STATUS;
char *dupSeq = strdup(taostools_longver);
char *running = dupSeq;
char *taostools_ver = strsep(&running, "-");
char taosdump_commit[] = TAOSDUMP_COMMIT_SHA1;
fprintf(file,"taosdump version: %s\ngit: %s\n", taostools_ver, taosdump_commit);
printf("build: %s\n", getBuildInfo());
if (strlen(taosdump_status) > 0) {
fprintf(file, "status:%s\n", taosdump_status);
if (file == NULL) {
printf("fail, printVersion called with NULL file pointer.\n");
return;
}
free(dupSeq);
// version macros are defined in src/CMakeLists.txt
fprintf(file, "taosdump version: %s\n", TD_VER_NUMBER);
fprintf(file, "git: %s\n", TAOSDUMP_COMMIT_ID);
fprintf(file, "build: %s\n", BUILD_INFO);
}
static char *typeToStr(int type) {
@ -8928,8 +8919,8 @@ static int dumpExtraInfoHead(void *taos, FILE *fp) {
errno, strerror(errno));
}
char taostools_ver[] = TAOSDUMP_TAG;
char taosdump_commit[] = TAOSDUMP_COMMIT_SHA1;
char taostools_ver[] = TD_VER_NUMBER;
char taosdump_commit[] = TAOSDUMP_COMMIT_ID;
snprintf(buffer, BUFFER_LEN, "#!"CUS_PROMPT"dump_ver: %s_%s\n",
taostools_ver, taosdump_commit);
@ -9448,7 +9439,7 @@ static int dumpInDbs(const char *dbPath) {
}
#endif
int taosToolsMajorVer = atoi(TAOSDUMP_TAG);
int taosToolsMajorVer = atoi(TD_VER_NUMBER);
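// atoi stops at the first non-digit, so a version string such as "3.0.5.0" yields major version 3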
if ((g_dumpInDataMajorVer > 1) && (1 == taosToolsMajorVer)) {
errorPrint("\tThe data file was generated by version %d\n"
"\tCannot be restored by current version: %d\n\n"