Merge branch '3.0' into feature/TD-14481-3.0

commit ab95c99092
@@ -145,7 +145,7 @@ void subscribe_callback(TAOS_SUB* tsub, TAOS_RES *res, void* param, int code) {
taos_unsubscribe(tsub, keep);
```

Its second parameter determines whether to keep the subscription progress on the client side. If this parameter is **false** (**0**), then no matter what the `restart` parameter is the next time `taos_subscribe` is called, the subscription can only start over. In addition, the progress information is saved in the _{DataDir}/subscribe/_ directory, where each subscription has a file with the same name as its `topic`; deleting a file likewise causes the corresponding subscription to start over the next time it is created.
Its second parameter determines whether to keep the subscription progress on the client side. If this parameter is **false** (**0**), then no matter what the `restart` parameter is the next time `taos_subscribe` is called, the subscription can only start over. In addition, the progress information is saved in the _{DataDir}/subscribe/_ directory (Note: the default value of the `DataDir` parameter in the `taos.cfg` configuration file is **/var/lib/taos/**, but that directory does not exist on Windows servers, so on Windows you need to change the `DataDir` value in the configuration file to an existing directory), where each subscription has a file with the same name as its `topic`; deleting a file likewise causes the corresponding subscription to start over the next time it is created.

That concludes the code walkthrough. Now let's look at the actual running effect. Assume:
@@ -1766,6 +1766,8 @@ SELECT TIMEDIFF(ts_val1 | datetime_string1 | ts_col1, ts_val2 | datetime_string2
1u (microsecond), 1a (millisecond), 1s (second), 1m (minute), 1h (hour), 1d (day).
- If the time unit time_unit is not specified, the precision of the returned time difference is the same as the time precision configured for the current DATABASE.

**Applicable versions**: 2.6.0.0 and later.

**Examples**:

```sql
@@ -33,15 +33,15 @@ title: Frequently Asked Questions

### 2. JDBCDriver cannot find the dynamic link library on Windows. What should I do?

Please see the [technical blog](https://www.taosdata.com/blog/2019/12/03/950.html) written for this problem.

### 3. "more dnodes are needed" is prompted when creating a data table

Please see the [technical blog](https://www.taosdata.com/blog/2019/12/03/965.html) written for this problem.

### 4. How to generate a core file when TDengine crashes?

Please see the [technical blog](https://www.taosdata.com/blog/2019/12/06/974.html) written for this problem.

### 5. What to do if I encounter the error "Unable to establish connection"?
@@ -128,19 +128,30 @@ properties.setProperty(TSDBDriver.LOCALE_KEY, "UTF-8");
Connection = DriverManager.getConnection(url, properties);
```

### 13. JDBC error: the executed SQL is not a DML or a DDL?
### 13. Why can't the client display Chinese characters correctly on Windows?

Windows systems usually store Chinese characters in GBK/GB18030, while TDengine's default character set is UTF-8. When the TDengine client is used on Windows, the client driver converts characters to UTF-8 before sending them to the server for storage, so during application development it is enough to configure the current Chinese character set correctly when calling the interface.

[Versions after v2.2.1.5] If the TDengine CLI `taos` cannot input or display Chinese correctly when running on Windows 10, the following settings can be added to the client-side taos.cfg:

```
locale C
charset UTF-8
```

### 14. JDBC error: the executed SQL is not a DML or a DDL?

Please update to the latest JDBC driver. See [Java Connector](/reference/connector/java).

### 14. taos connect failed, reason: invalid timestamp
### 15. taos connect failed, reason: invalid timestamp

A common cause is that the server and client clocks are not synchronized. This can be fixed by synchronizing with a time server (on Linux use the ntpdate command; on Windows select automatic synchronization in the system time settings).

### 15. Table names are not displayed in full
### 16. Table names are not displayed in full

Because the display width of the taos shell in the terminal is limited, longer table names may be truncated, and operations performed with a truncated name will fail with a "Table does not exist" error. This can be resolved by modifying the maxBinaryDisplayWidth setting in taos.cfg, by entering the command `set max_binary_display_width 100` directly, or by appending `\G` to the end of the statement to change how the result is displayed.

### 16. How to migrate data?
### 17. How to migrate data?

TDengine uniquely identifies a machine by its hostname. When moving data files from machine A to machine B, pay attention to the following two things:
@@ -148,7 +159,7 @@ TDengine uniquely identifies a machine by its hostname. When moving data files f
- For version 2.0.7.0 and later, go to /var/lib/taos/dnode, fix the FQDN corresponding to the dnodeId in dnodeEps.json, and restart. Ensure this file is identical on all machines in the cluster.
- The storage structures of versions 1.x and 2.x are incompatible; you need to use a migration tool or develop your own application to export and import the data.

### 17. How to temporarily adjust the log level in the command-line program taos
### 18. How to temporarily adjust the log level in the command-line program taos

For debugging convenience, starting from version 2.0.16 the command-line program taos adds two new log-related commands:
@@ -169,7 +180,7 @@ ALTER LOCAL RESETLOG;

<a class="anchor" id="timezone"></a>

### 18. How to resolve a compilation failure of the component written in Go?
### 19. How to resolve a compilation failure of the component written in Go?

TDengine 2.3.0.0 and later versions include taosAdapter, a stand-alone component written in Go that must be run separately. It replaces the httpd previously built into taosd, providing the original httpd functionality plus data ingestion from a variety of other software (Prometheus, Telegraf, collectd, StatsD, etc.).
To compile from the latest develop branch code, first run `git submodule update --init --recursive` to download the taosAdapter repository code, then compile.
@@ -184,7 +195,7 @@ go env -w GOPROXY=https://goproxy.cn,direct
If you wish to continue using the previously built-in httpd, you can disable the taosAdapter build by using
`cmake .. -DBUILD_HTTP=true` to use the original built-in httpd.

### 19. How to check the storage space occupied by the data?
### 20. How to check the storage space occupied by the data?

By default, TDengine's data files are stored in /var/lib/taos and its log files in /var/log/taos.
@@ -193,3 +204,50 @@ go env -w GOPROXY=https://goproxy.cn,direct
To check the size occupied by a single database, run `show vgroups;` in the command-line program taos after specifying the database to inspect, then use the resulting VGroup ids to check the sizes of the corresponding folders under /var/lib/taos/vnode.

To check only the data block distribution and size of a given (super) table, see the [_block_dist function](https://docs.taosdata.com/taos-sql/select/#_block_dist-%E5%87%BD%E6%95%B0)

### 21. How to ensure high availability of the client connection string?

Please see the [technical blog](https://www.taosdata.com/blog/2021/04/16/2287.html) written for this problem.

### 22. How is the time zone information of timestamps handled?

In TDengine, the time zone of a timestamp is always handled by the client and is independent of the server. Specifically, the client converts timestamps in SQL statements to UTC (i.e. Unix timestamps) before handing them to the server for writing and querying; when reading data, the server likewise provides raw data in UTC, and the client then converts the timestamps to the time zone required by the local system for display, according to its local settings.

When processing timestamp strings, the client applies the following logic:

1. Without any special setting, the client defaults to the time zone of the operating system it runs on.
2. If the timezone parameter is set in taos.cfg, the client uses the setting in that configuration file.
3. If a timezone is explicitly specified when establishing the database connection in the Connector Driver of C/C++/Java/Python etc., that specified time zone takes precedence. For example, the JDBC URL of the Java Connector has a timezone parameter.
4. When writing SQL statements, you can also use a Unix timestamp directly (e.g. `1554984068000`) or a timestamp string with a time zone, i.e. in RFC 3339 format (e.g. `2013-04-12T15:52:01.123+08:00`) or ISO 8601 format (e.g. `2013-04-12T15:52:01.123+0800`); in that case the timestamp values are no longer affected by other time zone settings.

### 23. Which network ports does TDengine 2.0 use?

TDengine 2.0 uses the following network ports (described assuming the default port 6030; if the settings in the configuration file are modified, the ports listed here change accordingly). Administrators can refer to this information to adjust firewall settings:

| Protocol | Default Port | Purpose | Modification |
| :------- | :----------- | :---------------------------------- | :------------------------------- |
| TCP | 6030 | Communication between client and server. | Set by the serverPort configuration parameter. |
| TCP | 6035 | Communication between nodes in a multi-node cluster. | Changes with serverPort. |
| TCP | 6040 | Data synchronization between nodes in a multi-node cluster. | Changes with serverPort. |
| TCP | 6041 | RESTful communication between client and server. | Changes with serverPort. Configured by taosAdapter in version 2.4.0.0 and later. |
| TCP | 6042 | Arbitrator service port. | Changes with the Arbitrator startup parameters. |
| TCP | 6043 | TaosKeeper monitoring service port. | Changes with the TaosKeeper startup parameters. |
| TCP | 6044 | Data ingestion port for StatsD. | Changes with the taosAdapter startup parameters (version 2.4.0.0 and later). |
| UDP | 6045 | Data ingestion port for collectd. | Changes with the taosAdapter startup parameters (version 2.4.0.0 and later). |
| TCP | 6060 | Port of the Monitor service in the enterprise edition. | |
| UDP | 6030-6034 | Communication between client and server. | Changes with serverPort. |
| UDP | 6035-6039 | Communication between nodes in a multi-node cluster. | Changes with serverPort. |

### 24. Why is the RESTful interface unresponsive, why can't Grafana add TDengine as a data source, and why can't TDengineGUI connect even with port 6041 selected?

Starting from TDengine 2.4.0.0, taosAdapter is part of the TDengine server software and acts as the bridge and adapter between TDengine clusters and applications. Before that, the RESTful interface and related functionality were provided by the HTTP service built into taosd; now you need to run `systemctl start taosadapter` to start the taosAdapter service.

Note that taosAdapter's log path needs to be configured separately; the default path is /var/log/taos. Its logLevel has 8 levels, defaulting to info; setting it to panic disables log output. Please pay attention to the free space of the / filesystem. The configuration can be modified via command-line parameters, environment variables, or the configuration file; the default configuration file is /etc/taos/taosadapter.toml.

For a detailed introduction to the taosAdapter component, see the documentation: [taosAdapter](https://docs.taosdata.com/reference/taosadapter/)

### 25. What to do when an OOM occurs?

OOM is a protection mechanism of the operating system: when OS memory (including SWAP) is insufficient, the OS kills some processes to keep the system running stably. Insufficient memory is usually caused by one of two things: the remaining memory is less than vm.min_free_kbytes, or a program requests more memory than is left. There is also a case where memory is sufficient but a program occupies a special memory address; this can also trigger an OOM.

TDengine pre-allocates memory for each VNode. The number of VNodes per Database is affected by maxVgroupsPerDb, and the memory occupied by each VNode is affected by Blocks and Cache. To prevent OOM, plan memory properly at the start of the project and configure SWAP reasonably. Apart from that, querying excessive amounts of data can also cause memory usage to surge, depending on the specific query. TDengine Enterprise optimizes memory management with a new memory allocator; users with higher stability requirements may consider the enterprise edition.
@@ -151,7 +151,7 @@ void subscribe_callback(TAOS_SUB* tsub, TAOS_RES *res, void* param, int code) {
taos_unsubscribe(tsub, keep);
```

The second parameter `keep` is used to specify whether to keep the subscription progress on the client side. If it is **false**, i.e. **0**, then the subscription will be restarted from the beginning regardless of the `restart` parameter's value when `taos_subscribe` is invoked again. The subscription progress information is stored in _{DataDir}/subscribe/_, under which there is a file with the same name as `topic` for each subscription; the subscription will be restarted from the beginning if the corresponding progress file is removed.
The second parameter `keep` is used to specify whether to keep the subscription progress on the client side. If it is **false**, i.e. **0**, then the subscription will be restarted from the beginning regardless of the `restart` parameter's value when `taos_subscribe` is invoked again. The subscription progress information is stored in _{DataDir}/subscribe/_, under which there is a file with the same name as `topic` for each subscription (Note: the default value of `DataDir` in the `taos.cfg` file is **/var/lib/taos/**. However, **/var/lib/taos/** does not exist on Windows servers, so you need to change the `DataDir` value to an existing directory); the subscription will be restarted from the beginning if the corresponding progress file is removed.

Now let's see the effect of the above sample code, assuming the below prerequisites have been done.
@@ -1839,6 +1839,8 @@ SELECT TIMEDIFF(ts_val1 | datetime_string1 | ts_col1, ts_val2 | datetime_string2
1u (microsecond), 1a (millisecond), 1s (second), 1m (minute), 1h (hour), 1d (day).
- If the time unit time_unit is not specified, the precision of the returned time difference is the same as the time precision set for the current database in use.

**Applicable versions**: 2.6.0.0 and later.

**Examples**:

```sql
@@ -4,13 +4,13 @@ title: Problem Diagnostics

## Network Connection Diagnostics

When the client is unable to access the server, the network connection between the client side and the server side needs to be checked to find out the root cause and resolve problems.
When a TDengine client is unable to access a TDengine server, the network connection between the client side and the server side must be checked to find the root cause and resolve problems.

The diagnostic for network connection can be executed between Linux and Linux or between Linux and Windows.
Diagnostics for network connections can be executed between Linux and Linux or between Linux and Windows.

Diagnostic steps:

1. If the port range to be diagnosed are being occupied by a `taosd` server process, please first stop `taosd`.
1. If the port range to be diagnosed is being occupied by a `taosd` server process, please first stop `taosd`.
2. On the server side, execute command `taos -n server -P <port> -l <pktlen>` to monitor the port range starting from the port specified by the `-P` parameter with the role of "server".
3. On the client side, execute command `taos -n client -h <fqdn of server> -P <port> -l <pktlen>` to send a testing package to the specified server and port.
@@ -65,13 +65,13 @@ Output of the client side for the example is below:
12/21 14:50:22.721274 0x7fc95d859200 UTL successed to test UDP port:6011
```

The output needs to be checked carefully for the system operator to find out the root cause and solve the problem.
The output needs to be checked carefully for the system operator to find the root cause and resolve the problem.

## Startup Status and RPC Diagnostic

`taos -n startup -h <fqdn of server>` can be used to check the startup status of a `taosd` process. This is a comman task for a system operator to do to determine whether `taosd` has been started successfully, especially in case of cluster.
`taos -n startup -h <fqdn of server>` can be used to check the startup status of a `taosd` process. This is a common task which should be performed by a system operator, especially in the case of a cluster, to determine whether `taosd` has been started successfully.

`taos -n rpc -h <fqdn of server>` can be used to check whether the port of a started `taosd` can be accessed or not. If `taosd` process doesn't respond or is working abnormally, this command can be used to initiate a rpc communication with the specified fqdn to determine whether it's a network problem or `taosd` is abnormal.
`taos -n rpc -h <fqdn of server>` can be used to check whether the port of a started `taosd` can be accessed or not. If the `taosd` process doesn't respond or is working abnormally, this command can be used to initiate an RPC communication with the specified FQDN to determine whether it's a network problem or whether `taosd` is abnormal.

## Sync and Arbitrator Diagnostic
@@ -80,13 +80,13 @@ taos -n sync -P 6040 -h <fqdn of server>
taos -n sync -P 6042 -h <fqdn of server>
```

The above commands can be executed on Linux Shell to check whether the port for sync is working well and whether the sync module on the server side is working well. Additionally, `-P 6042` is used to check whether the arbitrator is configured properly and is working well.
The above commands can be executed in a Linux shell to check whether the port for sync is working well and whether the sync module on the server side is working well. Additionally, `-P 6042` is used to check whether the arbitrator is configured properly and is working well.

## Network Speed Diagnostic

`taos -n speed -h <fqdn of server> -P 6030 -N 10 -l 10000000 -S TCP`

From version 2.2.0.0, the above command can be executed on Linux Shell to test the network speed; it sends uncompressed packages to a running `taosd` server process or a simulated server process started by `taos -n server` to test the network speed. The parameters that can be used when testing network speed are as below:
From version 2.2.0.0 onwards, the above command can be executed in a Linux shell to test network speed. The command sends uncompressed packages to a running `taosd` server process or a simulated server process started by `taos -n server` to test the network speed. The parameters that can be used when testing network speed are as below:

-n: When set to "speed", it means testing network speed.
-h: The FQDN or IP of the server process to be connected to; if not set, the FQDN configured in `taos.cfg` is used.
@@ -99,23 +99,23 @@ From version 2.2.0.0, the above command can be executed on Linux Shell to test t

`taos -n fqdn -h <fqdn of server>`

From version 2.2.0.0, the above command can be executed on Linux Shell to test the resolution speed of FQDN. It can be used to try to resolve a FQDN to an IP address and record the time spent in this process. The parameters that can be used for this purpose are as below:
From version 2.2.0.0 onward, the above command can be executed in a Linux shell to test the resolution speed of an FQDN. It can be used to try to resolve an FQDN to an IP address and record the time spent in this process. The parameters that can be used for this purpose are as below:

-n: When set to "fqdn", it means testing the speed of resolving FQDN.
-h: The FQDN to be resolved. If not set, the `FQDN` parameter in `taos.cfg` is used by default.

## Server Log

The parameter `debugFlag` is used to control the log level of the `taosd` server process. The default value is 131, for debug purpose it needs to be escalated to 135 or 143.
The parameter `debugFlag` is used to control the log level of the `taosd` server process. The default value is 131. For debugging and tracing, it needs to be set to either 135 or 143 respectively.

Once this parameter is set to 135 or 143, the log file grows very quickly, especially when there is a huge volume of data insertion and data query requests. If all the logs are stored together, some important information may be missed very easily, so on server side important information is stored at different place from other logs.
Once this parameter is set to 135 or 143, the log file grows very quickly, especially when there is a huge volume of data insertion and data query requests. If all the logs are stored together, some important information may be missed very easily, and so on the server side, important information is stored in a different place from other logs.

- Logs at the level of INFO, WARNING and ERROR are stored in `taosinfo` so that it is easy to find important information
- Logs at the level of DEBUG (135) and TRACE (143) and other information not handled by `taosinfo` are stored in `taosdlog`

## Client Log

An independent log file, named as "taoslog+<seq num\>", is generated for each client program, i.e. a client process. The default value of `debugFlag` is also 131 and only logs at level of INFO/ERROR/WARNING are recorded, for debugging purposes it needs to be changed to 135 or 143 so that logs at DEBUG or TRACE level can be recorded.
An independent log file, named as "taoslog+<seq num\>", is generated for each client program, i.e. a client process. The default value of `debugFlag` is also 131 and only logs at the level of INFO/ERROR/WARNING are recorded. As stated above, for debugging and tracing, it needs to be changed to 135 or 143 respectively, so that logs at DEBUG or TRACE level can be recorded.

The maximum length of a single log file is controlled by the parameter `numOfLogLines` and only 2 log files are kept for each `taosd` server process.
@@ -2,10 +2,10 @@
title: REST API
---

To support the development of various types of platforms, TDengine provides an API that conforms to the REST principle, namely REST API. To minimize the learning cost, different from the other database REST APIs, TDengine directly requests the SQL command contained in the request BODY through HTTP POST to operate the database and only requires a URL.
To support the development of various types of applications and platforms, TDengine provides an API that conforms to REST principles, namely the REST API. To minimize the learning cost, unlike REST APIs for other database engines, TDengine allows insertion of SQL commands in the BODY of an HTTP POST request to operate the database.

:::note
One difference from the native connector is that the REST interface is stateless, so the `USE db_name` command has no effect. All references to table names and super table names need to specify the database name prefix. (Since version 2.2.0.0, it is supported to specify db_name in RESTful URL. If the database name prefix is not specified in the SQL command, the `db_name` specified in the URL will be used. Since version 2.4.0.0, REST service is provided by taosAdapter by default. And it requires that the `db_name` must be specified in the URL.)
One difference from the native connector is that the REST interface is stateless, and so the `USE db_name` command has no effect. All references to table names and super table names need to specify the database name in the prefix. (Since version 2.2.0.0, TDengine supports specification of the db_name in the RESTful URL. If the database name prefix is not specified in the SQL command, the `db_name` specified in the URL will be used. Since version 2.4.0.0, the REST service is provided by taosAdapter by default, and it requires that the `db_name` be specified in the URL.)
:::

## Installation

@@ -16,9 +16,9 @@ The REST interface does not rely on any TDengine native library, so the client a

If the TDengine server is already installed, it can be verified as follows:

The following is an Ubuntu environment using the `curl` tool (to confirm that it is installed) to verify that the REST interface is working.
The following example is in an Ubuntu environment and uses the `curl` tool to verify that the REST interface is working. Note that the `curl` tool may need to be installed in your environment.

The following example lists all databases, replacing `h1.taosdata.com` and `6041` (the default port) with the actual running TDengine service FQDN and port number.
The following example lists all databases on the host h1.taosdata.com. To use it in your environment, replace `h1.taosdata.com` and `6041` (the default port) with the actual running TDengine service FQDN and port number.

```html
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'show databases;' h1.taosdata.com:6041/rest/sql
@@ -89,7 +89,7 @@ For example, `http://h1.taos.com:6041/rest/sql/test` is a URL to `h1.taos.com:60

TDengine supports both Basic authentication and custom authentication mechanisms, and subsequent versions will provide a standard secure digital signature mechanism for authentication.

- The custom authentication information is as follows (Let's introduce token later)
- The custom authentication information is as follows. More details about "token" later.

```
Authorization: Taosd <TOKEN>
@@ -136,7 +136,7 @@ The return result is in JSON format, as follows:

Description:

- status: tell if the operation result is success or failure.
- status: tells you whether the operation result is success or failure.
- head: the definition of the table, or just one column "affected_rows" if no result set is returned. (As of version 2.0.17.0, it is recommended not to rely on the head return value to determine the data column type but rather use column_meta. In later versions, the head item may be removed from the return value.)
- column_meta: this item is added to the return value to indicate the data type of each column in the data with version 2.0.17.0 and later versions. Each column is described by three values: column name, column type, and type length. For example, `["current",6,4]` means that the column name is "current", the column type is 6, which is the float type, and the type length is 4, which is the float type with 4 bytes. If the column type is binary or nchar, the type length indicates the maximum length of content stored in the column, not the length of the specific data in this return value. When the column type is nchar, the type length indicates the number of Unicode characters that can be saved, not bytes.
- data: The exact data returned, presented row by row, or just [[affected_rows]] if no result set is returned. The order of the data columns in each row of data is the same as that of the data columns described in column_meta.
@@ -2,11 +2,11 @@
title: Reference
---

The reference guide is the detailed introduction to TDengine, various TDengine's connectors in different languages, and the tools that come with it.
The reference guide is a detailed introduction to TDengine including various TDengine connectors in different languages, and the tools that come with TDengine.

```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';

<DocCardList items={useCurrentSidebarCategory().items}/>
```
@@ -85,7 +85,7 @@ struct STable {
#define TABLE_TID(t) (t)->tid
#define TABLE_UID(t) (t)->uid

int tsdbPrepareCommit(STsdb *pTsdb);
int tsdbPrepareCommit(STsdb *pTsdb);
typedef enum {
  TSDB_FILE_HEAD = 0,  // .head
  TSDB_FILE_DATA,      // .data
@@ -181,7 +181,6 @@ int tsdbUnlockRepo(STsdb *pTsdb);

static FORCE_INLINE STSchema *tsdbGetTableSchemaImpl(STsdb *pTsdb, STable *pTable, bool lock, bool copy,
                                                     int32_t version) {
  if ((version != -1) && (schemaVersion(pTable->pSchema) != version)) {
    taosMemoryFreeClear(pTable->pSchema);
    pTable->pSchema = metaGetTbTSchema(REPO_META(pTsdb), pTable->uid, version);
@@ -178,6 +178,7 @@ SSchemaWrapper *metaGetTableSchema(SMeta *pMeta, tb_uid_t uid, int32_t sver, boo
  if (me.type == TSDB_SUPER_TABLE) {
    pSchema = tCloneSSchemaWrapper(&me.stbEntry.schemaRow);
  } else if (me.type == TSDB_NORMAL_TABLE) {
    pSchema = tCloneSSchemaWrapper(&me.ntbEntry.schemaRow);
  } else {
    ASSERT(0);
  }
@@ -84,8 +84,8 @@ static int tsdbMergeBlockData(SCommitH *pCommith, SCommitIter *pIter, SDataCols
static void tsdbResetCommitTable(SCommitH *pCommith);
static void tsdbCloseCommitFile(SCommitH *pCommith, bool hasError);
static bool tsdbCanAddSubBlock(SCommitH *pCommith, SBlock *pBlock, SMergeInfo *pInfo);
static void tsdbLoadAndMergeFromCache(STsdb *pTsdb, SDataCols *pDataCols, int *iter, SCommitIter *pCommitIter, SDataCols *pTarget,
                                      TSKEY maxKey, int maxRows, int8_t update);
static void tsdbLoadAndMergeFromCache(STsdb *pTsdb, SDataCols *pDataCols, int *iter, SCommitIter *pCommitIter,
                                      SDataCols *pTarget, TSKEY maxKey, int maxRows, int8_t update);
int tsdbWriteBlockIdx(SDFile *pHeadf, SArray *pIdxA, void **ppBuf);

int tsdbApplyRtnOnFSet(STsdb *pRepo, SDFileSet *pSet, SRtn *pRtn) {
@@ -466,7 +466,7 @@ static int tsdbCreateCommitIters(SCommitH *pCommith) {
    pTbData = (STbData *)pNode->pData;

    pCommitIter = pCommith->iters + i;
    pTSchema = metaGetTbTSchema(REPO_META(pRepo), pTbData->uid, 1);  // TODO: schema version
    pTSchema = metaGetTbTSchema(REPO_META(pRepo), pTbData->uid, -1);  // TODO: schema version

    if (pTSchema) {
      pCommitIter->pIter = tSkipListCreateIter(pTbData->pData);
@@ -948,7 +948,7 @@ static int tsdbMoveBlkIdx(SCommitH *pCommith, SBlockIdx *pIdx) {
}

static int tsdbSetCommitTable(SCommitH *pCommith, STable *pTable) {
  STSchema *pSchema = tsdbGetTableSchemaImpl(TSDB_COMMIT_REPO(pCommith),pTable, false, false, -1);
  STSchema *pSchema = tsdbGetTableSchemaImpl(TSDB_COMMIT_REPO(pCommith), pTable, false, false, -1);

  pCommith->pTable = pTable;
@@ -1422,8 +1422,8 @@ static int tsdbMergeBlockData(SCommitH *pCommith, SCommitIter *pIter, SDataCols

  int biter = 0;
  while (true) {
    tsdbLoadAndMergeFromCache(TSDB_COMMIT_REPO(pCommith), pCommith->readh.pDCols[0], &biter, pIter, pCommith->pDataCols, keyLimit, defaultRows,
                              pCfg->update);
    tsdbLoadAndMergeFromCache(TSDB_COMMIT_REPO(pCommith), pCommith->readh.pDCols[0], &biter, pIter, pCommith->pDataCols,
                              keyLimit, defaultRows, pCfg->update);

    if (pCommith->pDataCols->numOfRows == 0) break;
@@ -1447,8 +1447,8 @@ static int tsdbMergeBlockData(SCommitH *pCommith, SCommitIter *pIter, SDataCols
  return 0;
}

static void tsdbLoadAndMergeFromCache(STsdb *pTsdb, SDataCols *pDataCols, int *iter, SCommitIter *pCommitIter, SDataCols *pTarget,
                                      TSKEY maxKey, int maxRows, int8_t update) {
static void tsdbLoadAndMergeFromCache(STsdb *pTsdb, SDataCols *pDataCols, int *iter, SCommitIter *pCommitIter,
                                      SDataCols *pTarget, TSKEY maxKey, int maxRows, int8_t update) {
  TSKEY key1 = INT64_MAX;
  TSKEY key2 = INT64_MAX;
  TSKEY lastKey = TSKEY_INITIAL_VAL;
@@ -156,13 +156,11 @@ typedef struct STaskAttr {
} STaskAttr;

struct SOperatorInfo;
struct SAggSupporter;
struct SOptrBasicInfo;
//struct SAggSupporter;
//struct SOptrBasicInfo;

typedef void (*__optr_encode_fn_t)(struct SOperatorInfo* pOperator, struct SAggSupporter* pSup,
                                   struct SOptrBasicInfo* pInfo, char** result, int32_t* length);
typedef bool (*__optr_decode_fn_t)(struct SOperatorInfo* pOperator, struct SAggSupporter* pSup,
                                   struct SOptrBasicInfo* pInfo, char* result, int32_t length);
typedef int32_t (*__optr_encode_fn_t)(struct SOperatorInfo* pOperator, char** result, int32_t* length);
typedef int32_t (*__optr_decode_fn_t)(struct SOperatorInfo* pOperator, char* result, int32_t length);

typedef int32_t (*__optr_open_fn_t)(struct SOperatorInfo* pOptr);
typedef SSDataBlock* (*__optr_fn_t)(struct SOperatorInfo* pOptr);
@@ -445,14 +443,16 @@ typedef struct STimeWindowSupp {
} STimeWindowAggSupp;

typedef struct SIntervalAggOperatorInfo {
  // SOptrBasicInfo should be first, SAggSupporter should be second for stream encode
  SOptrBasicInfo binfo;        // basic info
  SAggSupporter aggSup;        // aggregate supporter

  SGroupResInfo groupResInfo;  // multiple results build supporter
  SInterval interval;          // interval info
  int32_t primaryTsIndex;      // primary time stamp slot id from result of downstream operator.
  STimeWindow win;             // query time range
  bool timeWindowInterpo;      // interpolation needed or not
  char** pRow;                 // previous row/tuple of already processed datablock
  SAggSupporter aggSup;        // aggregate supporter
  STableQueryInfo* pCurrent;   // current tableQueryInfo struct
  int32_t order;               // current SSDataBlock scan order
  EOPTR_EXEC_MODEL execModel;  // operator execution model [batch model|stream model]
@@ -463,19 +463,23 @@ typedef struct SIntervalAggOperatorInfo {
} SIntervalAggOperatorInfo;

typedef struct SStreamFinalIntervalOperatorInfo {
  // SOptrBasicInfo should be first, SAggSupporter should be second for stream encode
  SOptrBasicInfo binfo;        // basic info
  SAggSupporter aggSup;        // aggregate supporter

  SGroupResInfo groupResInfo;  // multiple results build supporter
  SInterval interval;          // interval info
  int32_t primaryTsIndex;      // primary time stamp slot id from result of downstream operator.
  SAggSupporter aggSup;        // aggregate supporter
  int32_t order;               // current SSDataBlock scan order
  STimeWindowAggSupp twAggSup;
  SArray* pChildren;
} SStreamFinalIntervalOperatorInfo;

typedef struct SAggOperatorInfo {
  // SOptrBasicInfo should be first, SAggSupporter should be second for stream encode
  SOptrBasicInfo binfo;
  SAggSupporter aggSup;

  STableQueryInfo *current;
  uint64_t groupId;
  SGroupResInfo groupResInfo;
@@ -488,8 +492,10 @@ typedef struct SAggOperatorInfo {
} SAggOperatorInfo;

typedef struct SProjectOperatorInfo {
  // SOptrBasicInfo should be first, SAggSupporter should be second for stream encode
  SOptrBasicInfo binfo;
  SAggSupporter aggSup;

  SSDataBlock* existDataBlock;
  SArray* pPseudoColInfo;
  SLimit limit;
@@ -513,7 +519,10 @@ typedef struct SFillOperatorInfo {
} SFillOperatorInfo;

typedef struct SGroupbyOperatorInfo {
  // SOptrBasicInfo should be first, SAggSupporter should be second for stream encode
  SOptrBasicInfo binfo;
  SAggSupporter aggSup;

  SArray* pGroupCols;     // group by columns, SArray<SColumn>
  SArray* pGroupColVals;  // current group column values, SArray<SGroupKeys>
  SNode* pCondition;
@@ -521,7 +530,6 @@ typedef struct SGroupbyOperatorInfo {
  char* keyBuf;         // group by keys for hash
  int32_t groupKeyLen;  // total group by column width
  SGroupResInfo groupResInfo;
  SAggSupporter aggSup;
  SExprInfo* pScalarExprInfo;
  int32_t numOfScalarExpr;  // the number of scalar expression in group operator
  SqlFunctionCtx* pScalarFuncCtx;
@@ -558,8 +566,10 @@ typedef struct SWindowRowsSup {
} SWindowRowsSup;

typedef struct SSessionAggOperatorInfo {
  // SOptrBasicInfo should be first, SAggSupporter should be second for stream encode
  SOptrBasicInfo binfo;
  SAggSupporter aggSup;

  SGroupResInfo groupResInfo;
  SWindowRowsSup winSup;
  bool reptScan;  // next round scan
@@ -598,8 +608,10 @@ typedef struct STimeSliceOperatorInfo {
} STimeSliceOperatorInfo;

typedef struct SStateWindowOperatorInfo {
  // SOptrBasicInfo should be first, SAggSupporter should be second for stream encode
  SOptrBasicInfo binfo;
  SAggSupporter  aggSup;

  SGroupResInfo  groupResInfo;
  SWindowRowsSup winSup;
  SColumn        stateCol;  // start row index
@@ -611,8 +623,10 @@ typedef struct SStateWindowOperatorInfo {
} SStateWindowOperatorInfo;

typedef struct SSortedMergeOperatorInfo {

  // SOptrBasicInfo should be first, SAggSupporter should be second for stream encode
  SOptrBasicInfo binfo;
  SAggSupporter  aggSup;

  SArray*      pSortInfo;
  int32_t      numOfSources;
  SSortHandle* pSortHandle;
@@ -624,7 +638,6 @@ typedef struct SSortedMergeOperatorInfo {
   int32_t       numOfResPerPage;
   char**        groupVal;
   SArray*       groupInfo;
-  SAggSupporter aggSup;
 } SSortedMergeOperatorInfo;
 
 typedef struct SSortOperatorInfo {
@@ -786,16 +799,31 @@ void queryCostStatis(SExecTaskInfo* pTaskInfo);
 void    doDestroyTask(SExecTaskInfo* pTaskInfo);
 int32_t getMaximumIdleDurationSec();
 
+/*
+ * ops: root operator
+ * data: *data save the result of encode, need to be freed by caller
+ * length: *length save the length of *data
+ * return: result code, 0 means success
+ */
+int32_t encodeOperator(SOperatorInfo* ops, char** data, int32_t* length);
+
+/*
+ * ops: root operator, created by caller
+ * data: save the result of decode
+ * length: the length of data
+ * return: result code, 0 means success
+ */
+int32_t decodeOperator(SOperatorInfo* ops, char* data, int32_t length);
+
 void    setTaskStatus(SExecTaskInfo* pTaskInfo, int8_t status);
 int32_t createExecTaskInfoImpl(SSubplan* pPlan, SExecTaskInfo** pTaskInfo, SReadHandle* pHandle, uint64_t taskId,
                                EOPTR_EXEC_MODEL model);
 int32_t getOperatorExplainExecInfo(SOperatorInfo* operatorInfo, SExplainExecInfo** pRes, int32_t* capacity,
                                    int32_t* resNum);
 
-bool aggDecodeResultRow(SOperatorInfo* pOperator, SAggSupporter* pSup, SOptrBasicInfo* pInfo, char* result,
-                        int32_t length);
-void aggEncodeResultRow(SOperatorInfo* pOperator, SAggSupporter* pSup, SOptrBasicInfo* pInfo, char** result,
-                        int32_t* length);
+int32_t aggDecodeResultRow(SOperatorInfo* pOperator, char* result, int32_t length);
+int32_t aggEncodeResultRow(SOperatorInfo* pOperator, char** result, int32_t* length);
 
 STimeWindow getActiveTimeWindow(SDiskbasedBuf* pBuf, SResultRowInfo* pResultRowInfo, int64_t ts,
                                 SInterval* pInterval, int32_t precision, STimeWindow* win);
 int32_t getNumOfRowsInTimeWindow(SDataBlockInfo* pDataBlockInfo, TSKEY* pPrimaryColumn,
@@ -3498,17 +3498,24 @@ static SSDataBlock* getAggregateResult(SOperatorInfo* pOperator) {
   return (rows == 0) ? NULL : pInfo->pRes;
 }
 
-void aggEncodeResultRow(SOperatorInfo* pOperator, SAggSupporter* pSup, SOptrBasicInfo* pInfo, char** result,
-                        int32_t* length) {
+int32_t aggEncodeResultRow(SOperatorInfo* pOperator, char** result, int32_t* length) {
+  if (result == NULL || length == NULL) {
+    return TSDB_CODE_TSC_INVALID_INPUT;
+  }
+  SOptrBasicInfo* pInfo = (SOptrBasicInfo*)(pOperator->info);
+  SAggSupporter*  pSup = (SAggSupporter*)POINTER_SHIFT(pOperator->info, sizeof(SOptrBasicInfo));
   int32_t size = taosHashGetSize(pSup->pResultRowHashTable);
   size_t  keyLen = sizeof(uint64_t) * 2;  // estimate the key length
-  int32_t totalSize = sizeof(int32_t) + size * (sizeof(int32_t) + keyLen + sizeof(int32_t) + pSup->resultRowSize);
-  *result = taosMemoryCalloc(1, totalSize);
+  int32_t totalSize = sizeof(int32_t) + sizeof(int32_t) + size * (sizeof(int32_t) + keyLen + sizeof(int32_t) + pSup->resultRowSize);
+
+  *result = (char*)taosMemoryCalloc(1, totalSize);
   if (*result == NULL) {
-    longjmp(pOperator->pTaskInfo->env, TSDB_CODE_OUT_OF_MEMORY);
+    return TSDB_CODE_OUT_OF_MEMORY;
   }
-  *(int32_t*)(*result) = size;
 
   int32_t offset = sizeof(int32_t);
+  *(int32_t*)(*result + offset) = size;
+  offset += sizeof(int32_t);
 
   // prepare memory
   SResultRowPosition* pos = &pInfo->resultRowInfo.cur;
@@ -3530,12 +3537,11 @@ void aggEncodeResultRow(SOperatorInfo* pOperator, SAggSupporter* pSup, SOptrBasi
     // recalculate the result size
     int32_t realTotalSize = offset + sizeof(int32_t) + keyLen + sizeof(int32_t) + pSup->resultRowSize;
     if (realTotalSize > totalSize) {
-      char* tmp = taosMemoryRealloc(*result, realTotalSize);
+      char* tmp = (char*)taosMemoryRealloc(*result, realTotalSize);
       if (tmp == NULL) {
-        terrno = TSDB_CODE_OUT_OF_MEMORY;
         taosMemoryFree(*result);
         *result = NULL;
-        longjmp(pOperator->pTaskInfo->env, TSDB_CODE_OUT_OF_MEMORY);
+        return TSDB_CODE_OUT_OF_MEMORY;
       } else {
         *result = tmp;
       }
@@ -3555,17 +3561,18 @@ void aggEncodeResultRow(SOperatorInfo* pOperator, SAggSupporter* pSup, SOptrBasi
     pIter = taosHashIterate(pSup->pResultRowHashTable, pIter);
   }
 
-  if (length) {
-    *length = offset;
-  }
-  return;
+  *(int32_t*)(*result) = offset;
+  *length = offset;
+
+  return TDB_CODE_SUCCESS;
 }
 
-bool aggDecodeResultRow(SOperatorInfo* pOperator, SAggSupporter* pSup, SOptrBasicInfo* pInfo, char* result,
-                        int32_t length) {
-  if (!result || length <= 0) {
-    return false;
+int32_t aggDecodeResultRow(SOperatorInfo* pOperator, char* result, int32_t length) {
+  if (result == NULL || length <= 0) {
+    return TSDB_CODE_TSC_INVALID_INPUT;
   }
+  SOptrBasicInfo* pInfo = (SOptrBasicInfo*)(pOperator->info);
+  SAggSupporter*  pSup = (SAggSupporter*)POINTER_SHIFT(pOperator->info, sizeof(SOptrBasicInfo));
 
   // int32_t size = taosHashGetSize(pSup->pResultRowHashTable);
   int32_t count = *(int32_t*)(result);
@@ -3578,7 +3585,7 @@ bool aggDecodeResultRow(SOperatorInfo* pOperator, SAggSupporter* pSup, SOptrBasi
     uint64_t    tableGroupId = *(uint64_t*)(result + offset);
     SResultRow* resultRow = getNewResultRow_rv(pSup->pResultBuf, tableGroupId, pSup->resultRowSize);
     if (!resultRow) {
-      longjmp(pOperator->pTaskInfo->env, TSDB_CODE_TSC_INVALID_INPUT);
+      return TSDB_CODE_TSC_INVALID_INPUT;
     }
 
     // add a new result set for a new group
@@ -3588,7 +3595,7 @@ bool aggDecodeResultRow(SOperatorInfo* pOperator, SAggSupporter* pSup, SOptrBasi
     offset += keyLen;
     int32_t valueLen = *(int32_t*)(result + offset);
     if (valueLen != pSup->resultRowSize) {
-      longjmp(pOperator->pTaskInfo->env, TSDB_CODE_TSC_INVALID_INPUT);
+      return TSDB_CODE_TSC_INVALID_INPUT;
     }
     offset += sizeof(int32_t);
     int32_t pageId = resultRow->pageId;
@@ -3607,9 +3614,9 @@ bool aggDecodeResultRow(SOperatorInfo* pOperator, SAggSupporter* pSup, SOptrBasi
   }
 
   if (offset != length) {
-    longjmp(pOperator->pTaskInfo->env, TSDB_CODE_TSC_INVALID_INPUT);
+    return TSDB_CODE_TSC_INVALID_INPUT;
   }
-  return true;
+  return TDB_CODE_SUCCESS;
 }
 
 enum {
@@ -4986,6 +4993,91 @@ _error:
  return NULL;
}

int32_t encodeOperator(SOperatorInfo* ops, char** result, int32_t* length) {
  int32_t code = TDB_CODE_SUCCESS;
  char*   pCurrent = NULL;
  int32_t currLength = 0;
  if (ops->fpSet.encodeResultRow) {
    if (result == NULL || length == NULL) {
      return TSDB_CODE_TSC_INVALID_INPUT;
    }
    code = ops->fpSet.encodeResultRow(ops, &pCurrent, &currLength);

    if (code != TDB_CODE_SUCCESS) {
      if (*result != NULL) {
        taosMemoryFree(*result);
        *result = NULL;
      }
      return code;
    }

    if (*result == NULL) {
      *result = (char*)taosMemoryCalloc(1, currLength + sizeof(int32_t));
      if (*result == NULL) {
        taosMemoryFree(pCurrent);
        return TSDB_CODE_OUT_OF_MEMORY;
      }
      memcpy(*result + sizeof(int32_t), pCurrent, currLength);
      *(int32_t*)(*result) = currLength + sizeof(int32_t);
    } else {
      int32_t sizePre = *(int32_t*)(*result);
      char*   tmp = (char*)taosMemoryRealloc(*result, sizePre + currLength);
      if (tmp == NULL) {
        taosMemoryFree(pCurrent);
        taosMemoryFree(*result);
        *result = NULL;
        return TSDB_CODE_OUT_OF_MEMORY;
      }
      *result = tmp;
      memcpy(*result + sizePre, pCurrent, currLength);
      *(int32_t*)(*result) += currLength;
    }
    taosMemoryFree(pCurrent);
    *length = *(int32_t*)(*result);
  }

  for (int32_t i = 0; i < ops->numOfDownstream; ++i) {
    code = encodeOperator(ops->pDownstream[i], result, length);
    if (code != TDB_CODE_SUCCESS) {
      return code;
    }
  }
  return TDB_CODE_SUCCESS;
}

int32_t decodeOperator(SOperatorInfo* ops, char* result, int32_t length) {
  int32_t code = TDB_CODE_SUCCESS;
  if (ops->fpSet.decodeResultRow) {
    if (result == NULL || length <= 0) {
      return TSDB_CODE_TSC_INVALID_INPUT;
    }
    char*   data = result + 2 * sizeof(int32_t);
    int32_t dataLength = *(int32_t*)(result + sizeof(int32_t));
    code = ops->fpSet.decodeResultRow(ops, data, dataLength - sizeof(int32_t));
    if (code != TDB_CODE_SUCCESS) {
      return code;
    }

    int32_t totalLength = *(int32_t*)result;
    if (totalLength == dataLength + sizeof(int32_t)) {  // the last data
      result = NULL;
      length = 0;
    } else {
      result += dataLength;
      *(int32_t*)(result) = totalLength - dataLength;
      length = totalLength - dataLength;
    }
  }

  for (int32_t i = 0; i < ops->numOfDownstream; ++i) {
    code = decodeOperator(ops->pDownstream[i], result, length);
    if (code != TDB_CODE_SUCCESS) {
      return code;
    }
  }
  return TDB_CODE_SUCCESS;
}

int32_t createExecTaskInfoImpl(SSubplan* pPlan, SExecTaskInfo** pTaskInfo, SReadHandle* pHandle, uint64_t taskId,
                               EOPTR_EXEC_MODEL model) {
  uint64_t queryId = pPlan->id.queryId;
@@ -1324,7 +1324,7 @@ StreamWithStateResult* streamWithStateNextWith(StreamWithState* sws, StreamCallb
       if (FST_NODE_ADDR(p->node) != fstGetRootAddr(sws->fst)) {
         taosArrayPop(sws->inp);
       }
-      // streamStateDestroy(p);
+      streamStateDestroy(p);
       continue;
     }
     FstTransition trn;
@@ -93,14 +93,15 @@ FstSlice fstSliceCreate(uint8_t* data, uint64_t len) {
 // just shallow copy
 FstSlice fstSliceCopy(FstSlice* s, int32_t start, int32_t end) {
   FstString* str = s->str;
-  str->ref++;
+  atomic_add_fetch_32(&str->ref, 1);
 
   FstSlice t = {.str = str, .start = start + s->start, .end = end + s->start};
   return t;
 }
 FstSlice fstSliceDeepCopy(FstSlice* s, int32_t start, int32_t end) {
-  int32_t tlen = end - start + 1;
-  int32_t slen;
+  int32_t  tlen = end - start + 1;
+  int32_t  slen;
 
   uint8_t* data = fstSliceData(s, &slen);
   assert(tlen <= slen);
@@ -129,8 +130,9 @@ uint8_t* fstSliceData(FstSlice* s, int32_t* size) {
 }
 void fstSliceDestroy(FstSlice* s) {
   FstString* str = s->str;
-  str->ref--;
-  if (str->ref == 0) {
+
+  int32_t ref = atomic_sub_fetch_32(&str->ref, 1);
+  if (ref == 0) {
     taosMemoryFree(str->data);
     taosMemoryFree(str);
     s->str = NULL;
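The two FST hunks above replace plain `++`/`--` on the shared string's `ref` field with `atomic_add_fetch_32`/`atomic_sub_fetch_32`, so that concurrent shallow copies and destroys cannot race on the count and the payload is freed exactly once. As a hedged sketch only (a hypothetical Python analogue where a lock stands in for the C atomics), the retain/release discipline looks like this:

```python
import threading

class SharedBuf:
    # Sketch of the FstString refcount: retain on shallow copy,
    # release on destroy, free the payload when the count hits zero.
    def __init__(self, data):
        self.data = bytearray(data)
        self._ref = 1
        self._lock = threading.Lock()  # plays the role of the atomic add/sub

    def retain(self):
        with self._lock:
            self._ref += 1
            return self._ref

    def release(self):
        with self._lock:
            self._ref -= 1
            ref = self._ref
        if ref == 0:
            self.data = None  # free the payload exactly once
        return ref
```

The key point mirrored from the diff: `release` tests the *returned* post-decrement value (`atomic_sub_fetch_32`), not a separate re-read of the field, so only one releaser can ever observe zero.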
@@ -14,23 +14,93 @@
 import random
 import string
 from util.sql import tdSql
+from util.dnodes import tdDnodes
+import requests
+import time
+import socket
 class TDCom:
     def init(self, conn, logSql):
         tdSql.init(conn.cursor(), logSql)
 
-    def cleanTb(self):
+    def preDefine(self):
+        header = {'Authorization': 'Basic cm9vdDp0YW9zZGF0YQ=='}
+        sql_url = "http://127.0.0.1:6041/rest/sql"
+        sqlt_url = "http://127.0.0.1:6041/rest/sqlt"
+        sqlutc_url = "http://127.0.0.1:6041/rest/sqlutc"
+        influx_url = "http://127.0.0.1:6041/influxdb/v1/write"
+        telnet_url = "http://127.0.0.1:6041/opentsdb/v1/put/telnet"
+        return header, sql_url, sqlt_url, sqlutc_url, influx_url, telnet_url
+
+    def genTcpParam(self):
+        MaxBytes = 1024*1024
+        host = '127.0.0.1'
+        port = 6046
+        return MaxBytes, host, port
+
+    def tcpClient(self, input):
+        MaxBytes = tdCom.genTcpParam()[0]
+        host = tdCom.genTcpParam()[1]
+        port = tdCom.genTcpParam()[2]
+        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+        sock.connect((host, port))
+        sock.send(input.encode())
+        sock.close()
+
+    def restApiPost(self, sql):
+        requests.post(self.preDefine()[1], sql.encode("utf-8"), headers=self.preDefine()[0])
+
+    def createDb(self, dbname="test", db_update_tag=0, api_type="taosc"):
+        if api_type == "taosc":
+            if db_update_tag == 0:
+                tdSql.execute(f"drop database if exists {dbname}")
+                tdSql.execute(f"create database if not exists {dbname} precision 'us'")
+            else:
+                tdSql.execute(f"drop database if exists {dbname}")
+                tdSql.execute(f"create database if not exists {dbname} precision 'us' update 1")
+        elif api_type == "restful":
+            if db_update_tag == 0:
+                self.restApiPost(f"drop database if exists {dbname}")
+                self.restApiPost(f"create database if not exists {dbname} precision 'us'")
+            else:
+                self.restApiPost(f"drop database if exists {dbname}")
+                self.restApiPost(f"create database if not exists {dbname} precision 'us' update 1")
+        tdSql.execute(f'use {dbname}')
+
+    def genUrl(self, url_type, dbname, precision):
+        if url_type == "influxdb":
+            if precision is None:
+                url = self.preDefine()[4] + "?" + "db=" + dbname
+            else:
+                url = self.preDefine()[4] + "?" + "db=" + dbname + "&precision=" + precision
+        elif url_type == "telnet":
+            url = self.preDefine()[5] + "/" + dbname
+        else:
+            url = self.preDefine()[1]
+        return url
+
+    def schemalessApiPost(self, sql, url_type="influxdb", dbname="test", precision=None):
+        if url_type == "influxdb":
+            url = self.genUrl(url_type, dbname, precision)
+        elif url_type == "telnet":
+            url = self.genUrl(url_type, dbname, precision)
+        res = requests.post(url, sql.encode("utf-8"), headers=self.preDefine()[0])
+        return res
+
+    def cleanTb(self, type="taosc"):
+        '''
+            type is taosc or restful
+        '''
         query_sql = "show stables"
         res_row_list = tdSql.query(query_sql, True)
         stb_list = map(lambda x: x[0], res_row_list)
         for stb in stb_list:
-            tdSql.execute(f'drop table if exists {stb}')
+            if type == "taosc":
+                tdSql.execute(f'drop table if exists {stb}')
+            elif type == "restful":
+                self.restApiPost(f"drop table if exists {stb}")
 
         query_sql = "show tables"
         res_row_list = tdSql.query(query_sql, True)
         tb_list = map(lambda x: x[0], res_row_list)
         for tb in tb_list:
             tdSql.execute(f'drop table if exists {tb}')
+
     def dateToTs(self, datetime_input):
         return int(time.mktime(time.strptime(datetime_input, "%Y-%m-%d %H:%M:%S.%f")))
 
     def getLongName(self, len, mode = "mixed"):
         """
@@ -47,6 +117,52 @@ class TDCom:
        chars = ''.join(random.choice(string.ascii_letters.lower() + string.digits) for i in range(len))
        return chars

    def restartTaosd(self, index=1, db_name="db"):
        tdDnodes.stop(index)
        tdDnodes.startWithoutSleep(index)
        tdSql.execute(f"use {db_name}")

    def typeof(self, variate):
        v_type = None
        if type(variate) is int:
            v_type = "int"
        elif type(variate) is str:
            v_type = "str"
        elif type(variate) is float:
            v_type = "float"
        elif type(variate) is bool:
            v_type = "bool"
        elif type(variate) is list:
            v_type = "list"
        elif type(variate) is tuple:
            v_type = "tuple"
        elif type(variate) is dict:
            v_type = "dict"
        elif type(variate) is set:
            v_type = "set"
        return v_type

    def splitNumLetter(self, input_mix_str):
        nums, letters = "", ""
        for i in input_mix_str:
            if i.isdigit():
                nums += i
            elif i.isspace():
                pass
            else:
                letters += i
        return nums, letters

    def smlPass(self, func):
        smlChildTableName = "no"
        def wrapper(*args):
            # if tdSql.getVariable("smlChildTableName")[0].upper() == "ID":
            if smlChildTableName.upper() == "ID":
                return func(*args)
            else:
                pass
        return wrapper

    def close(self):
        self.cursor.close()
@@ -169,8 +169,13 @@ class TDDnode:
         self.cfgDict.update({option: value})
 
     def remoteExec(self, updateCfgDict, execCmd):
-        remote_conn = Connection(self.remoteIP, port=22, user='root', connect_kwargs={'password':'123456'})
-        remote_top_dir = '~/test'
+        try:
+            config = eval(self.remoteIP)
+            remote_conn = Connection(host=config["host"], port=config["port"], user=config["user"], connect_kwargs={'password':config["password"]})
+            remote_top_dir = config["path"]
+        except Exception as r:
+            remote_conn = Connection(host=self.remoteIP, port=22, user='root', connect_kwargs={'password':'123456'})
+            remote_top_dir = '~/test'
         valgrindStr = ''
         if (self.valgrind==1):
             valgrindStr = '-g'
@ -0,0 +1,38 @@
|
|||
###################################################################
|
||||
# Copyright (c) 2016 by TAOS Technologies, Inc.
|
||||
# All rights reserved.
|
||||
#
|
||||
# This file is proprietary and confidential to TAOS Technologies.
|
||||
# No part of this file may be reproduced, stored, transmitted,
|
||||
# disclosed or used in any form or by any means other than as
|
||||
# expressly provided by the written permission from Jianhui Tao
|
||||
#
|
||||
###################################################################
|
||||
|
||||
# -*- coding: utf-8 -*-
|
||||
|
||||
from enum import Enum
|
||||
|
||||
class TDSmlProtocolType(Enum):
|
||||
'''
|
||||
Schemaless Protocol types
|
||||
0 - unknown
|
||||
1 - InfluxDB Line Protocol
|
||||
2 - OpenTSDB Telnet Protocl
|
||||
3 - OpenTSDB JSON Protocol
|
||||
'''
|
||||
UNKNOWN = 0
|
||||
LINE = 1
|
||||
TELNET = 2
|
||||
JSON = 3
|
||||
|
||||
class TDSmlTimestampType(Enum):
|
||||
NOT_CONFIGURED = 0
|
||||
HOUR = 1
|
||||
MINUTE = 2
|
||||
SECOND = 3
|
||||
MILLI_SECOND = 4
|
||||
MICRO_SECOND = 5
|
||||
NANO_SECOND = 6
|
||||
|
||||
|
File diff suppressed because it is too large
@@ -0,0 +1,107 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

from util.log import *
from util.cases import *
from util.sql import *
import numpy as np


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor())
        self.rowNum = 10
        self.ts = 1537146000000

    def check_apercentile(self, data, expect_data, param, percent, column):
        if param == "default":
            if abs(expect_data - data) <= expect_data * 0.2:
                tdLog.info(f"apercentile function values check success with col{column}, param = {param}, percent = {percent}")
            else:
                tdLog.notice(f"apercentile function value not as expected with col{column}, param = {param}, percent = {percent}")
                sys.exit(1)
        elif param == "t-digest":
            if abs(expect_data - data) <= expect_data * 0.2:
                tdLog.info(f"apercentile function values check success with col{column}, param = {param}, percent = {percent}")
            else:
                tdLog.notice(f"apercentile function value not as expected with col{column}, param = {param}, percent = {percent}")
                sys.exit(1)

    def run(self):
        tdSql.prepare()

        intData = []
        floatData = []
        percent_list = [0, 50, 100]
        param_list = ['default', 't-digest']
        tdSql.execute('''create table test(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
                    col7 bool, col8 binary(20), col9 nchar(20), col11 tinyint unsigned, col12 smallint unsigned, col13 int unsigned, col14 bigint unsigned)''')
        for i in range(self.rowNum):
            tdSql.execute("insert into test values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
                          % (self.ts + i, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))
            intData.append(i + 1)
            floatData.append(i + 0.1)

        # percentile verification

        tdSql.error("select apercentile(ts ,20) from test")
        tdSql.error("select apercentile(col7 ,20) from test")
        tdSql.error("select apercentile(col8 ,20) from test")
        tdSql.error("select apercentile(col9 ,20) from test")

        column_list = [1, 2, 3, 4, 5, 6, 11, 12, 13, 14]

        for i in column_list:
            for j in percent_list:
                for k in param_list:
                    tdSql.query(f"select apercentile(col{i},{j},'{k}') from test")
                    data = tdSql.getData(0, 0)
                    tdSql.query(f"select percentile(col{i},{j}) from test")
                    expect_data = tdSql.getData(0, 0)
                    self.check_apercentile(data, expect_data, k, j, i)

        error_param_list = [-1, 101, '"a"']
        for i in error_param_list:
            tdSql.error(f'select apercentile(col1,{i}) from test')

        tdSql.execute("create table meters (ts timestamp, voltage int) tags(loc nchar(20))")
        tdSql.execute("create table t0 using meters tags('beijing')")
        tdSql.execute("create table t1 using meters tags('shanghai')")
        for i in range(self.rowNum):
            tdSql.execute("insert into t0 values(%d, %d)" % (self.ts + i, i + 1))
            tdSql.execute("insert into t1 values(%d, %d)" % (self.ts + i, i + 1))

        column_list = ['voltage']
        for i in column_list:
            for j in percent_list:
                for k in param_list:
                    tdSql.query(f"select apercentile({i}, {j},'{k}') from t0")
                    data = tdSql.getData(0, 0)
                    tdSql.query(f"select percentile({i},{j}) from t0")
                    expect_data = tdSql.getData(0, 0)
                    self.check_apercentile(data, expect_data, k, j, i)
                    tdSql.query(f"select apercentile({i}, {j},'{k}') from meters")
                    tdSql.checkRows(1)
        table_list = ["meters", "t0"]
        for i in error_param_list:
            for j in table_list:
                for k in column_list:
                    tdSql.error(f'select apercentile({k},{i}) from {j}')

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)

tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
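The `check_apercentile` helper above accepts the approximate percentile when it lies within 20% relative error of the exact `percentile` result. That acceptance rule can be isolated into a tiny predicate (a hedged sketch; `within_rel_tol` is a hypothetical name, and values are assumed non-negative as in the test data):

```python
def within_rel_tol(approx, exact, rel_tol=0.2):
    # Accept the approximate value when it is within rel_tol (20% by
    # default) relative error of the exact value; exact == 0 only
    # matches approx == 0.
    return abs(exact - approx) <= abs(exact) * rel_tol
```

Note the absolute value belongs around the *difference*, not around the comparison: `abs((a - b) <= t)` evaluates the comparison first and then takes `abs` of a boolean, which silently accepts any `approx >= exact - t`.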
@ -0,0 +1,424 @@
|
|||
import taos
|
||||
import sys
|
||||
import datetime
|
||||
import inspect
|
||||
|
||||
from util.log import *
|
||||
from util.sql import *
|
||||
from util.cases import *
|
||||
|
||||
class TDTestCase:
|
||||
updatecfgDict = {'debugFlag': 143 ,"cDebugFlag":143,"uDebugFlag":143 ,"rpcDebugFlag":143 , "tmrDebugFlag":143 ,
|
||||
"jniDebugFlag":143 ,"simDebugFlag":143,"dDebugFlag":143, "dDebugFlag":143,"vDebugFlag":143,"mDebugFlag":143,"qDebugFlag":143,
|
||||
"wDebugFlag":143,"sDebugFlag":143,"tsdbDebugFlag":143,"tqDebugFlag":143 ,"fsDebugFlag":143 ,"fnDebugFlag":143}
|
||||
def init(self, conn, logSql):
|
||||
tdLog.debug(f"start to excute {__file__}")
|
||||
tdSql.init(conn.cursor(), True)
|
||||
|
||||
def prepare_datas(self):
|
||||
tdSql.execute(
|
||||
'''create table stb1
|
||||
(ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
|
||||
tags (t1 int)
|
||||
'''
|
||||
)
|
||||
|
||||
tdSql.execute(
|
||||
'''
|
||||
create table t1
|
||||
(ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
|
||||
'''
|
||||
)
|
||||
for i in range(4):
|
||||
tdSql.execute(f'create table ct{i+1} using stb1 tags ( {i+1} )')
|
||||
|
||||
for i in range(9):
|
||||
tdSql.execute(
|
||||
f"insert into ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
|
||||
)
|
||||
tdSql.execute(
|
||||
f"insert into ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
|
||||
)
|
||||
tdSql.execute("insert into ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar0', now()+8a )")
|
||||
tdSql.execute("insert into ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
|
||||
tdSql.execute("insert into ct1 values (now()+15s, 9, -99999, -999, -99, -9.99, NULL, 1, 'binary9', 'nchar9', now()+9a )")
|
||||
tdSql.execute("insert into ct1 values (now()+20s, 9, -99999, -999, NULL, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
|
||||
|
||||
tdSql.execute("insert into ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
|
||||
tdSql.execute("insert into ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
|
||||
tdSql.execute("insert into ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
|
||||
|
||||
tdSql.execute(
|
||||
f'''insert into t1 values
|
||||
( '2020-04-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||
( '2020-10-21 01:01:01.000', 1, 11111, 111, 11, 1.11, 11.11, 1, "binary1", "nchar1", now()+1a )
|
||||
( '2020-12-31 01:01:01.000', 2, 22222, 222, 22, 2.22, 22.22, 0, "binary2", "nchar2", now()+2a )
|
||||
( '2021-01-01 01:01:06.000', 3, 33333, 333, 33, 3.33, 33.33, 0, "binary3", "nchar3", now()+3a )
|
||||
( '2021-05-07 01:01:10.000', 4, 44444, 444, 44, 4.44, 44.44, 1, "binary4", "nchar4", now()+4a )
|
||||
( '2021-07-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||
( '2021-09-30 01:01:16.000', 5, 55555, 555, 55, 5.55, 55.55, 0, "binary5", "nchar5", now()+5a )
|
||||
( '2022-02-01 01:01:20.000', 6, 66666, 666, 66, 6.66, 66.66, 1, "binary6", "nchar6", now()+6a )
|
||||
( '2022-10-28 01:01:26.000', 7, 00000, 000, 00, 0.00, 00.00, 1, "binary7", "nchar7", "1970-01-01 08:00:00.000" )
|
||||
( '2022-12-01 01:01:30.000', 8, -88888, -888, -88, -8.88, -88.88, 0, "binary8", "nchar8", "1969-01-01 01:00:00.000" )
|
||||
( '2022-12-31 01:01:36.000', 9, -99999999999999999, -999, -99, -9.99, -999999999999999999999.99, 1, "binary9", "nchar9", "1900-01-01 00:00:00.000" )
|
||||
( '2023-02-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||
'''
|
||||
)
|
||||
|
||||
def check_avg(self ,origin_query , check_query):
|
||||
avg_result = tdSql.getResult(origin_query)
|
||||
origin_result = tdSql.getResult(check_query)
|
||||
|
||||
check_status = True
|
||||
for row_index , row in enumerate(avg_result):
|
||||
for col_index , elem in enumerate(row):
|
||||
if avg_result[row_index][col_index] != origin_result[row_index][col_index]:
|
||||
check_status = False
|
||||
if not check_status:
|
||||
tdLog.notice("avg function value has not as expected , sql is \"%s\" "%origin_query )
|
||||
sys.exit(1)
|
||||
else:
|
||||
tdLog.info("avg value check pass , it work as expected ,sql is \"%s\" "%check_query )
|
||||
|
||||
def test_errors(self):
|
||||
error_sql_lists = [
|
||||
"select avg from t1",
|
||||
# "select avg(-+--+c1) from t1",
|
||||
# "select +-avg(c1) from t1",
|
||||
# "select ++-avg(c1) from t1",
|
||||
# "select ++--avg(c1) from t1",
|
||||
# "select - -avg(c1)*0 from t1",
|
||||
# "select avg(tbname+1) from t1 ",
|
||||
"select avg(123--123)==1 from t1",
|
||||
"select avg(c1) as 'd1' from t1",
|
||||
"select avg(c1 ,c2 ) from t1",
|
||||
"select avg(c1 ,NULL) from t1",
|
||||
"select avg(,) from t1;",
|
||||
"select avg(avg(c1) ab from t1)",
|
||||
"select avg(c1) as int from t1",
|
||||
"select avg from stb1",
|
||||
# "select avg(-+--+c1) from stb1",
|
||||
# "select +-avg(c1) from stb1",
|
||||
# "select ++-avg(c1) from stb1",
|
||||
# "select ++--avg(c1) from stb1",
|
||||
# "select - -avg(c1)*0 from stb1",
|
||||
# "select avg(tbname+1) from stb1 ",
|
||||
"select avg(123--123)==1 from stb1",
|
||||
"select avg(c1) as 'd1' from stb1",
|
||||
"select avg(c1 ,c2 ) from stb1",
|
||||
"select avg(c1 ,NULL) from stb1",
|
||||
"select avg(,) from stb1;",
|
||||
"select avg(avg(c1) ab from stb1)",
|
||||
"select avg(c1) as int from stb1"
|
||||
]
|
||||
for error_sql in error_sql_lists:
|
||||
tdSql.error(error_sql)
|
||||
|
||||
def support_types(self):
|
||||
type_error_sql_lists = [
|
||||
"select avg(ts) from t1" ,
|
||||
"select avg(c7) from t1",
|
||||
"select avg(c8) from t1",
|
||||
"select avg(c9) from t1",
|
||||
"select avg(ts) from ct1" ,
|
||||
"select avg(c7) from ct1",
|
||||
"select avg(c8) from ct1",
|
||||
"select avg(c9) from ct1",
|
||||
"select avg(ts) from ct3" ,
|
||||
"select avg(c7) from ct3",
|
||||
"select avg(c8) from ct3",
|
||||
"select avg(c9) from ct3",
|
||||
"select avg(ts) from ct4" ,
|
||||
"select avg(c7) from ct4",
|
||||
"select avg(c8) from ct4",
|
||||
"select avg(c9) from ct4",
|
||||
"select avg(ts) from stb1" ,
|
||||
"select avg(c7) from stb1",
|
||||
"select avg(c8) from stb1",
|
||||
"select avg(c9) from stb1" ,
|
||||
|
||||
"select avg(ts) from stbbb1" ,
|
||||
"select avg(c7) from stbbb1",
|
||||
|
||||
"select avg(ts) from tbname",
|
||||
"select avg(c9) from tbname"
|
||||
|
||||
]
|
||||
|
||||
for type_sql in type_error_sql_lists:
|
||||
tdSql.error(type_sql)
|
||||
|
||||
|
||||
type_sql_lists = [
|
||||
"select avg(c1) from t1",
|
||||
"select avg(c2) from t1",
|
||||
"select avg(c3) from t1",
|
||||
"select avg(c4) from t1",
|
||||
"select avg(c5) from t1",
|
||||
"select avg(c6) from t1",
|
||||
|
||||
"select avg(c1) from ct1",
|
||||
"select avg(c2) from ct1",
|
||||
"select avg(c3) from ct1",
|
||||
"select avg(c4) from ct1",
|
||||
"select avg(c5) from ct1",
|
||||
"select avg(c6) from ct1",
|
||||
|
||||
"select avg(c1) from ct3",
|
||||
"select avg(c2) from ct3",
|
||||
"select avg(c3) from ct3",
|
||||
"select avg(c4) from ct3",
|
||||
"select avg(c5) from ct3",
|
||||
"select avg(c6) from ct3",
|
||||
|
||||
"select avg(c1) from stb1",
|
||||
"select avg(c2) from stb1",
|
||||
"select avg(c3) from stb1",
|
||||
"select avg(c4) from stb1",
|
||||
"select avg(c5) from stb1",
|
||||
"select avg(c6) from stb1",
|
||||
|
||||
"select avg(c6) as alisb from stb1",
|
||||
"select avg(c6) alisb from stb1",
|
||||
]
|
||||
|
||||
for type_sql in type_sql_lists:
|
||||
tdSql.query(type_sql)
|
||||
|
||||

    def basic_avg_function(self):

        # basic query
        tdSql.query("select c1 from ct3")
        tdSql.checkRows(0)
        tdSql.query("select c1 from t1")
        tdSql.checkRows(12)
        tdSql.query("select c1 from stb1")
        tdSql.checkRows(25)

        # used for empty table, ct3 is empty
        tdSql.query("select avg(c1) from ct3")
        tdSql.checkRows(0)
        tdSql.query("select avg(c2) from ct3")
        tdSql.checkRows(0)
        tdSql.query("select avg(c3) from ct3")
        tdSql.checkRows(0)
        tdSql.query("select avg(c4) from ct3")
        tdSql.checkRows(0)
        tdSql.query("select avg(c5) from ct3")
        tdSql.checkRows(0)
        tdSql.query("select avg(c6) from ct3")

        # used for regular table
        tdSql.query("select avg(c1) from t1")
        tdSql.checkData(0, 0, 5.000000000)

        tdSql.query("select ts,c1, c2, c3 , c4, c5 from t1")
        tdSql.checkData(1, 5, 1.11000)
        tdSql.checkData(3, 4, 33)
        tdSql.checkData(5, 5, None)

        self.check_avg(" select avg(c1) , avg(c2) , avg(c3) from t1 ", " select sum(c1)/count(c1) , sum(c2)/count(c2) , sum(c3)/count(c3) from t1 ")

        # used for sub table
        tdSql.query("select avg(c1) from ct1")
        tdSql.checkData(0, 0, 4.846153846)

        tdSql.query("select avg(c1) from ct3")
        tdSql.checkRows(0)

        self.check_avg(" select avg(abs(c1)) , avg(abs(c2)) , avg(abs(c3)) from t1 ", " select sum(abs(c1))/count(c1) , sum(abs(c2))/count(c2) , sum(abs(c3))/count(c3) from t1 ")
        self.check_avg(" select avg(abs(c1)) , avg(abs(c2)) , avg(abs(c3)) from stb1 ", " select sum(abs(c1))/count(c1) , sum(abs(c2))/count(c2) , sum(abs(c3))/count(c3) from stb1 ")

        # used for stable table
        tdSql.query("select avg(c1) from stb1")
        tdSql.checkRows(1)

        self.check_avg(" select avg(abs(ceil(c1))) , avg(abs(ceil(c2))) , avg(abs(ceil(c3))) from stb1 ", " select sum(abs(ceil(c1)))/count(c1) , sum(abs(ceil(c2)))/count(c2) , sum(abs(ceil(c3)))/count(c3) from stb1 ")

        # used for non-existent tables
        tdSql.error("select avg(c1) from stbbb1")
        tdSql.error("select avg(c1) from tbname")
        tdSql.error("select avg(c1) from ct5")

        # mix with common col
        tdSql.error("select c1, avg(c1) from ct1")
        tdSql.error("select c1, avg(c1) from ct4")

        # mix with common functions
        tdSql.error("select c1, avg(c1),c5, floor(c5) from ct4 ")
        tdSql.error("select c1, avg(c1),c5, floor(c5) from stb1 ")

        # mix with agg functions, not supported
        tdSql.error("select c1, avg(c1),c5, count(c5) from stb1 ")
        tdSql.error("select c1, avg(c1),c5, count(c5) from ct1 ")
        tdSql.error("select c1, count(c5) from ct1 ")
        tdSql.error("select c1, count(c5) from stb1 ")

        # agg functions mix with agg functions
        tdSql.query(" select max(c5), count(c5) , avg(c5) from stb1 ")
        tdSql.checkData(0, 0, 8.88000)
        tdSql.checkData(0, 1, 22)
        tdSql.checkData(0, 2, 2.270454591)

        tdSql.query(" select max(c5), count(c5) , avg(c5) ,elapsed(ts) , spread(c1) from ct1; ")
        tdSql.checkData(0, 0, 8.88000)
        tdSql.checkData(0, 1, 13)
        tdSql.checkData(0, 2, 0.768461603)

        # bug fix for count
        tdSql.query("select count(c1) from ct4 ")
        tdSql.checkData(0, 0, 9)
        tdSql.query("select count(*) from ct4 ")
        tdSql.checkData(0, 0, 12)
        tdSql.query("select count(c1) from stb1 ")
        tdSql.checkData(0, 0, 22)
        tdSql.query("select count(*) from stb1 ")
        tdSql.checkData(0, 0, 25)

        # bug fix for compute
        tdSql.error("select c1, avg(c1) -0 ,ceil(c1)-0 from ct4 ")
        tdSql.error(" select c1, avg(c1) -0 ,avg(ceil(c1-0.1))-0.1 from ct4")

        # mix with nest query
        self.check_avg("select avg(col) from (select abs(c1) col from stb1)", "select avg(abs(c1)) from stb1")
        self.check_avg("select avg(col) from (select ceil(abs(c1)) col from stb1)", "select avg(abs(c1)) from stb1")

        tdSql.query(" select abs(avg(abs(abs(c1)))) from stb1 ")
        tdSql.checkData(0, 0, 4.500000000)
        tdSql.query(" select abs(avg(abs(abs(c1)))) from t1 ")
        tdSql.checkData(0, 0, 5.000000000)

        tdSql.query(" select abs(avg(abs(abs(c1)))) from stb1 ")
        tdSql.checkData(0, 0, 4.500000000)

        tdSql.query(" select avg(c1) from stb1 where c1 is null ")
        tdSql.checkRows(0)
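The `check_avg` cross-checks above rely on the identity that `avg(c)` over non-NULL values equals `sum(c)/count(c)`, since both `sum()` and `count(col)` skip NULLs. A minimal standalone sketch of that expectation (plain Python, not tied to the `tdSql` harness):

```python
# Sketch: avg over non-NULL values equals sum/count, the identity
# check_avg uses to cross-validate avg() results. None stands in for SQL NULL.
values = [1, 2, 3, None, 4]
non_null = [v for v in values if v is not None]  # NULLs are skipped
avg = sum(non_null) / len(non_null)              # count(col) also skips NULLs
assert avg == 2.5
```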

    def avg_func_filter(self):
        tdSql.execute("use db")
        tdSql.query(" select avg(c1), avg(c1) -0 ,avg(ceil(c1-0.1))-0 ,avg(floor(c1+0.1))-0.1 ,avg(ceil(log(c1,2)-0.5)) from ct4 where c1>5 ")
        tdSql.checkRows(1)
        tdSql.checkData(0, 0, 7.000000000)
        tdSql.checkData(0, 1, 7.000000000)
        tdSql.checkData(0, 2, 7.000000000)
        tdSql.checkData(0, 3, 6.900000000)
        tdSql.checkData(0, 4, 3.000000000)

        tdSql.query("select avg(c1), avg(c1) -0 ,avg(ceil(c1-0.1))-0 ,avg(floor(c1+0.1))-0.1 ,avg(ceil(log(c1,2)-0.5)) from ct4 where c1=5 ")
        tdSql.checkRows(1)
        tdSql.checkData(0, 0, 5.000000000)
        tdSql.checkData(0, 1, 5.000000000)
        tdSql.checkData(0, 2, 5.000000000)
        tdSql.checkData(0, 3, 4.900000000)
        tdSql.checkData(0, 4, 2.000000000)

        tdSql.query("select avg(c1) ,avg(c2) , avg(c1) -0 , avg(ceil(c1-0.1))-0 ,avg(floor(c1+0.1))-0.1 ,avg(ceil(log(c1,2))-0.5) from ct4 where c1>log(c1,2) limit 1 ")
        tdSql.checkRows(1)
        tdSql.checkData(0, 0, 4.500000000)
        tdSql.checkData(0, 1, 49999.500000000)
        tdSql.checkData(0, 5, 1.625000000)

    def avg_Arithmetic(self):
        pass

    def check_boundary_values(self):

        tdSql.execute("drop database if exists bound_test")
        tdSql.execute("create database if not exists bound_test")
        time.sleep(3)
        tdSql.execute("use bound_test")
        tdSql.execute(
            "create table stb_bound (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(32),c9 nchar(32), c10 timestamp) tags (t1 int);"
        )
        tdSql.execute(f'create table sub1_bound using stb_bound tags ( 1 )')
        tdSql.execute(
            f"insert into sub1_bound values ( now()-1s, 2147483647, 9223372036854775807, 32767, 127, 3.40E+38, 1.7e+308, True, 'binary_tb1', 'nchar_tb1', now() )"
        )
        tdSql.execute(
            f"insert into sub1_bound values ( now()-1s, -2147483647, -9223372036854775807, -32767, -127, -3.40E+38, -1.7e+308, True, 'binary_tb1', 'nchar_tb1', now() )"
        )
        tdSql.execute(
            f"insert into sub1_bound values ( now(), 2147483646, 9223372036854775806, 32766, 126, 3.40E+38, 1.7e+308, True, 'binary_tb1', 'nchar_tb1', now() )"
        )

        tdSql.execute(
            f"insert into sub1_bound values ( now(), 2147483645, 9223372036854775805, 32765, 125, 3.40E+37, 1.7e+307, True, 'binary_tb1', 'nchar_tb1', now() )"
        )

        tdSql.execute(
            f"insert into sub1_bound values ( now(), 2147483644, 9223372036854775804, 32764, 124, 3.40E+37, 1.7e+307, True, 'binary_tb1', 'nchar_tb1', now() )"
        )

        tdSql.execute(
            f"insert into sub1_bound values ( now(), -2147483646, -9223372036854775806, -32766, -126, -3.40E+38, -1.7e+308, True, 'binary_tb1', 'nchar_tb1', now() )"
        )
        tdSql.execute(
            f"insert into sub1_bound values ( now(), 2147483646, 9223372036854775806, 32766, 126, 3.40E+38, 1.7e+308, True, 'binary_tb1', 'nchar_tb1', now() )"
        )

        tdSql.error(
            f"insert into sub1_bound values ( now()+1s, 2147483648, 9223372036854775808, 32768, 128, 3.40E+38, 1.7e+308, True, 'binary_tb1', 'nchar_tb1', now() )"
        )
        self.check_avg("select avg(c1), avg(c2), avg(c3) , avg(c4), avg(c5) ,avg(c6) from sub1_bound ", " select sum(c1)/count(c1), sum(c2)/count(c2) ,sum(c3)/count(c3), sum(c4)/count(c4), sum(c5)/count(c5) ,sum(c6)/count(c6) from sub1_bound ")

        # check basic elem for table per row
        tdSql.query("select avg(c1) ,avg(c2) , avg(c3) , avg(c4), avg(c5), avg(c6) from sub1_bound ")
        tdSql.checkRows(1)
        tdSql.checkData(0, 0, 920350133.571428537)
        tdSql.checkData(0, 1, 1.3176245766935393e+18)
        tdSql.checkData(0, 2, 14042.142857143)
        tdSql.checkData(0, 3, 53.571428571)
        tdSql.checkData(0, 4, 5.828571332045761e+37)
        # tdSql.checkData(0, 5, None)

        # check + - * / in functions
        tdSql.query(" select avg(c1+1) ,avg(c2) , avg(c3*1) , avg(c4/2), avg(c5)/2, avg(c6) from sub1_bound ")
        tdSql.checkData(0, 0, 920350134.5714285)
        tdSql.checkData(0, 1, 1.3176245766935393e+18)
        tdSql.checkData(0, 2, 14042.142857143)
        tdSql.checkData(0, 3, 26.785714286)
        tdSql.checkData(0, 4, 2.9142856660228804e+37)
        # tdSql.checkData(0, 5, None)
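The extreme literals inserted into `sub1_bound` above (2147483647, 9223372036854775807, 32767, 127) are the signed limits of the INT/BIGINT/SMALLINT/TINYINT column types, which is why the one-past-the-limit row is expected to fail. A quick sketch of where those constants come from:

```python
# The boundary literals are the maxima of signed 32/64/16/8-bit integers;
# inserting max+1 (as the tdSql.error case does) overflows the column type.
INT_MAX      = 2**31 - 1
BIGINT_MAX   = 2**63 - 1
SMALLINT_MAX = 2**15 - 1
TINYINT_MAX  = 2**7 - 1
assert (INT_MAX, BIGINT_MAX, SMALLINT_MAX, TINYINT_MAX) == (
    2147483647, 9223372036854775807, 32767, 127)
```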

    def run(self):  # sourcery skip: extract-duplicate-method, remove-redundant-fstring
        tdSql.prepare()

        tdLog.printNoPrefix("==========step1:create table ==============")
        self.prepare_datas()

        tdLog.printNoPrefix("==========step2:test errors ==============")
        self.test_errors()

        tdLog.printNoPrefix("==========step3:support types ============")
        self.support_types()

        tdLog.printNoPrefix("==========step4: avg basic query ============")
        self.basic_avg_function()

        tdLog.printNoPrefix("==========step5: avg boundary query ============")
        self.check_boundary_values()

        tdLog.printNoPrefix("==========step6: avg filter query ============")
        self.avg_func_filter()

    def stop(self):
        tdSql.close()
        tdLog.success(f"{__file__} successfully executed")


tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())

@@ -80,6 +80,9 @@ class TDTestCase:
        tdSql.checkRows(2)
        tdSql.checkEqual(tdSql.queryResult, [(2,), (1,)])

        tdSql.query("select bottom(col13,50) from test")
        tdSql.checkRows(10)

        tdSql.query("select bottom(col14, 2) from test")
        tdSql.checkRows(2)
        tdSql.checkEqual(tdSql.queryResult, [(2,), (1,)])

@@ -91,6 +94,7 @@ class TDTestCase:
        tdSql.query('select bottom(col2,1) from test interval(1y) order by col2')
        tdSql.checkData(0, 0, 1)

        tdSql.error('select * from test where bottom(col2,1)=1')

@@ -0,0 +1,428 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import subprocess
import random
import math
import time  # needed for time.time() in csum_test_run
import numpy as np
import inspect
import re

from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor())

    def csum_query_form(self, col="c1", alias="", table_expr="t1", condition=""):
        '''
        csum function:
        :param col: string, column name, required parameter;
        :param alias: string, an alias for the result column, or an extra function;
        :param table_expr: string or expression, data source (e.g. table/stable name, result set), required parameter;
        :param condition: expression;
        :param args: other functions, like ', last(col)', or an alias for the result column, like 'c2'
        :return: csum query statement, default: select csum(c1) from t1
        '''
        return f"select csum({col}) {alias} from {table_expr} {condition}"
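With the defaults, the builder above interpolates straight into an f-string, so empty `alias`/`condition` leave extra spaces behind. A standalone replica (the string template only, not the `tdSql` harness) shows exactly what it emits:

```python
# Standalone copy of the query builder above, for illustration only.
def csum_query_form(col="c1", alias="", table_expr="t1", condition=""):
    return f"select csum({col}) {alias} from {table_expr} {condition}"

# Empty alias/condition leave a double space and a trailing space.
assert csum_query_form() == "select csum(c1)  from t1 "
assert csum_query_form(col="c2", condition="where c2 > 0") == \
    "select csum(c2)  from t1 where c2 > 0"
```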

    def checkcsum(self, col="c1", alias="", table_expr="t1", condition=""):
        line = sys._getframe().f_back.f_lineno
        pre_sql = self.csum_query_form(
            col=col, table_expr=table_expr, condition=condition
        ).replace("csum", "count")
        tdSql.query(pre_sql)

        if tdSql.queryRows == 0:
            tdSql.query(self.csum_query_form(
                col=col, alias=alias, table_expr=table_expr, condition=condition
            ))
            print(f"case in {line}: ", end='')
            tdSql.checkRows(0)
            return

        if "order by tbname" in condition:
            tdSql.error(self.csum_query_form(
                col=col, alias=alias, table_expr=table_expr, condition=condition
            ))
            return

        if "group" in condition:
            tb_condition = condition.split("group by")[1].split(" ")[1]
            tdSql.query(f"select distinct {tb_condition} from {table_expr}")
            query_result = tdSql.queryResult
            query_rows = tdSql.queryRows
            clear_condition = re.sub('order by [0-9a-z]*|slimit [0-9]*|soffset [0-9]*', "", condition)

            pre_row = 0
            for i in range(query_rows):
                group_name = query_result[i][0]
                if "where" in clear_condition:
                    pre_condition = re.sub('group by [0-9a-z]*', f"{tb_condition}='{group_name}'", clear_condition)
                else:
                    pre_condition = "where " + re.sub('group by [0-9a-z]*', f"{tb_condition}='{group_name}'", clear_condition)

                tdSql.query(f"select {col} {alias} from {table_expr} {pre_condition}")
                pre_data = np.array(tdSql.queryResult)[np.array(tdSql.queryResult) != None]
                print("data is ", pre_data)
                pre_csum = np.cumsum(pre_data)
                tdSql.query(self.csum_query_form(
                    col=col, alias=alias, table_expr=table_expr, condition=condition
                ))
                for j in range(len(pre_csum)):
                    print(f"case in {line}:", end='')
                    tdSql.checkData(pre_row + j, 1, pre_csum[j])
                pre_row += len(pre_csum)
            return
        elif "union" in condition:
            union_sql_0 = self.csum_query_form(
                col=col, alias=alias, table_expr=table_expr, condition=condition
            ).split("union all")[0]

            union_sql_1 = self.csum_query_form(
                col=col, alias=alias, table_expr=table_expr, condition=condition
            ).split("union all")[1]

            tdSql.query(union_sql_0)
            union_csum_0 = tdSql.queryResult
            row_union_0 = tdSql.queryRows

            tdSql.query(union_sql_1)
            union_csum_1 = tdSql.queryResult

            tdSql.query(self.csum_query_form(
                col=col, alias=alias, table_expr=table_expr, condition=condition
            ))
            for i in range(tdSql.queryRows):
                print(f"case in {line}: ", end='')
                if i < row_union_0:
                    tdSql.checkData(i, 0, union_csum_0[i][0])
                else:
                    tdSql.checkData(i, 0, union_csum_1[i - row_union_0][0])
            return

        else:
            tdSql.query(f"select {col} from {table_expr} {re.sub('limit [0-9]*|offset [0-9]*', '', condition)}")
            # offset is parsed out of the condition string, so cast it to int before slicing
            offset_val = int(condition.split("offset")[1].split(" ")[1]) if "offset" in condition else 0
            pre_result = np.array(tdSql.queryResult)[np.array(tdSql.queryResult) != None]
            pre_csum = np.cumsum(pre_result)[offset_val:]
            tdSql.query(self.csum_query_form(
                col=col, alias=alias, table_expr=table_expr, condition=condition
            ))

            for i in range(tdSql.queryRows):
                print(f"case in {line}: ", end='')
                if pre_csum[i] > 1.7e+308 or pre_csum[i] < -1.7e+308:
                    continue
                else:
                    tdSql.checkData(i, 0, pre_csum[i])

        pass
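`checkcsum` above validates csum() against `np.cumsum` over the non-NULL column values; the expected semantics in isolation:

```python
import numpy as np

# csum() is a running (cumulative) sum over the selected rows,
# which is exactly what np.cumsum computes for the raw column.
col = np.array([1, 2, 3, 4])
expected = np.cumsum(col)
assert expected.tolist() == [1, 3, 6, 10]
```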

    def csum_current_query(self):

        # table schema:
        # ts timestamp, c1 int, c2 float, c3 timestamp, c4 binary(16), c5 double, c6 bool,
        # c7 bigint, c8 smallint, c9 tinyint, c10 nchar(16)

        # case1~6: numeric cols: int/bigint/tinyint/smallint/float/double
        self.checkcsum()
        case2 = {"col": "c2"}
        self.checkcsum(**case2)
        case3 = {"col": "c5"}
        self.checkcsum(**case3)
        case4 = {"col": "c7"}
        self.checkcsum(**case4)
        case5 = {"col": "c8"}
        self.checkcsum(**case5)
        case6 = {"col": "c9"}
        self.checkcsum(**case6)

        # case7~8: nested query
        # case7 = {"table_expr": "(select c1 from stb1)"}
        # self.checkcsum(**case7)
        # case8 = {"table_expr": "(select csum(c1) c1 from stb1 group by tbname)"}
        # self.checkcsum(**case8)

        # case9~10: mix with tbname/ts/tag/col
        # case9 = {"alias": ", tbname"}
        # self.checkcsum(**case9)
        # case10 = {"alias": ", _c0"}
        # self.checkcsum(**case10)
        # case11 = {"alias": ", st1"}
        # self.checkcsum(**case11)
        # case12 = {"alias": ", c1"}
        # self.checkcsum(**case12)

        # case13~15: with a single condition
        case13 = {"condition": "where c1 <= 10"}
        self.checkcsum(**case13)
        case14 = {"condition": "where c6 in (0, 1)"}
        self.checkcsum(**case14)
        case15 = {"condition": "where c1 between 1 and 10"}
        self.checkcsum(**case15)

        # case16: with multi-condition
        case16 = {"condition": "where c6=1 or c6 =0"}
        self.checkcsum(**case16)

        # case17: only normal table join is supported
        case17 = {
            "col": "t1.c1",
            "table_expr": "t1, t2",
            "condition": "where t1.ts=t2.ts"
        }
        self.checkcsum(**case17)
        # case18~19: with group by
        # case18 = {
        #     "table_expr": "t1",
        #     "condition": "group by c6"
        # }
        # self.checkcsum(**case18)
        # case19 = {
        #     "table_expr": "stb1",
        #     "condition": "partition by tbname"
        # }
        # self.checkcsum(**case19)

        # case20~21: with order by
        # case20 = {"condition": "order by ts"}
        # self.checkcsum(**case20)

        # case22: with union
        # case22 = {
        #     "condition": "union all select csum(c1) from t2"
        # }
        # self.checkcsum(**case22)

        # case23: with limit/slimit
        case23 = {
            "condition": "limit 1"
        }
        self.checkcsum(**case23)
        # case24 = {
        #     "table_expr": "stb1",
        #     "condition": "group by tbname slimit 1 soffset 1"
        # }
        # self.checkcsum(**case24)

        pass

    def csum_error_query(self) -> None:
        # unusual test
        #
        # table schema:
        # ts timestamp, c1 int, c2 float, c3 timestamp, c4 binary(16), c5 double, c6 bool,
        # c7 bigint, c8 smallint, c9 tinyint, c10 nchar(16)
        #
        # form test
        tdSql.error(self.csum_query_form(col=""))          # no col
        tdSql.error("csum(c1) from stb1")                  # no select
        tdSql.error("select csum from t1")                 # no csum condition
        tdSql.error("select csum c1 from t1")              # no brackets
        tdSql.error("select csum(c1) t1")                  # no from
        tdSql.error("select csum( c1 ) from ")             # no table_expr
        # tdSql.error(self.csum_query_form(col="st1"))     # tag col
        tdSql.error(self.csum_query_form(col=1))           # col is a value
        tdSql.error(self.csum_query_form(col="'c1'"))      # col is a string
        tdSql.error(self.csum_query_form(col=None))        # col is NULL 1
        tdSql.error(self.csum_query_form(col="NULL"))      # col is NULL 2
        tdSql.error(self.csum_query_form(col='""'))        # col is ""
        tdSql.error(self.csum_query_form(col='c%'))        # col is a special char 1
        tdSql.error(self.csum_query_form(col='c_'))        # col is a special char 2
        tdSql.error(self.csum_query_form(col='c.'))        # col is a special char 3
        tdSql.error(self.csum_query_form(col='c3'))        # timestamp col
        tdSql.error(self.csum_query_form(col='ts'))        # primary key
        tdSql.error(self.csum_query_form(col='avg(c1)'))   # expr col
        tdSql.error(self.csum_query_form(col='c6'))        # bool col
        tdSql.error(self.csum_query_form(col='c4'))        # binary col
        tdSql.error(self.csum_query_form(col='c10'))       # nchar col
        tdSql.error(self.csum_query_form(col='c10'))       # not a table_expr col
        tdSql.error(self.csum_query_form(col='t1'))        # tbname
        tdSql.error(self.csum_query_form(col='stb1'))      # stbname
        tdSql.error(self.csum_query_form(col='db'))        # database name
        tdSql.error(self.csum_query_form(col=True))        # col is BOOL 1
        tdSql.error(self.csum_query_form(col='True'))      # col is BOOL 2
        tdSql.error(self.csum_query_form(col='*'))         # col is all cols
        tdSql.error("select csum[c1] from t1")             # sql form error 1
        tdSql.error("select csum{c1} from t1")             # sql form error 2
        tdSql.error(self.csum_query_form(col="[c1]"))      # sql form error 3
        # tdSql.error(self.csum_query_form(col="c1, c2"))  # sql form error 4
        # tdSql.error(self.csum_query_form(col="c1, 2"))   # sql form error 5
        tdSql.error(self.csum_query_form(alias=", count(c1)"))   # mix with aggregate function 1
        tdSql.error(self.csum_query_form(alias=", avg(c1)"))     # mix with aggregate function 2
        tdSql.error(self.csum_query_form(alias=", min(c1)"))     # mix with select function 1
        tdSql.error(self.csum_query_form(alias=", top(c1, 5)"))  # mix with select function 2
        tdSql.error(self.csum_query_form(alias=", spread(c1)"))  # mix with calculation function 1
        tdSql.error(self.csum_query_form(alias=", diff(c1)"))    # mix with calculation function 2
        # tdSql.error(self.csum_query_form(alias=" + 2"))        # mix with arithmetic 1
        tdSql.error(self.csum_query_form(alias=" + avg(c1)"))    # mix with arithmetic 2
        tdSql.error(self.csum_query_form(alias=", c2"))          # mix with other 1
        # tdSql.error(self.csum_query_form(table_expr="stb1"))   # select stb directly
        stb_join = {
            "col": "stb1.c1",
            "table_expr": "stb1, stb2",
            "condition": "where stb1.ts=stb2.ts and stb1.st1=stb2.st2 order by stb1.ts"
        }
        tdSql.error(self.csum_query_form(**stb_join))            # stb join
        interval_sql = {
            "condition": "where ts>0 and ts < now interval(1h) fill(next)"
        }
        tdSql.error(self.csum_query_form(**interval_sql))        # interval
        group_normal_col = {
            "table_expr": "t1",
            "condition": "group by c6"
        }
        tdSql.error(self.csum_query_form(**group_normal_col))    # group by a normal col
        slimit_soffset_sql = {
            "table_expr": "stb1",
            "condition": "group by tbname slimit 1 soffset 1"
        }
        # tdSql.error(self.csum_query_form(**slimit_soffset_sql))
        order_by_tbname_sql = {
            "table_expr": "stb1",
            "condition": "group by tbname order by tbname"
        }
        tdSql.error(self.csum_query_form(**order_by_tbname_sql))

        pass

    def csum_test_data(self, tbnum: int, data_row: int, basetime: int) -> None:
        for i in range(tbnum):
            for j in range(data_row):
                tdSql.execute(
                    f"insert into t{i} values ("
                    f"{basetime + (j+1)*10}, {random.randint(-200, -1)}, {random.uniform(200, -1)}, {basetime + random.randint(-200, -1)}, "
                    f"'binary_{j}', {random.uniform(-200, -1)}, {random.choice([0,1])}, {random.randint(-200,-1)}, "
                    f"{random.randint(-200, -1)}, {random.randint(-127, -1)}, 'nchar_{j}' )"
                )

                tdSql.execute(
                    f"insert into t{i} values ("
                    f"{basetime - (j+1) * 10}, {random.randint(1, 200)}, {random.uniform(1, 200)}, {basetime - random.randint(1, 200)}, "
                    f"'binary_{j}_1', {random.uniform(1, 200)}, {random.choice([0, 1])}, {random.randint(1,200)}, "
                    f"{random.randint(1,200)}, {random.randint(1,127)}, 'nchar_{j}_1' )"
                )
                tdSql.execute(
                    f"insert into tt{i} values ( {basetime-(j+1) * 10}, {random.randint(1, 200)} )"
                )

        pass

    def csum_test_table(self, tbnum: int) -> None:
        tdSql.execute("drop database if exists db")
        tdSql.execute("create database if not exists db keep 3650")
        tdSql.execute("use db")

        tdSql.execute(
            "create stable db.stb1 (\
            ts timestamp, c1 int, c2 float, c3 timestamp, c4 binary(16), c5 double, c6 bool, \
            c7 bigint, c8 smallint, c9 tinyint, c10 nchar(16)\
            ) \
            tags(st1 int)"
        )
        tdSql.execute(
            "create stable db.stb2 (ts timestamp, c1 int) tags(st2 int)"
        )
        for i in range(tbnum):
            tdSql.execute(f"create table t{i} using stb1 tags({i})")
            tdSql.execute(f"create table tt{i} using stb2 tags({i})")

        pass

    def csum_test_run(self):
        tdLog.printNoPrefix("==========TD-10594==========")
        tbnum = 10
        nowtime = int(round(time.time() * 1000))
        per_table_rows = 2
        self.csum_test_table(tbnum)

        tdLog.printNoPrefix("######## no data test:")
        self.csum_current_query()
        self.csum_error_query()

        tdLog.printNoPrefix("######## insert only NULL test:")
        for i in range(tbnum):
            tdSql.execute(f"insert into t{i}(ts) values ({nowtime - 5})")
            tdSql.execute(f"insert into t{i}(ts) values ({nowtime + 5})")
        self.csum_current_query()
        self.csum_error_query()

        tdLog.printNoPrefix("######## insert data in the range near the max(bigint/double):")
        self.csum_test_table(tbnum)
        tdSql.execute(f"insert into t1(ts, c1,c2,c5,c7) values "
                      f"({nowtime - (per_table_rows + 1) * 10}, {2**31-1}, {3.4*10**38}, {1.7*10**308}, {2**63-1})")
        tdSql.execute(f"insert into t1(ts, c1,c2,c5,c7) values "
                      f"({nowtime - (per_table_rows + 2) * 10}, {2**31-1}, {3.4*10**38}, {1.7*10**308}, {2**63-1})")
        self.csum_current_query()
        self.csum_error_query()

        tdLog.printNoPrefix("######## insert data in the range near the min(bigint/double):")
        self.csum_test_table(tbnum)
        tdSql.execute(f"insert into t1(ts, c1,c2,c5,c7) values "
                      f"({nowtime - (per_table_rows + 1) * 10}, {1-2**31}, {-3.4*10**38}, {-1.7*10**308}, {1-2**63})")
        tdSql.execute(f"insert into t1(ts, c1,c2,c5,c7) values "
                      f"({nowtime - (per_table_rows + 2) * 10}, {1-2**31}, {-3.4*10**38}, {-1.7*10**308}, {512-2**63})")
        self.csum_current_query()
        self.csum_error_query()

        tdLog.printNoPrefix("######## insert data without NULL data test:")
        self.csum_test_table(tbnum)
        self.csum_test_data(tbnum, per_table_rows, nowtime)
        self.csum_current_query()
        self.csum_error_query()

        tdLog.printNoPrefix("######## insert data mix with NULL test:")
        for i in range(tbnum):
            tdSql.execute(f"insert into t{i}(ts) values ({nowtime})")
            tdSql.execute(f"insert into t{i}(ts) values ({nowtime-(per_table_rows+3)*10})")
            tdSql.execute(f"insert into t{i}(ts) values ({nowtime+(per_table_rows+3)*10})")
        self.csum_current_query()
        self.csum_error_query()

        tdLog.printNoPrefix("######## check after WAL test:")
        tdSql.query("show dnodes")
        index = tdSql.getData(0, 0)
        tdDnodes.stop(index)
        tdDnodes.start(index)
        self.csum_current_query()
        self.csum_error_query()

    def run(self):
        import traceback
        try:
            # run in develop branch
            self.csum_test_run()
            pass
        except Exception as e:
            traceback.print_exc()
            raise e

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
File diff suppressed because it is too large
@@ -0,0 +1,432 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import subprocess
import random
import math
import numpy as np
import inspect
import re

from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor())

    def diff_query_form(self, col="c1", alias="", table_expr="t1", condition=""):
        '''
        diff function:
        :param col: string, column name, required parameter;
        :param alias: string, an alias for the result column, or an extra function;
        :param table_expr: string or expression, data source (e.g. table/stable name, result set), required parameter;
        :param condition: expression;
        :param args: other functions, like ', last(col)', or an alias for the result column, like 'c2'
        :return: diff query statement, default: select diff(c1) from t1
        '''
        return f"select diff({col}) {alias} from {table_expr} {condition}"
def checkdiff(self,col="c1", alias="", table_expr="t1", condition="" ):
|
||||
line = sys._getframe().f_back.f_lineno
|
||||
pre_sql = self.diff_query_form(
|
||||
col=col, table_expr=table_expr, condition=condition
|
||||
).replace("diff", "count")
|
||||
tdSql.query(pre_sql)
|
||||
|
||||
if tdSql.queryRows == 0:
|
||||
tdSql.query(self.diff_query_form(
|
||||
col=col, alias=alias, table_expr=table_expr, condition=condition
|
||||
))
|
||||
print(f"case in {line}: ", end='')
|
||||
tdSql.checkRows(0)
|
||||
return
|
||||
|
||||
if "order by tbname" in condition:
|
||||
tdSql.error(self.diff_query_form(
|
||||
col=col, alias=alias, table_expr=table_expr, condition=condition
|
||||
))
|
||||
return
|
||||
|
||||
if "group" in condition:
|
||||
|
||||
tb_condition = condition.split("group by")[1].split(" ")[1]
|
||||
tdSql.query(f"select distinct {tb_condition} from {table_expr}")
|
||||
query_result = tdSql.queryResult
|
||||
query_rows = tdSql.queryRows
|
||||
clear_condition = re.sub('order by [0-9a-z]*|slimit [0-9]*|soffset [0-9]*', "", condition)
|
||||
|
||||
pre_row = 0
|
||||
for i in range(query_rows):
|
||||
group_name = query_result[i][0]
|
||||
if "where" in clear_condition:
|
||||
pre_condition = re.sub('group by [0-9a-z]*', f"{tb_condition}='{group_name}'", clear_condition)
|
||||
else:
|
||||
pre_condition = "where " + re.sub('group by [0-9a-z]*',f"{tb_condition}='{group_name}'", clear_condition)
|
||||
|
||||
tdSql.query(f"select {col} {alias} from {table_expr} {pre_condition}")
|
||||
pre_data = np.array(tdSql.queryResult)[np.array(tdSql.queryResult) != None]
|
||||
pre_diff = np.diff(pre_data)
|
||||
# trans precision for data
|
||||
tdSql.query(self.diff_query_form(
|
||||
col=col, alias=alias, table_expr=table_expr, condition=condition
|
||||
))
|
||||
for j in range(len(pre_diff)):
|
||||
print(f"case in {line}:", end='')
|
||||
if isinstance(pre_diff[j] , float) :
|
||||
pass
|
||||
else:
|
||||
tdSql.checkData(pre_row+j, 1, pre_diff[j] )
|
||||
pre_row += len(pre_diff)
|
||||
return
|
||||
elif "union" in condition:
|
||||
union_sql_0 = self.diff_query_form(
|
||||
col=col, alias=alias, table_expr=table_expr, condition=condition
|
||||
).split("union all")[0]
|
||||
|
||||
union_sql_1 = self.diff_query_form(
|
||||
col=col, alias=alias, table_expr=table_expr, condition=condition
|
||||
).split("union all")[1]
|
||||
|
||||
tdSql.query(union_sql_0)
|
||||
union_diff_0 = tdSql.queryResult
|
||||
row_union_0 = tdSql.queryRows
|
||||
|
||||
tdSql.query(union_sql_1)
|
||||
union_diff_1 = tdSql.queryResult
|
||||
|
||||
tdSql.query(self.diff_query_form(
|
||||
col=col, alias=alias, table_expr=table_expr, condition=condition
|
||||
))
|
||||
for i in range(tdSql.queryRows):
|
||||
print(f"case in {line}: ", end='')
|
||||
if i < row_union_0:
|
||||
tdSql.checkData(i, 0, union_diff_0[i][0])
|
||||
else:
|
||||
tdSql.checkData(i, 0, union_diff_1[i-row_union_0][0])
|
||||
return
|
||||
|
||||
else:
|
||||
tdSql.query(f"select {col} from {table_expr} {re.sub('limit [0-9]*|offset [0-9]*','',condition)}")
|
||||
offset_val = condition.split("offset")[1].split(" ")[1] if "offset" in condition else 0
|
||||
pre_result = np.array(tdSql.queryResult)[np.array(tdSql.queryResult) != None]
|
||||
pre_diff = np.diff(pre_result)[offset_val:]
|
||||
tdSql.query(self.diff_query_form(
|
||||
col=col, alias=alias, table_expr=table_expr, condition=condition
|
||||
))
|
||||
|
||||
for i in range(tdSql.queryRows):
|
||||
print(f"case in {line}: ", end='')
|
||||
if isinstance(pre_diff[i] , float ):
|
||||
pass
|
||||
else:
|
||||
tdSql.checkData(i, 0, pre_diff[i])
|
||||
|
||||
pass
|
||||
|
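The expected values above are derived with `np.diff`. A minimal standalone sketch (with hypothetical column values) of the reference model this test assumes: TDengine's `diff(col)` equals numpy's first-order difference over the non-NULL column values in timestamp order.

```python
import numpy as np

raw = [1, 3, 2, 7]       # hypothetical column values, ordered by ts
expected = np.diff(raw)  # first-order difference: row[i+1] - row[i]
print(expected.tolist())  # [2, -1, 5]
```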

    def diff_current_query(self):

        # table schema :ts timestamp, c1 int, c2 float, c3 timestamp, c4 binary(16), c5 double, c6 bool
        # c7 bigint, c8 smallint, c9 tinyint, c10 nchar(16)

        # case1~6: numeric col:int/bigint/tinyint/smallint/float/double
        self.checkdiff()
        case2 = {"col": "c2"}
        self.checkdiff(**case2)
        case3 = {"col": "c5"}
        self.checkdiff(**case3)
        case4 = {"col": "c7"}
        self.checkdiff(**case4)
        case5 = {"col": "c8"}
        self.checkdiff(**case5)
        case6 = {"col": "c9"}
        self.checkdiff(**case6)

        # case7~8: nested query
        # case7 = {"table_expr": "(select c1 from stb1)"}
        # self.checkdiff(**case7)
        # case8 = {"table_expr": "(select diff(c1) c1 from stb1 group by tbname)"}
        # self.checkdiff(**case8)

        # case9~10: mix with tbname/ts/tag/col
        # case9 = {"alias": ", tbname"}
        # self.checkdiff(**case9)
        # case10 = {"alias": ", _c0"}
        # self.checkdiff(**case10)
        # case11 = {"alias": ", st1"}
        # self.checkdiff(**case11)
        # case12 = {"alias": ", c1"}
        # self.checkdiff(**case12)

        # case13~15: with single condition
        case13 = {"condition": "where c1 <= 10"}
        self.checkdiff(**case13)
        case14 = {"condition": "where c6 in (0, 1)"}
        self.checkdiff(**case14)
        case15 = {"condition": "where c1 between 1 and 10"}
        self.checkdiff(**case15)

        # case16: with multi-condition
        case16 = {"condition": "where c6=1 or c6 =0"}
        self.checkdiff(**case16)

        # case17: only support normal table join
        case17 = {
            "col": "t1.c1",
            "table_expr": "t1, t2",
            "condition": "where t1.ts=t2.ts"
        }
        self.checkdiff(**case17)
        # case18~19: with group by
        # case18 = {
        #     "table_expr": "t1",
        #     "condition": "group by c6"
        # }
        # self.checkdiff(**case18)
        # case19 = {
        #     "table_expr": "stb1",
        #     "condition": "partition by tbname"  # partition by tbname
        # }
        # self.checkdiff(**case19)

        # case20~21: with order by
        # case20 = {"condition": "order by ts"}
        # self.checkdiff(**case20)

        # case22: with union
        # case22 = {
        #     "condition": "union all select diff(c1) from t2"
        # }
        # self.checkdiff(**case22)

        # case23: with limit/slimit
        case23 = {
            "condition": "limit 1"
        }
        self.checkdiff(**case23)
        # case24 = {
        #     "table_expr": "stb1",
        #     "condition": "group by tbname slimit 1 soffset 1"
        # }
        # self.checkdiff(**case24)

        pass

    def diff_error_query(self) -> None:
        # unusual test
        #
        # table schema :ts timestamp, c1 int, c2 float, c3 timestamp, c4 binary(16), c5 double, c6 bool
        # c7 bigint, c8 smallint, c9 tinyint, c10 nchar(16)
        #
        # form test
        tdSql.error(self.diff_query_form(col=""))         # no col
        tdSql.error("diff(c1) from stb1")                 # no select
        tdSql.error("select diff from t1")                # no diff condition
        tdSql.error("select diff c1 from t1")             # no brackets
        tdSql.error("select diff(c1) t1")                 # no from
        tdSql.error("select diff( c1 ) from ")            # no table_expr
        # tdSql.error(self.diff_query_form(col="st1"))    # tag col
        tdSql.query("select diff(st1) from t1 ")
        # tdSql.error(self.diff_query_form(col=1))        # col is a value
        tdSql.error(self.diff_query_form(col="'c1'"))     # col is a string
        tdSql.error(self.diff_query_form(col=None))       # col is NULL 1
        tdSql.error(self.diff_query_form(col="NULL"))     # col is NULL 2
        tdSql.error(self.diff_query_form(col='""'))       # col is ""
        tdSql.error(self.diff_query_form(col='c%'))       # col is special char 1
        tdSql.error(self.diff_query_form(col='c_'))       # col is special char 2
        tdSql.error(self.diff_query_form(col='c.'))       # col is special char 3
        tdSql.error(self.diff_query_form(col='c3'))       # timestamp col
        tdSql.error(self.diff_query_form(col='ts'))       # primary key
        tdSql.error(self.diff_query_form(col='avg(c1)'))  # expr col
        # tdSql.error(self.diff_query_form(col='c6'))     # bool col
        tdSql.query("select diff(c6) from t1")
        tdSql.error(self.diff_query_form(col='c4'))       # binary col
        tdSql.error(self.diff_query_form(col='c10'))      # nchar col
        tdSql.error(self.diff_query_form(col='c10'))      # not table_expr col
        tdSql.error(self.diff_query_form(col='t1'))       # tbname
        tdSql.error(self.diff_query_form(col='stb1'))     # stbname
        tdSql.error(self.diff_query_form(col='db'))       # database name
        # tdSql.error(self.diff_query_form(col=True))     # col is BOOL 1
        # tdSql.error(self.diff_query_form(col='True'))   # col is BOOL 2
        tdSql.error(self.diff_query_form(col='*'))        # col is all col
        tdSql.error("select diff[c1] from t1")            # sql form error 1
        tdSql.error("select diff{c1} from t1")            # sql form error 2
        tdSql.error(self.diff_query_form(col="[c1]"))     # sql form error 3
        # tdSql.error(self.diff_query_form(col="c1, c2")) # sql form error 4
        # tdSql.error(self.diff_query_form(col="c1, 2"))  # sql form error 5
        tdSql.error(self.diff_query_form(alias=", count(c1)"))   # mix with aggregate function 1
        tdSql.error(self.diff_query_form(alias=", avg(c1)"))     # mix with aggregate function 2
        tdSql.error(self.diff_query_form(alias=", min(c1)"))     # mix with select function 1
        tdSql.error(self.diff_query_form(alias=", top(c1, 5)"))  # mix with select function 2
        tdSql.error(self.diff_query_form(alias=", spread(c1)"))  # mix with calculation function 1
        tdSql.error(self.diff_query_form(alias=", diff(c1)"))    # mix with calculation function 2
        # tdSql.error(self.diff_query_form(alias=" + 2"))        # mix with arithmetic 1
        tdSql.error(self.diff_query_form(alias=" + avg(c1)"))    # mix with arithmetic 2
        tdSql.error(self.diff_query_form(alias=", c2"))          # mix with other 1
        # tdSql.error(self.diff_query_form(table_expr="stb1"))   # select stb directly
        stb_join = {
            "col": "stb1.c1",
            "table_expr": "stb1, stb2",
            "condition": "where stb1.ts=stb2.ts and stb1.st1=stb2.st2 order by stb1.ts"
        }
        tdSql.error(self.diff_query_form(**stb_join))            # stb join
        interval_sql = {
            "condition": "where ts>0 and ts < now interval(1h) fill(next)"
        }
        tdSql.error(self.diff_query_form(**interval_sql))        # interval
        group_normal_col = {
            "table_expr": "t1",
            "condition": "group by c6"
        }
        tdSql.error(self.diff_query_form(**group_normal_col))    # group by normal col
        slimit_soffset_sql = {
            "table_expr": "stb1",
            "condition": "group by tbname slimit 1 soffset 1"
        }
        # tdSql.error(self.diff_query_form(**slimit_soffset_sql))
        order_by_tbname_sql = {
            "table_expr": "stb1",
            "condition": "group by tbname order by tbname"
        }
        tdSql.error(self.diff_query_form(**order_by_tbname_sql))

        pass

    def diff_test_data(self, tbnum: int, data_row: int, basetime: int) -> None:
        for i in range(tbnum):
            for j in range(data_row):
                tdSql.execute(
                    f"insert into t{i} values ("
                    f"{basetime + (j+1)*10}, {random.randint(-200, -1)}, {random.uniform(-200, -1)}, {basetime + random.randint(-200, -1)}, "
                    f"'binary_{j}', {random.uniform(-200, -1)}, {random.choice([0,1])}, {random.randint(-200,-1)}, "
                    f"{random.randint(-200, -1)}, {random.randint(-127, -1)}, 'nchar_{j}' )"
                )

                tdSql.execute(
                    f"insert into t{i} values ("
                    f"{basetime - (j+1) * 10}, {random.randint(1, 200)}, {random.uniform(1, 200)}, {basetime - random.randint(1, 200)}, "
                    f"'binary_{j}_1', {random.uniform(1, 200)}, {random.choice([0, 1])}, {random.randint(1,200)}, "
                    f"{random.randint(1,200)}, {random.randint(1,127)}, 'nchar_{j}_1' )"
                )
                tdSql.execute(
                    f"insert into tt{i} values ( {basetime-(j+1) * 10}, {random.randint(1, 200)} )"
                )

        pass

    def diff_test_table(self, tbnum: int) -> None:
        tdSql.execute("drop database if exists db")
        tdSql.execute("create database if not exists db keep 3650")
        tdSql.execute("use db")

        tdSql.execute(
            "create stable db.stb1 (\
                ts timestamp, c1 int, c2 float, c3 timestamp, c4 binary(16), c5 double, c6 bool, \
                c7 bigint, c8 smallint, c9 tinyint, c10 nchar(16)\
            ) \
            tags(st1 int)"
        )
        tdSql.execute(
            "create stable db.stb2 (ts timestamp, c1 int) tags(st2 int)"
        )
        for i in range(tbnum):
            tdSql.execute(f"create table t{i} using stb1 tags({i})")
            tdSql.execute(f"create table tt{i} using stb2 tags({i})")

        pass

    def diff_test_run(self):
        tdLog.printNoPrefix("==========TD-10594==========")
        tbnum = 10
        nowtime = int(round(time.time() * 1000))
        per_table_rows = 10
        self.diff_test_table(tbnum)

        tdLog.printNoPrefix("######## no data test:")
        self.diff_current_query()
        self.diff_error_query()

        tdLog.printNoPrefix("######## insert only NULL test:")
        for i in range(tbnum):
            tdSql.execute(f"insert into t{i}(ts) values ({nowtime - 5})")
            tdSql.execute(f"insert into t{i}(ts) values ({nowtime + 5})")
        self.diff_current_query()
        self.diff_error_query()

        tdLog.printNoPrefix("######## insert data in the range near the max(bigint/double):")
        self.diff_test_table(tbnum)
        tdSql.execute(f"insert into t1(ts, c1,c2,c5,c7) values "
                      f"({nowtime - (per_table_rows + 1) * 10}, {2**31-1}, {3.4*10**38}, {1.7*10**308}, {2**63-1})")
        tdSql.execute(f"insert into t1(ts, c1,c2,c5,c7) values "
                      f"({nowtime - (per_table_rows + 2) * 10}, {2**31-1}, {3.4*10**38}, {1.7*10**308}, {2**63-1})")
        self.diff_current_query()
        self.diff_error_query()

        tdLog.printNoPrefix("######## insert data in the range near the min(bigint/double):")
        self.diff_test_table(tbnum)
        tdSql.execute(f"insert into t1(ts, c1,c2,c5,c7) values "
                      f"({nowtime - (per_table_rows + 1) * 10}, {1-2**31}, {-3.4*10**38}, {-1.7*10**308}, {1-2**63})")
        tdSql.execute(f"insert into t1(ts, c1,c2,c5,c7) values "
                      f"({nowtime - (per_table_rows + 2) * 10}, {1-2**31}, {-3.4*10**38}, {-1.7*10**308}, {512-2**63})")
        self.diff_current_query()
        self.diff_error_query()

        tdLog.printNoPrefix("######## insert data without NULL data test:")
        self.diff_test_table(tbnum)
        self.diff_test_data(tbnum, per_table_rows, nowtime)
        self.diff_current_query()
        self.diff_error_query()

        tdLog.printNoPrefix("######## insert data mix with NULL test:")
        for i in range(tbnum):
            tdSql.execute(f"insert into t{i}(ts) values ({nowtime})")
            tdSql.execute(f"insert into t{i}(ts) values ({nowtime-(per_table_rows+3)*10})")
            tdSql.execute(f"insert into t{i}(ts) values ({nowtime+(per_table_rows+3)*10})")
        self.diff_current_query()
        self.diff_error_query()

        tdLog.printNoPrefix("######## check after WAL test:")
        tdSql.query("show dnodes")
        index = tdSql.getData(0, 0)
        tdDnodes.stop(index)
        tdDnodes.start(index)
        self.diff_current_query()
        self.diff_error_query()

    def run(self):
        import traceback
        try:
            # run in develop branch
            self.diff_test_run()
            pass
        except Exception as e:
            traceback.print_exc()
            raise e

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())

@@ -0,0 +1,677 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import subprocess
import random
import math
import numpy as np
import inspect
import re
import taos

from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor())

    def mavg_query_form(self, sel="select", func="mavg(", col="c1", m_comm=",", k=1, r_comm=")", alias="", fr="from", table_expr="t1", condition=""):
        '''
        mavg function:

        :param sel: string, must be "select", required parameters;
        :param func: string, in this case must be "mavg(", otherwise return other function, required parameters;
        :param col: string, column name, required parameters;
        :param m_comm: string, comma between col and k, required parameters;
        :param k: int/float, the width of the sliding window, [1, 1000], required parameters;
        :param r_comm: string, must be ")", use with "(" in func, required parameters;
        :param alias: string, another name for the result column, or an extra selected expression;
        :param fr: string, must be "from", required parameters;
        :param table_expr: string or expression, data source (e.g. table/stable name, result set), required parameters;
        :param condition: expression;
        :return: mavg query statement, default: select mavg( c1 , 1 )  from t1
        '''

        return f"{sel} {func} {col} {m_comm} {k} {r_comm} {alias} {fr} {table_expr} {condition}"

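A standalone sketch of the statement this helper assembles from its defaults; the variable names mirror the helper's parameters, and the whitespace placement (double space where `alias` is empty, trailing space where `condition` is empty) comes straight from the f-string.

```python
# Default parameter values of the query-form helper above
sel, func, col, m_comm, k, r_comm, alias, fr, table_expr, condition = \
    "select", "mavg(", "c1", ",", 1, ")", "", "from", "t1", ""

# Same f-string as the helper's return statement
sql = f"{sel} {func} {col} {m_comm} {k} {r_comm} {alias} {fr} {table_expr} {condition}"
print(sql)  # 'select mavg( c1 , 1 )  from t1 '
```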
    def checkmavg(self, sel="select", func="mavg(", col="c1", m_comm=",", k=1, r_comm=")", alias="", fr="from", table_expr="t1", condition=""):
        # print(self.mavg_query_form(sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
        #                            table_expr=table_expr, condition=condition))
        line = sys._getframe().f_back.f_lineno

        if not all([sel, func, col, m_comm, k, r_comm, fr, table_expr]):
            print(f"case in {line}: ", end='')
            return tdSql.error(self.mavg_query_form(
                sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                table_expr=table_expr, condition=condition
            ))

        sql = "select * from t1"
        collist = tdSql.getColNameList(sql)

        if not isinstance(col, str):
            print(f"case in {line}: ", end='')
            return tdSql.error(self.mavg_query_form(
                sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                table_expr=table_expr, condition=condition
            ))

        if len([x for x in col.split(",") if x.strip()]) != 1:
            print(f"case in {line}: ", end='')
            return tdSql.error(self.mavg_query_form(
                sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                table_expr=table_expr, condition=condition
            ))

        col = col.replace(",", "").replace(" ", "")

        if any([re.compile('^[a-zA-Z]{1}.*$').match(col) is None, not col.replace(".", "").isalnum()]):
            print(f"case in {line}: ", end='')
            return tdSql.error(self.mavg_query_form(
                sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                table_expr=table_expr, condition=condition
            ))

        # if all(["," in col, len(col.split(",")) != 2]):
        #     print(f"case in {line}: ", end='')
        #     return tdSql.error(self.mavg_query_form(
        #         sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
        #         table_expr=table_expr, condition=condition
        #     ))
        #
        # if ("," in col):
        #     if (not col.split(",")[0].strip()) ^ (not col.split(",")[1].strip()):
        #         col = col.strip().split(",")[0] if not col.split(",")[1].strip() else col.strip().split(",")[1]
        #     else:
        #         print(f"case in {line}: ", end='')
        #         return tdSql.error(self.mavg_query_form(
        #             sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
        #             table_expr=table_expr, condition=condition
        #         ))
        #     pass

        if '.' in col:
            if any([col.split(".")[0] not in table_expr, col.split(".")[1] not in collist]):
                print(f"case in {line}: ", end='')
                return tdSql.error(self.mavg_query_form(
                    sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                    table_expr=table_expr, condition=condition
                ))
            pass

        if "." not in col:
            if col not in collist:
                print(f"case in {line}: ", end='')
                return tdSql.error(self.mavg_query_form(
                    sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                    table_expr=table_expr, condition=condition
                ))
            pass

        colname = col if "." not in col else col.split(".")[1]
        col_index = collist.index(colname)
        if any([tdSql.cursor.istype(col_index, "TIMESTAMP"), tdSql.cursor.istype(col_index, "BOOL")]):
            print(f"case in {line}: ", end='')
            return tdSql.error(self.mavg_query_form(
                sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                table_expr=table_expr, condition=condition
            ))

        if any([tdSql.cursor.istype(col_index, "BINARY"), tdSql.cursor.istype(col_index, "NCHAR")]):
            print(f"case in {line}: ", end='')
            return tdSql.error(self.mavg_query_form(
                sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                table_expr=table_expr, condition=condition
            ))

        if any([func != "mavg(", r_comm != ")", fr != "from", sel != "select"]):
            print(f"case in {line}: ", end='')
            return tdSql.error(self.mavg_query_form(
                sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                table_expr=table_expr, condition=condition
            ))

        if all(["(" not in table_expr, "stb" in table_expr, "group" not in condition.lower()]):
            print(f"case in {line}: ", end='')
            return tdSql.error(self.mavg_query_form(
                sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                table_expr=table_expr, condition=condition
            ))

        if "order by tbname" in condition.lower():
            print(f"case in {line}: ", end='')
            return tdSql.error(self.mavg_query_form(
                sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                table_expr=table_expr, condition=condition
            ))

        if all(["group" in condition.lower(), "tbname" not in condition.lower()]):
            print(f"case in {line}: ", end='')
            return tdSql.error(self.mavg_query_form(
                sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                table_expr=table_expr, condition=condition
            ))

        alias_list = ["tbname", "_c0", "st", "ts"]
        if all([alias, "," not in alias, not alias.isalnum()]):
            # actually, a column alias may also contain "_", but that is forbidden in this case.
            print(f"case in {line}: ", end='')
            return tdSql.error(self.mavg_query_form(
                sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                table_expr=table_expr, condition=condition
            ))

        if all([alias, "," in alias]):
            if all(parm != alias.lower().split(",")[1].strip() for parm in alias_list):
                print(f"case in {line}: ", end='')
                return tdSql.error(self.mavg_query_form(
                    sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                    table_expr=table_expr, condition=condition
                ))
            pass

        condition_exception = ["~", "^", "insert", "distinct",
                               "count", "avg", "twa", "irate", "sum", "stddev", "leastsquares",
                               "min", "max", "first", "last", "top", "bottom", "percentile",
                               "apercentile", "last_row", "interp", "diff", "derivative",
                               "spread", "ceil", "floor", "round", "interval", "fill", "slimit", "soffset"]
        if "union" not in condition.lower():
            if any(parm in condition.lower().strip() for parm in condition_exception):
                print(f"case in {line}: ", end='')
                return tdSql.error(self.mavg_query_form(
                    sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                    table_expr=table_expr, condition=condition
                ))
            pass

        if not any([isinstance(k, int), isinstance(k, float)]):
            print(f"case in {line}: ", end='')
            return tdSql.error(self.mavg_query_form(
                col=col, k=k, alias=alias, table_expr=table_expr, condition=condition
            ))

        if not (1 <= k < 1001):
            print(f"case in {line}: ", end='')
            return tdSql.error(self.mavg_query_form(
                col=col, k=k, alias=alias, table_expr=table_expr, condition=condition
            ))

        k = int(k // 1)
        pre_sql = re.sub(r"mavg\([a-z0-9 .,]*\)", f"count({col})", self.mavg_query_form(
            col=col, table_expr=table_expr, condition=condition
        ))
        tdSql.query(pre_sql)

        if tdSql.queryRows == 0:
            tdSql.query(self.mavg_query_form(
                sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                table_expr=table_expr, condition=condition
            ))
            print(f"case in {line}: ", end='')
            tdSql.checkRows(0)
            return

        if "group" in condition:
            tb_condition = condition.split("group by")[1].split(" ")[1]
            tdSql.query(f"select distinct {tb_condition} from {table_expr}")
            query_result = tdSql.queryResult
            query_rows = tdSql.queryRows
            clear_condition = re.sub('order by [0-9a-z]*|slimit [0-9]*|soffset [0-9]*', "", condition)

            pre_row = 0
            for i in range(query_rows):
                group_name = query_result[i][0]
                if "where" in clear_condition:
                    pre_condition = re.sub('group by [0-9a-z]*', f"{tb_condition}='{group_name}'", clear_condition)
                else:
                    pre_condition = "where " + re.sub('group by [0-9a-z]*', f"{tb_condition}='{group_name}'", clear_condition)

                tdSql.query(f"select {col} {alias} from {table_expr} {pre_condition}")
                pre_data = np.array(tdSql.queryResult)[np.array(tdSql.queryResult) != None]
                pre_mavg = np.convolve(pre_data, np.ones(k), "valid") / k
                tdSql.query(self.mavg_query_form(
                    sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                    table_expr=table_expr, condition=condition
                ))
                for j in range(len(pre_mavg)):
                    print(f"case in {line}:", end='')
                    tdSql.checkData(pre_row + j, 0, pre_mavg[j])
                pre_row += len(pre_mavg)
            return
        elif "union" in condition:
            union_sql_0 = self.mavg_query_form(
                sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                table_expr=table_expr, condition=condition
            ).split("union all")[0]

            union_sql_1 = self.mavg_query_form(
                sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                table_expr=table_expr, condition=condition
            ).split("union all")[1]

            tdSql.query(union_sql_0)
            union_mavg_0 = tdSql.queryResult
            row_union_0 = tdSql.queryRows

            tdSql.query(union_sql_1)
            union_mavg_1 = tdSql.queryResult

            tdSql.query(self.mavg_query_form(
                sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                table_expr=table_expr, condition=condition
            ))
            for i in range(tdSql.queryRows):
                print(f"case in {line}: ", end='')
                if i < row_union_0:
                    tdSql.checkData(i, 0, union_mavg_0[i][0])
                else:
                    tdSql.checkData(i, 0, union_mavg_1[i - row_union_0][0])
            return

        else:
            tdSql.query(f"select {col} from {table_expr} {re.sub('limit [0-9]*|offset [0-9]*', '', condition)}")
            # the offset is parsed out of the condition as a string, so cast it before slicing
            offset_val = int(condition.split("offset")[1].split(" ")[1]) if "offset" in condition else 0
            pre_result = np.array(tdSql.queryResult)[np.array(tdSql.queryResult) != None]
            pre_mavg = np.convolve(pre_result, np.ones(k), "valid")[offset_val:] / k
            tdSql.query(self.mavg_query_form(
                sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                table_expr=table_expr, condition=condition
            ))
            for i in range(tdSql.queryRows):
                print(f"case in {line}: ", end='')
                tdSql.checkData(i, 0, pre_mavg[i])

        pass
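The expected values above come from `np.convolve`. A minimal sketch (with hypothetical inputs) of that reference model: convolving with a ones-kernel of width k and dividing by k yields the k-point sliding mean, which is what `mavg(col, k)` is checked against over the non-NULL values.

```python
import numpy as np

data = np.array([1.0, 2.0, 3.0, 4.0])  # hypothetical column values
k = 2
# "valid" mode keeps only windows fully inside the data, i.e. len(data) - k + 1 rows
mavg = np.convolve(data, np.ones(k), "valid") / k
print(mavg.tolist())  # [1.5, 2.5, 3.5]
```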

    def mavg_current_query(self):

        # table schema :ts timestamp, c1 int, c2 float, c3 timestamp, c4 binary(16), c5 double, c6 bool
        # c7 bigint, c8 smallint, c9 tinyint, c10 nchar(16)

        # case1~6: numeric col:int/bigint/tinyint/smallint/float/double
        self.checkmavg()
        case2 = {"col": "c2"}
        self.checkmavg(**case2)
        case3 = {"col": "c5"}
        self.checkmavg(**case3)
        case4 = {"col": "c7"}
        self.checkmavg(**case4)
        case5 = {"col": "c8"}
        self.checkmavg(**case5)
        case6 = {"col": "c9"}
        self.checkmavg(**case6)

        # case7~8: nested query
        # case7 = {"table_expr": "(select c1 from stb1)"}
        # self.checkmavg(**case7)
        # case8 = {"table_expr": "(select mavg(c1, 1) c1 from stb1 group by tbname)"}
        # self.checkmavg(**case8)

        # case9~10: mix with tbname/ts/tag/col
        # case9 = {"alias": ", tbname"}
        # self.checkmavg(**case9)
        # case10 = {"alias": ", _c0"}
        # self.checkmavg(**case10)
        # case11 = {"alias": ", st1"}
        # self.checkmavg(**case11)
        # case12 = {"alias": ", c1"}
        # self.checkmavg(**case12)

        # case13~15: with single condition
        case13 = {"condition": "where c1 <= 10"}
        self.checkmavg(**case13)
        case14 = {"condition": "where c6 in (0, 1)"}
        self.checkmavg(**case14)
        case15 = {"condition": "where c1 between 1 and 10"}
        self.checkmavg(**case15)

        # case16: with multi-condition
        case16 = {"condition": "where c6=1 or c6 =0"}
        self.checkmavg(**case16)

        # case17: only support normal table join
        case17 = {
            "col": "t1.c1",
            "table_expr": "t1, t2",
            "condition": "where t1.ts=t2.ts"
        }
        self.checkmavg(**case17)
        # case18~19: with group by
        # case19 = {
        #     "table_expr": "stb1",
        #     "condition": "partition by tbname"
        # }
        # self.checkmavg(**case19)

        # case20~21: with order by
        # case20 = {"condition": "order by ts"}
        # self.checkmavg(**case20)
        # case21 = {
        #     "table_expr": "stb1",
        #     "condition": "group by tbname order by tbname"
        # }
        # self.checkmavg(**case21)

        # case22: with union
        # case22 = {
        #     "condition": "union all select mavg( c1 , 1 ) from t2"
        # }
        # self.checkmavg(**case22)

        # case23: with limit/slimit
        case23 = {
            "condition": "limit 1"
        }
        self.checkmavg(**case23)

        # case24~26: value k range [1, 1000], can be int or float, k = floor(k)
        case24 = {"k": 3}
        self.checkmavg(**case24)
        case25 = {"k": 2.999}
        self.checkmavg(**case25)
        case26 = {"k": 1000}
        self.checkmavg(**case26)

        pass

    def mavg_error_query(self) -> None:
        # unusual test

        # form test
        err1 = {"col": ""}
        self.checkmavg(**err1)   # no col
        err2 = {"sel": ""}
        self.checkmavg(**err2)   # no select
        err3 = {"func": "mavg", "col": "", "m_comm": "", "k": "", "r_comm": ""}
        self.checkmavg(**err3)   # no mavg condition: select mavg from
        err4 = {"col": "", "m_comm": "", "k": ""}
        self.checkmavg(**err4)   # no mavg condition: select mavg() from
        err5 = {"func": "mavg", "r_comm": ""}
        self.checkmavg(**err5)   # no brackets: select mavg col, k from
        err6 = {"fr": ""}
        self.checkmavg(**err6)   # no from
        err7 = {"k": ""}
        self.checkmavg(**err7)   # no k
        err8 = {"table_expr": ""}
        self.checkmavg(**err8)   # no table_expr

        # err9 = {"col": "st1"}
        # self.checkmavg(**err9) # col: tag
        err10 = {"col": 1}
        self.checkmavg(**err10)  # col: value
        err11 = {"col": "NULL"}
        self.checkmavg(**err11)  # col: NULL
        err12 = {"col": "%_"}
        self.checkmavg(**err12)  # col: %_
        err13 = {"col": "c3"}
        self.checkmavg(**err13)  # col: timestamp col
        err14 = {"col": "_c0"}
        self.checkmavg(**err14)  # col: primary key
        err15 = {"col": "avg(c1)"}
        self.checkmavg(**err15)  # expr col
        err16 = {"col": "c4"}
        self.checkmavg(**err16)  # binary col
        err17 = {"col": "c10"}
        self.checkmavg(**err17)  # nchar col
        err18 = {"col": "c6"}
        self.checkmavg(**err18)  # bool col
        err19 = {"col": "'c1'"}
        self.checkmavg(**err19)  # col: string
        err20 = {"col": None}
        self.checkmavg(**err20)  # col: None
        err21 = {"col": "''"}
        self.checkmavg(**err21)  # col: ''
        err22 = {"col": "tt1.c1"}
        self.checkmavg(**err22)  # not table_expr col
        err23 = {"col": "t1"}
        self.checkmavg(**err23)  # tbname
        err24 = {"col": "stb1"}
        self.checkmavg(**err24)  # stbname
        err25 = {"col": "db"}
        self.checkmavg(**err25)  # database name
        err26 = {"col": "True"}
        self.checkmavg(**err26)  # col: BOOL 1
        err27 = {"col": True}
        self.checkmavg(**err27)  # col: BOOL 2
        err28 = {"col": "*"}
        self.checkmavg(**err28)  # col: all col
        err29 = {"func": "mavg[", "r_comm": "]"}
        self.checkmavg(**err29)  # form: mavg[col, k]
        err30 = {"func": "mavg{", "r_comm": "}"}
        self.checkmavg(**err30)  # form: mavg{col, k}
        err31 = {"col": "[c1]"}
        self.checkmavg(**err31)  # form: mavg([col], k)
        err32 = {"col": "c1, c2"}
        self.checkmavg(**err32)  # form: mavg(col1, col2, k)
        err33 = {"col": "c1, 2"}
        self.checkmavg(**err33)  # form: mavg(col, k1, k2)
        err34 = {"alias": ", count(c1)"}
        self.checkmavg(**err34)  # mix with aggregate function 1
        err35 = {"alias": ", avg(c1)"}
        self.checkmavg(**err35)  # mix with aggregate function 2
        err36 = {"alias": ", min(c1)"}
        self.checkmavg(**err36)  # mix with select function 1
        err37 = {"alias": ", top(c1, 5)"}
        self.checkmavg(**err37)  # mix with select function 2
        err38 = {"alias": ", spread(c1)"}
        self.checkmavg(**err38)  # mix with calculation function 1
        err39 = {"alias": ", diff(c1)"}
        self.checkmavg(**err39)  # mix with calculation function 2
        # err40 = {"alias": "+ 2"}
        # self.checkmavg(**err40)  # mix with arithmetic 1
        # tdSql.query(" select mavg( c1 , 1 ) + 2 from t1 ")
        err41 = {"alias": "+ avg(c1)"}
        self.checkmavg(**err41)  # mix with arithmetic 2
        err42 = {"alias": ", c1"}
        self.checkmavg(**err42)  # mix with other col
        # err43 = {"table_expr": "stb1"}
        # self.checkmavg(**err43)  # select stb directly
        err44 = {
            "col": "stb1.c1",
            "table_expr": "stb1, stb2",
            "condition": "where stb1.ts=stb2.ts and stb1.st1=stb2.st2 order by stb1.ts"
        }
        self.checkmavg(**err44)  # stb join
        err45 = {
||||
"condition": "where ts>0 and ts < now interval(1h) fill(next)"
|
||||
}
|
||||
self.checkmavg(**err45) # interval
|
||||
err46 = {
|
||||
"table_expr": "t1",
|
||||
"condition": "group by c6"
|
||||
}
|
||||
self.checkmavg(**err46) # group by normal col
|
||||
err47 = {
|
||||
"table_expr": "stb1",
|
||||
"condition": "group by tbname slimit 1 "
|
||||
}
|
||||
# self.checkmavg(**err47) # with slimit
|
||||
err48 = {
|
||||
"table_expr": "stb1",
|
||||
"condition": "group by tbname slimit 1 soffset 1"
|
||||
}
|
||||
# self.checkmavg(**err48) # with soffset
|
||||
err49 = {"k": "2021-01-01 00:00:00.000"}
|
||||
self.checkmavg(**err49) # k: timestamp
|
||||
err50 = {"k": False}
|
||||
self.checkmavg(**err50) # k: False
|
||||
err51 = {"k": "%"}
|
||||
self.checkmavg(**err51) # k: special char
|
||||
err52 = {"k": ""}
|
||||
self.checkmavg(**err52) # k: ""
|
||||
err53 = {"k": None}
|
||||
self.checkmavg(**err53) # k: None
|
||||
err54 = {"k": "NULL"}
|
||||
self.checkmavg(**err54) # k: null
|
||||
err55 = {"k": "binary(4)"}
|
||||
self.checkmavg(**err55) # k: string
|
||||
err56 = {"k": "c1"}
|
||||
self.checkmavg(**err56) # k: sring,col name
|
||||
err57 = {"col": "c1, 1, c2"}
|
||||
self.checkmavg(**err57) # form: mavg(col1, k1, col2, k2)
|
||||
err58 = {"col": "c1 cc1"}
|
||||
self.checkmavg(**err58) # form: mavg(col newname, k)
|
||||
err59 = {"k": "'1'"}
|
||||
# self.checkmavg(**err59) # formL mavg(colm, "1")
|
||||
err60 = {"k": "-1-(-2)"}
|
||||
# self.checkmavg(**err60) # formL mavg(colm, -1-2)
|
||||
err61 = {"k": 1001}
|
||||
self.checkmavg(**err61) # k: right out of [1, 1000]
|
||||
err62 = {"k": -1}
|
||||
self.checkmavg(**err62) # k: negative number
|
||||
err63 = {"k": 0}
|
||||
self.checkmavg(**err63) # k: 0
|
||||
err64 = {"k": 2**63-1}
|
||||
self.checkmavg(**err64) # k: max(bigint)
|
||||
err65 = {"k": 1-2**63}
|
||||
# self.checkmavg(**err65) # k: min(bigint)
|
||||
err66 = {"k": -2**63}
|
||||
self.checkmavg(**err66) # k: NULL
|
||||
err67 = {"k": 0.999999}
|
||||
self.checkmavg(**err67) # k: left out of [1, 1000]
|
||||
err68 = {
|
||||
"table_expr": "stb1",
|
||||
"condition": "group by tbname order by tbname" # order by tbname not supported
|
||||
}
|
||||
self.checkmavg(**err68)
|
||||
|
||||
pass
|
||||
|
||||
    def mavg_test_data(self, tbnum: int, data_row: int, basetime: int) -> None:
        for i in range(tbnum):
            for j in range(data_row):
                tdSql.execute(
                    f"insert into t{i} values ("
                    f"{basetime + (j+1)*10}, {random.randint(-200, -1)}, {random.uniform(200, -1)}, {basetime + random.randint(-200, -1)}, "
                    f"'binary_{j}', {random.uniform(-200, -1)}, {random.choice([0,1])}, {random.randint(-200,-1)}, "
                    f"{random.randint(-200, -1)}, {random.randint(-127, -1)}, 'nchar_{j}' )"
                )

                tdSql.execute(
                    f"insert into t{i} values ("
                    f"{basetime - (j+1) * 10}, {random.randint(1, 200)}, {random.uniform(1, 200)}, {basetime - random.randint(1, 200)}, "
                    f"'binary_{j}_1', {random.uniform(1, 200)}, {random.choice([0, 1])}, {random.randint(1,200)}, "
                    f"{random.randint(1,200)}, {random.randint(1,127)}, 'nchar_{j}_1' )"
                )
                tdSql.execute(
                    f"insert into tt{i} values ( {basetime-(j+1) * 10}, {random.randint(1, 200)} )"
                )

        pass

    def mavg_test_table(self, tbnum: int) -> None:
        tdSql.execute("drop database if exists db")
        tdSql.execute("create database if not exists db keep 3650")
        tdSql.execute("use db")

        tdSql.execute(
            "create stable db.stb1 (\
            ts timestamp, c1 int, c2 float, c3 timestamp, c4 binary(16), c5 double, c6 bool, \
            c7 bigint, c8 smallint, c9 tinyint, c10 nchar(16)\
            ) \
            tags(st1 int)"
        )
        tdSql.execute(
            "create stable db.stb2 (ts timestamp, c1 int) tags(st2 int)"
        )
        for i in range(tbnum):
            tdSql.execute(f"create table t{i} using stb1 tags({i})")
            tdSql.execute(f"create table tt{i} using stb2 tags({i})")

        pass

    def mavg_test_run(self):
        tdLog.printNoPrefix("==========TD-10594==========")
        tbnum = 10
        nowtime = int(round(time.time() * 1000))
        per_table_rows = 2
        self.mavg_test_table(tbnum)

        tdLog.printNoPrefix("######## no data test:")
        self.mavg_current_query()
        self.mavg_error_query()

        tdLog.printNoPrefix("######## insert only NULL test:")
        for i in range(tbnum):
            tdSql.execute(f"insert into t{i}(ts) values ({nowtime - 5})")
            tdSql.execute(f"insert into t{i}(ts) values ({nowtime + 5})")
        self.mavg_current_query()
        self.mavg_error_query()

        tdLog.printNoPrefix("######## insert data in the range near the max(bigint/double):")
        # self.mavg_test_table(tbnum)
        # tdSql.execute(f"insert into t1(ts, c1,c2,c5,c7) values "
        #               f"({nowtime - (per_table_rows + 1) * 10}, {2**31-1}, {3.4*10**38}, {1.7*10**308}, {2**63-1})")
        # tdSql.execute(f"insert into t1(ts, c1,c2,c5,c7) values "
        #               f"({nowtime - (per_table_rows + 2) * 10}, {2**31-1}, {3.4*10**38}, {1.7*10**308}, {2**63-1})")
        # self.mavg_current_query()
        # self.mavg_error_query()

        tdLog.printNoPrefix("######## insert data in the range near the min(bigint/double):")
        # self.mavg_test_table(tbnum)
        # tdSql.execute(f"insert into t1(ts, c1,c2,c5,c7) values "
        #               f"({nowtime - (per_table_rows + 1) * 10}, {1-2**31}, {-3.4*10**38}, {-1.7*10**308}, {1-2**63})")
        # tdSql.execute(f"insert into t1(ts, c1,c2,c5,c7) values "
        #               f"({nowtime - (per_table_rows + 2) * 10}, {1-2**31}, {-3.4*10**38}, {-1.7*10**308}, {512-2**63})")
        # self.mavg_current_query()
        # self.mavg_error_query()

        tdLog.printNoPrefix("######## insert data without NULL data test:")
        self.mavg_test_table(tbnum)
        self.mavg_test_data(tbnum, per_table_rows, nowtime)
        self.mavg_current_query()
        self.mavg_error_query()

        tdLog.printNoPrefix("######## insert data mix with NULL test:")
        for i in range(tbnum):
            tdSql.execute(f"insert into t{i}(ts) values ({nowtime})")
            tdSql.execute(f"insert into t{i}(ts) values ({nowtime-(per_table_rows+3)*10})")
            tdSql.execute(f"insert into t{i}(ts) values ({nowtime+(per_table_rows+3)*10})")
        self.mavg_current_query()
        self.mavg_error_query()

        tdLog.printNoPrefix("######## check after WAL test:")
        tdSql.query("show dnodes")
        index = tdSql.getData(0, 0)
        tdDnodes.stop(index)
        tdDnodes.start(index)
        self.mavg_current_query()
        self.mavg_error_query()

    def run(self):
        import traceback
        try:
            # run in develop branch
            self.mavg_test_run()
            pass
        except Exception as e:
            traceback.print_exc()
            raise e

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
File diff suppressed because it is too large
@@ -11,6 +11,7 @@

# -*- coding: utf-8 -*-

from platform import java_ver
from util.log import *
from util.cases import *
from util.sql import *
@@ -41,147 +42,21 @@ class TDTestCase:

        # percentile verification
        tdSql.error("select percentile(ts ,20) from test")
        tdSql.error("select apercentile(ts ,20) from test")
        tdSql.error("select percentile(col7 ,20) from test")
        tdSql.error("select apercentile(col7 ,20) from test")
        tdSql.error("select percentile(col8 ,20) from test")
        tdSql.error("select apercentile(col8 ,20) from test")
        tdSql.error("select percentile(col9 ,20) from test")
        tdSql.error("select apercentile(col9 ,20) from test")

        tdSql.query("select percentile(col1, 0) from test")
        tdSql.checkData(0, 0, np.percentile(intData, 0))
        tdSql.query("select apercentile(col1, 0) from test")
        print("apercentile result: %s" % tdSql.getData(0, 0))
        tdSql.query("select percentile(col1, 50) from test")
        tdSql.checkData(0, 0, np.percentile(intData, 50))
        tdSql.query("select apercentile(col1, 50) from test")
        print("apercentile result: %s" % tdSql.getData(0, 0))
        tdSql.query("select percentile(col1, 100) from test")
        tdSql.checkData(0, 0, np.percentile(intData, 100))
        tdSql.query("select apercentile(col1, 100) from test")
        print("apercentile result: %s" % tdSql.getData(0, 0))

        tdSql.query("select percentile(col2, 0) from test")
        tdSql.checkData(0, 0, np.percentile(intData, 0))
        tdSql.query("select apercentile(col2, 0) from test")
        print("apercentile result: %s" % tdSql.getData(0, 0))
        tdSql.query("select percentile(col2, 50) from test")
        tdSql.checkData(0, 0, np.percentile(intData, 50))
        tdSql.query("select apercentile(col2, 50) from test")
        print("apercentile result: %s" % tdSql.getData(0, 0))
        tdSql.query("select percentile(col2, 100) from test")
        tdSql.checkData(0, 0, np.percentile(intData, 100))
        tdSql.query("select apercentile(col2, 100) from test")
        print("apercentile result: %s" % tdSql.getData(0, 0))

        tdSql.query("select percentile(col3, 0) from test")
        tdSql.checkData(0, 0, np.percentile(intData, 0))
        tdSql.query("select apercentile(col3, 0) from test")
        print("apercentile result: %s" % tdSql.getData(0, 0))
        tdSql.query("select percentile(col3, 50) from test")
        tdSql.checkData(0, 0, np.percentile(intData, 50))
        tdSql.query("select apercentile(col3, 50) from test")
        print("apercentile result: %s" % tdSql.getData(0, 0))
        tdSql.query("select percentile(col3, 100) from test")
        tdSql.checkData(0, 0, np.percentile(intData, 100))
        tdSql.query("select apercentile(col3, 100) from test")
        print("apercentile result: %s" % tdSql.getData(0, 0))

        tdSql.query("select percentile(col4, 0) from test")
        tdSql.checkData(0, 0, np.percentile(intData, 0))
        tdSql.query("select apercentile(col4, 0) from test")
        print("apercentile result: %s" % tdSql.getData(0, 0))
        tdSql.query("select percentile(col4, 50) from test")
        tdSql.checkData(0, 0, np.percentile(intData, 50))
        tdSql.query("select apercentile(col4, 50) from test")
        print("apercentile result: %s" % tdSql.getData(0, 0))
        tdSql.query("select percentile(col4, 100) from test")
        tdSql.checkData(0, 0, np.percentile(intData, 100))
        tdSql.query("select apercentile(col4, 100) from test")
        print("apercentile result: %s" % tdSql.getData(0, 0))

        tdSql.query("select percentile(col11, 0) from test")
        tdSql.checkData(0, 0, np.percentile(intData, 0))
        tdSql.query("select apercentile(col11, 0) from test")
        print("apercentile result: %s" % tdSql.getData(0, 0))
        tdSql.query("select percentile(col11, 50) from test")
        tdSql.checkData(0, 0, np.percentile(intData, 50))
        tdSql.query("select apercentile(col11, 50) from test")
        print("apercentile result: %s" % tdSql.getData(0, 0))
        tdSql.query("select percentile(col11, 100) from test")
        tdSql.checkData(0, 0, np.percentile(intData, 100))
        tdSql.query("select apercentile(col11, 100) from test")
        print("apercentile result: %s" % tdSql.getData(0, 0))

        tdSql.query("select percentile(col12, 0) from test")
        tdSql.checkData(0, 0, np.percentile(intData, 0))
        tdSql.query("select apercentile(col12, 0) from test")
        print("apercentile result: %s" % tdSql.getData(0, 0))
        tdSql.query("select percentile(col12, 50) from test")
        tdSql.checkData(0, 0, np.percentile(intData, 50))
        tdSql.query("select apercentile(col12, 50) from test")
        print("apercentile result: %s" % tdSql.getData(0, 0))
        tdSql.query("select percentile(col12, 100) from test")
        tdSql.checkData(0, 0, np.percentile(intData, 100))
        tdSql.query("select apercentile(col12, 100) from test")
        print("apercentile result: %s" % tdSql.getData(0, 0))

        tdSql.query("select percentile(col13, 0) from test")
        tdSql.checkData(0, 0, np.percentile(intData, 0))
        tdSql.query("select apercentile(col13, 0) from test")
        print("apercentile result: %s" % tdSql.getData(0, 0))
        tdSql.query("select percentile(col13, 50) from test")
        tdSql.checkData(0, 0, np.percentile(intData, 50))
        tdSql.query("select apercentile(col13, 50) from test")
        print("apercentile result: %s" % tdSql.getData(0, 0))
        tdSql.query("select percentile(col13, 100) from test")
        tdSql.checkData(0, 0, np.percentile(intData, 100))
        tdSql.query("select apercentile(col13, 100) from test")
        print("apercentile result: %s" % tdSql.getData(0, 0))

        tdSql.query("select percentile(col14, 0) from test")
        tdSql.checkData(0, 0, np.percentile(intData, 0))
        tdSql.query("select apercentile(col14, 0) from test")
        print("apercentile result: %s" % tdSql.getData(0, 0))
        tdSql.query("select percentile(col14, 50) from test")
        tdSql.checkData(0, 0, np.percentile(intData, 50))
        tdSql.query("select apercentile(col14, 50) from test")
        print("apercentile result: %s" % tdSql.getData(0, 0))
        tdSql.query("select percentile(col14, 100) from test")
        tdSql.checkData(0, 0, np.percentile(intData, 100))
        tdSql.query("select apercentile(col14, 100) from test")
        print("apercentile result: %s" % tdSql.getData(0, 0))

        tdSql.query("select percentile(col5, 0) from test")
        print("query result: %s" % tdSql.getData(0, 0))
        print("array result: %s" % np.percentile(floatData, 0))
        tdSql.query("select apercentile(col5, 0) from test")
        print("apercentile result: %s" % tdSql.getData(0, 0))
        tdSql.query("select percentile(col5, 50) from test")
        print("query result: %s" % tdSql.getData(0, 0))
        print("array result: %s" % np.percentile(floatData, 50))
        tdSql.query("select apercentile(col5, 50) from test")
        print("apercentile result: %s" % tdSql.getData(0, 0))
        tdSql.query("select percentile(col5, 100) from test")
        print("query result: %s" % tdSql.getData(0, 0))
        print("array result: %s" % np.percentile(floatData, 100))
        tdSql.query("select apercentile(col5, 100) from test")
        print("apercentile result: %s" % tdSql.getData(0, 0))

        tdSql.query("select percentile(col6, 0) from test")
        tdSql.checkData(0, 0, np.percentile(floatData, 0))
        tdSql.query("select apercentile(col6, 0) from test")
        print("apercentile result: %s" % tdSql.getData(0, 0))
        tdSql.query("select percentile(col6, 50) from test")
        tdSql.checkData(0, 0, np.percentile(floatData, 50))
        tdSql.query("select apercentile(col6, 50) from test")
        print("apercentile result: %s" % tdSql.getData(0, 0))
        tdSql.query("select percentile(col6, 100) from test")
        tdSql.checkData(0, 0, np.percentile(floatData, 100))
        tdSql.query("select apercentile(col6, 100) from test")
        print("apercentile result: %s" % tdSql.getData(0, 0))
        tdSql.error("select percentile(col9 ,20) from test")
        column_list = [1,2,3,4,11,12,13,14]
        percent_list = [0,50,100]
        for i in column_list:
            for j in percent_list:
                tdSql.query(f"select percentile(col{i}, {j}) from test")
                tdSql.checkData(0, 0, np.percentile(intData, j))

        for i in [5,6]:
            for j in percent_list:
                tdSql.query(f"select percentile(col{i}, {j}) from test")
                tdSql.checkData(0, 0, np.percentile(floatData, j))

        tdSql.execute("create table meters (ts timestamp, voltage int) tags(loc nchar(20))")
        tdSql.execute("create table t0 using meters tags('beijing')")
        tdSql.execute("create table t1 using meters tags('shanghai')")

@@ -189,9 +64,8 @@ class TDTestCase:
            tdSql.execute("insert into t0 values(%d, %d)" % (self.ts + i, i + 1))
            tdSql.execute("insert into t1 values(%d, %d)" % (self.ts + i, i + 1))

        tdSql.error("select percentile(voltage, 20) from meters")
        tdSql.query("select apercentile(voltage, 20) from meters")
        print("apercentile result: %s" % tdSql.getData(0, 0))
        # tdSql.error("select percentile(voltage, 20) from meters")


        tdSql.execute("create table st(ts timestamp, k int)")

@@ -0,0 +1,863 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

from pstats import Stats
import sys
import subprocess
import random
import math
import numpy as np
import inspect
import re
import taos

from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor())

    def sample_query_form(self, sel="select", func="sample(", col="c1", m_comm=",", k=1, r_comm=")", alias="", fr="from", table_expr="t1", condition=""):
        '''
        sample function:

        :param sel: string, must be "select", required parameter;
        :param func: string, in this case must be "sample(", otherwise another function is queried, required parameter;
        :param col: string, column name, required parameter;
        :param m_comm: string, comma between col and k, required parameter;
        :param k: int/float, the number of sampled rows, in [1, 1000], required parameter;
        :param r_comm: string, must be ")", used together with "(" in func, required parameter;
        :param alias: string, alias for the result column, or an additional function;
        :param fr: string, must be "from", required parameter;
        :param table_expr: string or expression, data source (e.g. table/stable name, result set), required parameter;
        :param condition: expression;
        :return: sample query statement, default: select sample(c1, 1) from t1
        '''

        return f"{sel} {func} {col} {m_comm} {k} {r_comm} {alias} {fr} {table_expr} {condition}"

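The idea behind `sample_query_form` is that every token of the SQL statement is a separate keyword argument, so each error case can corrupt exactly one token at a time. A minimal standalone sketch of that assembly (same defaults as the method above, outside the test harness):

```python
# Standalone sketch of sample_query_form: each SQL token is its own parameter,
# so a test case can override exactly one token (function name, comma, bracket, ...).
def sample_query_form(sel="select", func="sample(", col="c1", m_comm=",", k=1,
                      r_comm=")", alias="", fr="from", table_expr="t1", condition=""):
    return f"{sel} {func} {col} {m_comm} {k} {r_comm} {alias} {fr} {table_expr} {condition}"

print(sample_query_form())                    # default query form
print(sample_query_form(func="mavg(", k=3))   # overriding one token changes the function
```

Whitespace between tokens is irrelevant to the SQL parser, which is why the builder can simply join every parameter with single spaces.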
    def checksample(self, sel="select", func="sample(", col="c1", m_comm=",", k=1, r_comm=")", alias="", fr="from", table_expr="t1", condition=""):
        # print(self.sample_query_form(sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
        #                              table_expr=table_expr, condition=condition))
        line = sys._getframe().f_back.f_lineno

        if not all([sel, func, col, m_comm, k, r_comm, fr, table_expr]):
            print(f"case in {line}: ", end='')
            return tdSql.error(self.sample_query_form(
                sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                table_expr=table_expr, condition=condition
            ))

        sql = "select * from t1"
        collist = tdSql.getColNameList(sql)

        if not isinstance(col, str):
            print(f"case in {line}: ", end='')
            return tdSql.error(self.sample_query_form(
                sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                table_expr=table_expr, condition=condition
            ))

        if len([x for x in col.split(",") if x.strip()]) != 1:
            print(f"case in {line}: ", end='')
            return tdSql.error(self.sample_query_form(
                sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                table_expr=table_expr, condition=condition
            ))

        col = col.replace(",", "").replace(" ", "")

        if any([re.compile('^[a-zA-Z]{1}.*$').match(col) is None, not col.replace(".", "").isalnum()]):
            print(f"case in {line}: ", end='')
            return tdSql.error(self.sample_query_form(
                sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                table_expr=table_expr, condition=condition
            ))

        if '.' in col:
            if any([col.split(".")[0] not in table_expr, col.split(".")[1] not in collist]):
                print(f"case in {line}: ", end='')
                return tdSql.error(self.sample_query_form(
                    sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                    table_expr=table_expr, condition=condition
                ))
            pass

        if "." not in col:
            if col not in collist:
                print(f"case in {line}: ", end='')
                return tdSql.error(self.sample_query_form(
                    sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                    table_expr=table_expr, condition=condition
                ))
            pass

        # colname = col if "." not in col else col.split(".")[1]
        # col_index = collist.index(colname)
        # if any([tdSql.cursor.istype(col_index, "TIMESTAMP"), tdSql.cursor.istype(col_index, "BOOL")]):
        #     print(f"case in {line}: ", end='')
        #     return tdSql.error(self.sample_query_form(
        #         sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
        #         table_expr=table_expr, condition=condition
        #     ))
        #
        # if any([tdSql.cursor.istype(col_index, "BINARY"), tdSql.cursor.istype(col_index, "NCHAR")]):
        #     print(f"case in {line}: ", end='')
        #     return tdSql.error(self.sample_query_form(
        #         sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
        #         table_expr=table_expr, condition=condition
        #     ))

        if any([func != "sample(", r_comm != ")", fr != "from", sel != "select"]):
            print(f"case in {line}: ", end='')
            return tdSql.error(self.sample_query_form(
                sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                table_expr=table_expr, condition=condition
            ))

        if all(["(" not in table_expr, "stb" in table_expr, "group" not in condition.lower()]):
            print(f"case in {line}: ", end='')
            return tdSql.error(self.sample_query_form(
                sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                table_expr=table_expr, condition=condition
            ))

        if all(["group" in condition.lower(), "tbname" not in condition.lower()]):
            print(f"case in {line}: ", end='')
            return tdSql.error(self.sample_query_form(
                sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                table_expr=table_expr, condition=condition
            ))

        alias_list = ["tbname", "_c0", "st", "ts"]
        if all([alias, "," not in alias]):
            if any([not alias.isalnum(), re.compile('^[a-zA-Z]{1}.*$').match(col) is None]):
                # column aliases may also contain "_", but that is not allowed in this case
                print(f"case in {line}: ", end='')
                return tdSql.error(self.sample_query_form(
                    sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                    table_expr=table_expr, condition=condition
                ))

        if all([alias, "," in alias]):
            if all(parm != alias.lower().split(",")[1].strip() for parm in alias_list):
                print(f"case in {line}: ", end='')
                return tdSql.error(self.sample_query_form(
                    sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                    table_expr=table_expr, condition=condition
                ))
            pass

        condition_exception = ["-", "+", "/", "*", "~", "^", "insert", "distinct",
                               "count", "avg", "twa", "irate", "sum", "stddev", "leastsquares",
                               "min", "max", "first", "last", "top", "bottom", "percentile",
                               "apercentile", "last_row", "interp", "diff", "derivative",
                               "spread", "ceil", "floor", "round", "interval", "fill", "slimit", "soffset"]
        if "union" not in condition.lower():
            if any(parm in condition.lower().strip() for parm in condition_exception):

                print(f"case in {line}: ", end='')
                return tdSql.error(self.sample_query_form(
                    sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                    table_expr=table_expr, condition=condition
                ))
            pass

        if not any([isinstance(k, int), isinstance(k, float)]):
            print(f"case in {line}: ", end='')
            return tdSql.error(self.sample_query_form(
                col=col, k=k, alias=alias, table_expr=table_expr, condition=condition
            ))

        if not (1 <= k < 1001):
            print(f"case in {line}: ", end='')
            return tdSql.error(self.sample_query_form(
                col=col, k=k, alias=alias, table_expr=table_expr, condition=condition
            ))

        k = int(k // 1)
        pre_sql = re.sub(r"sample\([a-z0-9 .,]*\)", f"count({col})", self.sample_query_form(
            col=col, table_expr=table_expr, condition=condition
        ))
        tdSql.query(pre_sql)
        if tdSql.queryRows == 0:
            tdSql.query(self.sample_query_form(
                sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                table_expr=table_expr, condition=condition
            ))
            print(f"case in {line}: ", end='')
            tdSql.checkRows(0)
            return

        tdSql.query(self.sample_query_form(
            sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
            table_expr=table_expr, condition=condition
        ))

        sample_result = tdSql.queryResult
        sample_len = tdSql.queryRows

        if "group" in condition:
            tb_condition = condition.split("group by")[1].split(" ")[1]
            tdSql.query(f"select distinct {tb_condition} from {table_expr}")
            query_result = tdSql.queryResult
            query_rows = tdSql.queryRows
            clear_condition = re.sub('order by [0-9a-z]*|slimit [0-9]*|soffset [0-9]*', "", condition)

            pre_row = 0
            for i in range(query_rows):
                group_name = query_result[i][0]
                if "where" in clear_condition:
                    pre_condition = re.sub('group by [0-9a-z]*', f"and {tb_condition}='{group_name}' and {col} is not null", clear_condition)
                else:
                    pre_condition = "where " + re.sub('group by [0-9a-z]*', f"{tb_condition}='{group_name}' and {col} is not null", clear_condition)

                tdSql.query(f"select ts, {col} {alias} from {table_expr} {pre_condition}")
                # pre_data = np.array(tdSql.queryResult)[np.array(tdSql.queryResult) != None]
                # pre_sample = np.convolve(pre_data, np.ones(k), "valid")/k
                pre_sample = tdSql.queryResult
                pre_len = tdSql.queryRows
                step = pre_len if pre_len < k else k
                # tdSql.query(self.sample_query_form(
                #     sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                #     table_expr=table_expr, condition=condition
                # ))
                for j in range(step):
                    if sample_result[pre_row:pre_row+step][j] not in pre_sample:
                        tdLog.exit(f"case in {line} is failed: sample data is not in {group_name}")
                    else:
                        tdLog.info(f"case in {line} is success: sample data is in {group_name}")

                # for j in range(len(pre_sample)):
                #     print(f"case in {line}:", end='')
                #     tdSql.checkData(pre_row+j, 1, pre_sample[j])
                pre_row += step
            return
        elif "union" in condition:
            union_sql_0 = self.sample_query_form(
                sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                table_expr=table_expr, condition=condition
            ).split("union all")[0]

            union_sql_1 = self.sample_query_form(
                sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                table_expr=table_expr, condition=condition
            ).split("union all")[1]

            tdSql.query(union_sql_0)
            # union_sample_0 = tdSql.queryResult
            row_union_0 = tdSql.queryRows

            tdSql.query(union_sql_1)
            # union_sample_1 = tdSql.queryResult
            row_union_1 = tdSql.queryRows

            tdSql.query(self.sample_query_form(
                sel=sel, func=func, col=col, m_comm=m_comm, k=k, r_comm=r_comm, alias=alias, fr=fr,
                table_expr=table_expr, condition=condition
            ))
            # for i in range(tdSql.queryRows):
            #     print(f"case in {line}: ", end='')
            #     if i < row_union_0:
            #         tdSql.checkData(i, 1, union_sample_0[i][1])
            #     else:
            #         tdSql.checkData(i, 1, union_sample_1[i-row_union_0][1])
            if row_union_0 + row_union_1 != sample_len:
                tdLog.exit(f"case in {line} is failed: sample data is not in ")
            else:
                tdLog.info(f"case in {line} is success: sample data is in ")
            return

        else:
            if "where" in condition:
                condition = re.sub('where', f"where {col} is not null and ", condition)
            else:
                condition = f"where {col} is not null " + condition
            print(f"select ts, {col} {alias} from {table_expr} {re.sub('limit [0-9]*|offset [0-9]*', '', condition)}")
            tdSql.query(f"select ts, {col} {alias} from {table_expr} {re.sub('limit [0-9]*|offset [0-9]*', '', condition)}")
            # offset_val = condition.split("offset")[1].split(" ")[1] if "offset" in condition else 0
            pre_sample = tdSql.queryResult
            # pre_len = tdSql.queryRows
            # for i in range(sample_len):
            #     if sample_result[pre_row:pre_row + step][i] not in pre_sample:
            #         tdLog.exit(f"case in {line} is failed: sample data is not in {group_name}")
            #     else:
            #         tdLog.info(f"case in {line} is success: sample data is in {group_name}")

            pass

    def sample_current_query(self) :

        # table schema: ts timestamp, c1 int, c2 float, c3 timestamp, c4 binary(16), c5 double, c6 bool,
        #               c7 bigint, c8 smallint, c9 tinyint, c10 nchar(16)

        # case1~6: numeric col: int/bigint/tinyint/smallint/float/double
        self.checksample()
        case2 = {"col": "c2"}
        self.checksample(**case2)
        case3 = {"col": "c5"}
        self.checksample(**case3)
        case4 = {"col": "c7"}
        self.checksample(**case4)
        case5 = {"col": "c8"}
        self.checksample(**case5)
        case6 = {"col": "c9"}
        self.checksample(**case6)

        # # case7~8: nested query
        # case7 = {"table_expr": "(select c1 from stb1)"}
        # self.checksample(**case7)
        # case8 = {"table_expr": "(select sample(c1, 1) c1 from stb1 group by tbname)"}
        # self.checksample(**case8)

        # case9~10: mix with tbname/ts/tag/col
        # case9 = {"alias": ", tbname"}
        # self.checksample(**case9)
        # case10 = {"alias": ", _c0"}
        # self.checksample(**case10)
        case11 = {"alias": ", st1"}
        self.checksample(**case11)
        case12 = {"alias": ", c1"}
        self.checksample(**case12)

        # case13~15: with single condition
        case13 = {"condition": "where c1 <= 10"}
        self.checksample(**case13)
        case14 = {"condition": "where c6 in (0, 1)"}
        self.checksample(**case14)
        case15 = {"condition": "where c1 between 1 and 10"}
        self.checksample(**case15)

        # case16: with multi-condition
        case16 = {"condition": "where c6=1 or c6 =0"}
        self.checksample(**case16)

        # # case17: only support normal table join
        # case17 = {
        #     "col": "t1.c1",
        #     "table_expr": "t1, t2",
        #     "condition": "where t1.ts=t2.ts"
        # }
        # self.checksample(**case17)
        # # case18~19: with group by
        # case19 = {
        #     "table_expr": "stb1",
        #     "condition": "partition by tbname"
        # }
        # self.checksample(**case19)

        # # case20~21: with order by
        # case20 = {"condition": "order by ts"}
        # self.checksample(**case20)
        # case21 = {
        #     "table_expr": "stb1",
        #     "condition": "partition by tbname order by tbname"
        # }
        # self.checksample(**case21)

        # case22: with union
        case22 = {
            "condition": "union all select sample( c1 , 1 ) from t2"
        }
        self.checksample(**case22)

        # case23: with limit/slimit
        case23 = {
            "condition": "limit 1"
        }
        self.checksample(**case23)

        # case24~26: value k range [1, 1000], can be int or float, k = floor(k)
        case24 = {"k": 3}
        self.checksample(**case24)
        case25 = {"k": 2.999}
        self.checksample(**case25)
        case26 = {"k": 1000}
        self.checksample(**case26)
        case27 = {
            "table_expr": "stb1",
            "condition": "group by tbname slimit 1 "
        }
        self.checksample(**case27)  # with slimit
        case28 = {
            "table_expr": "stb1",
            "condition": "group by tbname slimit 1 soffset 1"
        }
        self.checksample(**case28)  # with soffset

        pass

    def sample_error_query(self) -> None :
        # unusual test

        # form test
        err1 = {"col": ""}
        self.checksample(**err1)  # no col
        err2 = {"sel": ""}
        self.checksample(**err2)  # no select
        err3 = {"func": "sample", "col": "", "m_comm": "", "k": "", "r_comm": ""}
        self.checksample(**err3)  # no sample condition: select sample from
        err4 = {"col": "", "m_comm": "", "k": ""}
        self.checksample(**err4)  # no sample condition: select sample() from
        err5 = {"func": "sample", "r_comm": ""}
        self.checksample(**err5)  # no brackets: select sample col, k from
        err6 = {"fr": ""}
        self.checksample(**err6)  # no from
        err7 = {"k": ""}
        self.checksample(**err7)  # no k
        err8 = {"table_expr": ""}
        self.checksample(**err8)  # no table_expr

        # err9 = {"col": "st1"}
        # self.checksample(**err9)  # col: tag
        tdSql.query(" select sample(st1 ,1) from t1 ")
        err10 = {"col": 1}
        self.checksample(**err10)  # col: value
        err11 = {"col": "NULL"}
        self.checksample(**err11)  # col: NULL
        err12 = {"col": "%_"}
        self.checksample(**err12)  # col: %_
        err13 = {"col": "c3"}
        self.checksample(**err13)  # col: timestamp col
        err14 = {"col": "_c0"}
        # self.checksample(**err14)  # col: primary key
        err15 = {"col": "avg(c1)"}
        # self.checksample(**err15)  # expr col
        err16 = {"col": "c4"}
        self.checksample(**err16)  # binary col
        err17 = {"col": "c10"}
        self.checksample(**err17)  # nchar col
        err18 = {"col": "c6"}
        self.checksample(**err18)  # bool col
        err19 = {"col": "'c1'"}
        self.checksample(**err19)  # col: string
        err20 = {"col": None}
        self.checksample(**err20)  # col: None
        err21 = {"col": "''"}
        self.checksample(**err21)  # col: ''
        err22 = {"col": "tt1.c1"}
        self.checksample(**err22)  # not table_expr col
        err23 = {"col": "t1"}
        self.checksample(**err23)  # tbname
        err24 = {"col": "stb1"}
        self.checksample(**err24)  # stbname
        err25 = {"col": "db"}
        self.checksample(**err25)  # database name
        err26 = {"col": "True"}
        self.checksample(**err26)  # col: BOOL 1
        err27 = {"col": True}
        self.checksample(**err27)  # col: BOOL 2
        err28 = {"col": "*"}
        self.checksample(**err28)  # col: all col
        err29 = {"func": "sample[", "r_comm": "]"}
        self.checksample(**err29)  # form: sample[col, k]
        err30 = {"func": "sample{", "r_comm": "}"}
        self.checksample(**err30)  # form: sample{col, k}
        err31 = {"col": "[c1]"}
        self.checksample(**err31)  # form: sample([col], k)
        err32 = {"col": "c1, c2"}
        self.checksample(**err32)  # form: sample(col, col2, k)
        err33 = {"col": "c1, 2"}
        self.checksample(**err33)  # form: sample(col, k1, k2)
        err34 = {"alias": ", count(c1)"}
        self.checksample(**err34)  # mix with aggregate function 1
        err35 = {"alias": ", avg(c1)"}
        self.checksample(**err35)  # mix with aggregate function 2
        err36 = {"alias": ", min(c1)"}
        self.checksample(**err36)  # mix with select function 1
        err37 = {"alias": ", top(c1, 5)"}
        self.checksample(**err37)  # mix with select function 2
        err38 = {"alias": ", spread(c1)"}
        self.checksample(**err38)  # mix with calculation function 1
        err39 = {"alias": ", diff(c1)"}
        self.checksample(**err39)  # mix with calculation function 2
        # err40 = {"alias": "+ 2"}
        # self.checksample(**err40)  # mix with arithmetic 1
        # tdSql.query(" select sample(c1 , 1) + 2 from t1 ")
        err41 = {"alias": "+ avg(c1)"}
        self.checksample(**err41)  # mix with arithmetic 2
        err42 = {"alias": ", c1"}
        self.checksample(**err42)  # mix with other col
        # err43 = {"table_expr": "stb1"}
        # self.checksample(**err43)  # select stb directly
        err44 = {
            "col": "stb1.c1",
            "table_expr": "stb1, stb2",
            "condition": "where stb1.ts=stb2.ts and stb1.st1=stb2.st2 order by stb1.ts"
        }
        self.checksample(**err44)  # stb join
        err45 = {
            "condition": "where ts>0 and ts < now interval(1h) fill(next)"
        }
        self.checksample(**err45)  # interval
        err46 = {
            "table_expr": "t1",
            "condition": "group by c6"
        }
        # self.checksample(**err46)  # group by normal col

        err49 = {"k": "2021-01-01 00:00:00.000"}
        self.checksample(**err49)  # k: timestamp
        err50 = {"k": False}
        self.checksample(**err50)  # k: False
        err51 = {"k": "%"}
        self.checksample(**err51)  # k: special char
        err52 = {"k": ""}
        self.checksample(**err52)  # k: ""
        err53 = {"k": None}
        self.checksample(**err53)  # k: None
        err54 = {"k": "NULL"}
        self.checksample(**err54)  # k: null
        err55 = {"k": "binary(4)"}
        self.checksample(**err55)  # k: string
        err56 = {"k": "c1"}
        self.checksample(**err56)  # k: string, col name
        err57 = {"col": "c1, 1, c2"}
        self.checksample(**err57)  # form: sample(col1, k1, col2, k2)
        err58 = {"col": "c1 cc1"}
        self.checksample(**err58)  # form: sample(col newname, k)
        err59 = {"k": "'1'"}
        # self.checksample(**err59)  # form: sample(col, "1")
        err60 = {"k": "-1-(-2)"}
        # self.checksample(**err60)  # form: sample(col, -1-(-2))
        err61 = {"k": 1001}
        self.checksample(**err61)  # k: right out of [1, 1000]
        err62 = {"k": -1}
        self.checksample(**err62)  # k: negative number
        err63 = {"k": 0}
        self.checksample(**err63)  # k: 0
        err64 = {"k": 2**63-1}
        self.checksample(**err64)  # k: max(bigint)
        err65 = {"k": 1-2**63}
        # self.checksample(**err65)  # k: min(bigint)
        err66 = {"k": -2**63}
        self.checksample(**err66)  # k: NULL
        err67 = {"k": 0.999999}
        self.checksample(**err67)  # k: left out of [1, 1000]

        pass

    def sample_test_data(self, tbnum:int, data_row:int, basetime:int) -> None :
        for i in range(tbnum):
            for j in range(data_row):
                tdSql.execute(
                    f"insert into t{i} values ("
                    f"{basetime + (j+1)*10}, {random.randint(-200, -1)}, {random.uniform(-200, -1)}, {basetime + random.randint(-200, -1)}, "
                    f"'binary_{j}', {random.uniform(-200, -1)}, {random.choice([0,1])}, {random.randint(-200,-1)}, "
                    f"{random.randint(-200, -1)}, {random.randint(-127, -1)}, 'nchar_{j}' )"
                )

                tdSql.execute(
                    f"insert into t{i} values ("
                    f"{basetime - (j+1) * 10}, {random.randint(1, 200)}, {random.uniform(1, 200)}, {basetime - random.randint(1, 200)}, "
                    f"'binary_{j}_1', {random.uniform(1, 200)}, {random.choice([0, 1])}, {random.randint(1,200)}, "
                    f"{random.randint(1,200)}, {random.randint(1,127)}, 'nchar_{j}_1' )"
                )
                tdSql.execute(
                    f"insert into tt{i} values ( {basetime-(j+1) * 10}, {random.randint(1, 200)} )"
                )

        pass

    def sample_test_table(self, tbnum: int) -> None :
        tdSql.execute("drop database if exists db")
        tdSql.execute("create database if not exists db keep 3650")
        tdSql.execute("use db")

        tdSql.execute(
            "create stable db.stb1 (\
                ts timestamp, c1 int, c2 float, c3 timestamp, c4 binary(16), c5 double, c6 bool, \
                c7 bigint, c8 smallint, c9 tinyint, c10 nchar(16)\
            ) \
            tags(st1 int)"
        )
        tdSql.execute(
            "create stable db.stb2 (ts timestamp, c1 int) tags(st2 int)"
        )
        for i in range(tbnum):
            tdSql.execute(f"create table t{i} using stb1 tags({i})")
            tdSql.execute(f"create table tt{i} using stb2 tags({i})")

        pass

    def check_sample(self, sample_query, origin_query):

        tdSql.query(origin_query)
        origin_datas = tdSql.queryResult

        tdSql.query(sample_query)
        sample_datas = tdSql.queryResult

        status = True
        for ind, sample_data in enumerate(sample_datas):
            if sample_data not in origin_datas:
                status = False

        if status:
            tdLog.info(" sample data is in datas groups, successful sql is : %s" % sample_query)
        else:
            tdLog.exit(" sample data is not in datas groups, failed sql is : %s" % sample_query)

    def basic_sample_query(self):
        tdSql.execute(" drop database if exists db ")
        tdSql.execute(" create database if not exists db days 300 ")
        tdSql.execute(" use db ")
        tdSql.execute(
            '''create table stb1
            (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
            tags (t1 int)
            '''
        )

        tdSql.execute(
            '''
            create table t1
            (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
            '''
        )
        for i in range(4):
            tdSql.execute(f'create table ct{i+1} using stb1 tags ( {i+1} )')

        for i in range(9):
            tdSql.execute(
                f"insert into ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
            )
            tdSql.execute(
                f"insert into ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
            )
        tdSql.execute("insert into ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar0', now()+8a )")
        tdSql.execute("insert into ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
        tdSql.execute("insert into ct1 values (now()+15s, 9, -99999, -999, -99, -9.99, NULL, 1, 'binary9', 'nchar9', now()+9a )")
        tdSql.execute("insert into ct1 values (now()+20s, 9, -99999, -999, NULL, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")

        tdSql.execute("insert into ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
        tdSql.execute("insert into ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
        tdSql.execute("insert into ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")

        tdSql.execute(
            f'''insert into t1 values
            ( '2020-04-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
            ( '2020-10-21 01:01:01.000', 1, 11111, 111, 11, 1.11, 11.11, 1, "binary1", "nchar1", now()+1a )
            ( '2020-12-31 01:01:01.000', 2, 22222, 222, 22, 2.22, 22.22, 0, "binary2", "nchar2", now()+2a )
            ( '2021-01-01 01:01:06.000', 3, 33333, 333, 33, 3.33, 33.33, 0, "binary3", "nchar3", now()+3a )
            ( '2021-05-07 01:01:10.000', 4, 44444, 444, 44, 4.44, 44.44, 1, "binary4", "nchar4", now()+4a )
            ( '2021-07-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
            ( '2021-09-30 01:01:16.000', 5, 55555, 555, 55, 5.55, 55.55, 0, "binary5", "nchar5", now()+5a )
            ( '2022-02-01 01:01:20.000', 6, 66666, 666, 66, 6.66, 66.66, 1, "binary6", "nchar6", now()+6a )
            ( '2022-10-28 01:01:26.000', 7, 00000, 000, 00, 0.00, 00.00, 1, "binary7", "nchar7", "1970-01-01 08:00:00.000" )
            ( '2022-12-01 01:01:30.000', 8, -88888, -888, -88, -8.88, -88.88, 0, "binary8", "nchar8", "1969-01-01 01:00:00.000" )
            ( '2022-12-31 01:01:36.000', 9, -99999999999999999, -999, -99, -9.99, -999999999999999999999.99, 1, "binary9", "nchar9", "1900-01-01 00:00:00.000" )
            ( '2023-02-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
            '''
        )

        # basic query for sample

        # params test for all
        tdSql.error(" select sample(c1,c1) from t1 ")
        tdSql.error(" select sample(c1,now) from t1 ")
        tdSql.error(" select sample(c1,tbname) from t1 ")
        tdSql.error(" select sample(c1,ts) from t1 ")
        tdSql.error(" select sample(c1,false) from t1 ")
        tdSql.error(" select sample(123,1) from t1 ")

        tdSql.query(" select sample(c1,2) from t1 ")
        tdSql.checkRows(2)
        tdSql.query(" select sample(c1,10) from t1 ")
        tdSql.checkRows(9)
        tdSql.query(" select sample(c8,10) from t1 ")
        tdSql.checkRows(9)
        tdSql.query(" select sample(c1,999) from t1 ")
        tdSql.checkRows(9)
        tdSql.query(" select sample(c1,1000) from t1 ")
        tdSql.checkRows(9)
        tdSql.query(" select sample(c8,1000) from t1 ")
        tdSql.checkRows(9)
        tdSql.error(" select sample(c1,-1) from t1 ")

        # bug need fix
        # tdSql.query("select sample(c1 ,2) , 123 from stb1;")

        # all type support
        tdSql.query(" select sample(c1 , 20 ) from ct4 ")
        tdSql.checkRows(9)

        tdSql.query(" select sample(c2 , 20 ) from ct4 ")
        tdSql.checkRows(9)

        tdSql.query(" select sample(c3 , 20 ) from ct4 ")
        tdSql.checkRows(9)

        tdSql.query(" select sample(c4 , 20 ) from ct4 ")
        tdSql.checkRows(9)

        tdSql.query(" select sample(c5 , 20 ) from ct4 ")
        tdSql.checkRows(9)

        tdSql.query(" select sample(c6 , 20 ) from ct4 ")
        tdSql.checkRows(9)

        tdSql.query(" select sample(c7 , 20 ) from ct4 ")
        tdSql.checkRows(9)

        tdSql.query(" select sample(c8 , 20 ) from ct4 ")
        tdSql.checkRows(9)

        tdSql.query(" select sample(c9 , 20 ) from ct4 ")
        tdSql.checkRows(9)

        tdSql.query(" select sample(c10 , 20 ) from ct4 ")
        tdSql.checkRows(9)

        tdSql.query(" select sample(t1 , 20 ) from ct1 ")
        tdSql.checkRows(13)

        # filter data
        tdSql.query(" select sample(c1, 20 ) from t1 where c1 is null ")
        tdSql.checkRows(0)

        tdSql.query(" select sample(c1, 20 ) from t1 where c1 =6 ")
        tdSql.checkRows(1)

        tdSql.query(" select sample(c1, 20 ) from t1 where c1 > 6 ")
        tdSql.checkRows(3)

        self.check_sample("select sample(c1, 20 ) from t1 where c1 > 6", "select c1 from t1 where c1 > 6")

        tdSql.query(" select sample( c1 , 1 ) from t1 where c1 in (0, 1,2) ")
        tdSql.checkRows(1)

        tdSql.query("select sample( c1 ,3 ) from t1 where c1 between 1 and 10 ")
        tdSql.checkRows(3)

        self.check_sample("select sample( c1 ,3 ) from t1 where c1 between 1 and 10", "select c1 from t1 where c1 between 1 and 10")

        # join
        tdSql.query("select sample( ct4.c1 , 1 ) from ct1, ct4 where ct4.ts=ct1.ts")

        # partition by tbname
        tdSql.query("select sample(c1,2) from stb1 partition by tbname")
        tdSql.checkRows(4)

        self.check_sample("select sample(c1,2) from stb1 partition by tbname", "select c1 from stb1 partition by tbname")

        # nest query
        # tdSql.query("select sample(c1,2) from (select c1 from t1); ")
        # tdSql.checkRows(2)

        # union all
        tdSql.query("select sample(c1,2) from t1 union all select sample(c1,3) from t1")
        tdSql.checkRows(5)

        # fill interval

        # not support mix with other function
        tdSql.error("select top(c1,2) , sample(c1,2) from ct1")
        tdSql.error("select max(c1) , sample(c1,2) from ct1")
        tdSql.error("select c1 , sample(c1,2) from ct1")

        # bug for mix with scalar
        # tdSql.error("select 123 , sample(c1,100) from ct1")
        # tdSql.error("select sample(c1,100)+2 from ct1")
        # tdSql.error("select abs(sample(c1,100)) from ct1")

    def sample_test_run(self) :
        tdLog.printNoPrefix("==========TD-10594==========")
        tbnum = 10
        nowtime = int(round(time.time() * 1000))
        per_table_rows = 10
        self.sample_test_table(tbnum)

        tdLog.printNoPrefix("######## no data test:")
        self.sample_current_query()
        self.sample_error_query()

        tdLog.printNoPrefix("######## insert only NULL test:")
        for i in range(tbnum):
            tdSql.execute(f"insert into t{i}(ts) values ({nowtime - 5})")
            tdSql.execute(f"insert into t{i}(ts) values ({nowtime + 5})")
        self.sample_current_query()
        self.sample_error_query()

        tdLog.printNoPrefix("######## insert data in the range near the max(bigint/double):")
        # self.sample_test_table(tbnum)
        # tdSql.execute(f"insert into t1(ts, c1,c2,c5,c7) values "
        #               f"({nowtime - (per_table_rows + 1) * 10}, {2**31-1}, {3.4*10**38}, {1.7*10**308}, {2**63-1})")
        # tdSql.execute(f"insert into t1(ts, c1,c2,c5,c7) values "
        #               f"({nowtime - (per_table_rows + 2) * 10}, {2**31-1}, {3.4*10**38}, {1.7*10**308}, {2**63-1})")
        # self.sample_current_query()
        # self.sample_error_query()

        tdLog.printNoPrefix("######## insert data in the range near the min(bigint/double):")
        # self.sample_test_table(tbnum)
        # tdSql.execute(f"insert into t1(ts, c1,c2,c5,c7) values "
        #               f"({nowtime - (per_table_rows + 1) * 10}, {1-2**31}, {-3.4*10**38}, {-1.7*10**308}, {1-2**63})")
        # tdSql.execute(f"insert into t1(ts, c1,c2,c5,c7) values "
        #               f"({nowtime - (per_table_rows + 2) * 10}, {1-2**31}, {-3.4*10**38}, {-1.7*10**308}, {512-2**63})")
        # self.sample_current_query()
        # self.sample_error_query()

        tdLog.printNoPrefix("######## insert data without NULL data test:")
        self.sample_test_table(tbnum)
        self.sample_test_data(tbnum, per_table_rows, nowtime)
        self.sample_current_query()
        self.sample_error_query()

        tdLog.printNoPrefix("######## insert data mix with NULL test:")
        for i in range(tbnum):
            tdSql.execute(f"insert into t{i}(ts) values ({nowtime})")
            tdSql.execute(f"insert into t{i}(ts) values ({nowtime-(per_table_rows+3)*10})")
            tdSql.execute(f"insert into t{i}(ts) values ({nowtime+(per_table_rows+3)*10})")
        self.sample_current_query()
        self.sample_error_query()

        tdLog.printNoPrefix("######## check after WAL test:")
        tdSql.query("show dnodes")
        index = tdSql.getData(0, 0)
        tdDnodes.stop(index)
        tdDnodes.start(index)
        self.sample_current_query()
        self.sample_error_query()

        self.basic_sample_query()

    def run(self):
        import traceback
        try:
            # run in develop branch
            self.sample_test_run()
            pass
        except Exception as e:
            traceback.print_exc()
            raise e

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@ -89,14 +89,15 @@ class TDTestCase:
        tdSql.checkEqual(tdSql.queryResult,[(9,),(10,)])
        tdSql.query("select ts,top(col1, 2),ts from test1")
        tdSql.checkRows(2)

        tdSql.query("select top(col14, 100) from test")
        tdSql.checkRows(10)
        tdSql.query("select ts,top(col1, 2),ts from test group by tbname")
        tdSql.checkRows(2)
        tdSql.query('select top(col2,1) from test interval(1y) order by col2')
        tdSql.checkData(0,0,10)

        tdSql.error('select * from test where bottom(col2,1)=1')

        tdSql.error("select * from test where bottom(col2,1)=1")
        tdSql.error("select top(col14, 0) from test;")
    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)

@ -14,6 +14,8 @@ python3 ./test.py -f 0-others/udf_restart_taosd.py
python3 ./test.py -f 0-others/user_control.py
python3 ./test.py -f 0-others/fsync.py

python3 ./test.py -f 1-insert/opentsdb_telnet_line_taosc_insert.py

python3 ./test.py -f 2-query/between.py
python3 ./test.py -f 2-query/distinct.py
python3 ./test.py -f 2-query/varchar.py
@ -70,6 +72,14 @@ python3 ./test.py -f 2-query/arccos.py
python3 ./test.py -f 2-query/arctan.py
python3 ./test.py -f 2-query/query_cols_tags_and_or.py
# python3 ./test.py -f 2-query/nestedQuery.py
python3 ./test.py -f 2-query/nestedQuery_str.py
python3 ./test.py -f 2-query/avg.py
python3 ./test.py -f 2-query/elapsed.py
python3 ./test.py -f 2-query/csum.py
python3 ./test.py -f 2-query/mavg.py
python3 ./test.py -f 2-query/diff.py
python3 ./test.py -f 2-query/sample.py
python3 ./test.py -f 2-query/function_diff.py

python3 ./test.py -f 7-tmq/basic5.py
python3 ./test.py -f 7-tmq/subscribeDb.py
@ -81,4 +91,4 @@ python3 ./test.py -f 7-tmq/subscribeStb1.py
python3 ./test.py -f 7-tmq/subscribeStb2.py
python3 ./test.py -f 7-tmq/subscribeStb3.py
python3 ./test.py -f 7-tmq/subscribeStb4.py
python3 ./test.py -f 7-tmq/subscribeStb2.py
python3 ./test.py -f 7-tmq/subscribeStb2.py

@ -145,7 +145,11 @@ if __name__ == "__main__":
    if masterIp == "":
        host = '127.0.0.1'
    else:
        host = masterIp
        try:
            config = eval(masterIp)
            host = config["host"]
        except Exception as r:
            host = masterIp

    tdLog.info("Procedures for tdengine deployed in %s" % (host))
    if windows:

@ -1 +1 @@
Subproject commit 2f3dfddd4d9a869e706ba3cf98fb6d769404cd7c
Subproject commit 4d83d8c62973506f760bcaa3a33f4665ed9046d0