Merge branch '3.0' into merge/mainto3.0

This commit is contained in:
Shengliang Guan 2024-12-09 19:06:11 +08:00
commit b99f32b8f0
104 changed files with 5080 additions and 338 deletions

View File

@ -12,7 +12,7 @@ TDengine is configured by default with only one root user, who has the highest p
Only the root user can perform the operation of creating users, with the syntax as follows.
```sql
create user user_name pass'password' [sysinfo {1|0}]
create user user_name pass'password' [sysinfo {1|0}] [createdb {1|0}]
```
The parameters are explained as follows.
@ -20,6 +20,7 @@ The parameters are explained as follows.
- user_name: Up to 23 bytes long.
- password: Up to 128 bytes long; valid characters include letters and digits, plus special characters other than single quotes, double quotes, apostrophes, backslashes, and spaces; it cannot be empty.
- sysinfo: Whether the user can view system information. 1 means they can view it, 0 means they cannot. System information includes server configuration information, various node information such as dnode, query node (qnode), etc., as well as storage-related information, etc. The default is to view system information.
- createdb: Whether the user can create databases. 1 means they can create databases, 0 means they cannot. The default value is 0. // Supported starting from TDengine Enterprise version 3.3.2.0
The following SQL can create a user named test with the password 123456 who can view system information.
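A minimal form of that statement, following the syntax above (the exact SQL is elided in this hunk, so treat it as an illustrative sketch):

```sql
create user test pass '123456' sysinfo 1;
```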
@ -51,6 +52,7 @@ alter_user_clause: {
pass 'literal'
| enable value
| sysinfo value
| createdb value
}
```
@ -59,6 +61,7 @@ The parameters are explained as follows.
- pass: Modify the user's password.
- enable: Whether to enable the user. 1 means to enable this user, 0 means to disable this user.
- sysinfo: Whether the user can view system information. 1 means they can view system information, 0 means they cannot.
- createdb: Whether the user can create databases. 1 means they can create databases, 0 means they cannot. // Supported starting from TDengine Enterprise version 3.3.2.0
The following SQL disables the user test.
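A sketch of that statement, following the alter_user_clause syntax above (the exact SQL is elided in this hunk):

```sql
alter user test enable 0;
```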

View File

@ -130,11 +130,25 @@ The forward sliding time of SLIDING cannot exceed the time range of one window.
SELECT COUNT(*) FROM temp_tb_1 INTERVAL(1m) SLIDING(2m);
```
The INTERVAL clause allows the use of the AUTO keyword to specify the window offset. If the WHERE condition provides a clear applicable start time limit, the required offset will be automatically calculated, dividing the time window from that point; otherwise, it defaults to an offset of 0. Here are some simple examples:
```sql
-- With a start time limit, divide the time window from '2018-10-03 14:38:05'
SELECT COUNT(*) FROM meters WHERE _rowts >= '2018-10-03 14:38:05' INTERVAL (1m, AUTO);
-- Without a start time limit, defaults to an offset of 0
SELECT COUNT(*) FROM meters WHERE _rowts < '2018-10-03 15:00:00' INTERVAL (1m, AUTO);
-- Unclear start time limit, defaults to an offset of 0
SELECT COUNT(*) FROM meters WHERE _rowts - voltage > 1000000 INTERVAL (1m, AUTO);
```
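One way to picture how AUTO derives the offset (an illustrative sketch of the behavior described above, not TDengine's actual implementation): the first window is anchored at the WHERE start-time limit, so the offset is that start time modulo the interval, falling back to 0 when no clear limit exists.

```python
INTERVAL = 60  # a 1m window, expressed in seconds for simplicity

def auto_offset(start_ts=None):
    """Offset implied by INTERVAL(1m, AUTO): anchor the window grid at the
    start-time limit when the WHERE clause gives one, otherwise 0."""
    return 0 if start_ts is None else start_ts % INTERVAL

def window_start(ts, offset):
    """Start of the window containing ts, given the computed offset."""
    return (ts - offset) // INTERVAL * INTERVAL + offset

# A start limit 5 s past a minute boundary shifts every window by 5 s.
off = auto_offset(3605)          # e.g. 01:00:05 as seconds-of-day
print(off)                       # 5
print(window_start(3659, off))   # 3605: the window [01:00:05, 01:01:05)

# No usable start limit: windows stay aligned to offset 0.
print(window_start(3659, auto_offset()))  # 3600
```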
When using time windows, note:
- The window width of the aggregation interval is specified by the keyword INTERVAL, with a minimum of 10 milliseconds (10a). INTERVAL also supports an offset (which must be smaller than the interval), i.e. the offset of the time-window division relative to "UTC moment 0". The SLIDING clause specifies the forward increment of the aggregation interval, i.e. how far each window slides forward.
- When using the INTERVAL statement, unless in very special cases, it is required to configure the timezone parameter in the taos.cfg configuration files of both the client and server to the same value to avoid frequent cross-time zone conversions by time processing functions, which can cause severe performance impacts.
- The returned results have a strictly monotonically increasing time-series.
- When using AUTO as the window offset, if the window width unit is d (day), n (month), w (week), y (year), such as: INTERVAL(1d, AUTO), INTERVAL(3w, AUTO), the TSMA optimization cannot take effect. If TSMA is manually created on the target table, the statement will report an error and exit; in this case, you can explicitly specify the Hint SKIP_TSMA or not use AUTO as the window offset.
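The interplay of INTERVAL and SLIDING in the notes above can be sketched as follows (illustrative only): each window is `interval` wide, and consecutive windows start `sliding` apart.

```python
def windows(t0, t1, interval, sliding):
    """Enumerate (start, end) time windows whose starts fall in [t0, t1)."""
    out = []
    start = t0
    while start < t1:
        out.append((start, start + interval))
        start += sliding
    return out

# 1m-wide windows sliding every 30 s across two minutes of data
print(windows(0, 120, 60, 30))
# [(0, 60), (30, 90), (60, 120), (90, 150)]
```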
### State Window

View File

@ -8,7 +8,7 @@ User and permission management is a feature of TDengine Enterprise Edition. This
## Create User
```sql
CREATE USER user_name PASS 'password' [SYSINFO {1|0}];
CREATE USER user_name PASS 'password' [SYSINFO {1|0}] [CREATEDB {1|0}];
```
The username can be up to 23 bytes long.
@ -17,6 +17,8 @@ The password can be up to 31 bytes long. The password can include letters, numbe
`SYSINFO` indicates whether the user can view system information. `1` means they can view, `0` means they have no permission to view. System information includes service configuration, dnode, vnode, storage, etc. The default value is `1`.
`CREATEDB` indicates whether the user can create databases. `1` means they can create databases, `0` means they have no permission to create databases. The default value is `0`. // Supported starting from TDengine Enterprise version 3.3.2.0
In the example below, we create a user with the password `123456` who can view system information.
```sql
@ -76,7 +78,7 @@ alter_user_clause: {
- PASS: Change the password, followed by the new password
- ENABLE: Enable or disable the user, `1` means enable, `0` means disable
- SYSINFO: Allow or prohibit viewing system information, `1` means allow, `0` means prohibit
- CREATEDB: Allow or prohibit creating databases, `1` means allow, `0` means prohibit
- CREATEDB: Allow or prohibit creating databases, `1` means allow, `0` means prohibit. // Supported starting from TDengine Enterprise version 3.3.2.0
The following example disables the user named `test`:

View File

@ -15,10 +15,20 @@ This document details the server error codes that may be encountered when using
| 0x80000015 | Unable to resolve FQDN | Invalid fqdn set | Check fqdn settings |
| 0x80000017 | Port already in use | The port is already occupied by some service, and the newly started service still tries to bind to that port | 1. Change the server port of the new service 2. Kill the service that previously occupied the port |
| 0x80000018 | Conn is broken | Due to network jitter or request time being too long (over 900 seconds), the system actively disconnects | 1. Set the system's maximum timeout duration 2. Check request duration |
| 0x80000019 | Conn read timeout | Not enabled | |
| 0x80000019 | Conn read timeout | 1. The request processing time is too long 2. The server is overwhelmed 3. The server is deadlocked | 1. Explicitly configure the readTimeout parameter 2. Analyze the stack on taosd |
| 0x80000020 | some vnode/qnode/mnode(s) out of service | After multiple retries, still unable to connect to the cluster, possibly all nodes have crashed, or the surviving nodes are not Leader nodes | 1. Check the status of taosd, analyze the reasons for taosd crash 2. Analyze why the surviving taosd cannot elect a Leader |
| 0x80000021 | some vnode/qnode/mnode(s) conn is broken | After multiple retries, still unable to connect to the cluster, possibly due to network issues, request time too long, server deadlock, etc. | 1. Check network 2. Request execution time |
| 0x80000022 | rpc open too many session | 1. High concurrency causing the number of occupied connections to reach the limit 2. Server BUG, causing connections not to be released | 1. Adjust configuration parameter numOfRpcSessions 2. Adjust configuration parameter timeToGetAvailableConn 3. Analyze reasons for server not releasing connections |
| 0x80000023 | RPC network error | 1. Network issues, possibly intermittent 2. Server crash | 1. Check the network 2. Check if the server has restarted |
| 0x80000024 | RPC network bus | When pulling data between cluster nodes, no available connection was obtained, or the number of connections has reached the limit | 1. Check whether the concurrency is too high 2. Check whether any cluster node is abnormal, e.g. deadlocked |
| 0x80000025 | HTTP-report already quit | 1. Issues with HTTP reporting | Internal issue, can be ignored |
| 0x80000026 | RPC module already quit | 1. The client instance has already exited, but still uses the instance for queries | Check the business code to see if there is a mistake in usage |
| 0x80000027 | RPC async module already quit | 1. Engine error, can be ignored, this error code will not be returned to the user side | If returned to the user side, the engine side needs to investigate the issue |
| 0x80000028 | RPC async in process | 1. Engine error, can be ignored, this error code will not be returned to the user side | If returned to the user side, the engine side needs to investigate the issue |
| 0x80000029 | RPC no state | 1. Engine error, can be ignored, this error code will not be returned to the user side | If returned to the user side, the engine side needs to investigate the issue |
| 0x8000002A | RPC state already dropped | 1. Engine error, can be ignored, this error code will not be returned to the user side | If returned to the user side, the engine side needs to investigate the issue |
| 0x8000002B | RPC msg exceed limit | 1. Single RPC message exceeds the limit, this error code will not be returned to the user side | If returned to the user side, the engine side needs to investigate the issue |
## common
@ -493,6 +503,7 @@ This document details the server error codes that may be encountered when using
| 0x80003103 | Invalid tsma state | The vgroup of the stream computing result is inconsistent with the vgroup that created the TSMA index | Check error logs, contact development for handling |
| 0x80003104 | Invalid tsma pointer | Processing the results issued by stream computing, the message body is a null pointer. | Check error logs, contact development for handling |
| 0x80003105 | Invalid tsma parameters | Processing the results issued by stream computing, the result count is 0. | Check error logs, contact development for handling |
| 0x80003113 | Tsma optimization cannot be applied with INTERVAL AUTO offset. | Tsma optimization cannot be enabled with INTERVAL AUTO OFFSET under the current query conditions. | Use SKIP_TSMA Hint or specify a manual INTERVAL OFFSET. |
| 0x80003150 | Invalid rsma env | Rsma execution environment is abnormal. | Check error logs, contact development for handling |
| 0x80003151 | Invalid rsma state | Rsma execution state is abnormal. | Check error logs, contact development for handling |
| 0x80003152 | Rsma qtaskinfo creation error | Creating stream computing environment failed. | Check error logs, contact development for handling |

View File

@ -17,7 +17,7 @@ TDengine can efficiently read data from AVEVA Historian and write it to TDengine
On the data ingestion page, click the **+ Add Data Source** button to enter the Add Data Source page.
![avevaHistorian-01.png](pic/avevaHistorian-01.png)
![aveva-historian-01.png](pic/aveva-historian-01.png)
### 2. Configure Basic Information
@ -25,11 +25,11 @@ TDengine can efficiently read data from AVEVA Historian and write it to TDengine
Select **AVEVA Historian** from the **Type** drop-down list.
**Agent** is optional; if needed, you can select a specific agent from the drop-down box, or first click the **+ Create New Agent** button on the right.
**Agent** is optional; if needed, you can select a specific agent from the drop-down box, or first click the **+ Create New Agent** button on the right.
Select a target database from the **Target Database** drop-down list, or first click the **+ Create Database** button on the right.
![avevaHistorian-02.png](pic/avevaHistorian-02.png)
![aveva-historian-02.png](pic/aveva-historian-02.png)
### 3. Configure Connection Information
@ -39,7 +39,7 @@ TDengine can efficiently read data from AVEVA Historian and write it to TDengine
Click the **Connectivity Check** button to check whether the data source is available.
![avevaHistorian-03.png](pic/avevaHistorian-03.png)
![aveva-historian-03.png](pic/aveva-historian-03.png)
### 4. Configure Collection Information
@ -61,7 +61,7 @@ TDengine can efficiently read data from AVEVA Historian and write it to TDengine
In **Query Time Window**, enter a time interval; the data migration task will divide time windows by this interval.
![avevaHistorian-04.png](pic/avevaHistorian-04.png)
![aveva-historian-04.png](pic/aveva-historian-04.png)
#### 4.2. Synchronize Data from the History Table
@ -83,7 +83,7 @@ TDengine can efficiently read data from AVEVA Historian and write it to TDengine
In **Out-of-Order Time Limit**, enter a time interval; during real-time synchronization, data written to storage later than this interval may be lost.
![avevaHistorian-05.png](pic/avevaHistorian-05.png)
![aveva-historian-05.png](pic/aveva-historian-05.png)
#### 4.3. Synchronize Data from the Live Table
@ -97,7 +97,7 @@ TDengine can efficiently read data from AVEVA Historian and write it to TDengine
In **Real-Time Synchronization Interval**, enter a time interval; the real-time part will poll data at this interval.
![avevaHistorian-06.png](pic/avevaHistorian-06.png)
![aveva-historian-06.png](pic/aveva-historian-06.png)
### 5. Configure Data Mapping
@ -105,7 +105,8 @@ TDengine can efficiently read data from AVEVA Historian and write it to TDengine
Click the **Retrieve from Server** button to fetch sample data from the AVEVA Historian server.
In **Extract or Split from Column**, fill in the fields to extract or split from the message body. For example: to split the vValue field into the two fields `vValue_0` and `vValue_1`, choose the split extractor, set separator to `,`, and set number to 2.
In **Extract or Split from Column**, fill in the fields to extract or split from the message body. For example: to split the vValue field into the two fields `vValue_0` and `vValue_1`, choose the
split extractor, set separator to `,`, and set number to 2.
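The split extractor described above behaves roughly like this (an illustrative sketch; the real extractor runs inside the TDengine data-in pipeline and its internals are not shown here):

```python
def split_extract(value: str, separator: str, number: int, prefix: str = "vValue"):
    """Split one source column into `number` new fields named prefix_0..prefix_{n-1}."""
    parts = value.split(separator, number - 1)
    return {f"{prefix}_{i}": part for i, part in enumerate(parts)}

print(split_extract("3.14,ok", ",", 2))
# {'vValue_0': '3.14', 'vValue_1': 'ok'}
```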
In **Filter**, enter a filter condition. For example: if you enter `Value > 0`, only data with Value greater than 0 will be written to TDengine.
@ -113,7 +114,7 @@ TDengine can efficiently read data from AVEVA Historian and write it to TDengine
Click **Preview** to view the mapping result.
![avevaHistorian-07.png](pic/avevaHistorian-07.png)
![aveva-historian-07.png](pic/aveva-historian-07.png)
### 6. Configure Advanced Options
@ -131,7 +132,7 @@ TDengine can efficiently read data from AVEVA Historian and write it to TDengine
In **Raw Data Storage Directory**, set the path where raw data is saved.
![avevaHistorian-08.png](pic/avevaHistorian-08.png)
![aveva-historian-08.png](pic/aveva-historian-08.png)
### 7. Creation Complete

View File

@ -12,13 +12,14 @@ TDengine is configured by default with only one root user, who has the highest privileges. TD
Only the root user can create users, with the following syntax.
```sql
create user user_name pass'password' [sysinfo {1|0}]
create user user_name pass'password' [sysinfo {1|0}] [createdb {1|0}]
```
The parameters are described as follows.
- user_name: up to 23 bytes long.
- password: up to 128 bytes long; valid characters include letters and digits, plus special characters other than single quotes, double quotes, apostrophes, backslashes, and spaces; it cannot be empty.
- sysinfo: whether the user can view system information. 1 means the user can view it, 0 means not. System information includes server configuration, information about nodes such as dnode and query node (qnode), and storage-related information. The default allows viewing system information.
- createdb: whether the user can create databases. 1 means the user can, 0 means not. The default value is 0. // Supported since TDengine Enterprise 3.3.2.0
The following SQL creates a user named test with password 123456 who can view system information.
@ -47,6 +48,7 @@ alter_user_clause: {
pass 'literal'
| enable value
| sysinfo value
| createdb value
}
```
@ -54,6 +56,7 @@ alter_user_clause: {
- pass: change the user's password.
- enable: whether the user is enabled. 1 enables the user, 0 disables the user.
- sysinfo: whether the user can view system information. 1 means yes, 0 means no
- createdb: whether the user can create databases. 1 means yes, 0 means no. // Supported since TDengine Enterprise 3.3.2.0
The following SQL disables the user test.
```sql

View File

@ -120,11 +120,25 @@ The forward slide of SLIDING cannot exceed the time range of one window. The following
SELECT COUNT(*) FROM temp_tb_1 INTERVAL(1m) SLIDING(2m);
```
The INTERVAL clause allows the AUTO keyword to specify the window offset. If the WHERE condition gives a clearly applicable start time limit, the required offset is computed automatically so that time windows are divided from that point; otherwise it does not take effect, i.e. the offset remains 0. Simple examples:
```sql
-- With a start time limit, divide time windows from '2018-10-03 14:38:05'
SELECT COUNT(*) FROM meters WHERE _rowts >= '2018-10-03 14:38:05' INTERVAL (1m, AUTO);
-- Without a start time limit, it does not take effect; the offset remains 0
SELECT COUNT(*) FROM meters WHERE _rowts < '2018-10-03 15:00:00' INTERVAL (1m, AUTO);
-- The start time limit is unclear, so it does not take effect; the offset remains 0
SELECT COUNT(*) FROM meters WHERE _rowts - voltage > 1000000 INTERVAL (1m, AUTO);
```
Notes on using time windows:
- The window width of the aggregation interval is specified by the keyword INTERVAL, with a minimum of 10 milliseconds (10a). INTERVAL also supports an offset (which must be smaller than the interval), i.e. the offset of the time-window division relative to "UTC moment 0". The SLIDING clause specifies the forward increment of the aggregation interval, i.e. how far each window slides forward.
- When using the INTERVAL clause, except in very special cases, the timezone parameter in the taos.cfg configuration files of both the client and the server should be set to the same value, to avoid the severe performance impact of time-processing functions doing frequent cross-time-zone conversions.
- The time series in the returned results is strictly monotonically increasing.
- When AUTO is used as the window offset and the window width unit is d (day), n (month), w (week), or y (year), e.g. INTERVAL(1d, AUTO) or INTERVAL(3w, AUTO), TSMA optimization cannot take effect. If a TSMA has been manually created on the target table, the statement reports an error and exits; in this case you can explicitly specify the SKIP_TSMA hint or avoid using AUTO as the window offset.
### State Window

View File

@ -9,7 +9,7 @@ description: This section describes basic user management features
## Create User
```sql
CREATE USER user_name PASS 'password' [SYSINFO {1|0}];
CREATE USER user_name PASS 'password' [SYSINFO {1|0}] [CREATEDB {1|0}];
```
The username may be up to 23 bytes long.
@ -18,6 +18,8 @@ CREATE USER user_name PASS 'password' [SYSINFO {1|0}];
`SYSINFO` indicates whether the user can view system information. `1` means the user can view it, `0` means no permission. System information includes service configuration, dnode, vnode, storage, and similar information. The default value is `1`.
`CREATEDB` indicates whether the user can create databases. `1` means the user can, `0` means no permission. The default value is `0`. // Supported since TDengine Enterprise 3.3.2.0
In the example below, we create a user with password `123456` who can view system information.
```sql
@ -77,7 +79,7 @@ alter_user_clause: {
- PASS: change the password, followed by the new password
- ENABLE: enable or disable the user; `1` enables, `0` disables
- SYSINFO: allow or prohibit viewing system information; `1` allows, `0` prohibits
- CREATEDB: allow or prohibit creating databases; `1` allows, `0` prohibits
- CREATEDB: allow or prohibit creating databases; `1` allows, `0` prohibits. // Supported since TDengine Enterprise 3.3.2.0
The following example disables the user named `test`:

View File

@ -12,16 +12,25 @@ description: List and detailed description of TDengine server error codes
## rpc
| Error Code | Description | Possible Scenarios or Causes | Suggested Actions |
| ---------- | -----------------------------| --- | --- |
| ---------- | -----------------------------| --------- | ------- |
| 0x8000000B | Unable to establish connection | 1. The network is unreachable 2. The request still cannot be executed after multiple retries | 1. Check the network 2. Analyze the logs; the root cause can be complex |
| 0x80000013 | Client and server's time is not synchronized | 1. The client and server are in different time zones 2. They are in the same time zone but their clocks differ by more than 900 seconds | 1. Use the same time zone 2. Synchronize the client and server clocks |
| 0x80000015 | Unable to resolve FQDN | An invalid fqdn was set | Check the fqdn setting |
| 0x80000017 | Port already in use | The port is already occupied by some service, and the newly started service still tries to bind to it | 1. Change the server port of the new service 2. Kill the service that occupied the port |
| 0x80000018 | Conn is broken | Due to network jitter or an overly long request (over 900 seconds), the system actively drops the connection | 1. Set the system's maximum timeout 2. Check the request duration |
| 0x80000019 | Conn read timeout | Not enabled | |
| 0x80000019 | Conn read timeout | 1. The request takes too long to process 2. The server is overwhelmed 3. The server is deadlocked | 1. Explicitly configure the readTimeout parameter 2. Analyze the stack on taosd |
| 0x80000020 | some vnode/qnode/mnode(s) out of service | After multiple retries, the cluster still cannot be reached; possibly all nodes have crashed, or the surviving nodes are not Leader nodes | 1. Check the status of taosd and analyze why taosd crashed 2. Analyze why the surviving taosd cannot elect a Leader |
| 0x80000021 | some vnode/qnode/mnode(s) conn is broken | After multiple retries, the cluster still cannot be reached; possibly a network problem, overly long requests, or a server deadlock | 1. Check the network 2. Check the request execution time |
| 0x80000022 | rpc open too many session | 1. High concurrency has pushed the number of occupied connections to the limit 2. A server BUG keeps connections from being released | 1. Adjust the configuration parameter numOfRpcSessions 2. Adjust the configuration parameter timeToGetAvailableConn 3. Analyze why the server does not release connections |
| 0x80000023 | rpc network error | 1. Network problem, possibly a transient outage 2. Server crash | 1. Check the network 2. Check whether the server restarted |
| 0x80000024 | rpc network bus | When pulling data between cluster nodes, no available connection was obtained, or the number of connections has reached the limit | 1. Check whether the concurrency is too high 2. Check whether any cluster node is abnormal, e.g. deadlocked |
| 0x80000025 | http-report already quit | A problem occurred in HTTP reporting | Internal issue; can be ignored |
| 0x80000026 | rpc module already quit | The client instance has already exited but is still used for queries | Check the business code for incorrect usage |
| 0x80000027 | rpc async module already quit | Engine error; can be ignored, this error code is not returned to the user side | If it is returned to the user side, the engine side needs to investigate |
| 0x80000028 | rpc async in process | Engine error; can be ignored, this error code is not returned to the user side | If it is returned to the user side, the engine side needs to investigate |
| 0x80000029 | rpc no state | Engine error; can be ignored, this error code is not returned to the user side | If it is returned to the user side, the engine side needs to investigate |
| 0x8000002A | rpc state already dropped | Engine error; can be ignored, this error code is not returned to the user side | If it is returned to the user side, the engine side needs to investigate |
| 0x8000002B | rpc msg exceed limit | A single rpc message exceeds the limit; this error code is not returned to the user side | If it is returned to the user side, the engine side needs to investigate |
## common
@ -514,6 +523,7 @@ description: List and detailed description of TDengine server error codes
| 0x80003103 | Invalid tsma state | The vgroup of the stream-computing result is inconsistent with the vgroup that created the TSMA index | Check the error logs and contact development |
| 0x80003104 | Invalid tsma pointer | While processing results delivered by stream computing, the message body is a null pointer. | Check the error logs and contact development |
| 0x80003105 | Invalid tsma parameters | While processing results delivered by stream computing, the result count is 0. | Check the error logs and contact development |
| 0x80003113 | Tsma optimization cannot be applied with INTERVAL AUTO offset. | Under the current query conditions, tsma optimization cannot be enabled with INTERVAL AUTO OFFSET. | Use the SKIP_TSMA hint or manually specify an INTERVAL OFFSET. |
| 0x80003150 | Invalid rsma env | Rsma execution environment is abnormal. | Check the error logs and contact development |
| 0x80003151 | Invalid rsma state | Rsma execution state is abnormal. | Check the error logs and contact development |
| 0x80003152 | Rsma qtaskinfo creation error | Failed to create the stream-computing environment. | Check the error logs and contact development |

View File

@ -44,7 +44,7 @@ typedef struct {
char file_path[TSDB_FILENAME_LEN]; // local file path
int64_t file_size; // local file size, for upload
int32_t file_last_modified; // local file last modified time, for upload
int64_t file_last_modified; // local file last modified time, for upload
char file_md5[64]; // md5 of the local file content, for upload, reserved
char object_name[128]; // object name
@ -67,9 +67,9 @@ int32_t cos_cp_load(char const* filepath, SCheckpoint* checkpoint);
int32_t cos_cp_dump(SCheckpoint* checkpoint);
void cos_cp_get_undo_parts(SCheckpoint* checkpoint, int* part_num, SCheckpointPart* parts, int64_t* consume_bytes);
void cos_cp_update(SCheckpoint* checkpoint, int32_t part_index, char const* etag, uint64_t crc64);
void cos_cp_build_upload(SCheckpoint* checkpoint, char const* filepath, int64_t size, int32_t mtime,
void cos_cp_build_upload(SCheckpoint* checkpoint, char const* filepath, int64_t size, int64_t mtime,
char const* upload_id, int64_t part_size);
bool cos_cp_is_valid_upload(SCheckpoint* checkpoint, int64_t size, int32_t mtime);
bool cos_cp_is_valid_upload(SCheckpoint* checkpoint, int64_t size, int64_t mtime);
void cos_cp_build_download(SCheckpoint* checkpoint, char const* filepath, char const* object_name, int64_t object_size,
char const* object_lmtime, char const* object_etag, int64_t part_size);
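The int32_t to int64_t widening of the mtime parameters in this checkpoint API matters because 32-bit epoch-second timestamps overflow in January 2038; a quick illustration of the boundary:

```python
from datetime import datetime, timezone

INT32_MAX = 2**31 - 1  # the largest mtime a signed 32-bit field could hold

last = datetime.fromtimestamp(INT32_MAX, tz=timezone.utc)
print(last.isoformat())  # 2038-01-19T03:14:07+00:00

# One second later no longer fits in a signed 32-bit integer,
# but fits easily in int64_t.
print((INT32_MAX + 1).bit_length())  # 32: needs more than 31 value bits
```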

View File

@ -1242,14 +1242,15 @@ typedef struct {
} STsBufInfo;
typedef struct {
int32_t tz; // query client timezone
char intervalUnit;
char slidingUnit;
char offsetUnit;
int8_t precision;
int64_t interval;
int64_t sliding;
int64_t offset;
int32_t tz; // query client timezone
char intervalUnit;
char slidingUnit;
char offsetUnit;
int8_t precision;
int64_t interval;
int64_t sliding;
int64_t offset;
STimeWindow timeRange;
} SInterval;
typedef struct STbVerInfo {

View File

@ -36,6 +36,9 @@ extern "C" {
#define TIME_UNIT_MONTH 'n'
#define TIME_UNIT_YEAR 'y'
#define AUTO_DURATION_LITERAL "auto"
#define AUTO_DURATION_VALUE -1
/*
* @return timestamp decided by global conf variable, tsTimePrecision
* if precision == TSDB_TIME_PRECISION_MICRO, it returns timestamp in microsecond.
@ -78,6 +81,7 @@ int64_t taosTimeAdd(int64_t t, int64_t duration, char unit, int32_t precision);
int64_t taosTimeTruncate(int64_t ts, const SInterval* pInterval);
int64_t taosTimeGetIntervalEnd(int64_t ts, const SInterval* pInterval);
int32_t taosTimeCountIntervalForFill(int64_t skey, int64_t ekey, int64_t interval, char unit, int32_t precision, int32_t order);
void calcIntervalAutoOffset(SInterval* interval);
int32_t parseAbsoluteDuration(const char* token, int32_t tokenlen, int64_t* ts, char* unit, int32_t timePrecision);
int32_t parseNatualDuration(const char* token, int32_t tokenLen, int64_t* duration, char* unit, int32_t timePrecision, bool negativeAllow);

View File

@ -192,8 +192,9 @@ void qProcessRspMsg(void* parent, struct SRpcMsg* pMsg, struct SEpSet* pEpSet);
int32_t qGetExplainExecInfo(qTaskInfo_t tinfo, SArray* pExecInfoList);
void getNextTimeWindow(const SInterval* pInterval, STimeWindow* tw, int32_t order);
void getInitialStartTimeWindow(SInterval* pInterval, TSKEY ts, STimeWindow* w, bool ascQuery);
TSKEY getNextTimeWindowStart(const SInterval* pInterval, TSKEY start, int32_t order);
void getNextTimeWindow(const SInterval* pInterval, STimeWindow* tw, int32_t order);
void getInitialStartTimeWindow(SInterval* pInterval, TSKEY ts, STimeWindow* w, bool ascQuery);
STimeWindow getAlignQueryTimeWindow(const SInterval* pInterval, int64_t key);
SArray* qGetQueriedTableListInfo(qTaskInfo_t tinfo);

View File

@ -319,6 +319,7 @@ typedef struct SWindowLogicNode {
int64_t sliding;
int8_t intervalUnit;
int8_t slidingUnit;
STimeWindow timeRange;
int64_t sessionGap;
SNode* pTspk;
SNode* pTsEnd;
@ -682,6 +683,7 @@ typedef struct SIntervalPhysiNode {
int64_t sliding;
int8_t intervalUnit;
int8_t slidingUnit;
STimeWindow timeRange;
} SIntervalPhysiNode;
typedef SIntervalPhysiNode SMergeIntervalPhysiNode;

View File

@ -325,12 +325,13 @@ typedef struct SSessionWindowNode {
} SSessionWindowNode;
typedef struct SIntervalWindowNode {
ENodeType type; // QUERY_NODE_INTERVAL_WINDOW
SNode* pCol; // timestamp primary key
SNode* pInterval; // SValueNode
SNode* pOffset; // SValueNode
SNode* pSliding; // SValueNode
SNode* pFill;
ENodeType type; // QUERY_NODE_INTERVAL_WINDOW
SNode* pCol; // timestamp primary key
SNode* pInterval; // SValueNode
SNode* pOffset; // SValueNode
SNode* pSliding; // SValueNode
SNode* pFill;
STimeWindow timeRange;
} SIntervalWindowNode;
typedef struct SEventWindowNode {

View File

@ -194,6 +194,7 @@ typedef struct SBoundColInfo {
int32_t numOfCols;
int32_t numOfBound;
bool hasBoundCols;
bool mixTagsCols;
} SBoundColInfo;
typedef struct STableColsData {

View File

@ -82,9 +82,9 @@ int32_t taosUnLockFile(TdFilePtr pFile);
int32_t taosUmaskFile(int32_t maskVal);
int32_t taosStatFile(const char *path, int64_t *size, int32_t *mtime, int32_t *atime);
int32_t taosStatFile(const char *path, int64_t *size, int64_t *mtime, int64_t *atime);
int32_t taosDevInoFile(TdFilePtr pFile, int64_t *stDev, int64_t *stIno);
int32_t taosFStatFile(TdFilePtr pFile, int64_t *size, int32_t *mtime);
int32_t taosFStatFile(TdFilePtr pFile, int64_t *size, int64_t *mtime);
bool taosCheckExistFile(const char *pathname);
int64_t taosLSeekFile(TdFilePtr pFile, int64_t offset, int32_t whence);

View File

@ -196,21 +196,21 @@ int32_t taosGetErrSize();
#define TSDB_CODE_TSC_INVALID_JSON TAOS_DEF_ERROR_CODE(0, 0x0221)
#define TSDB_CODE_TSC_INVALID_JSON_TYPE TAOS_DEF_ERROR_CODE(0, 0x0222)
#define TSDB_CODE_TSC_VALUE_OUT_OF_RANGE TAOS_DEF_ERROR_CODE(0, 0x0224)
#define TSDB_CODE_TSC_INVALID_INPUT TAOS_DEF_ERROR_CODE(0, 0X0229)
#define TSDB_CODE_TSC_STMT_API_ERROR TAOS_DEF_ERROR_CODE(0, 0X022A)
#define TSDB_CODE_TSC_STMT_TBNAME_ERROR TAOS_DEF_ERROR_CODE(0, 0X022B)
#define TSDB_CODE_TSC_STMT_CLAUSE_ERROR TAOS_DEF_ERROR_CODE(0, 0X022C)
#define TSDB_CODE_TSC_QUERY_KILLED TAOS_DEF_ERROR_CODE(0, 0X022D)
#define TSDB_CODE_TSC_NO_EXEC_NODE TAOS_DEF_ERROR_CODE(0, 0X022E)
#define TSDB_CODE_TSC_NOT_STABLE_ERROR TAOS_DEF_ERROR_CODE(0, 0X022F)
#define TSDB_CODE_TSC_STMT_CACHE_ERROR TAOS_DEF_ERROR_CODE(0, 0X0230)
#define TSDB_CODE_TSC_ENCODE_PARAM_ERROR TAOS_DEF_ERROR_CODE(0, 0X0231)
#define TSDB_CODE_TSC_ENCODE_PARAM_NULL TAOS_DEF_ERROR_CODE(0, 0X0232)
#define TSDB_CODE_TSC_COMPRESS_PARAM_ERROR TAOS_DEF_ERROR_CODE(0, 0X0233)
#define TSDB_CODE_TSC_COMPRESS_LEVEL_ERROR TAOS_DEF_ERROR_CODE(0, 0X0234)
#define TSDB_CODE_TSC_FAIL_GENERATE_JSON TAOS_DEF_ERROR_CODE(0, 0X0235)
#define TSDB_CODE_TSC_STMT_BIND_NUMBER_ERROR TAOS_DEF_ERROR_CODE(0, 0X0236)
#define TSDB_CODE_TSC_INTERNAL_ERROR TAOS_DEF_ERROR_CODE(0, 0X02FF)
#define TSDB_CODE_TSC_INVALID_INPUT TAOS_DEF_ERROR_CODE(0, 0x0229)
#define TSDB_CODE_TSC_STMT_API_ERROR TAOS_DEF_ERROR_CODE(0, 0x022A)
#define TSDB_CODE_TSC_STMT_TBNAME_ERROR TAOS_DEF_ERROR_CODE(0, 0x022B)
#define TSDB_CODE_TSC_STMT_CLAUSE_ERROR TAOS_DEF_ERROR_CODE(0, 0x022C)
#define TSDB_CODE_TSC_QUERY_KILLED TAOS_DEF_ERROR_CODE(0, 0x022D)
#define TSDB_CODE_TSC_NO_EXEC_NODE TAOS_DEF_ERROR_CODE(0, 0x022E)
#define TSDB_CODE_TSC_NOT_STABLE_ERROR TAOS_DEF_ERROR_CODE(0, 0x022F)
#define TSDB_CODE_TSC_STMT_CACHE_ERROR TAOS_DEF_ERROR_CODE(0, 0x0230)
#define TSDB_CODE_TSC_ENCODE_PARAM_ERROR TAOS_DEF_ERROR_CODE(0, 0x0231)
#define TSDB_CODE_TSC_ENCODE_PARAM_NULL TAOS_DEF_ERROR_CODE(0, 0x0232)
#define TSDB_CODE_TSC_COMPRESS_PARAM_ERROR TAOS_DEF_ERROR_CODE(0, 0x0233)
#define TSDB_CODE_TSC_COMPRESS_LEVEL_ERROR TAOS_DEF_ERROR_CODE(0, 0x0234)
#define TSDB_CODE_TSC_FAIL_GENERATE_JSON TAOS_DEF_ERROR_CODE(0, 0x0235)
#define TSDB_CODE_TSC_STMT_BIND_NUMBER_ERROR TAOS_DEF_ERROR_CODE(0, 0x0236)
#define TSDB_CODE_TSC_INTERNAL_ERROR TAOS_DEF_ERROR_CODE(0, 0x02FF)
// mnode-common
#define TSDB_CODE_MND_REQ_REJECTED TAOS_DEF_ERROR_CODE(0, 0x0300)
@ -968,6 +968,7 @@ int32_t taosGetErrSize();
#define TSDB_CODE_TSMA_MUST_BE_DROPPED TAOS_DEF_ERROR_CODE(0, 0x3110)
#define TSDB_CODE_TSMA_NAME_TOO_LONG TAOS_DEF_ERROR_CODE(0, 0x3111)
#define TSDB_CODE_TSMA_INVALID_RECURSIVE_INTERVAL TAOS_DEF_ERROR_CODE(0, 0x3112)
#define TSDB_CODE_TSMA_INVALID_AUTO_OFFSET TAOS_DEF_ERROR_CODE(0, 0x3113)
//rsma
#define TSDB_CODE_RSMA_INVALID_ENV TAOS_DEF_ERROR_CODE(0, 0x3150)

View File

@ -41,6 +41,7 @@ extern bool tsAsyncLog;
extern bool tsAssert;
extern int32_t tsNumOfLogLines;
extern int32_t tsLogKeepDays;
extern char *tsLogOutput;
extern LogFp tsLogFp;
extern int64_t tsNumOfErrorLogs;
extern int64_t tsNumOfInfoLogs;
@ -71,7 +72,7 @@ extern int32_t sndDebugFlag;
extern int32_t simDebugFlag;
extern int32_t tqClientDebugFlag;
int32_t taosInitLogOutput(const char **ppLogName);
int32_t taosInitLog(const char *logName, int32_t maxFiles, bool tsc);
void taosCloseLog();
void taosResetLog();

View File

@ -56,6 +56,8 @@ void taosIpPort2String(uint32_t ip, uint16_t port, char *str);
void *tmemmem(const char *haystack, int hlen, const char *needle, int nlen);
int32_t parseCfgReal(const char *str, float *out);
bool tIsValidFileName(const char *fileName, const char *pattern);
bool tIsValidFilePath(const char *filePath, const char *pattern);
static FORCE_INLINE void taosEncryptPass(uint8_t *inBuf, size_t inLen, char *target) {
T_MD5_CTX context;

View File

@ -41,6 +41,10 @@
#include "cus_name.h"
#endif
#ifndef CUS_PROMPT
#define CUS_PROMPT "tao"
#endif
#define TSC_VAR_NOT_RELEASE 1
#define TSC_VAR_RELEASED 0
@ -954,14 +958,10 @@ void taos_init_imp(void) {
taosHashSetFreeFp(appInfo.pInstMap, destroyAppInst);
deltaToUtcInitOnce();
char logDirName[64] = {0};
#ifdef CUS_PROMPT
snprintf(logDirName, 64, "%slog", CUS_PROMPT);
#else
(void)snprintf(logDirName, 64, "taoslog");
#endif
if (taosCreateLog(logDirName, 10, configDir, NULL, NULL, NULL, NULL, 1) != 0) {
(void)printf(" WARING: Create %s failed:%s. configDir=%s\n", logDirName, strerror(errno), configDir);
const char *logName = CUS_PROMPT "slog";
ENV_ERR_RET(taosInitLogOutput(&logName), "failed to init log output");
if (taosCreateLog(logName, 10, configDir, NULL, NULL, NULL, NULL, 1) != 0) {
(void)printf(" WARING: Create %s failed:%s. configDir=%s\n", logName, strerror(errno), configDir);
tscInitRes = -1;
return;
}

View File

@ -2109,6 +2109,18 @@ int taos_stmt2_get_stb_fields(TAOS_STMT2 *stmt, int *count, TAOS_FIELD_STB **fie
return terrno;
}
STscStmt2 *pStmt = (STscStmt2 *)stmt;
if (pStmt->sql.type == 0) {
int isInsert = 0;
(void)stmtIsInsert2(stmt, &isInsert);
if (!isInsert) {
pStmt->sql.type = STMT_TYPE_QUERY;
}
}
if (pStmt->sql.type == STMT_TYPE_QUERY) {
return stmtGetParamNum2(stmt, count);
}
return stmtGetStbColFields2(stmt, count, fields);
}

View File

@ -1094,6 +1094,13 @@ static int stmtFetchStbColFields2(STscStmt2* pStmt, int32_t* fieldNum, TAOS_FIEL
}
STMT_ERR_RET(qBuildStmtStbColFields(*pDataBlock, pStmt->bInfo.boundTags, pStmt->bInfo.preCtbname, fieldNum, fields));
if (pStmt->bInfo.tbType == TSDB_SUPER_TABLE) {
pStmt->bInfo.needParse = true;
if (taosHashRemove(pStmt->exec.pBlockHash, pStmt->bInfo.tbFName, strlen(pStmt->bInfo.tbFName)) != 0) {
tscError("get fileds %s remove exec blockHash fail", pStmt->bInfo.tbFName);
STMT_ERR_RET(TSDB_CODE_APP_ERROR);
}
}
return TSDB_CODE_SUCCESS;
}

View File

@ -775,7 +775,7 @@ _exit:
TAOS_RETURN(code);
}
static int32_t s3PutObjectFromFileWithCp(S3BucketContext *bucket_context, const char *file, int32_t lmtime,
static int32_t s3PutObjectFromFileWithCp(S3BucketContext *bucket_context, const char *file, int64_t lmtime,
char const *object_name, int64_t contentLength, S3PutProperties *put_prop,
put_object_callback_data *data) {
int32_t code = 0, lino = 0;
@ -963,7 +963,7 @@ _exit:
int32_t s3PutObjectFromFile2ByEp(const char *file, const char *object_name, int8_t withcp, int8_t epIndex) {
int32_t code = 0;
int32_t lmtime = 0;
int64_t lmtime = 0;
const char *filename = 0;
uint64_t contentLength = 0;
const char *cacheControl = 0, *contentType = 0, *md5 = 0;
@ -1040,7 +1040,7 @@ int32_t s3PutObjectFromFile2(const char *file, const char *object_name, int8_t w
static int32_t s3PutObjectFromFileOffsetByEp(const char *file, const char *object_name, int64_t offset, int64_t size,
int8_t epIndex) {
int32_t code = 0;
int32_t lmtime = 0;
int64_t lmtime = 0;
const char *filename = 0;
uint64_t contentLength = 0;
const char *cacheControl = 0, *contentType = 0, *md5 = 0;
@ -1847,7 +1847,7 @@ _exit:
typedef struct {
int64_t size;
int32_t atime;
int64_t atime;
char name[TSDB_FILENAME_LEN];
} SEvictFile;

View File

@ -350,7 +350,7 @@ void cos_cp_update(SCheckpoint* checkpoint, int32_t part_index, char const* etag
checkpoint->parts[part_index].crc64 = crc64;
}
void cos_cp_build_upload(SCheckpoint* checkpoint, char const* filepath, int64_t size, int32_t mtime,
void cos_cp_build_upload(SCheckpoint* checkpoint, char const* filepath, int64_t size, int64_t mtime,
char const* upload_id, int64_t part_size) {
int i = 0;
@ -375,7 +375,7 @@ void cos_cp_build_upload(SCheckpoint* checkpoint, char const* filepath, int64_t
static bool cos_cp_verify_md5(SCheckpoint* cp) { return true; }
bool cos_cp_is_valid_upload(SCheckpoint* checkpoint, int64_t size, int32_t mtime) {
bool cos_cp_is_valid_upload(SCheckpoint* checkpoint, int64_t size, int64_t mtime) {
if (cos_cp_verify_md5(checkpoint) && checkpoint->file_size == size && checkpoint->file_last_modified == mtime) {
return true;
}

View File

@ -17,6 +17,7 @@
#include "tglobal.h"
#include "defines.h"
#include "os.h"
#include "osString.h"
#include "tconfig.h"
#include "tgrant.h"
#include "tlog.h"
@ -1010,12 +1011,43 @@ static int32_t taosUpdateServerCfg(SConfig *pCfg) {
TAOS_RETURN(TSDB_CODE_SUCCESS);
}
static int32_t taosSetLogOutput(SConfig *pCfg) {
if (tsLogOutput) {
char *pLog = tsLogOutput;
char *pEnd = NULL;
if (strcasecmp(pLog, "stdout") && strcasecmp(pLog, "stderr") && strcasecmp(pLog, "/dev/null")) {
if ((pEnd = strrchr(pLog, '/')) || (pEnd = strrchr(pLog, '\\'))) {
int32_t pathLen = POINTER_DISTANCE(pEnd, pLog) + 1;
if (*pLog == '/' || *pLog == '\\') {
if (pathLen <= 0 || pathLen > PATH_MAX) TAOS_RETURN(TSDB_CODE_OUT_OF_RANGE);
tstrncpy(tsLogDir, pLog, pathLen);
} else {
int32_t len = strlen(tsLogDir);
if (len < 0 || len >= (PATH_MAX - 1)) TAOS_RETURN(TSDB_CODE_OUT_OF_RANGE);
if (len == 0 || (tsLogDir[len - 1] != '/' && tsLogDir[len - 1] != '\\')) {
tsLogDir[len++] = TD_DIRSEP_CHAR;
}
int32_t remain = PATH_MAX - len - 1;
if (remain < pathLen) TAOS_RETURN(TSDB_CODE_OUT_OF_RANGE);
tstrncpy(tsLogDir + len, pLog, pathLen);
}
TAOS_CHECK_RETURN(cfgSetItem(pCfg, "logDir", tsLogDir, CFG_STYPE_DEFAULT, true));
}
} else {
tstrncpy(tsLogDir, pLog, PATH_MAX);
TAOS_CHECK_RETURN(cfgSetItem(pCfg, "logDir", tsLogDir, CFG_STYPE_DEFAULT, true));
}
}
return 0;
}
static int32_t taosSetClientLogCfg(SConfig *pCfg) {
SConfigItem *pItem = NULL;
TAOS_CHECK_GET_CFG_ITEM(pCfg, pItem, "logDir");
tstrncpy(tsLogDir, pItem->str, PATH_MAX);
TAOS_CHECK_RETURN(taosExpandDir(tsLogDir, tsLogDir, PATH_MAX));
TAOS_CHECK_RETURN(taosSetLogOutput(pCfg));
TAOS_CHECK_GET_CFG_ITEM(pCfg, pItem, "minimalLogDirGB");
tsLogSpace.reserved = (int64_t)(((double)pItem->fval) * 1024 * 1024 * 1024);
@ -1860,8 +1892,10 @@ int32_t taosInitCfg(const char *cfgDir, const char **envCmd, const char *envFile
}
if (tsc) {
TAOS_CHECK_GOTO(taosSetClientLogCfg(tsCfg), &lino, _exit);
TAOS_CHECK_GOTO(taosSetClientCfg(tsCfg), &lino, _exit);
} else {
TAOS_CHECK_GOTO(taosSetClientLogCfg(tsCfg), &lino, _exit);
TAOS_CHECK_GOTO(taosSetClientCfg(tsCfg), &lino, _exit);
TAOS_CHECK_GOTO(taosUpdateServerCfg(tsCfg), &lino, _exit);
TAOS_CHECK_GOTO(taosSetServerCfg(tsCfg), &lino, _exit);

View File

@ -809,6 +809,7 @@ int64_t taosTimeTruncate(int64_t ts, const SInterval* pInterval) {
news += (int64_t)(timezone * TSDB_TICK_PER_SECOND(precision));
}
start = news;
if (news <= ts) {
int64_t prev = news;
int64_t newe = taosTimeAdd(news, pInterval->interval, pInterval->intervalUnit, precision) - 1;
@ -828,7 +829,7 @@ int64_t taosTimeTruncate(int64_t ts, const SInterval* pInterval) {
}
}
return prev;
start = prev;
}
} else {
int64_t delta = ts - pInterval->interval;
@ -881,8 +882,8 @@ int64_t taosTimeTruncate(int64_t ts, const SInterval* pInterval) {
while (newe >= ts) {
start = slidingStart;
slidingStart = taosTimeAdd(slidingStart, -pInterval->sliding, pInterval->slidingUnit, precision);
int64_t slidingEnd = taosTimeAdd(slidingStart, pInterval->interval, pInterval->intervalUnit, precision) - 1;
newe = taosTimeAdd(slidingEnd, pInterval->offset, pInterval->offsetUnit, precision);
int64_t news = taosTimeAdd(slidingStart, pInterval->offset, pInterval->offsetUnit, precision);
newe = taosTimeAdd(news, pInterval->interval, pInterval->intervalUnit, precision) - 1;
}
start = taosTimeAdd(start, pInterval->offset, pInterval->offsetUnit, precision);
}
@ -892,17 +893,37 @@ int64_t taosTimeTruncate(int64_t ts, const SInterval* pInterval) {
// used together with taosTimeTruncate. when offset is greater than zero, slide-start/slide-end is the anchor point
int64_t taosTimeGetIntervalEnd(int64_t intervalStart, const SInterval* pInterval) {
if (pInterval->offset > 0) {
int64_t slideStart =
taosTimeAdd(intervalStart, -1 * pInterval->offset, pInterval->offsetUnit, pInterval->precision);
int64_t slideEnd = taosTimeAdd(slideStart, pInterval->interval, pInterval->intervalUnit, pInterval->precision) - 1;
int64_t result = taosTimeAdd(slideEnd, pInterval->offset, pInterval->offsetUnit, pInterval->precision);
return result;
} else {
int64_t result = taosTimeAdd(intervalStart, pInterval->interval, pInterval->intervalUnit, pInterval->precision) - 1;
return result;
}
return taosTimeAdd(intervalStart, pInterval->interval, pInterval->intervalUnit, pInterval->precision) - 1;
}
void calcIntervalAutoOffset(SInterval* interval) {
if (!interval || interval->offset != AUTO_DURATION_VALUE) {
return;
}
interval->offset = 0;
if (interval->timeRange.skey == INT64_MIN) {
return;
}
TSKEY skey = interval->timeRange.skey;
TSKEY start = taosTimeTruncate(skey, interval);
TSKEY news = start;
while (news <= skey) {
start = news;
news = taosTimeAdd(start, interval->sliding, interval->slidingUnit, interval->precision);
if (news < start) {
// overflow happens
uError("%s failed and skip, skey [%" PRId64 "], inter[%" PRId64 "(%c)], slid[%" PRId64 "(%c)], precision[%d]",
__func__, skey, interval->interval, interval->intervalUnit, interval->sliding, interval->slidingUnit,
interval->precision);
return;
}
}
interval->offset = skey - start;
}
// internal function, when program is paused in debugger,
// one can call this function from debugger to print a
// timestamp as human readable string, for example (gdb):

View File

@ -49,6 +49,7 @@
#define DM_ENV_CMD "The env cmd variable string to use when configuring the server, such as: -e 'TAOS_FQDN=td1'."
#define DM_ENV_FILE "The env variable file path to use when configuring the server, default is './.env', .env text can be 'TAOS_FQDN=td1'."
#define DM_MACHINE_CODE "Get machine code."
#define DM_LOG_OUTPUT "Specify log output. Options:\n\r\t\t\t stdout, stderr, /dev/null, <directory>, <directory>/<filename>, <filename>\n\r\t\t\t * If OUTPUT contains an absolute directory, logs will be stored in that directory instead of logDir.\n\r\t\t\t * If OUTPUT contains a relative directory, logs will be stored in the directory combined with logDir and the relative directory."
#define DM_VERSION "Print program version."
#define DM_EMAIL "<support@taosdata.com>"
#define DM_MEM_DBG "Enable memory debug"
@ -239,6 +240,32 @@ static int32_t dmParseArgs(int32_t argc, char const *argv[]) {
}
} else if (strcmp(argv[i], "-k") == 0) {
global.generateGrant = true;
#if defined(LINUX)
} else if (strcmp(argv[i], "-o") == 0 || strcmp(argv[i], "--log-output") == 0 ||
strncmp(argv[i], "--log-output=", 13) == 0) {
if ((i < argc - 1) || ((i == argc - 1) && strncmp(argv[i], "--log-output=", 13) == 0)) {
int32_t klen = strlen(argv[i]);
int32_t vlen = klen < 13 ? strlen(argv[++i]) : klen - 13;
const char *val = argv[i];
if (klen >= 13) val += 13;
if (vlen <= 0 || vlen >= PATH_MAX) {
printf("failed to set log output since invalid vlen:%d, valid range: [1, %d)\n", vlen, PATH_MAX);
return TSDB_CODE_INVALID_CFG;
}
tsLogOutput = taosMemoryMalloc(PATH_MAX);
if (!tsLogOutput) {
printf("failed to set log output: '%s' since %s\n", val, tstrerror(terrno));
return terrno;
}
if (taosExpandDir(val, tsLogOutput, PATH_MAX) != 0) {
printf("failed to expand log output: '%s' since %s\n", val, tstrerror(terrno));
return terrno;
}
} else {
printf("'%s' requires a parameter\n", argv[i]);
return TSDB_CODE_INVALID_CFG;
}
#endif
} else if (strcmp(argv[i], "-y") == 0) {
global.generateCode = true;
if (i < argc - 1) {
@ -316,6 +343,9 @@ static void dmPrintHelp() {
printf("%s%s%s%s\n", indent, "-e,", indent, DM_ENV_CMD);
printf("%s%s%s%s\n", indent, "-E,", indent, DM_ENV_FILE);
printf("%s%s%s%s\n", indent, "-k,", indent, DM_MACHINE_CODE);
#if defined(LINUX)
printf("%s%s%s%s\n", indent, "-o, --log-output=OUTPUT", indent, DM_LOG_OUTPUT);
#endif
printf("%s%s%s%s\n", indent, "-y,", indent, DM_SET_ENCRYPTKEY);
printf("%s%s%s%s\n", indent, "-dm,", indent, DM_MEM_DBG);
printf("%s%s%s%s\n", indent, "-V,", indent, DM_VERSION);
@ -340,8 +370,11 @@ static int32_t dmCheckS3() {
}
static int32_t dmInitLog() {
return taosCreateLog(CUS_PROMPT "dlog", 1, configDir, global.envCmd, global.envFile, global.apolloUrl, global.pArgs,
0);
const char *logName = CUS_PROMPT "dlog";
TAOS_CHECK_RETURN(taosInitLogOutput(&logName));
return taosCreateLog(logName, 1, configDir, global.envCmd, global.envFile, global.apolloUrl, global.pArgs, 0);
}
static void taosCleanupArgs() {

View File

@ -14,13 +14,13 @@
*/
#define _DEFAULT_SOURCE
#include "mndTrans.h"
#include "mndDb.h"
#include "mndPrivilege.h"
#include "mndShow.h"
#include "mndStb.h"
#include "mndSubscribe.h"
#include "mndSync.h"
#include "mndTrans.h"
#include "mndUser.h"
#define TRANS_VER1_NUMBER 1
@ -223,7 +223,7 @@ SSdbRaw *mndTransEncode(STrans *pTrans) {
SDB_SET_RESERVE(pRaw, dataPos, TRANS_RESERVE_SIZE, _OVER)
SDB_SET_DATALEN(pRaw, dataPos, _OVER)
terrno = 0;
terrno = 0;
_OVER:
if (terrno != 0) {
@ -294,8 +294,8 @@ _OVER:
SSdbRow *mndTransDecode(SSdbRaw *pRaw) {
terrno = TSDB_CODE_INVALID_MSG;
int32_t code = 0;
int32_t lino = 0;
int32_t code = 0;
int32_t lino = 0;
SSdbRow *pRow = NULL;
STrans *pTrans = NULL;
char *pData = NULL;
@ -897,7 +897,7 @@ static bool mndCheckTransConflict(SMnode *pMnode, STrans *pNew) {
if (pNew->conflict == TRN_CONFLICT_ARBGROUP) {
if (pTrans->conflict == TRN_CONFLICT_GLOBAL) conflict = true;
if (pTrans->conflict == TRN_CONFLICT_ARBGROUP) {
void* pGidIter = taosHashIterate(pNew->arbGroupIds, NULL);
void *pGidIter = taosHashIterate(pNew->arbGroupIds, NULL);
while (pGidIter != NULL) {
int32_t groupId = *(int32_t *)pGidIter;
if (taosHashGet(pTrans->arbGroupIds, &groupId, sizeof(int32_t)) != NULL) {
@ -1040,7 +1040,7 @@ static int32_t mndTransCheckCommitActions(SMnode *pMnode, STrans *pTrans) {
int32_t mndTransPrepare(SMnode *pMnode, STrans *pTrans) {
int32_t code = 0;
if (pTrans == NULL) {
return TSDB_CODE_INVALID_PARA;
return TSDB_CODE_INVALID_PARA;
}
TAOS_CHECK_RETURN(mndTransCheckConflict(pMnode, pTrans));
@ -1312,6 +1312,14 @@ static int32_t mndTransWriteSingleLog(SMnode *pMnode, STrans *pTrans, STransActi
TAOS_RETURN(TSDB_CODE_MND_TRANS_CTX_SWITCH);
}
if (pAction->pRaw->type >= SDB_MAX) {
pAction->rawWritten = true;
pAction->errCode = 0;
mndSetTransLastAction(pTrans, pAction);
mInfo("skip sdb raw type:%d since it is not supported", pAction->pRaw->type);
TAOS_RETURN(TSDB_CODE_SUCCESS);
}
int32_t code = sdbWriteWithoutFree(pMnode->pSdb, pAction->pRaw);
if (code == 0 || terrno == TSDB_CODE_SDB_OBJ_NOT_THERE) {
pAction->rawWritten = true;
@ -1355,10 +1363,10 @@ static int32_t mndTransSendSingleMsg(SMnode *pMnode, STrans *pTrans, STransActio
char detail[1024] = {0};
int32_t len = tsnprintf(detail, sizeof(detail), "msgType:%s numOfEps:%d inUse:%d", TMSG_INFO(pAction->msgType),
pAction->epSet.numOfEps, pAction->epSet.inUse);
pAction->epSet.numOfEps, pAction->epSet.inUse);
for (int32_t i = 0; i < pAction->epSet.numOfEps; ++i) {
len += tsnprintf(detail + len, sizeof(detail) - len, " ep:%d-%s:%u", i, pAction->epSet.eps[i].fqdn,
pAction->epSet.eps[i].port);
pAction->epSet.eps[i].port);
}
int32_t code = tmsgSendReq(&pAction->epSet, &rpcMsg);
@ -1461,9 +1469,9 @@ static int32_t mndTransExecuteActions(SMnode *pMnode, STrans *pTrans, SArray *pA
for (int32_t action = 0; action < numOfActions; ++action) {
STransAction *pAction = taosArrayGet(pArray, action);
mDebug("trans:%d, %s:%d Sent:%d, Received:%d, errCode:0x%x, acceptableCode:0x%x, retryCode:0x%x",
pTrans->id, mndTransStr(pAction->stage), pAction->id, pAction->msgSent, pAction->msgReceived,
pAction->errCode, pAction->acceptableCode, pAction->retryCode);
mDebug("trans:%d, %s:%d Sent:%d, Received:%d, errCode:0x%x, acceptableCode:0x%x, retryCode:0x%x", pTrans->id,
mndTransStr(pAction->stage), pAction->id, pAction->msgSent, pAction->msgReceived, pAction->errCode,
pAction->acceptableCode, pAction->retryCode);
if (pAction->msgSent) {
if (pAction->msgReceived) {
if (pAction->errCode != 0 && pAction->errCode != pAction->acceptableCode) {
@ -2047,11 +2055,11 @@ static int32_t mndRetrieveTrans(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pBl
char lastInfo[TSDB_TRANS_ERROR_LEN + VARSTR_HEADER_SIZE] = {0};
char detail[TSDB_TRANS_ERROR_LEN + 1] = {0};
int32_t len = tsnprintf(detail, sizeof(detail), "action:%d code:0x%x(%s) ", pTrans->lastAction,
pTrans->lastErrorNo & 0xFFFF, tstrerror(pTrans->lastErrorNo));
pTrans->lastErrorNo & 0xFFFF, tstrerror(pTrans->lastErrorNo));
SEpSet epset = pTrans->lastEpset;
if (epset.numOfEps > 0) {
len += tsnprintf(detail + len, sizeof(detail) - len, "msgType:%s numOfEps:%d inUse:%d ",
TMSG_INFO(pTrans->lastMsgType), epset.numOfEps, epset.inUse);
TMSG_INFO(pTrans->lastMsgType), epset.numOfEps, epset.inUse);
for (int32_t i = 0; i < pTrans->lastEpset.numOfEps; ++i) {
len += tsnprintf(detail + len, sizeof(detail) - len, "ep:%d-%s:%u ", i, epset.eps[i].fqdn, epset.eps[i].port);
}

View File

@ -1195,10 +1195,15 @@ int64_t mndGetVgroupMemory(SMnode *pMnode, SDbObj *pDbInput, SVgObj *pVgroup) {
int64_t vgroupMemroy = 0;
if (pDb != NULL) {
vgroupMemroy = (int64_t)pDb->cfg.buffer * 1024 * 1024 + (int64_t)pDb->cfg.pages * pDb->cfg.pageSize * 1024;
int64_t buffer = (int64_t)pDb->cfg.buffer * 1024 * 1024;
int64_t cache = (int64_t)pDb->cfg.pages * pDb->cfg.pageSize * 1024;
vgroupMemroy = buffer + cache;
int64_t cacheLast = (int64_t)pDb->cfg.cacheLastSize * 1024 * 1024;
if (pDb->cfg.cacheLast > 0) {
vgroupMemroy += (int64_t)pDb->cfg.cacheLastSize * 1024 * 1024;
vgroupMemroy += cacheLast;
}
mDebug("db:%s, vgroup:%d, buffer:%" PRId64 " cache:%" PRId64 " cacheLast:%" PRId64, pDb->name, pVgroup->vgId,
buffer, cache, cacheLast);
}
if (pDbInput == NULL) {
@ -2712,20 +2717,29 @@ static int32_t mndCheckDnodeMemory(SMnode *pMnode, SDbObj *pOldDb, SDbObj *pNewD
for (int32_t i = 0; i < (int32_t)taosArrayGetSize(pArray); ++i) {
SDnodeObj *pDnode = taosArrayGet(pArray, i);
bool inVgroup = false;
int64_t oldMemUsed = 0;
int64_t newMemUsed = 0;
mDebug("db:%s, vgId:%d, check dnode:%d, avail:%" PRId64 " used:%" PRId64, pNewVgroup->dbName, pNewVgroup->vgId,
pDnode->id, pDnode->memAvail, pDnode->memUsed);
for (int32_t j = 0; j < pOldVgroup->replica; ++j) {
SVnodeGid *pVgId = &pOldVgroup->vnodeGid[i];
SVnodeGid *pVgId = &pOldVgroup->vnodeGid[j];
if (pDnode->id == pVgId->dnodeId) {
pDnode->memUsed -= mndGetVgroupMemory(pMnode, pOldDb, pOldVgroup);
oldMemUsed = mndGetVgroupMemory(pMnode, pOldDb, pOldVgroup);
inVgroup = true;
}
}
for (int32_t j = 0; j < pNewVgroup->replica; ++j) {
SVnodeGid *pVgId = &pNewVgroup->vnodeGid[i];
SVnodeGid *pVgId = &pNewVgroup->vnodeGid[j];
if (pDnode->id == pVgId->dnodeId) {
pDnode->memUsed += mndGetVgroupMemory(pMnode, pNewDb, pNewVgroup);
newMemUsed = mndGetVgroupMemory(pMnode, pNewDb, pNewVgroup);
inVgroup = true;
}
}
mDebug("db:%s, vgId:%d, memory in dnode:%d, oldUsed:%" PRId64 ", newUsed:%" PRId64, pNewVgroup->dbName,
pNewVgroup->vgId, pDnode->id, oldMemUsed, newMemUsed);
pDnode->memUsed = pDnode->memUsed - oldMemUsed + newMemUsed;
if (pDnode->memAvail - pDnode->memUsed <= 0) {
mError("db:%s, vgId:%d, no enough memory in dnode:%d, avail:%" PRId64 " used:%" PRId64, pNewVgroup->dbName,
pNewVgroup->vgId, pDnode->id, pDnode->memAvail, pDnode->memUsed);

View File

@ -388,6 +388,11 @@ static int32_t sdbReadFileImp(SSdb *pSdb) {
goto _OVER;
}
if (pRaw->type >= SDB_MAX) {
mInfo("skip sdb raw type:%d since it is not supported", pRaw->type);
continue;
}
code = sdbWriteWithoutFree(pSdb, pRaw);
if (code != 0) {
mError("failed to read sdb file:%s since %s", file, terrstr());

View File

@ -660,7 +660,7 @@ static int32_t tsdbDoS3Migrate(SRTNer *rtner) {
int32_t lcn = fobj->f->lcn;
if (/*lcn < 1 && */ taosCheckExistFile(fobj->fname)) {
int32_t mtime = 0;
int64_t mtime = 0;
int64_t size = 0;
int32_t r = taosStatFile(fobj->fname, &size, &mtime, NULL);
if (size > chunksize && mtime < rtner->now - tsS3UploadDelaySec) {
@ -687,7 +687,7 @@ static int32_t tsdbDoS3Migrate(SRTNer *rtner) {
tsdbTFileLastChunkName(rtner->tsdb, fobj->f, fname1);
if (taosCheckExistFile(fname1)) {
int32_t mtime = 0;
int64_t mtime = 0;
int64_t size = 0;
if (taosStatFile(fname1, &size, &mtime, NULL) != 0) {
tsdbError("vgId:%d, %s failed at %s:%d ", TD_VID(rtner->tsdb->pVnode), __func__, __FILE__, __LINE__);

View File

@ -977,6 +977,11 @@ void vnodeUpdateMetaRsp(SVnode *pVnode, STableMetaRsp *pMetaRsp) {
extern int32_t vnodeAsyncRetention(SVnode *pVnode, int64_t now);
static int32_t vnodeProcessTrimReq(SVnode *pVnode, int64_t ver, void *pReq, int32_t len, SRpcMsg *pRsp) {
if (!pVnode->restored) {
vInfo("vgId:%d, ignore trim req during restoring. ver:%" PRId64, TD_VID(pVnode), ver);
return 0;
}
int32_t code = 0;
SVTrimDbReq trimReq = {0};

View File

@ -990,6 +990,9 @@ static int32_t qExplainResNodeToRowsImpl(SExplainResNode *pResNode, SExplainCtx
EXPLAIN_ROW_APPEND_SLIMIT(pIntNode->window.node.pSlimit);
EXPLAIN_ROW_END();
QRY_ERR_RET(qExplainResAppendRow(ctx, tbuf, tlen, level + 1));
EXPLAIN_ROW_NEW(level + 1, EXPLAIN_TIMERANGE_FORMAT, pIntNode->timeRange.skey, pIntNode->timeRange.ekey);
EXPLAIN_ROW_END();
QRY_ERR_RET(qExplainResAppendRow(ctx, tbuf, tlen, level + 1));
uint8_t precision = qExplainGetIntervalPrecision(pIntNode);
int64_t time1 = -1;
int64_t time2 = -1;

View File

@ -2289,7 +2289,9 @@ SInterval extractIntervalInfo(const STableScanPhysiNode* pTableScanNode) {
.slidingUnit = pTableScanNode->slidingUnit,
.offset = pTableScanNode->offset,
.precision = pTableScanNode->scan.node.pOutputDataBlockDesc->precision,
.timeRange = pTableScanNode->scanRange,
};
calcIntervalAutoOffset(&interval);
return interval;
}
@ -2409,13 +2411,14 @@ void getInitialStartTimeWindow(SInterval* pInterval, TSKEY ts, STimeWindow* w, b
int64_t key = w->skey;
while (key < ts) { // moving towards end
key = taosTimeAdd(key, pInterval->sliding, pInterval->slidingUnit, pInterval->precision);
key = getNextTimeWindowStart(pInterval, key, TSDB_ORDER_ASC);
if (key > ts) {
break;
}
w->skey = key;
}
w->ekey = taosTimeAdd(w->skey, pInterval->interval, pInterval->intervalUnit, pInterval->precision) - 1;
}
}
@ -2428,14 +2431,12 @@ static STimeWindow doCalculateTimeWindow(int64_t ts, SInterval* pInterval) {
}
STimeWindow getFirstQualifiedTimeWindow(int64_t ts, STimeWindow* pWindow, SInterval* pInterval, int32_t order) {
int32_t factor = (order == TSDB_ORDER_ASC) ? -1 : 1;
STimeWindow win = *pWindow;
STimeWindow save = win;
while (win.skey <= ts && win.ekey >= ts) {
save = win;
win.skey = taosTimeAdd(win.skey, factor * pInterval->sliding, pInterval->slidingUnit, pInterval->precision);
win.ekey = taosTimeAdd(win.ekey, factor * pInterval->sliding, pInterval->slidingUnit, pInterval->precision);
// get previous time window
getNextTimeWindow(pInterval, &win, order == TSDB_ORDER_ASC ? TSDB_ORDER_DESC : TSDB_ORDER_ASC);
}
return save;
@ -2448,7 +2449,6 @@ STimeWindow getActiveTimeWindow(SDiskbasedBuf* pBuf, SResultRowInfo* pResultRowI
STimeWindow w = {0};
if (pResultRowInfo->cur.pageId == -1) { // the first window, from the previous stored value
getInitialStartTimeWindow(pInterval, ts, &w, (order == TSDB_ORDER_ASC));
w.ekey = taosTimeGetIntervalEnd(w.skey, pInterval);
return w;
}
@ -2471,19 +2471,17 @@ STimeWindow getActiveTimeWindow(SDiskbasedBuf* pBuf, SResultRowInfo* pResultRowI
return w;
}
void getNextTimeWindow(const SInterval* pInterval, STimeWindow* tw, int32_t order) {
int64_t slidingStart = 0;
if (pInterval->offset > 0) {
slidingStart = taosTimeAdd(tw->skey, -1 * pInterval->offset, pInterval->offsetUnit, pInterval->precision);
} else {
slidingStart = tw->skey;
}
TSKEY getNextTimeWindowStart(const SInterval* pInterval, TSKEY start, int32_t order) {
int32_t factor = GET_FORWARD_DIRECTION_FACTOR(order);
slidingStart = taosTimeAdd(slidingStart, factor * pInterval->sliding, pInterval->slidingUnit, pInterval->precision);
tw->skey = taosTimeAdd(slidingStart, pInterval->offset, pInterval->offsetUnit, pInterval->precision);
int64_t slidingEnd =
taosTimeAdd(slidingStart, pInterval->interval, pInterval->intervalUnit, pInterval->precision) - 1;
tw->ekey = taosTimeAdd(slidingEnd, pInterval->offset, pInterval->offsetUnit, pInterval->precision);
TSKEY nextStart = taosTimeAdd(start, -1 * pInterval->offset, pInterval->offsetUnit, pInterval->precision);
nextStart = taosTimeAdd(nextStart, factor * pInterval->sliding, pInterval->slidingUnit, pInterval->precision);
nextStart = taosTimeAdd(nextStart, pInterval->offset, pInterval->offsetUnit, pInterval->precision);
return nextStart;
}
void getNextTimeWindow(const SInterval* pInterval, STimeWindow* tw, int32_t order) {
tw->skey = getNextTimeWindowStart(pInterval, tw->skey, order);
tw->ekey = taosTimeAdd(tw->skey, pInterval->interval, pInterval->intervalUnit, pInterval->precision) - 1;
}
bool hasLimitOffsetInfo(SLimitInfo* pLimitInfo) {

View File

@ -600,7 +600,9 @@ int32_t createStreamIntervalSliceOperatorInfo(SOperatorInfo* downstream, SPhysiN
.intervalUnit = pIntervalPhyNode->intervalUnit,
.slidingUnit = pIntervalPhyNode->slidingUnit,
.offset = pIntervalPhyNode->offset,
.precision = ((SColumnNode*)pIntervalPhyNode->window.pTspk)->node.resType.precision};
.precision = ((SColumnNode*)pIntervalPhyNode->window.pTspk)->node.resType.precision,
.timeRange = pIntervalPhyNode->timeRange};
calcIntervalAutoOffset(&pInfo->interval);
pInfo->twAggSup =
(STimeWindowAggSupp){.waterMark = pIntervalPhyNode->window.watermark,

View File

@ -1932,7 +1932,9 @@ int32_t createStreamFinalIntervalOperatorInfo(SOperatorInfo* downstream, SPhysiN
.intervalUnit = pIntervalPhyNode->intervalUnit,
.slidingUnit = pIntervalPhyNode->slidingUnit,
.offset = pIntervalPhyNode->offset,
.precision = ((SColumnNode*)pIntervalPhyNode->window.pTspk)->node.resType.precision};
.precision = ((SColumnNode*)pIntervalPhyNode->window.pTspk)->node.resType.precision,
.timeRange = pIntervalPhyNode->timeRange};
calcIntervalAutoOffset(&pInfo->interval);
pInfo->twAggSup = (STimeWindowAggSupp){
.waterMark = pIntervalPhyNode->window.watermark,
.calTrigger = pIntervalPhyNode->window.triggerType,
@ -5342,7 +5344,9 @@ static int32_t createStreamSingleIntervalOperatorInfo(SOperatorInfo* downstream,
.slidingUnit = pIntervalPhyNode->slidingUnit,
.offset = pIntervalPhyNode->offset,
.precision = ((SColumnNode*)pIntervalPhyNode->window.pTspk)->node.resType.precision,
.timeRange = pIntervalPhyNode->timeRange,
};
calcIntervalAutoOffset(&pInfo->interval);
pInfo->twAggSup =
(STimeWindowAggSupp){.waterMark = pIntervalPhyNode->window.watermark,

View File

@ -1401,7 +1401,9 @@ int32_t createIntervalOperatorInfo(SOperatorInfo* downstream, SIntervalPhysiNode
.intervalUnit = pPhyNode->intervalUnit,
.slidingUnit = pPhyNode->slidingUnit,
.offset = pPhyNode->offset,
.precision = ((SColumnNode*)pPhyNode->window.pTspk)->node.resType.precision};
.precision = ((SColumnNode*)pPhyNode->window.pTspk)->node.resType.precision,
.timeRange = pPhyNode->timeRange};
calcIntervalAutoOffset(&interval);
STimeWindowAggSupp as = {
.waterMark = pPhyNode->window.watermark,
@ -2122,7 +2124,9 @@ int32_t createMergeAlignedIntervalOperatorInfo(SOperatorInfo* downstream, SMerge
.intervalUnit = pNode->intervalUnit,
.slidingUnit = pNode->slidingUnit,
.offset = pNode->offset,
.precision = ((SColumnNode*)pNode->window.pTspk)->node.resType.precision};
.precision = ((SColumnNode*)pNode->window.pTspk)->node.resType.precision,
.timeRange = pNode->timeRange};
calcIntervalAutoOffset(&interval);
SIntervalAggOperatorInfo* iaInfo = miaInfo->intervalAggOperatorInfo;
SExprSupp* pSup = &pOperator->exprSupp;
@ -2462,7 +2466,9 @@ int32_t createMergeIntervalOperatorInfo(SOperatorInfo* downstream, SMergeInterva
.intervalUnit = pIntervalPhyNode->intervalUnit,
.slidingUnit = pIntervalPhyNode->slidingUnit,
.offset = pIntervalPhyNode->offset,
.precision = ((SColumnNode*)pIntervalPhyNode->window.pTspk)->node.resType.precision};
.precision = ((SColumnNode*)pIntervalPhyNode->window.pTspk)->node.resType.precision,
.timeRange = pIntervalPhyNode->timeRange};
calcIntervalAutoOffset(&interval);
pMergeIntervalInfo->groupIntervals = tdListNew(sizeof(SGroupTimeWindow));

View File

@ -1564,8 +1564,8 @@ static void udfdPrintVersion() {
}
static int32_t udfdInitLog() {
char logName[12] = {0};
snprintf(logName, sizeof(logName), "%slog", "udfd");
const char *logName = "udfdlog";
TAOS_CHECK_RETURN(taosInitLogOutput(&logName));
return taosCreateLog(logName, 1, configDir, NULL, NULL, NULL, NULL, 0);
}

View File

@ -387,6 +387,7 @@ static int32_t intervalWindowNodeCopy(const SIntervalWindowNode* pSrc, SInterval
CLONE_NODE_FIELD(pOffset);
CLONE_NODE_FIELD(pSliding);
CLONE_NODE_FIELD(pFill);
COPY_OBJECT_FIELD(timeRange, sizeof(STimeWindow));
return TSDB_CODE_SUCCESS;
}
@ -615,6 +616,7 @@ static int32_t logicWindowCopy(const SWindowLogicNode* pSrc, SWindowLogicNode* p
COPY_SCALAR_FIELD(sliding);
COPY_SCALAR_FIELD(intervalUnit);
COPY_SCALAR_FIELD(slidingUnit);
COPY_OBJECT_FIELD(timeRange, sizeof(STimeWindow));
COPY_SCALAR_FIELD(sessionGap);
CLONE_NODE_FIELD(pTspk);
CLONE_NODE_FIELD(pTsEnd);
@ -805,6 +807,7 @@ static int32_t physiIntervalCopy(const SIntervalPhysiNode* pSrc, SIntervalPhysiN
COPY_SCALAR_FIELD(sliding);
COPY_SCALAR_FIELD(intervalUnit);
COPY_SCALAR_FIELD(slidingUnit);
COPY_OBJECT_FIELD(timeRange, sizeof(STimeWindow));
return TSDB_CODE_SUCCESS;
}

View File

@ -1014,6 +1014,8 @@ static const char* jkWindowLogicPlanOffset = "Offset";
static const char* jkWindowLogicPlanSliding = "Sliding";
static const char* jkWindowLogicPlanIntervalUnit = "IntervalUnit";
static const char* jkWindowLogicPlanSlidingUnit = "SlidingUnit";
static const char* jkWindowLogicPlanStartTime = "StartTime";
static const char* jkWindowLogicPlanEndTime = "EndTime";
static const char* jkWindowLogicPlanSessionGap = "SessionGap";
static const char* jkWindowLogicPlanTspk = "Tspk";
static const char* jkWindowLogicPlanStateExpr = "StateExpr";
@ -1046,6 +1048,12 @@ static int32_t logicWindowNodeToJson(const void* pObj, SJson* pJson) {
if (TSDB_CODE_SUCCESS == code) {
code = tjsonAddIntegerToObject(pJson, jkWindowLogicPlanSlidingUnit, pNode->slidingUnit);
}
if (TSDB_CODE_SUCCESS == code) {
code = tjsonAddIntegerToObject(pJson, jkWindowLogicPlanStartTime, pNode->timeRange.skey);
}
if (TSDB_CODE_SUCCESS == code) {
code = tjsonAddIntegerToObject(pJson, jkWindowLogicPlanEndTime, pNode->timeRange.ekey);
}
if (TSDB_CODE_SUCCESS == code) {
code = tjsonAddIntegerToObject(pJson, jkWindowLogicPlanSessionGap, pNode->sessionGap);
}
@ -1093,6 +1101,12 @@ static int32_t jsonToLogicWindowNode(const SJson* pJson, void* pObj) {
if (TSDB_CODE_SUCCESS == code) {
code = tjsonGetTinyIntValue(pJson, jkWindowLogicPlanSlidingUnit, &pNode->slidingUnit);
}
if (TSDB_CODE_SUCCESS == code) {
code = tjsonGetBigIntValue(pJson, jkWindowLogicPlanStartTime, &pNode->timeRange.skey);
}
if (TSDB_CODE_SUCCESS == code) {
code = tjsonGetBigIntValue(pJson, jkWindowLogicPlanEndTime, &pNode->timeRange.ekey);
}
if (TSDB_CODE_SUCCESS == code) {
code = tjsonGetBigIntValue(pJson, jkWindowLogicPlanSessionGap, &pNode->sessionGap);
}
@ -2944,6 +2958,8 @@ static const char* jkIntervalPhysiPlanOffset = "Offset";
static const char* jkIntervalPhysiPlanSliding = "Sliding";
static const char* jkIntervalPhysiPlanIntervalUnit = "intervalUnit";
static const char* jkIntervalPhysiPlanSlidingUnit = "slidingUnit";
static const char* jkIntervalPhysiPlanStartTime = "StartTime";
static const char* jkIntervalPhysiPlanEndTime = "EndTime";
static int32_t physiIntervalNodeToJson(const void* pObj, SJson* pJson) {
const SIntervalPhysiNode* pNode = (const SIntervalPhysiNode*)pObj;
@ -2964,6 +2980,12 @@ static int32_t physiIntervalNodeToJson(const void* pObj, SJson* pJson) {
if (TSDB_CODE_SUCCESS == code) {
code = tjsonAddIntegerToObject(pJson, jkIntervalPhysiPlanSlidingUnit, pNode->slidingUnit);
}
if (TSDB_CODE_SUCCESS == code) {
code = tjsonAddIntegerToObject(pJson, jkIntervalPhysiPlanStartTime, pNode->timeRange.skey);
}
if (TSDB_CODE_SUCCESS == code) {
code = tjsonAddIntegerToObject(pJson, jkIntervalPhysiPlanEndTime, pNode->timeRange.ekey);
}
return code;
}
@ -2987,6 +3009,12 @@ static int32_t jsonToPhysiIntervalNode(const SJson* pJson, void* pObj) {
if (TSDB_CODE_SUCCESS == code) {
code = tjsonGetTinyIntValue(pJson, jkIntervalPhysiPlanSlidingUnit, &pNode->slidingUnit);
}
if (TSDB_CODE_SUCCESS == code) {
code = tjsonGetBigIntValue(pJson, jkIntervalPhysiPlanStartTime, &pNode->timeRange.skey);
}
if (TSDB_CODE_SUCCESS == code) {
code = tjsonGetBigIntValue(pJson, jkIntervalPhysiPlanEndTime, &pNode->timeRange.ekey);
}
return code;
}
@ -5058,6 +5086,8 @@ static const char* jkIntervalWindowOffset = "Offset";
static const char* jkIntervalWindowSliding = "Sliding";
static const char* jkIntervalWindowFill = "Fill";
static const char* jkIntervalWindowTsPk = "TsPk";
static const char* jkIntervalStartTime = "StartTime";
static const char* jkIntervalEndTime = "EndTime";
static int32_t intervalWindowNodeToJson(const void* pObj, SJson* pJson) {
const SIntervalWindowNode* pNode = (const SIntervalWindowNode*)pObj;
@ -5075,6 +5105,12 @@ static int32_t intervalWindowNodeToJson(const void* pObj, SJson* pJson) {
if (TSDB_CODE_SUCCESS == code) {
code = tjsonAddObject(pJson, jkIntervalWindowTsPk, nodeToJson, pNode->pCol);
}
if (TSDB_CODE_SUCCESS == code) {
code = tjsonAddIntegerToObject(pJson, jkIntervalStartTime, pNode->timeRange.skey);
}
if (TSDB_CODE_SUCCESS == code) {
code = tjsonAddIntegerToObject(pJson, jkIntervalEndTime, pNode->timeRange.ekey);
}
return code;
}
@ -5095,6 +5131,12 @@ static int32_t jsonToIntervalWindowNode(const SJson* pJson, void* pObj) {
if (TSDB_CODE_SUCCESS == code) {
code = jsonToNodeObject(pJson, jkIntervalWindowTsPk, &pNode->pCol);
}
if (TSDB_CODE_SUCCESS == code) {
code = tjsonGetBigIntValue(pJson, jkIntervalStartTime, &pNode->timeRange.skey);
}
if (TSDB_CODE_SUCCESS == code) {
code = tjsonGetBigIntValue(pJson, jkIntervalEndTime, &pNode->timeRange.ekey);
}
return code;
}

View File

@ -3250,7 +3250,7 @@ static int32_t msgToPhysiWindowNode(STlvDecoder* pDecoder, void* pObj) {
return code;
}
enum { PHY_INTERVAL_CODE_WINDOW = 1, PHY_INTERVAL_CODE_INLINE_ATTRS };
enum { PHY_INTERVAL_CODE_WINDOW = 1, PHY_INTERVAL_CODE_INLINE_ATTRS, PHY_INTERVAL_CODE_TIME_RANGE };
static int32_t physiIntervalNodeInlineToMsg(const void* pObj, STlvEncoder* pEncoder) {
const SIntervalPhysiNode* pNode = (const SIntervalPhysiNode*)pObj;
@ -3279,6 +3279,9 @@ static int32_t physiIntervalNodeToMsg(const void* pObj, STlvEncoder* pEncoder) {
if (TSDB_CODE_SUCCESS == code) {
code = tlvEncodeObj(pEncoder, PHY_INTERVAL_CODE_INLINE_ATTRS, physiIntervalNodeInlineToMsg, pNode);
}
if (TSDB_CODE_SUCCESS == code) {
code = tlvEncodeObj(pEncoder, PHY_INTERVAL_CODE_TIME_RANGE, timeWindowToMsg, &pNode->timeRange);
}
return code;
}
@ -3316,6 +3319,8 @@ static int32_t msgToPhysiIntervalNode(STlvDecoder* pDecoder, void* pObj) {
case PHY_INTERVAL_CODE_INLINE_ATTRS:
code = tlvDecodeObjFromTlv(pTlv, msgToPhysiIntervalNodeInline, pNode);
break;
case PHY_INTERVAL_CODE_TIME_RANGE:
code = tlvDecodeObjFromTlv(pTlv, msgToTimeWindow, &pNode->timeRange);
break;
default:
break;
}

View File

@ -143,6 +143,8 @@ TEST_F(NodesCloneTest, intervalWindow) {
if (NULL != pSrcNode->pFill) {
ASSERT_EQ(nodeType(pSrcNode->pFill), nodeType(pDstNode->pFill));
}
ASSERT_EQ(pSrcNode->timeRange.skey, pDstNode->timeRange.skey);
ASSERT_EQ(pSrcNode->timeRange.ekey, pDstNode->timeRange.ekey);
});
std::unique_ptr<SNode, void (*)(SNode*)> srcNode(nullptr, nodesDestroyNode);
@ -156,6 +158,8 @@ TEST_F(NodesCloneTest, intervalWindow) {
code = nodesMakeNode(QUERY_NODE_VALUE, &pNode->pOffset);
code = nodesMakeNode(QUERY_NODE_VALUE, &pNode->pSliding);
code = nodesMakeNode(QUERY_NODE_FILL, &pNode->pFill);
pNode->timeRange.skey = 1666756692907;
pNode->timeRange.ekey = 1666756699907;
return srcNode.get();
}());
}

View File

@ -1523,6 +1523,9 @@ twindow_clause_opt(A) ::=
INTERVAL NK_LP interval_sliding_duration_literal(B) NK_COMMA
interval_sliding_duration_literal(C) NK_RP
sliding_opt(D) fill_opt(E). { A = createIntervalWindowNode(pCxt, releaseRawExprNode(pCxt, B), releaseRawExprNode(pCxt, C), D, E); }
twindow_clause_opt(A) ::=
INTERVAL NK_LP interval_sliding_duration_literal(B) NK_COMMA
AUTO(C) NK_RP sliding_opt(D) fill_opt(E). { A = createIntervalWindowNode(pCxt, releaseRawExprNode(pCxt, B), createDurationValueNode(pCxt, &C), D, E); }
twindow_clause_opt(A) ::=
EVENT_WINDOW START WITH search_condition(B) END WITH search_condition(C). { A = createEventWindowNode(pCxt, B, C); }
twindow_clause_opt(A) ::=

View File

@ -1399,6 +1399,7 @@ SNode* createIntervalWindowNode(SAstCreateContext* pCxt, SNode* pInterval, SNode
interval->pOffset = pOffset;
interval->pSliding = pSliding;
interval->pFill = pFill;
interval->timeRange = TSWINDOW_INITIALIZER;
return (SNode*)interval;
_err:
nodesDestroyNode((SNode*)interval);

View File

@ -247,8 +247,8 @@ static int32_t parseBoundColumns(SInsertParseContext* pCxt, const char** pSql, E
return code;
}
static int32_t parseTimestampOrInterval(const char** end, SToken* pToken, int16_t timePrec, int64_t* ts, int64_t* interval,
SMsgBuf* pMsgBuf, bool* isTs) {
static int32_t parseTimestampOrInterval(const char** end, SToken* pToken, int16_t timePrec, int64_t* ts,
int64_t* interval, SMsgBuf* pMsgBuf, bool* isTs) {
if (pToken->type == TK_NOW) {
*isTs = true;
*ts = taosGetTimestamp(timePrec);
@ -1387,7 +1387,16 @@ static int32_t getTableDataCxt(SInsertParseContext* pCxt, SVnodeModifyOpStmt* pS
}
char tbFName[TSDB_TABLE_FNAME_LEN];
int32_t code = tNameExtractFullName(&pStmt->targetTableName, tbFName);
int32_t code = 0;
if (pCxt->preCtbname) {
tstrncpy(pStmt->targetTableName.tname, pStmt->usingTableName.tname, sizeof(pStmt->targetTableName.tname));
tstrncpy(pStmt->targetTableName.dbname, pStmt->usingTableName.dbname, sizeof(pStmt->targetTableName.dbname));
pStmt->targetTableName.type = TSDB_SUPER_TABLE;
pStmt->pTableMeta->tableType = TSDB_SUPER_TABLE;
}
code = tNameExtractFullName(&pStmt->targetTableName, tbFName);
if (TSDB_CODE_SUCCESS != code) {
return code;
}
@ -1812,8 +1821,10 @@ static int32_t doGetStbRowValues(SInsertParseContext* pCxt, SVnodeModifyOpStmt*
SArray* pTagVals = pStbRowsCxt->aTagVals;
bool canParseTagsAfter = !pStbRowsCxt->pTagCond && !pStbRowsCxt->hasTimestampTag;
int32_t numOfCols = getNumOfColumns(pStbRowsCxt->pStbMeta);
int32_t numOfTags = getNumOfTags(pStbRowsCxt->pStbMeta);
int32_t tbnameIdx = getTbnameSchemaIndex(pStbRowsCxt->pStbMeta);
uint8_t precision = getTableInfo(pStbRowsCxt->pStbMeta).precision;
int idx = 0;
for (int i = 0; i < pCols->numOfBound && (code) == TSDB_CODE_SUCCESS; ++i) {
const char* pTmpSql = *ppSql;
bool ignoreComma = false;
@ -1831,13 +1842,37 @@ static int32_t doGetStbRowValues(SInsertParseContext* pCxt, SVnodeModifyOpStmt*
if (TK_NK_QUESTION == pToken->type) {
pCxt->isStmtBind = true;
pStmt->usingTableProcessing = true;
if (pCols->pColIndex[i] == tbnameIdx) {
pCxt->preCtbname = false;
*bFoundTbName = true;
}
if (NULL == pCxt->pComCxt->pStmtCb) {
code = buildSyntaxErrMsg(&pCxt->msg, "? only used in stmt", pToken->z);
break;
char* tbName = NULL;
if ((*pCxt->pComCxt->pStmtCb->getTbNameFn)(pCxt->pComCxt->pStmtCb->pStmt, &tbName) == TSDB_CODE_SUCCESS) {
tstrncpy(pStbRowsCxt->ctbName.tname, tbName, sizeof(pStbRowsCxt->ctbName.tname));
tstrncpy(pStmt->usingTableName.tname, pStmt->targetTableName.tname, sizeof(pStmt->usingTableName.tname));
tstrncpy(pStmt->targetTableName.tname, tbName, sizeof(pStmt->targetTableName.tname));
tstrncpy(pStmt->usingTableName.dbname, pStmt->targetTableName.dbname, sizeof(pStmt->usingTableName.dbname));
pStmt->usingTableName.type = 1;
*bFoundTbName = true;
}
} else if (pCols->pColIndex[i] < numOfCols) {
// bind column
} else if (pCols->pColIndex[i] < tbnameIdx) {
if (pCxt->tags.pColIndex == NULL) {
pCxt->tags.pColIndex = taosMemoryCalloc(numOfTags, sizeof(int16_t));
if (NULL == pCxt->tags.pColIndex) {
return terrno;
}
}
if (!(idx < numOfTags)) {
return buildInvalidOperationMsg(&pCxt->msg, "not expected numOfTags");
}
pCxt->tags.pColIndex[idx++] = pCols->pColIndex[i] - numOfCols;
pCxt->tags.mixTagsCols = true;
pCxt->tags.numOfBound++;
pCxt->tags.numOfCols++;
} else {
return buildInvalidOperationMsg(&pCxt->msg, "not expected numOfBound");
}
} else {
if (pCols->pColIndex[i] < numOfCols) {
@ -1882,6 +1917,7 @@ static int32_t doGetStbRowValues(SInsertParseContext* pCxt, SVnodeModifyOpStmt*
}
}
}
return code;
}
@ -1901,13 +1937,17 @@ static int32_t getStbRowValues(SInsertParseContext* pCxt, SVnodeModifyOpStmt* pS
code = doGetStbRowValues(pCxt, pStmt, ppSql, pStbRowsCxt, pToken, pCols, pSchemas, tagTokens, tagSchemas,
&numOfTagTokens, &bFoundTbName);
if (code == TSDB_CODE_SUCCESS && !bFoundTbName) {
code = buildSyntaxErrMsg(&pCxt->msg, "tbname value expected", pOrigSql);
if (code != TSDB_CODE_SUCCESS) {
return code;
}
if (code == TSDB_CODE_SUCCESS && pStbRowsCxt->ctbName.tname[0] == '\0') {
*pGotRow = true;
return TSDB_CODE_TSC_STMT_TBNAME_ERROR;
if (!bFoundTbName) {
if (!pCxt->isStmtBind) {
code = buildSyntaxErrMsg(&pCxt->msg, "tbname value expected", pOrigSql);
} else {
*pGotRow = true;
return TSDB_CODE_TSC_STMT_TBNAME_ERROR;
}
}
bool ctbFirst = true;
@ -2050,14 +2090,24 @@ static int32_t parseOneStbRow(SInsertParseContext* pCxt, SVnodeModifyOpStmt* pSt
code = processCtbAutoCreationAndCtbMeta(pCxt, pStmt, pStbRowsCxt);
}
if (code == TSDB_CODE_SUCCESS) {
code =
insGetTableDataCxt(pStmt->pTableBlockHashObj, &pStbRowsCxt->pCtbMeta->uid, sizeof(pStbRowsCxt->pCtbMeta->uid),
pStbRowsCxt->pCtbMeta, &pStbRowsCxt->pCreateCtbReq, ppTableDataCxt, false, true);
if (pCxt->isStmtBind) {
char ctbFName[TSDB_TABLE_FNAME_LEN];
code = tNameExtractFullName(&pStbRowsCxt->ctbName, ctbFName);
if (code != TSDB_CODE_SUCCESS) {
return code;
}
code = insGetTableDataCxt(pStmt->pTableBlockHashObj, ctbFName, strlen(ctbFName), pStbRowsCxt->pCtbMeta,
&pStbRowsCxt->pCreateCtbReq, ppTableDataCxt, true, true);
} else {
code =
insGetTableDataCxt(pStmt->pTableBlockHashObj, &pStbRowsCxt->pCtbMeta->uid, sizeof(pStbRowsCxt->pCtbMeta->uid),
pStbRowsCxt->pCtbMeta, &pStbRowsCxt->pCreateCtbReq, ppTableDataCxt, false, true);
}
}
if (code == TSDB_CODE_SUCCESS) {
code = initTableColSubmitData(*ppTableDataCxt);
}
if (code == TSDB_CODE_SUCCESS) {
if (code == TSDB_CODE_SUCCESS && !pCxt->isStmtBind) {
SRow** pRow = taosArrayReserve((*ppTableDataCxt)->pData->aRowP, 1);
code = tRowBuild(pStbRowsCxt->aColVals, (*ppTableDataCxt)->pSchema, pRow);
if (TSDB_CODE_SUCCESS == code) {

View File

@ -941,8 +941,8 @@ int32_t buildBoundFields(int32_t numOfBound, int16_t* boundColumns, SSchema* pSc
int32_t buildStbBoundFields(SBoundColInfo boundColsInfo, SSchema* pSchema, int32_t* fieldNum, TAOS_FIELD_STB** fields,
STableMeta* pMeta, void* boundTags, bool preCtbname) {
SBoundColInfo* tags = (SBoundColInfo*)boundTags;
int32_t numOfBound = boundColsInfo.numOfBound + tags->numOfBound + (preCtbname ? 1 : 0);
int32_t idx = 0;
int32_t numOfBound = boundColsInfo.numOfBound + (tags->mixTagsCols ? 0 : tags->numOfBound) + (preCtbname ? 1 : 0);
int32_t idx = 0;
if (fields != NULL) {
*fields = taosMemoryCalloc(numOfBound, sizeof(TAOS_FIELD_STB));
if (NULL == *fields) {
@ -957,30 +957,25 @@ int32_t buildStbBoundFields(SBoundColInfo boundColsInfo, SSchema* pSchema, int32
idx++;
}
if (tags->numOfBound > 0) {
SSchema* pSchema = getTableTagSchema(pMeta);
if (TSDB_DATA_TYPE_TIMESTAMP == pSchema->type) {
(*fields)[0].precision = pMeta->tableInfo.precision;
}
if (tags->numOfBound > 0 && !tags->mixTagsCols) {
SSchema* tagSchema = getTableTagSchema(pMeta);
for (int32_t i = 0; i < tags->numOfBound; ++i) {
(*fields)[idx].field_type = TAOS_FIELD_TAG;
SSchema* schema = &pSchema[tags->pColIndex[i]];
SSchema* schema = &tagSchema[tags->pColIndex[i]];
tstrncpy((*fields)[idx].name, schema->name, sizeof((*fields)[idx].name));
(*fields)[idx].type = schema->type;
(*fields)[idx].bytes = schema->bytes;
if (TSDB_DATA_TYPE_TIMESTAMP == schema->type) {
(*fields)[idx].precision = pMeta->tableInfo.precision;
}
idx++;
}
}
if (boundColsInfo.numOfBound > 0) {
SSchema* schema = &pSchema[boundColsInfo.pColIndex[0]];
if (TSDB_DATA_TYPE_TIMESTAMP == schema->type) {
(*fields)[0].precision = pMeta->tableInfo.precision;
}
for (int32_t i = 0; i < boundColsInfo.numOfBound; ++i) {
int16_t idxCol = boundColsInfo.pColIndex[i];
@ -990,7 +985,8 @@ int32_t buildStbBoundFields(SBoundColInfo boundColsInfo, SSchema* pSchema, int32
tstrncpy((*fields)[idx].name, "tbname", sizeof((*fields)[idx].name));
(*fields)[idx].type = TSDB_DATA_TYPE_BINARY;
(*fields)[idx].bytes = TSDB_TABLE_FNAME_LEN;
idx++;
idx++;
continue;
} else if (idxCol < pMeta->tableInfo.numOfColumns) {
(*fields)[idx].field_type = TAOS_FIELD_COL;
@ -1002,6 +998,9 @@ int32_t buildStbBoundFields(SBoundColInfo boundColsInfo, SSchema* pSchema, int32
tstrncpy((*fields)[idx].name, schema->name, sizeof((*fields)[idx].name));
(*fields)[idx].type = schema->type;
(*fields)[idx].bytes = schema->bytes;
if (TSDB_DATA_TYPE_TIMESTAMP == schema->type) {
(*fields)[idx].precision = pMeta->tableInfo.precision;
}
idx++;
}
}

View File

@ -191,6 +191,7 @@ int32_t insInitBoundColsInfo(int32_t numOfBound, SBoundColInfo* pInfo) {
pInfo->numOfCols = numOfBound;
pInfo->numOfBound = numOfBound;
pInfo->hasBoundCols = false;
pInfo->mixTagsCols = false;
pInfo->pColIndex = taosMemoryCalloc(numOfBound, sizeof(int16_t));
if (NULL == pInfo->pColIndex) {
return terrno;
@ -204,6 +205,7 @@ int32_t insInitBoundColsInfo(int32_t numOfBound, SBoundColInfo* pInfo) {
void insResetBoundColsInfo(SBoundColInfo* pInfo) {
pInfo->numOfBound = pInfo->numOfCols;
pInfo->hasBoundCols = false;
pInfo->mixTagsCols = false;
for (int32_t i = 0; i < pInfo->numOfCols; ++i) {
pInfo->pColIndex[i] = i;
}
@ -739,7 +741,7 @@ int32_t insMergeTableDataCxt(SHashObj* pTableHash, SArray** pVgDataBlocks, bool
STableDataCxt* pTableCxt = *(STableDataCxt**)p;
if (colFormat) {
SColData* pCol = taosArrayGet(pTableCxt->pData->aCol, 0);
if (pCol->nVal <= 0) {
if (pCol && pCol->nVal <= 0) {
p = taosHashIterate(pTableHash, p);
continue;
}

View File

@ -351,6 +351,7 @@ static SKeyword keywordTable[] = {
{"IS_IMPORT", TK_IS_IMPORT},
{"FORCE_WINDOW_CLOSE", TK_FORCE_WINDOW_CLOSE},
{"DISK_INFO", TK_DISK_INFO},
{"AUTO", TK_AUTO},
};
// clang-format on

View File

@ -1994,8 +1994,11 @@ static int32_t parseBoolFromValueNode(STranslateContext* pCxt, SValueNode* pVal)
}
static EDealRes translateDurationValue(STranslateContext* pCxt, SValueNode* pVal) {
if (parseNatualDuration(pVal->literal, strlen(pVal->literal), &pVal->datum.i, &pVal->unit,
pVal->node.resType.precision, false) != TSDB_CODE_SUCCESS) {
if (strncmp(pVal->literal, AUTO_DURATION_LITERAL, strlen(AUTO_DURATION_LITERAL) + 1) == 0) {
pVal->datum.i = AUTO_DURATION_VALUE;
pVal->unit = getPrecisionUnit(pVal->node.resType.precision);
} else if (parseNatualDuration(pVal->literal, strlen(pVal->literal), &pVal->datum.i, &pVal->unit,
pVal->node.resType.precision, false) != TSDB_CODE_SUCCESS) {
return generateDealNodeErrMsg(pCxt, TSDB_CODE_PAR_WRONG_VALUE_TYPE, pVal->literal);
}
*(int64_t*)&pVal->typeData = pVal->datum.i;
@ -5844,7 +5847,13 @@ static int32_t checkIntervalWindow(STranslateContext* pCxt, SIntervalWindowNode*
if (NULL != pInterval->pOffset) {
SValueNode* pOffset = (SValueNode*)pInterval->pOffset;
if (pOffset->datum.i <= 0) {
if (pOffset->datum.i == AUTO_DURATION_VALUE) {
if (pOffset->unit != getPrecisionUnit(precision)) {
parserError("invalid offset unit %d for auto offset with precision %u", pOffset->unit, precision);
return TSDB_CODE_INVALID_PARA;
}
} else if (pOffset->datum.i < 0) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_INTER_OFFSET_NEGATIVE);
}
if (pInter->unit == 'n' && pOffset->unit == 'y') {
@ -5908,9 +5917,52 @@ static int32_t checkIntervalWindow(STranslateContext* pCxt, SIntervalWindowNode*
return TSDB_CODE_SUCCESS;
}
void tryCalcIntervalAutoOffset(SIntervalWindowNode *pInterval) {
SValueNode* pOffset = (SValueNode*)pInterval->pOffset;
uint8_t precision = ((SColumnNode*)pInterval->pCol)->node.resType.precision;
SValueNode* pInter = (SValueNode*)pInterval->pInterval;
SValueNode* pSliding = (SValueNode*)pInterval->pSliding;
if (pOffset == NULL || pOffset->datum.i != AUTO_DURATION_VALUE) {
return;
}
// ignore auto offset if not applicable
if (pInterval->timeRange.skey == INT64_MIN) {
pOffset->datum.i = 0;
return;
}
SInterval interval = {.interval = pInter->datum.i,
.sliding = (pSliding != NULL) ? pSliding->datum.i : pInter->datum.i,
.intervalUnit = pInter->unit,
.slidingUnit = (pSliding != NULL) ? pSliding->unit : pInter->unit,
.offset = pOffset->datum.i,
.precision = precision,
.timeRange = pInterval->timeRange};
/**
* Considering that the client and server may be in different time zones,
* these situations need to be deferred to the server for calculation.
*/
if (IS_CALENDAR_TIME_DURATION(interval.intervalUnit) || interval.intervalUnit == 'd' ||
interval.intervalUnit == 'w' || IS_CALENDAR_TIME_DURATION(interval.slidingUnit) || interval.slidingUnit == 'd' ||
interval.slidingUnit == 'w') {
return;
}
calcIntervalAutoOffset(&interval);
pOffset->datum.i = interval.offset;
}
static int32_t translateIntervalWindow(STranslateContext* pCxt, SSelectStmt* pSelect) {
SIntervalWindowNode* pInterval = (SIntervalWindowNode*)pSelect->pWindow;
int32_t code = checkIntervalWindow(pCxt, pInterval);
int32_t code = TSDB_CODE_SUCCESS;
pInterval->timeRange = pSelect->timeRange;
tryCalcIntervalAutoOffset(pInterval);
code = checkIntervalWindow(pCxt, pInterval);
if (TSDB_CODE_SUCCESS == code) {
code = translateFill(pCxt, pSelect, pInterval);
}
@ -9051,6 +9103,13 @@ static int32_t buildIntervalForSampleAst(SSampleAstInfo* pInfo, SNode** pOutput)
TSWAP(pInterval->pInterval, pInfo->pInterval);
TSWAP(pInterval->pOffset, pInfo->pOffset);
TSWAP(pInterval->pSliding, pInfo->pSliding);
SValueNode* pOffset = (SValueNode*)pInterval->pOffset;
if (pOffset && pOffset->datum.i < 0) {
parserError("%s failed for invalid interval offset %" PRId64, __func__, pOffset->datum.i);
return TSDB_CODE_INVALID_PARA;
}
pInterval->pCol = NULL;
code = nodesMakeNode(QUERY_NODE_COLUMN, (SNode**)&pInterval->pCol);
if (NULL == pInterval->pCol) {
@ -10062,6 +10121,10 @@ int32_t createIntervalFromCreateSmaIndexStmt(SCreateIndexStmt* pStmt, SInterval*
pInterval->slidingUnit =
NULL != pStmt->pOptions->pSliding ? ((SValueNode*)pStmt->pOptions->pSliding)->unit : pInterval->intervalUnit;
pInterval->precision = pStmt->pOptions->tsPrecision;
if (pInterval->offset < 0) {
parserError("%s failed for invalid interval offset %" PRId64, __func__, pInterval->offset);
return TSDB_CODE_INVALID_PARA;
}
return TSDB_CODE_SUCCESS;
}
@ -11907,6 +11970,7 @@ static int32_t buildIntervalForCreateStream(SCreateStreamStmt* pStmt, SInterval*
pInterval->slidingUnit =
(NULL != pWindow->pSliding ? ((SValueNode*)pWindow->pSliding)->unit : pInterval->intervalUnit);
pInterval->precision = ((SColumnNode*)pWindow->pCol)->node.resType.precision;
pInterval->timeRange = pWindow->timeRange;
return code;
}

View File

@ -1186,6 +1186,7 @@ static int32_t createWindowLogicNodeByInterval(SLogicPlanContext* pCxt, SInterva
return code;
}
pWindow->isPartTb = pSelect->pPartitionByList ? keysHasTbname(pSelect->pPartitionByList) : 0;
pWindow->timeRange = pInterval->timeRange;
return createWindowLogicNodeFinalize(pCxt, pSelect, pWindow, pLogicNode);
}

View File

@ -6716,6 +6716,20 @@ static bool tsmaOptMayBeOptimized(SLogicNode* pNode, void* pCtx) {
}
}
if (nodeType(pParent) == QUERY_NODE_LOGIC_PLAN_WINDOW) {
SWindowLogicNode* pWindow = (SWindowLogicNode*)pParent;
if (pWindow->winType == WINDOW_TYPE_INTERVAL && pWindow->offset == AUTO_DURATION_VALUE) {
SLogicNode* pRootNode = getLogicNodeRootNode(pParent);
// When using interval auto offset, tsma optimization cannot take effect.
// Unless the user explicitly specifies not to use tsma, an error should be reported.
if (!getOptHint(pRootNode->pHint, HINT_SKIP_TSMA)) {
planError("%s failed since tsma optimization cannot be applied with interval auto offset", __func__);
*(int32_t*)pCtx = TSDB_CODE_TSMA_INVALID_AUTO_OFFSET;
return false;
}
}
}
return true;
}
return false;
@ -7489,7 +7503,7 @@ static bool tsmaOptIsUsingTsmas(STSMAOptCtx* pCtx) {
static int32_t tsmaOptimize(SOptimizeContext* pCxt, SLogicSubplan* pLogicSubplan) {
int32_t code = 0;
STSMAOptCtx tsmaOptCtx = {0};
SScanLogicNode* pScan = (SScanLogicNode*)optFindPossibleNode(pLogicSubplan->pNode, tsmaOptMayBeOptimized, NULL);
SScanLogicNode* pScan = (SScanLogicNode*)optFindPossibleNode(pLogicSubplan->pNode, tsmaOptMayBeOptimized, &code);
if (!pScan) return code;
SLogicNode* pRootNode = getLogicNodeRootNode((SLogicNode*)pScan);

View File

@ -2248,6 +2248,7 @@ static int32_t createIntervalPhysiNode(SPhysiPlanContext* pCxt, SNodeList* pChil
pInterval->sliding = pWindowLogicNode->sliding;
pInterval->intervalUnit = pWindowLogicNode->intervalUnit;
pInterval->slidingUnit = pWindowLogicNode->slidingUnit;
pInterval->timeRange = pWindowLogicNode->timeRange;
int32_t code = createWindowPhysiNodeFinalize(pCxt, pChildren, &pInterval->window, pWindowLogicNode);
if (TSDB_CODE_SUCCESS == code) {

View File

@ -1464,7 +1464,7 @@ int32_t syncNodeRestore(SSyncNode* pSyncNode) {
// if (endIndex != lastVer + 1) return TSDB_CODE_SYN_INTERNAL_ERROR;
pSyncNode->commitIndex = TMAX(pSyncNode->commitIndex, commitIndex);
sInfo("vgId:%d, restore sync until commitIndex:%" PRId64, pSyncNode->vgId, pSyncNode->commitIndex);
sInfo("vgId:%d, restore began, and keep syncing until commitIndex:%" PRId64, pSyncNode->vgId, pSyncNode->commitIndex);
if (pSyncNode->fsmState != SYNC_FSM_STATE_INCOMPLETE &&
(code = syncLogBufferCommit(pSyncNode->pLogBuf, pSyncNode, pSyncNode->commitIndex)) < 0) {
@ -1724,9 +1724,8 @@ void syncNodeResetElectTimer(SSyncNode* pSyncNode) {
electMS = syncUtilElectRandomMS(pSyncNode->electBaseLine, 2 * pSyncNode->electBaseLine);
}
// TODO check return value
if ((code = syncNodeRestartElectTimer(pSyncNode, electMS)) != 0) {
sError("vgId:%d, failed to restart elect timer since %s", pSyncNode->vgId, tstrerror(code));
sWarn("vgId:%d, failed to restart elect timer since %s", pSyncNode->vgId, tstrerror(code));
return;
};

View File

@ -1152,11 +1152,11 @@ int32_t syncLogReplProcessReply(SSyncLogReplMgr* pMgr, SSyncNode* pNode, SyncApp
int32_t code = 0;
if (pMgr->restored) {
if ((code = syncLogReplContinue(pMgr, pNode, pMsg)) != 0) {
sError("vgId:%d, failed to continue sync log repl since %s", pNode->vgId, tstrerror(code));
sWarn("vgId:%d, failed to continue sync log repl since %s", pNode->vgId, tstrerror(code));
}
} else {
if ((code = syncLogReplRecover(pMgr, pNode, pMsg)) != 0) {
sError("vgId:%d, failed to recover sync log repl since %s", pNode->vgId, tstrerror(code));
sWarn("vgId:%d, failed to recover sync log repl since %s", pNode->vgId, tstrerror(code));
}
}
(void)taosThreadMutexUnlock(&pBuf->mutex);

View File

@ -75,8 +75,10 @@ int32_t syncNodeReplicateWithoutLock(SSyncNode* pNode) {
continue;
}
SSyncLogReplMgr* pMgr = pNode->logReplMgrs[i];
if (syncLogReplStart(pMgr, pNode) != 0) {
sError("vgId:%d, failed to start log replication to dnode:%d", pNode->vgId, DID(&(pNode->replicasId[i])));
int32_t ret = 0;
if ((ret = syncLogReplStart(pMgr, pNode)) != 0) {
sWarn("vgId:%d, failed to start log replication to dnode:%d since %s", pNode->vgId, DID(&(pNode->replicasId[i])),
tstrerror(ret));
}
}

View File

@ -326,7 +326,7 @@ static int32_t walRepairLogFileTs(SWal* pWal, bool* updateMeta) {
}
walBuildLogName(pWal, pFileInfo->firstVer, fnameStr);
int32_t mtime = 0;
int64_t mtime = 0;
if (taosStatFile(fnameStr, NULL, &mtime, NULL) < 0) {
wError("vgId:%d, failed to stat file due to %s, file:%s", pWal->cfg.vgId, strerror(errno), fnameStr);

View File

@ -303,7 +303,7 @@ void taosRemoveOldFiles(const char *dirname, int32_t keepDays) {
if (strcmp(taosGetDirEntryName(de), ".") == 0 || strcmp(taosGetDirEntryName(de), "..") == 0) continue;
char filename[1024];
(void)snprintf(filename, sizeof(filename), "%s/%s", dirname, taosGetDirEntryName(de));
(void)snprintf(filename, sizeof(filename), "%s%s%s", dirname, TD_DIRSEP, taosGetDirEntryName(de));
if (taosDirEntryIsDir(de)) {
continue;
} else {
@ -349,7 +349,7 @@ int32_t taosExpandDir(const char *dirname, char *outname, int32_t maxlen) {
wordfree(&full_path);
// FALL THROUGH
default:
return code;
return terrno = TSDB_CODE_INVALID_PARA;
}
if (full_path.we_wordv != NULL && full_path.we_wordv[0] != NULL) {

View File

@ -89,7 +89,7 @@ int32_t osDefaultInit() {
}
tstrncpy(tsDataDir, TD_DATA_DIR_PATH, sizeof(tsDataDir));
tstrncpy(tsLogDir, TD_LOG_DIR_PATH, sizeof(tsLogDir));
if (strlen(tsTempDir) == 0){
if (strlen(tsTempDir) == 0) {
tstrncpy(tsTempDir, TD_TMP_DIR_PATH, sizeof(tsTempDir));
}

View File

@ -273,7 +273,7 @@ int32_t taosRenameFile(const char *oldName, const char *newName) {
#endif
}
int32_t taosStatFile(const char *path, int64_t *size, int32_t *mtime, int32_t *atime) {
int32_t taosStatFile(const char *path, int64_t *size, int64_t *mtime, int64_t *atime) {
OS_PARAM_CHECK(path);
#ifdef WINDOWS
struct _stati64 fileStat;
@ -544,7 +544,7 @@ int64_t taosLSeekFile(TdFilePtr pFile, int64_t offset, int32_t whence) {
return liOffset.QuadPart;
}
int32_t taosFStatFile(TdFilePtr pFile, int64_t *size, int32_t *mtime) {
int32_t taosFStatFile(TdFilePtr pFile, int64_t *size, int64_t *mtime) {
if (pFile == NULL || pFile->hFile == NULL) {
terrno = TSDB_CODE_INVALID_PARA;
return terrno;
@ -571,7 +571,7 @@ int32_t taosFStatFile(TdFilePtr pFile, int64_t *size, int32_t *mtime) {
ULARGE_INTEGER ull;
ull.LowPart = lastWriteTime.dwLowDateTime;
ull.HighPart = lastWriteTime.dwHighDateTime;
*mtime = (int32_t)((ull.QuadPart - 116444736000000000ULL) / 10000000ULL);
*mtime = (int64_t)((ull.QuadPart - 116444736000000000ULL) / 10000000ULL);
}
return 0;
}
@ -937,7 +937,7 @@ int64_t taosLSeekFile(TdFilePtr pFile, int64_t offset, int32_t whence) {
return ret;
}
int32_t taosFStatFile(TdFilePtr pFile, int64_t *size, int32_t *mtime) {
int32_t taosFStatFile(TdFilePtr pFile, int64_t *size, int64_t *mtime) {
if (pFile == NULL) {
terrno = TSDB_CODE_INVALID_PARA;
return terrno;

View File

@ -812,6 +812,7 @@ TAOS_DEFINE_ERROR(TSDB_CODE_TSMA_UNSUPPORTED_FUNC, "Tsma func not suppo
TAOS_DEFINE_ERROR(TSDB_CODE_TSMA_MUST_BE_DROPPED, "Tsma must be dropped first")
TAOS_DEFINE_ERROR(TSDB_CODE_TSMA_NAME_TOO_LONG, "Tsma name too long")
TAOS_DEFINE_ERROR(TSDB_CODE_TSMA_INVALID_RECURSIVE_INTERVAL,"Invalid recursive tsma interval")
TAOS_DEFINE_ERROR(TSDB_CODE_TSMA_INVALID_AUTO_OFFSET, "Tsma optimization cannot be applied with INTERVAL AUTO offset")
//rsma
TAOS_DEFINE_ERROR(TSDB_CODE_RSMA_INVALID_ENV, "Invalid rsma env")

View File

@ -51,6 +51,13 @@
#define LOG_EDITION_FLG ("C")
#endif
typedef enum {
LOG_OUTPUT_FILE = 0, // default
LOG_OUTPUT_STDOUT = 1, // stdout set by -o option on the command line
LOG_OUTPUT_STDERR = 2, // stderr set by -o option on the command line
LOG_OUTPUT_NULL = 4, // /dev/null set by -o option on the command line
} ELogOutputType;
typedef struct {
char *buffer;
int32_t buffStart;
@ -73,6 +80,7 @@ typedef struct {
int32_t openInProgress;
int64_t lastKeepFileSec;
int64_t timestampToday;
int8_t outputType; // ELogOutputType
pid_t pid;
char logName[PATH_MAX];
char slowLogName[PATH_MAX];
@ -86,6 +94,7 @@ static int8_t tsLogInited = 0;
static SLogObj tsLogObj = {.fileNum = 1, .slowHandle = NULL};
static int64_t tsAsyncLogLostLines = 0;
static int32_t tsDaylightActive; /* Currently in daylight saving time. */
static SRWLatch tsLogRotateLatch = 0;
bool tsLogEmbedded = 0;
bool tsAsyncLog = true;
@ -96,6 +105,7 @@ bool tsAssert = true;
#endif
int32_t tsNumOfLogLines = 10000000;
int32_t tsLogKeepDays = 0;
char *tsLogOutput = NULL;
LogFp tsLogFp = NULL;
int64_t tsNumOfErrorLogs = 0;
int64_t tsNumOfInfoLogs = 0;
@ -234,6 +244,58 @@ int32_t taosInitSlowLog() {
return 0;
}
int32_t taosInitLogOutput(const char **ppLogName) {
const char *pLog = tsLogOutput;
const char *pLogName = NULL;
if (pLog) {
if (!tIsValidFilePath(pLog, NULL)) {
fprintf(stderr, "invalid log output destination:%s, contains illegal char\n", pLog);
return TSDB_CODE_INVALID_CFG;
}
if (0 == strcasecmp(pLog, "stdout")) {
tsLogObj.outputType = LOG_OUTPUT_STDOUT;
if (ppLogName) *ppLogName = pLog;
return 0;
}
if (0 == strcasecmp(pLog, "stderr")) {
tsLogObj.outputType = LOG_OUTPUT_STDERR;
if (ppLogName) *ppLogName = pLog;
return 0;
}
if (0 == strcasecmp(pLog, "/dev/null")) {
tsLogObj.outputType = LOG_OUTPUT_NULL;
if (ppLogName) *ppLogName = pLog;
return 0;
}
int32_t len = strlen(pLog);
if (len < 1) {
fprintf(stderr, "invalid log output destination:%s, should not be empty\n", pLog);
return TSDB_CODE_INVALID_CFG;
}
const char *p = pLog + (len - 1);
if (*p == '/' || *p == '\\') {
return 0;
}
if ((p = strrchr(pLog, '/')) || (p = strrchr(pLog, '\\'))) {
pLogName = p + 1;
} else {
pLogName = pLog;
}
if (strcmp(pLogName, ".") == 0 || strcmp(pLogName, "..") == 0) {
fprintf(stderr, "invalid log output destination:%s\n", pLog);
return TSDB_CODE_INVALID_CFG;
}
if (!tIsValidFileName(pLogName, NULL)) {
fprintf(stderr, "invalid log output destination:%s, contains illegal char\n", pLog);
return TSDB_CODE_INVALID_CFG;
}
if (ppLogName) *ppLogName = pLogName;
}
return 0;
}
int32_t taosInitLog(const char *logName, int32_t maxFiles, bool tsc) {
if (atomic_val_compare_exchange_8(&tsLogInited, 0, 1) != 0) return 0;
int32_t code = osUpdate();
@ -241,6 +303,11 @@ int32_t taosInitLog(const char *logName, int32_t maxFiles, bool tsc) {
uError("failed to update os info, reason:%s", tstrerror(code));
}
if (tsLogObj.outputType == LOG_OUTPUT_STDOUT || tsLogObj.outputType == LOG_OUTPUT_STDERR ||
tsLogObj.outputType == LOG_OUTPUT_NULL) {
return 0;
}
TAOS_CHECK_RETURN(taosInitNormalLog(logName, maxFiles));
if (tsc) {
TAOS_CHECK_RETURN(taosInitSlowLog());
@ -283,6 +350,7 @@ void taosCloseLog() {
taosMemoryFreeClear(tsLogObj.logHandle);
tsLogObj.logHandle = NULL;
}
taosMemoryFreeClear(tsLogOutput);
}
static bool taosLockLogFile(TdFilePtr pFile) {
@ -340,10 +408,6 @@ static void taosKeepOldLog(char *oldName) {
}
}
}
if (tsLogKeepDays > 0) {
taosRemoveOldFiles(tsLogDir, tsLogKeepDays);
}
}
typedef struct {
TdFilePtr pOldFile;
@ -392,11 +456,16 @@ static OldFileKeeper *taosOpenNewFile() {
static void *taosThreadToCloseOldFile(void *param) {
if (!param) return NULL;
taosWLockLatch(&tsLogRotateLatch);
OldFileKeeper *oldFileKeeper = (OldFileKeeper *)param;
taosSsleep(20);
taosCloseLogByFd(oldFileKeeper->pOldFile);
taosKeepOldLog(oldFileKeeper->keepName);
taosMemoryFree(oldFileKeeper);
if (tsLogKeepDays > 0) {
taosRemoveOldFiles(tsLogDir, tsLogKeepDays);
}
taosWUnLockLatch(&tsLogRotateLatch);
return NULL;
}
@ -532,8 +601,8 @@ static void decideLogFileName(const char *fn, int32_t maxFileNum) {
static void decideLogFileNameFlag() {
char name[PATH_MAX + 50] = "\0";
int32_t logstat0_mtime = 0;
int32_t logstat1_mtime = 0;
int64_t logstat0_mtime = 0;
int64_t logstat1_mtime = 0;
bool log0Exist = false;
bool log1Exist = false;
@ -680,10 +749,19 @@ static inline void taosPrintLogImp(ELogLevel level, int32_t dflag, const char *b
}
}
if (dflag & DEBUG_SCREEN) {
int fd = 0;
if (tsLogObj.outputType == LOG_OUTPUT_FILE) {
if (dflag & DEBUG_SCREEN) fd = 1;
} else if (tsLogObj.outputType == LOG_OUTPUT_STDOUT) {
fd = 1;
} else if (tsLogObj.outputType == LOG_OUTPUT_STDERR) {
fd = 2;
}
if (fd) {
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wunused-result"
if (write(1, buffer, (uint32_t)len) < 0) {
if (write(fd, buffer, (uint32_t)len) < 0) {
TAOS_UNUSED(printf("failed to write log to screen, reason:%s\n", strerror(errno)));
}
#pragma GCC diagnostic pop
@ -999,6 +1077,92 @@ static void taosWriteLog(SLogBuff *pLogBuf) {
pLogBuf->writeInterval = 0;
}
#define LOG_ROTATE_INTERVAL 3600
#if !defined(TD_ENTERPRISE) || defined(ASSERT_NOT_CORE)
#define LOG_INACTIVE_TIME 7200
#define LOG_ROTATE_BOOT 900
#else
#define LOG_INACTIVE_TIME 5
#define LOG_ROTATE_BOOT (LOG_INACTIVE_TIME + 1)
#endif
static void *taosLogRotateFunc(void *param) {
setThreadName("logRotate");
int32_t code = 0;
taosWLockLatch(&tsLogRotateLatch);
// compress or remove the old log files
TdDirPtr pDir = taosOpenDir(tsLogDir);
if (!pDir) goto _exit;
TdDirEntryPtr de = NULL;
while ((de = taosReadDir(pDir))) {
if (taosDirEntryIsDir(de)) {
continue;
}
char *fname = taosGetDirEntryName(de);
if (!fname) {
continue;
}
char *pSec = strrchr(fname, '.');
if (!pSec) {
continue;
}
char *pIter = pSec;
bool isSec = true;
while (*(++pIter)) {
if (!isdigit(*pIter)) {
isSec = false;
break;
}
}
if (!isSec) {
continue;
}
int64_t fileSec = 0;
if ((code = taosStr2int64(pSec + 1, &fileSec)) != 0) {
uWarn("%s:%d failed to convert %s to int64 since %s", __func__, __LINE__, pSec + 1, tstrerror(code));
continue;
}
if (fileSec <= 100) {
continue;
}
char fullName[PATH_MAX] = {0};
snprintf(fullName, sizeof(fullName), "%s%s%s", tsLogDir, TD_DIRSEP, fname);
int64_t mtime = 0;
if ((code = taosStatFile(fullName, NULL, &mtime, NULL)) != 0) {
uWarn("%s:%d failed to stat file %s since %s", __func__, __LINE__, fullName, tstrerror(code));
continue;
}
int64_t inactiveSec = taosGetTimestampMs() / 1000 - mtime;
if (inactiveSec < LOG_INACTIVE_TIME) {
continue;
}
int32_t days = inactiveSec / 86400 + 1;
if (tsLogKeepDays > 0 && days > tsLogKeepDays) {
TAOS_UNUSED(taosRemoveFile(fullName));
uInfo("file:%s is removed, days:%d, keepDays:%d, sec:%" PRId64, fullName, days, tsLogKeepDays, fileSec);
} else {
taosKeepOldLog(fullName); // compress
}
}
if ((code = taosCloseDir(&pDir)) != 0) {
uWarn("%s:%d failed to close dir %s since %s\n", __func__, __LINE__, tsLogDir, tstrerror(code));
}
if (tsLogKeepDays > 0) {
taosRemoveOldFiles(tsLogDir, tsLogKeepDays);
}
_exit:
taosWUnLockLatch(&tsLogRotateLatch);
return NULL;
}
static void *taosAsyncOutputLog(void *param) {
SLogBuff *pLogBuf = (SLogBuff *)tsLogObj.logHandle;
SLogBuff *pSlowBuf = (SLogBuff *)tsLogObj.slowHandle;
@ -1007,6 +1171,7 @@ static void *taosAsyncOutputLog(void *param) {
int32_t count = 0;
int32_t updateCron = 0;
int32_t writeInterval = 0;
int64_t lastCheckSec = taosGetTimestampMs() / 1000 - (LOG_ROTATE_INTERVAL - LOG_ROTATE_BOOT);
while (1) {
if (pSlowBuf) {
@ -1032,6 +1197,26 @@ static void *taosAsyncOutputLog(void *param) {
if (pSlowBuf) taosWriteSlowLog(pSlowBuf);
break;
}
// process the log rotation every LOG_ROTATE_INTERVAL
int64_t curSec = taosGetTimestampMs() / 1000;
if (curSec >= lastCheckSec) {
if ((curSec - lastCheckSec) >= LOG_ROTATE_INTERVAL) {
TdThread thread;
TdThreadAttr attr;
(void)taosThreadAttrInit(&attr);
(void)taosThreadAttrSetDetachState(&attr, PTHREAD_CREATE_DETACHED);
if (taosThreadCreate(&thread, &attr, taosLogRotateFunc, tsLogObj.logHandle) == 0) {
uInfo("process log rotation");
lastCheckSec = curSec;
} else {
uWarn("failed to create thread to process log rotation");
}
(void)taosThreadAttrDestroy(&attr);
}
} else if (curSec < lastCheckSec) {
lastCheckSec = curSec;
}
}
return NULL;

View File

@ -16,6 +16,7 @@
#define _DEFAULT_SOURCE
#include "tutil.h"
#include "tlog.h"
#include "regex.h"
void *tmemmem(const char *haystack, int32_t hlen, const char *needle, int32_t nlen) {
const char *limit;
@ -520,3 +521,29 @@ int32_t parseCfgReal(const char *str, float *out) {
*out = val;
return TSDB_CODE_SUCCESS;
}
bool tIsValidFileName(const char *fileName, const char *pattern) {
const char *fileNamePattern = "^[a-zA-Z0-9_.-]+$";
regex_t fileNameReg;
if (pattern) fileNamePattern = pattern;
if (regcomp(&fileNameReg, fileNamePattern, REG_EXTENDED) != 0) {
fprintf(stderr, "failed to compile file name pattern:%s\n", fileNamePattern);
return false;
}
int32_t code = regexec(&fileNameReg, fileName, 0, NULL, 0);
regfree(&fileNameReg);
if (code != 0) {
return false;
}
return true;
}
bool tIsValidFilePath(const char *filePath, const char *pattern) {
const char *filePathPattern = "^[a-zA-Z0-9:/\\_.-]+$";
return tIsValidFileName(filePath, pattern ? pattern : filePathPattern);
}
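A quick way to see what the two validators accept is to mirror their default patterns in Python (illustrative only; the authoritative patterns are the C string literals above):

```python
import re

# Same character sets as the C defaults above: bare file names allow
# alphanumerics, underscore, dot, and dash; paths additionally allow
# ':', '/', and '\' for Windows-style locations.
FILE_NAME_PATTERN = r"^[a-zA-Z0-9_.-]+$"
FILE_PATH_PATTERN = r"^[a-zA-Z0-9:/\\_.-]+$"

def is_valid_file_name(name, pattern=FILE_NAME_PATTERN):
    return re.match(pattern, name) is not None

def is_valid_file_path(path, pattern=FILE_PATH_PATTERN):
    # Mirrors the C code: path validation reuses the name validator
    # with a wider pattern.
    return is_valid_file_name(path, pattern)
```

As in the C version, an empty string is rejected (the `+` quantifier requires at least one accepted character), and shell metacharacters such as `;` or `$` never match.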

View File

@@ -1839,7 +1839,8 @@ class TDCom:
tdLog.exit(f"Input file '{inputfile}' does not exist.")
else:
self.query_result_file = f"./temp_{test_case}.result"
os.system(f"taos -f {inputfile} | grep -v 'Query OK'|grep -v 'Copyright'| grep -v 'Welcome to the TDengine Command' > {self.query_result_file} ")
cfgPath = self.getClientCfgPath()
os.system(f"taos -c {cfgPath} -f {inputfile} | grep -v 'Query OK'|grep -v 'Copyright'| grep -v 'Welcome to the TDengine Command' > {self.query_result_file} ")
return self.query_result_file
def compare_result_files(self, file1, file2):

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -0,0 +1,160 @@
select _wstart, _wend, _wduration, count(*) from test.st interval(1m, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts < '2020-10-01 00:07:19' interval(1m, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts <= '2020-09-30 23:59:59' interval(1h, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = '2020-10-01 23:45:00' interval(1s, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = '2020-10-01 23:45:00' interval(1m, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = '2020-11-01 23:45:00' interval(1h, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = '2020-12-01 23:45:00' interval(1d, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts in ('2020-10-01 23:45:00') interval(1s, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts in ('2020-10-01 23:45:00') interval(1m, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts in ('2020-11-01 23:45:00') interval(1h, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts in ('2020-12-01 23:45:00') interval(1d, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = '2020-10-01 23:45:00' interval(1s, auto) sliding(700a) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = '2020-10-01 23:45:00' interval(1m, auto) sliding(37s) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = '2020-11-01 23:45:00' interval(1h, auto) sliding(27m) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = '2020-12-01 23:45:00' interval(1d, auto) sliding(17h) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts in ('2020-10-01 23:45:00') interval(1s, auto) sliding(700a) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts in ('2020-10-01 23:45:00') interval(1m, auto) sliding(37s) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts in ('2020-11-01 23:45:00') interval(1h, auto) sliding(27m) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts in ('2020-12-01 23:45:00') interval(1d, auto) sliding(17h) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = '2020-11-12 23:32:43' interval(1n, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = '2020-11-12 23:32:00' interval(1n, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts in ('2020-11-12 23:32:43') interval(1n, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts in ('2020-11-12 23:32:00') interval(1n, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = '2020-11-12 23:32:43' interval(1n, auto) sliding(9d) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = '2020-11-12 23:32:00' interval(1n, auto) sliding(9d) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts in ('2020-11-12 23:32:43') interval(1n, auto) sliding(13d) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts in ('2020-11-12 23:32:00') interval(1n, auto) sliding(13d) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts > '2020-10-09 01:23:00' interval(1s, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts > '2020-10-09 01:23:00' interval(1m, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts > '2020-11-09 01:23:00' interval(1h, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts > '2020-12-09 01:23:00' interval(1d, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts >= '2020-10-09 01:23:00' interval(1s, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts >= '2020-10-09 01:23:00' interval(1m, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts >= '2020-11-09 01:23:00' interval(1h, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts >= '2020-12-09 01:23:00' interval(1d, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts in ('2020-10-09 01:23:00', '2020-11-09 01:23:00', '2020-12-09 01:23:00') interval(1s, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts in ('2020-10-09 01:23:00', '2020-11-09 01:23:00', '2020-12-09 01:23:00') interval(1m, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts in ('2020-10-09 01:23:00', '2020-11-09 01:23:00', '2020-12-09 01:23:00') interval(1h, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts in ('2020-10-09 01:23:00', '2020-11-09 01:23:00', '2020-12-09 01:23:00') interval(1d, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts > '2020-10-09 01:23:00' interval(1s, auto) sliding(700a) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts > '2020-10-09 01:23:00' interval(1m, auto) sliding(37s) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts > '2020-11-09 01:23:00' interval(1h, auto) sliding(27m) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts > '2020-12-09 01:23:00' interval(1d, auto) sliding(17h) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts >= '2020-10-09 01:23:00' interval(1s, auto) sliding(700a) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts >= '2020-10-09 01:23:00' interval(1m, auto) sliding(37s) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts >= '2020-11-09 01:23:00' interval(1h, auto) sliding(27m) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts >= '2020-12-09 01:23:00' interval(1d, auto) sliding(17h) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts in ('2020-10-09 01:23:00', '2020-11-09 01:23:00', '2020-12-09 01:23:00') interval(1s, auto) sliding(700a) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts in ('2020-10-09 01:23:00', '2020-11-09 01:23:00', '2020-12-09 01:23:00') interval(1m, auto) sliding(37s) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts in ('2020-10-09 01:23:00', '2020-11-09 01:23:00', '2020-12-09 01:23:00') interval(1h, auto) sliding(27m) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts in ('2020-10-09 01:23:00', '2020-11-09 01:23:00', '2020-12-09 01:23:00') interval(1d, auto) sliding(17h) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts > '2020-10-09 01:23:00' interval(1n, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts > '2020-11-09 01:23:00' interval(1n, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts > '2020-12-09 01:23:00' interval(1n, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts > '2020-12-19 01:23:00' interval(1n, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts >= '2020-10-09 01:23:00' interval(1n, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts >= '2020-11-09 01:23:00' interval(1n, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts >= '2020-12-09 01:23:00' interval(1n, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts >= '2020-12-19 01:23:00' interval(1n, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts in ('2020-10-09 01:23:00', '2020-11-09 01:23:00', '2020-12-09 01:23:00') interval(1n, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts > '2020-10-09 01:23:00' interval(1n, auto) sliding(9d) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts > '2020-11-09 01:23:00' interval(1n, auto) sliding(9d) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts > '2020-12-09 01:23:00' interval(1n, auto) sliding(9d) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts > '2020-12-19 01:23:00' interval(1n, auto) sliding(9d) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts >= '2020-10-09 01:23:00' interval(1n, auto) sliding(13d) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts >= '2020-11-09 01:23:00' interval(1n, auto) sliding(13d) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts >= '2020-12-09 01:23:00' interval(1n, auto) sliding(13d) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts >= '2020-12-19 01:23:00' interval(1n, auto) sliding(13d) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts in ('2020-10-09 01:23:00', '2020-11-09 01:23:00', '2020-12-09 01:23:00') interval(1n, auto) sliding(9d) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts in ('2020-10-09 01:23:00', '2020-11-09 01:23:00', '2020-12-09 01:23:00') interval(1n, auto) sliding(13d) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = cast('2020-10-01 23:45:00' as timestamp) + 1d interval(1s, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = cast('2020-10-01 23:45:00' as timestamp) + 1d interval(1m, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = cast('2020-11-01 23:45:00' as timestamp) + 1d interval(1h, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = cast('2020-12-01 23:45:00' as timestamp) + 1d interval(1d, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = cast('2020-10-01 23:45:00' as timestamp) + 1d interval(1s, auto) sliding(700a) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = cast('2020-10-01 23:45:00' as timestamp) + 1d interval(1m, auto) sliding(37s) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = cast('2020-11-01 23:45:00' as timestamp) + 1d interval(1h, auto) sliding(27m) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = cast('2020-12-01 23:45:00' as timestamp) + 1d interval(1d, auto) sliding(17h) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = cast('2020-11-12 23:32:43' as timestamp) + 1d interval(1n, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = cast('2020-11-12 23:32:00' as timestamp) + 1d interval(1n, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = cast('2020-11-12 23:32:00' as timestamp) + 1d interval(1n, auto) sliding(9d) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = cast('2020-11-12 23:32:00' as timestamp) + 1d interval(1n, auto) sliding(13d) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts > cast('2020-10-08 01:23:00' as timestamp) + 1d interval(1s, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts > cast('2020-10-08 01:23:00' as timestamp) + 1d interval(1m, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts > cast('2020-11-08 01:23:00' as timestamp) + 1d interval(1h, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts > cast('2020-12-08 01:23:00' as timestamp) + 1d interval(1d, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts >= cast('2020-10-08 01:23:00' as timestamp) + 1d interval(1s, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts >= cast('2020-10-08 01:23:00' as timestamp) + 1d interval(1m, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts >= cast('2020-11-08 01:23:00' as timestamp) + 1d interval(1h, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts >= cast('2020-12-08 01:23:00' as timestamp) + 1d interval(1d, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts > cast('2020-10-08 01:23:00' as timestamp) + 1d interval(1s, auto) sliding(700a) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts > cast('2020-10-08 01:23:00' as timestamp) + 1d interval(1m, auto) sliding(37s) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts > cast('2020-11-08 01:23:00' as timestamp) + 1d interval(1h, auto) sliding(27m) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts > cast('2020-12-08 01:23:00' as timestamp) + 1d interval(1d, auto) sliding(17h) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts >= cast('2020-10-08 01:23:00' as timestamp) + 1d interval(1s, auto) sliding(700a) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts >= cast('2020-10-08 01:23:00' as timestamp) + 1d interval(1m, auto) sliding(37s) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts >= cast('2020-11-08 01:23:00' as timestamp) + 1d interval(1h, auto) sliding(27m) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts >= cast('2020-12-08 01:23:00' as timestamp) + 1d interval(1d, auto) sliding(17h) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts > cast('2020-10-08 01:23:00' as timestamp) + 1d interval(1n, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts > cast('2020-11-08 01:23:00' as timestamp) + 1d interval(1n, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts > cast('2020-12-08 01:23:00' as timestamp) + 1d interval(1n, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts > cast('2020-12-18 01:23:00' as timestamp) + 1d interval(1n, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts >= cast('2020-10-08 01:23:00' as timestamp) + 1d interval(1n, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts >= cast('2020-11-08 01:23:00' as timestamp) + 1d interval(1n, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts >= cast('2020-12-08 01:23:00' as timestamp) + 1d interval(1n, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts >= cast('2020-12-18 01:23:00' as timestamp) + 1d interval(1n, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts > cast('2020-10-08 01:23:00' as timestamp) + 1d interval(1n, auto) sliding(9d) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts > cast('2020-11-08 01:23:00' as timestamp) + 1d interval(1n, auto) sliding(9d) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts > cast('2020-12-08 01:23:00' as timestamp) + 1d interval(1n, auto) sliding(9d) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts > cast('2020-12-18 01:23:00' as timestamp) + 1d interval(1n, auto) sliding(9d) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts >= cast('2020-10-08 01:23:00' as timestamp) + 1d interval(1n, auto) sliding(13d) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts >= cast('2020-11-08 01:23:00' as timestamp) + 1d interval(1n, auto) sliding(13d) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts >= cast('2020-12-08 01:23:00' as timestamp) + 1d interval(1n, auto) sliding(13d) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts >= cast('2020-12-18 01:23:00' as timestamp) + 1d interval(1n, auto) sliding(13d) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts > i1 interval(1h, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts > i1 and ts <= bi2 interval(1d, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts >= i1 or ts < bi2 interval(1n, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts - 1d = '2020-11-11 23:32:00' interval(1h, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts - 1d in ('2020-10-08 01:23:00', '2020-11-08 01:23:00', '2020-12-08 01:23:00', '2020-12-18 01:23:00') interval(1d, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts - 1d >= '2020-11-08 01:23:00' interval(1n, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = cast('2020-10-08 01:23:00' as timestamp) + 1d or ts >= '2020-11-19 11:22:00' interval(1s, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where (ts >= '2020-10-02 23:45:00' and ts < '2020-10-12 23:32:00') or (ts > '2020-11-12 23:32:00' and ts = '2020-12-02 23:45:00') interval(1s, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where (ts >= '2020-10-02 23:45:00' and ts < '2020-10-12 23:32:00') or ts in ('2020-10-09 01:23:00', '2020-11-09 01:23:00', '2020-12-09 01:23:00') interval(1s, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = cast('2020-10-08 01:23:00' as timestamp) + 1d or ts >= '2020-11-19 11:22:00' interval(1m, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where (ts >= '2020-10-02 23:45:00' and ts < '2020-10-12 23:32:00') or (ts > '2020-11-12 23:32:00' and ts = '2020-12-02 23:45:00') interval(1m, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where (ts >= '2020-10-02 23:45:00' and ts < '2020-10-12 23:32:00') or ts in ('2020-10-09 01:23:00', '2020-11-09 01:23:00', '2020-12-09 01:23:00') interval(1m, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = cast('2020-10-08 01:23:00' as timestamp) + 1d or ts >= '2020-11-19 11:22:00' interval(1h, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where (ts >= '2020-10-02 23:45:00' and ts < '2020-10-12 23:32:00') or (ts > '2020-11-12 23:32:00' and ts = '2020-12-02 23:45:00') interval(1h, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where (ts >= '2020-10-02 23:45:00' and ts < '2020-10-12 23:32:00') or ts in ('2020-10-09 01:23:00', '2020-11-09 01:23:00', '2020-12-09 01:23:00') interval(1h, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = cast('2020-10-08 01:23:00' as timestamp) + 1d or ts >= '2020-11-19 11:22:00' interval(1d, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where (ts >= '2020-10-02 23:45:00' and ts < '2020-10-12 23:32:00') or (ts > '2020-11-12 23:32:00' and ts = '2020-12-02 23:45:00') interval(1d, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where (ts >= '2020-10-02 23:45:00' and ts < '2020-10-12 23:32:00') or ts in ('2020-10-09 01:23:00', '2020-11-09 01:23:00', '2020-12-09 01:23:00') interval(1d, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = cast('2020-10-08 01:23:00' as timestamp) + 1d or ts >= '2020-11-19 11:22:00' interval(1s, auto) sliding(700a) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where (ts >= '2020-10-02 23:45:00' and ts < '2020-10-12 23:32:00') or (ts > '2020-11-12 23:32:00' and ts = '2020-12-02 23:45:00') interval(1s, auto) sliding(700a) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where (ts >= '2020-10-02 23:45:00' and ts < '2020-10-12 23:32:00') or ts in ('2020-10-09 01:23:00', '2020-11-09 01:23:00', '2020-12-09 01:23:00') interval(1s, auto) sliding(700a) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = cast('2020-10-08 01:23:00' as timestamp) + 1d or ts >= '2020-11-19 11:22:00' interval(1m, auto) sliding(37s) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where (ts >= '2020-10-02 23:45:00' and ts < '2020-10-12 23:32:00') or (ts > '2020-11-12 23:32:00' and ts = '2020-12-02 23:45:00') interval(1m, auto) sliding(37s) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where (ts >= '2020-10-02 23:45:00' and ts < '2020-10-12 23:32:00') or ts in ('2020-10-09 01:23:00', '2020-11-09 01:23:00', '2020-12-09 01:23:00') interval(1m, auto) sliding(37s) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = cast('2020-10-08 01:23:00' as timestamp) + 1d or ts >= '2020-11-19 11:22:00' interval(1h, auto) sliding(27m) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where (ts >= '2020-10-02 23:45:00' and ts < '2020-10-12 23:32:00') or (ts > '2020-11-12 23:32:00' and ts = '2020-12-02 23:45:00') interval(1h, auto) sliding(27m) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where (ts >= '2020-10-02 23:45:00' and ts < '2020-10-12 23:32:00') or ts in ('2020-10-09 01:23:00', '2020-11-09 01:23:00', '2020-12-09 01:23:00') interval(1h, auto) sliding(27m) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = cast('2020-10-08 01:23:00' as timestamp) + 1d or ts >= '2020-11-19 11:22:00' interval(1d, auto) sliding(17h) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where (ts >= '2020-10-02 23:45:00' and ts < '2020-10-12 23:32:00') or (ts > '2020-11-12 23:32:00' and ts = '2020-12-02 23:45:00') interval(1d, auto) sliding(17h) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where (ts >= '2020-10-02 23:45:00' and ts < '2020-10-12 23:32:00') or ts in ('2020-10-09 01:23:00', '2020-11-09 01:23:00', '2020-12-09 01:23:00') interval(1d, auto) sliding(17h) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = cast('2020-10-08 01:23:00' as timestamp) + 1d or ts >= '2020-11-19 11:22:00' interval(1n, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where (ts >= '2020-10-02 23:45:00' and ts < '2020-10-12 23:32:00') or (ts > '2020-11-12 23:32:00' and ts = '2020-12-02 23:45:00') interval(1n, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where (ts >= '2020-10-02 23:45:00' and ts < '2020-10-12 23:32:00') or ts in ('2020-10-09 01:23:00', '2020-11-09 01:23:00', '2020-12-09 01:23:00') interval(1n, auto) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = cast('2020-10-08 01:23:00' as timestamp) + 1d or ts >= '2020-11-19 11:22:00' interval(1n, auto) sliding(9d) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where (ts >= '2020-10-02 23:45:00' and ts < '2020-10-12 23:32:00') or (ts > '2020-11-12 23:32:00' and ts = '2020-12-02 23:45:00') interval(1n, auto) sliding(9d) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where (ts >= '2020-10-02 23:45:00' and ts < '2020-10-12 23:32:00') or ts in ('2020-10-09 01:23:00', '2020-11-09 01:23:00', '2020-12-09 01:23:00') interval(1n, auto) sliding(9d) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where ts = cast('2020-10-08 01:23:00' as timestamp) + 1d or ts >= '2020-11-19 11:22:00' interval(1n, auto) sliding(13d) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where (ts >= '2020-10-02 23:45:00' and ts < '2020-10-12 23:32:00') or (ts > '2020-11-12 23:32:00' and ts = '2020-12-02 23:45:00') interval(1n, auto) sliding(13d) limit 10;
select _wstart, _wend, _wduration, count(*) from test.st where (ts >= '2020-10-02 23:45:00' and ts < '2020-10-12 23:32:00') or ts in ('2020-10-09 01:23:00', '2020-11-09 01:23:00', '2020-12-09 01:23:00') interval(1n, auto) sliding(13d) limit 10;
select * from test.sta;
select * from test.stb;
select * from test.stc;
select * from test.std;
select * from test.ste;
select * from test.stf;
select * from test.stg;

View File

@@ -0,0 +1,67 @@
{
"filetype": "insert",
"cfgdir": "/etc/taos",
"host": "127.0.0.1",
"port": 6030,
"user": "root",
"password": "taosdata",
"connection_pool_size": 8,
"thread_count": 4,
"create_table_thread_count": 4,
"result_file": "./insert_res.txt",
"confirm_parameter_prompt": "no",
"num_of_records_per_req": 10000,
"prepared_rand": 10000,
"chinese": "no",
"escape_character": "yes",
"continue_if_fail": "no",
"databases": [
{
"dbinfo": {
"name": "test",
"drop": "yes",
"vgroups": 4,
"precision": "ms",
"keep": "3650"
},
"super_tables": [
{
"name": "st",
"child_table_exists": "no",
"childtable_count": 1,
"childtable_prefix": "ct",
"auto_create_table": "no",
"batch_create_tbl_num": 5,
"data_source": "rand",
"insert_mode": "taosc",
"non_stop_mode": "no",
"line_protocol": "line",
"insert_rows": 100000,
"childtable_limit": 0,
"childtable_offset": 0,
"interlace_rows": 0,
"insert_interval": 0,
"partial_col_num": 0,
"timestamp_step": 60000,
"start_timestamp": "2020-10-01 00:00:00.000",
"sample_format": "csv",
"sample_file": "./sample.csv",
"use_sample_ts": "no",
"tags_file": "",
"columns": [
{"type": "INT", "name": "i1"},
{"type": "BIGINT", "name": "bi2"},
{"type": "FLOAT", "name": "f3"},
{"type": "DOUBLE", "name": "d4"}
],
"tags": [
{"type": "TINYINT", "name": "groupid", "max": 10, "min": 1},
{"name": "location","type": "binary", "len": 16, "values":
["San Francisco", "Los Angles", "San Diego", "San Jose", "Palo Alto", "Campbell", "Mountain View","Sunnyvale", "Santa Clara", "Cupertino"]
}
]
}
]
}
]
}

View File

@@ -0,0 +1,79 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
from frame import etool
from frame.etool import *
from frame.log import *
from frame.cases import *
from frame.sql import *
from frame.caseBase import *
from frame.common import *
class TDTestCase(TBase):
def init(self, conn, logSql, replicaVar=1):
self.replicaVar = int(replicaVar)
tdLog.debug(f"start to execute {__file__}")
tdSql.init(conn.cursor(), True)
def insert_data(self):
tdLog.info("insert interval test data.")
# taosBenchmark run
json = etool.curFile(__file__, "interval.json")
etool.benchMark(json = json)
def create_streams(self):
tdSql.execute("use test;")
streams = [
"create stream stream1 fill_history 1 into sta as select _wstart, _wend, _wduration, count(*) from test.st where ts < '2020-10-01 00:07:19' interval(1m, auto);",
"create stream stream2 fill_history 1 into stb as select _wstart, _wend, _wduration, count(*) from test.st where ts = '2020-11-01 23:45:00' interval(1h, auto) sliding(27m);",
"create stream stream3 fill_history 1 into stc as select _wstart, _wend, _wduration, count(*) from test.st where ts in ('2020-11-12 23:32:00') interval(1n, auto) sliding(13d);",
"create stream stream4 fill_history 1 into std as select _wstart, _wend, _wduration, count(*) from test.st where ts in ('2020-10-09 01:23:00', '2020-11-09 01:23:00', '2020-12-09 01:23:00') interval(1s, auto);",
"create stream stream5 fill_history 1 into ste as select _wstart, _wend, _wduration, count(*) from test.st where ts > '2020-12-09 01:23:00' interval(1d, auto) sliding(17h);",
"create stream stream6 fill_history 1 into stf as select _wstart, _wend, _wduration, count(*) from test.st where ts >= '2020-10-09 01:23:00' interval(1n, auto);",
"create stream stream7 fill_history 1 into stg as select _wstart, _wend, _wduration, count(*) from test.st where ts >= '2020-11-09 01:23:00' interval(1n, auto) sliding(13d);",
]
for sql in streams:
tdSql.execute(sql)
for i in range(50):
rows = tdSql.query("select * from information_schema.ins_stream_tasks where history_task_status is not null;")
if rows == 0:
break
tdLog.info(f"i={i} wait for history data calculation finish ...")
time.sleep(1)
def query_test(self):
# read sql from .sql file and execute
tdLog.info("test normal query.")
self.sqlFile = etool.curFile(__file__, f"in/interval.in")
self.ansFile = etool.curFile(__file__, f"ans/interval.csv")
tdCom.compare_testcase_result(self.sqlFile, self.ansFile, "interval")
def run(self):
self.insert_data()
self.create_streams()
self.query_test()
def stop(self):
tdSql.execute("drop stream stream1;")
tdSql.execute("drop stream stream2;")
tdSql.execute("drop stream stream3;")
tdSql.execute("drop stream stream4;")
tdSql.execute("drop stream stream5;")
tdSql.execute("drop stream stream6;")
tdSql.execute("drop stream stream7;")
tdLog.success(f"{__file__} successfully executed")
tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())

View File

@@ -0,0 +1,57 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
from frame import etool
from frame.etool import *
from frame.log import *
from frame.cases import *
from frame.sql import *
from frame.caseBase import *
from frame.common import *
class TDTestCase(TBase):
clientCfgDict = { "timezone": "UTC" }
updatecfgDict = {
"timezone": "UTC-8",
"clientCfg": clientCfgDict,
}
def init(self, conn, logSql, replicaVar=1):
self.replicaVar = int(replicaVar)
tdLog.debug(f"start to execute {__file__}")
tdSql.init(conn.cursor(), True)
def insert_data(self):
tdLog.info("insert interval test data.")
# taosBenchmark run
json = etool.curFile(__file__, "interval.json")
etool.benchMark(json = json)
def query_test(self):
# read sql from .sql file and execute
tdLog.info("test normal query.")
self.sqlFile = etool.curFile(__file__, f"in/interval.in")
self.ansFile = etool.curFile(__file__, f"ans/interval_diff_tz.csv")
tdCom.compare_testcase_result(self.sqlFile, self.ansFile, "interval_diff_tz")
def run(self):
self.insert_data()
self.query_test()
def stop(self):
tdLog.success(f"{__file__} successfully executed")
tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())

View File

@@ -0,0 +1,181 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
import time
import random
import taos
import frame
import frame.eos
import frame.etime
import frame.etool
from frame.log import *
from frame.sql import *
from frame.cases import *
from frame.caseBase import *
from frame.srvCtl import *
from frame import *
class TDTestCase(TBase):
# parse line
def parseLine(self, line):
line = line.strip()
PRE_DEFINE = "#define TSDB_CODE_"
n = len(PRE_DEFINE)
if line[:n] != PRE_DEFINE:
return None
# TAOS_DEF_ERROR_CODE(0, 0x000B)
pos = line.find("TAOS_DEF_ERROR_CODE(0, 0x", n)
if pos == -1:
tdLog.info(f"not found \"TAOS_DEF_ERROR_CODE(0, \" line={line}")
return None
code = line[pos:].strip()
pos = code.find(")")
if pos == -1:
tdLog.info(f"not found \")\", line={line}")
return None
code = code[:pos]
if len(code) != 4:
tdLog.info(f"code length is not 4, len:{len(code)} subcode={code}, line={line}")
return None
# return
return "0x8000" + code
# ignore error
def ignoreCode(self, code):
ignoreCodes = {"0x00008", "0x000009"}
if code in ignoreCodes:
return True
else:
return False
# read valid code
def readHeadCodes(self, hFile):
codes = []
start = False
# read
with open(hFile) as file:
for line in file:
code = self.parseLine(line)
# invalid
if code is None:
continue
# ignore
if self.ignoreCode(code):
tdLog.info(f"ignore error {code}\n")
continue
# valid codes start from 0x8000000B (TAOS_DEF_ERROR_CODE(0, 0x000B))
if code == "0x8000000B":
start = True
if start:
codes.append(code)
# return
return codes
# parse doc lines
def parseDocLine(self, line):
line = line.strip()
PRE_DEFINE = "| 0x8000"
n = len(PRE_DEFINE)
if line[:n] != PRE_DEFINE:
return None
line = line[2:]
cols = line.split("|")
# remove blank
cols = [col.strip() for col in cols]
# return
return cols
# read valid code
def readDocCodes(self, docFile):
codes = []
start = False
# read
with open(docFile) as file:
for line in file:
code = self.parseDocLine(line)
# invalid
if code is None:
continue
# valid
codes.append(code)
# return
return codes
# check
def checkConsistency(self, docCodes, codes):
diff = False
# len
docLen = len(docCodes)
hLen = len(codes)
tdLog.info(f"head file codes = {hLen} doc file codes = {docLen}\n")
maxLen = max(hLen, docLen)
for i in range(maxLen):
if i < hLen and i < docLen:
if codes[i] == docCodes[i][0]:
tdLog.info(f" i={i} same head code: {codes[i]} doc code:{docCodes[i][0]}\n")
else:
tdLog.info(f" i={i} diff head code: {codes[i]} doc code:{docCodes[i][0]}\n")
diff = True
elif i < hLen:
tdLog.info(f" i={i} diff head code: {codes[i]} doc code: None\n")
diff = True
elif i < docLen:
tdLog.info(f" i={i} diff head code: None doc code: {docCodes[i][0]}\n")
diff = True
# result
if diff:
tdLog.exit("check error code consistency failed.\n")
# run
def run(self):
tdLog.debug(f"start to execute {__file__}")
# read head error code
hFile = "../../include/util/taoserror.h"
codes = self.readHeadCodes(hFile)
# read zh codes
zhDoc = "../../docs/zh/14-reference/09-error-code.md"
zhCodes = self.readDocCodes(zhDoc)
# read en codes
enDoc = "../../docs/en/14-reference/09-error-code.md"
enCodes = self.readDocCodes(enDoc)
# check zh
tdLog.info(f"check zh docs ...\n")
self.checkConsistency(zhCodes, codes)
# check en
tdLog.info(f"check en docs ...\n")
self.checkConsistency(enCodes, codes)
tdLog.success(f"{__file__} successfully executed")
tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())
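The parseLine logic in the file above (check the `#define TSDB_CODE_` prefix, locate `TAOS_DEF_ERROR_CODE(0, 0x`, take four hex digits, prepend `0x8000`) can be sketched more compactly with a regular expression. This standalone version is an illustration, not the test's actual helper:

```python
import re

# Matches e.g.: #define TSDB_CODE_RPC_NETWORK_UNAVAIL TAOS_DEF_ERROR_CODE(0, 0x000B)
CODE_RE = re.compile(
    r'^#define\s+TSDB_CODE_\w+\s+TAOS_DEF_ERROR_CODE\(0,\s*0x([0-9A-Fa-f]{4})\)')

def parse_code(line):
    """Return the full error code like '0x8000000B', or None if the
    line is not a TSDB_CODE #define."""
    m = CODE_RE.match(line.strip())
    return "0x8000" + m.group(1).upper() if m else None

print(parse_code('#define TSDB_CODE_RPC_NETWORK_UNAVAIL TAOS_DEF_ERROR_CODE(0, 0x000B)'))
# -> 0x8000000B
```

A single anchored pattern avoids the chain of `find()` calls and the manual length check on the hex substring.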


@ -0,0 +1,241 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
import time
import random
import taos
import frame
import frame.eos
import frame.etime
import frame.etool
from frame.log import *
from frame.sql import *
from frame.cases import *
from frame.caseBase import *
from frame.srvCtl import *
from frame import *
ignoreCodes = [
'0x80000023', '0x80000024', '0x80000025', '0x80000026', '0x80000027', '0x80000028', '0x80000029', '0x8000002A', '0x80000109', '0x80000110',
'0x80000129', '0x8000012C', '0x8000012D', '0x8000012E', '0x8000012F', '0x80000136', '0x80000137', '0x80000138', '0x80000139', '0x8000013A',
'0x8000013B', '0x80000200', '0x80000201', '0x80000202', '0x80000203', '0x80000204', '0x80000205', '0x80000206', '0x8000020B', '0x8000020E',
'0x80000210', '0x80000212', '0x80000214', '0x80000215', '0x80000217', '0x8000021B', '0x8000021C', '0x8000021D', '0x8000021E', '0x80000220',
'0x80000221', '0x8000022C', '0x80000232', '0x80000233', '0x80000234', '0x80000235', '0x80000236', '0x800002FF', '0x80000300', '0x80000316',
'0x80000317', '0x80000338', '0x80000339', '0x8000033F', '0x80000343', '0x80000345', '0x80000356', '0x80000359', '0x8000035A', '0x8000035B',
'0x8000035C', '0x8000035D', '0x80000362', '0x8000038C', '0x8000038E', '0x8000039B', '0x8000039C', '0x8000039D', '0x8000039E', '0x800003A6',
'0x800003A7', '0x800003AA', '0x800003AB', '0x800003AC', '0x800003AD', '0x800003B0', '0x800003B2', '0x800003B4', '0x800003B5', '0x800003BA',
'0x800003C0', '0x800003C1', '0x800003D0', '0x800003D8', '0x800003D9', '0x800003E2', '0x800003F4', '0x800003F8', '0x80000412', '0x80000413',
'0x80000414', '0x80000415', '0x80000416', '0x80000417', '0x80000418', '0x80000419', '0x80000420', '0x80000421', '0x80000422', '0x80000423',
'0x80000424', '0x80000425', '0x80000426', '0x80000427', '0x80000428', '0x80000429', '0x8000042A', '0x80000430', '0x80000431', '0x80000432',
'0x80000433', '0x80000434', '0x80000435', '0x80000436', '0x80000437', '0x80000438', '0x80000440', '0x80000441', '0x80000442', '0x80000443',
'0x80000444', '0x80000445', '0x80000446', '0x80000447', '0x80000485', '0x80000486', '0x800004A0', '0x800004A1', '0x800004B1', '0x800004B2',
'0x800004B3', '0x80000504', '0x80000528', '0x80000532', '0x80000533', '0x80000534', '0x80000535', '0x80000536', '0x80000537', '0x80000538',
'0x80000539', '0x80000601', '0x80000607', '0x8000060A', '0x8000060D', '0x8000060E', '0x8000060F', '0x80000610', '0x80000611', '0x80000612',
'0x80000613', '0x80000614', '0x80000615', '0x80000616', '0x8000061C', '0x8000061E', '0x80000701', '0x80000705', '0x80000706', '0x80000707',
'0x80000708', '0x8000070B', '0x8000070C', '0x8000070E', '0x8000070F', '0x80000712', '0x80000723', '0x80000724', '0x80000725', '0x80000726',
'0x80000727', '0x80000728', '0x8000072A', '0x8000072C', '0x8000072D', '0x8000072E', '0x80000730', '0x80000731', '0x80000732', '0x80000733',
'0x80000734', '0x80000735', '0x80000736', '0x80000737', '0x80000738', '0x8000080E', '0x8000080F', '0x80000810', '0x80000811', '0x80000812',
'0x80000813', '0x80000814', '0x80000815', '0x80000816', '0x80000817', '0x80000818', '0x80000819', '0x80000820', '0x80000821', '0x80000822',
'0x80000823', '0x80000824', '0x80000825', '0x80000826', '0x80000827', '0x80000828', '0x80000829', '0x8000082A', '0x8000082B', '0x8000082C',
'0x8000082D', '0x80000907', '0x80000919', '0x8000091A', '0x8000091B', '0x8000091C', '0x8000091D', '0x8000091E', '0x8000091F', '0x80000A00',
'0x80000A01', '0x80000A03', '0x80000A06', '0x80000A07', '0x80000A08', '0x80000A09', '0x80000A0A', '0x80000A0B', '0x80000A0E', '0x80002206',
'0x80002207', '0x80002406', '0x80002407', '0x80002503', '0x80002506', '0x80002507', '0x8000261B', '0x80002653', '0x80002668', '0x80002669',
'0x8000266A', '0x8000266B', '0x8000266C', '0x8000266D', '0x8000266E', '0x8000266F', '0x80002670', '0x80002671', '0x80002672', '0x80002673',
'0x80002674', '0x80002675', '0x80002676', '0x80002677', '0x80002678', '0x80002679', '0x8000267A', '0x8000267B', '0x8000267C', '0x8000267D',
'0x8000267E', '0x8000267F', '0x80002680', '0x80002681', '0x80002682', '0x80002683', '0x80002684', '0x80002685', '0x80002703', '0x80002806',
'0x80002807', '0x80002808', '0x80002809', '0x8000280A', '0x8000280B', '0x8000280C', '0x8000280D', '0x8000280E', '0x8000280F', '0x80002810',
'0x80002811', '0x80002812', '0x80002813', '0x80002814', '0x80002815', '0x8000290B', '0x80002920', '0x80003003', '0x80003006', '0x80003106',
'0x80003107', '0x80003108', '0x80003109', '0x80003110', '0x80003111', '0x80003112', '0x80003250', '0x80004003', '0x80004004', '0x80004005',
'0x80004006', '0x80004007', '0x80004008', '0x80004009', '0x80004010', '0x80004011', '0x80004012', '0x80004013', '0x80004014', '0x80004015',
'0x80004016', '0x80004102', '0x80004103', '0x80004104', '0x80004105', '0x80004106', '0x80004107', '0x80004108', '0x80004109', '0x80005100',
'0x80005101', '0x80006000', '0x80006100', '0x80006101', '0x80006102', '0x80000019', '0x80002639', '0x80002666']
class TDTestCase(TBase):
# parse line
def parseLine(self, line, show):
line = line.strip()
# PRE_DEFINE
PRE_DEFINE = "#define TSDB_CODE_"
n = len(PRE_DEFINE)
if line[:n] != PRE_DEFINE:
return None
# MID_DEFINE
MID_DEFINE = "TAOS_DEF_ERROR_CODE(0, 0x"
pos = line.find(MID_DEFINE, n)
if pos == -1:
if show:
tdLog.info(f"not found \"{MID_DEFINE}\" line={line}")
return None
start = pos + len(MID_DEFINE)
code = line[start:].strip()
# )
pos = code.find(")")
if pos == -1:
if show:
tdLog.info(f"not found \")\", code={code}")
return None
# check len
code = code[:pos]
if len(code) != 4:
if show:
tdLog.info(f"code length is not 4, len:{len(code)} subcode={code}, line={line}")
return None
# return
return "0x8000" + code
# ignore error
def ignoreCode(self, code):
if code in ignoreCodes:
return True
else:
return False
# read valid code
def readHeadCodes(self, hFile):
codes = []
start = False
# read
with open(hFile) as file:
for line in file:
code = self.parseLine(line, start)
# invalid
if code is None:
continue
#print(code)
# valid
if code == "0x8000000B":
start = True
if start:
codes.append(code)
# return
return codes
# parse doc lines
def parseDocLine(self, line):
line = line.strip()
PRE_DEFINE = "| 0x8000"
n = len(PRE_DEFINE)
if line[:n] != PRE_DEFINE:
return None
line = line[2:]
cols = line.split("|")
# remove blank
cols = [col.strip() for col in cols]
#print(cols)
# return
return cols
# read valid code
def readDocCodes(self, docFile):
codes = []
start = False
# read
with open(docFile) as file:
for line in file:
code = self.parseDocLine(line)
# invalid
if code is None:
continue
# valid
codes.append(code)
# return
return codes
# check
def checkConsistency(self, docCodes, codes, checkCol2=True, checkCol3=True):
failed = 0
ignored = 0
# len
hLen = len(codes)
docLen = len(docCodes)
#errCodes = []
#tdLog.info(f"head file codes items= {hLen} doc file codes items={docLen} \n")
for i in range(hLen):
problem = True
action = "not found"
for j in range(docLen):
if codes[i] == docCodes[j][0]:
#tdLog.info(f" i={i} error code: {codes[i]} found in doc code:{docCodes[j][0]}\n")
problem = False
if docCodes[j][1] == "":
# the "Error Description" column must not be empty
problem = True
action = "found but \"Error Description\" column is empty"
# check col2 empty
elif checkCol2 and docCodes[j][2] == "":
problem = True
action = "found but \"Possible Cause\" column is empty"
# check col3 empty
elif checkCol3 and docCodes[j][3] == "":
problem = True
action = "found but \"Suggested Actions\" column is empty"
# found stop search next
break
if problem:
# ignore
if self.ignoreCode(codes[i]):
#tdLog.info(f" i={i} error code: {codes[i]} ignore ...")
ignored += 1
else:
tdLog.info(f" i={i} error code: {codes[i]} {action} in doc.")
failed += 1
#errCodes.append(codes[i])
#print(errCodes)
# result
if failed > 0:
tdLog.exit("Failed to check the consistency of error codes between header and doc. "
f"failed:{failed}, ignored:{ignored}, all:{hLen}\n")
tdLog.info(f"Check consistency successfully. ok items={hLen - ignored}, ignored items={ignored}\n")
# run
def run(self):
tdLog.debug(f"start to execute {__file__}")
# read head error code
hFile = "../../include/util/taoserror.h"
codes = self.readHeadCodes(hFile)
# read zh codes
zhDoc = "../../docs/zh/14-reference/09-error-code.md"
zhCodes = self.readDocCodes(zhDoc)
# read en codes
enDoc = "../../docs/en/14-reference/09-error-code.md"
enCodes = self.readDocCodes(enDoc)
# check zh
tdLog.info(f"check zh docs ...")
self.checkConsistency(zhCodes, codes)
# check en
tdLog.info(f"check en docs ...")
self.checkConsistency(enCodes, codes)
tdLog.success(f"{__file__} successfully executed")
tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())
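checkConsistency above walks both lists with a nested O(n·m) scan. The core "defined in the header but absent from the doc table" check can be sketched with a set difference (the per-column emptiness checks are omitted, and the names here are illustrative):

```python
def missing_in_doc(head_codes, doc_rows, ignore=()):
    """Return header codes with no matching first-column entry in the
    doc table rows, excluding ignored codes."""
    doc_set = {row[0] for row in doc_rows}
    skip = set(ignore)
    return [c for c in head_codes if c not in doc_set and c not in skip]

head = ["0x8000000B", "0x80000015", "0x80000018"]
doc = [["0x8000000B", "Unable to establish connection", "cause", "action"]]
print(missing_in_doc(head, doc, ignore=["0x80000018"]))  # -> ['0x80000015']
```

Building the doc-side set once makes the membership test O(1) per header code, which matters little at a few hundred codes but keeps the intent obvious.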


@ -18,6 +18,8 @@
,,y,army,./pytest.sh python3 ./test.py -f query/function/test_percentile.py
,,y,army,./pytest.sh python3 ./test.py -f query/function/test_resinfo.py
,,y,army,./pytest.sh python3 ./test.py -f query/function/test_interp.py
,,y,army,./pytest.sh python3 ./test.py -f query/function/test_interval.py
,,y,army,./pytest.sh python3 ./test.py -f query/function/test_interval_diff_tz.py
,,y,army,./pytest.sh python3 ./test.py -f query/function/concat.py
,,y,army,./pytest.sh python3 ./test.py -f query/function/cast.py
,,y,army,./pytest.sh python3 ./test.py -f query/test_join.py
@ -53,6 +55,7 @@
,,y,army,./pytest.sh python3 ./test.py -f query/test_having.py
,,n,army,python3 ./test.py -f tmq/drop_lost_comsumers.py
,,y,army,./pytest.sh python3 ./test.py -f cmdline/taosCli.py
,,n,army,python3 ./test.py -f whole/checkErrorCode.py
#
# system test
@ -354,6 +357,7 @@
,,y,system-test,./pytest.sh python3 ./test.py -f 0-others/telemetry.py
,,y,system-test,./pytest.sh python3 ./test.py -f 0-others/backquote_check.py
,,y,system-test,./pytest.sh python3 ./test.py -f 0-others/taosdMonitor.py
,,n,system-test,python3 ./test.py -f 0-others/taosdlog.py
,,n,system-test,python3 ./test.py -f 0-others/taosdShell.py -N 5 -M 3 -Q 3
,,n,system-test,python3 ./test.py -f 0-others/udfTest.py
,,n,system-test,python3 ./test.py -f 0-others/udf_create.py
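Each line of the case list above is a small comma-separated record. A hedged sketch of a parser for it, assuming (from the examples) that the third field selects the `./pytest.sh` wrapper, the fourth names the suite directory, and the remainder is the command:

```python
def parse_case_line(line):
    """Parse one case-list line; returns None for blanks and comments.
    Field meanings are assumptions inferred from the list above."""
    line = line.strip()
    if not line or line.startswith("#"):
        return None
    parts = line.split(",", 4)  # the command itself may not be split further
    if len(parts) < 5:
        return None
    return {"wrapper": parts[2] == "y", "suite": parts[3], "cmd": parts[4]}

print(parse_case_line(",,n,army,python3 ./test.py -f whole/checkErrorCode.py"))
```

`split(",", 4)` caps the split so a command containing extra commas would survive intact.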


@ -33,30 +33,47 @@ void do_stmt(TAOS* taos) {
char* tbs[2] = {"tb", "tb2"};
int t1_val[2] = {0, 1};
-int t2_len[2] = {3, 3};
-TAOS_STMT2_BIND tags[2][2] = {{{0, &t1_val[0], NULL, NULL, 0}, {0, "a1", &t2_len[0], NULL, 0}},
-                              {{0, &t1_val[1], NULL, NULL, 0}, {0, "a2", &t2_len[1], NULL, 0}}};
+int t2_len[2] = {10, 10};
+int t3_len[2] = {sizeof(int), sizeof(int)};
+TAOS_STMT2_BIND tags[2][2] = {{{0, &t1_val[0], &t3_len[0], NULL, 0}, {0, "after1", &t2_len[0], NULL, 0}},
+                              {{0, &t1_val[1], &t3_len[1], NULL, 0}, {0, "after2", &t2_len[1], NULL, 0}}};
 TAOS_STMT2_BIND params[2][2] = {
-    {{TSDB_DATA_TYPE_TIMESTAMP, v.ts, NULL, is_null, 2}, {TSDB_DATA_TYPE_BINARY, v.b, b_len, is_null2, 2}},
-    {{TSDB_DATA_TYPE_TIMESTAMP, v.ts, NULL, is_null, 2}, {TSDB_DATA_TYPE_BINARY, v.b, b_len, is_null2, 2}}};
+    {{TSDB_DATA_TYPE_TIMESTAMP, v.ts, t64_len, is_null, 2}, {TSDB_DATA_TYPE_BINARY, v.b, b_len, NULL, 2}},
+    {{TSDB_DATA_TYPE_TIMESTAMP, v.ts, t64_len, is_null, 2}, {TSDB_DATA_TYPE_BINARY, v.b, b_len, NULL, 2}}};
TAOS_STMT2_BIND* tagv[2] = {&tags[0][0], &tags[1][0]};
TAOS_STMT2_BIND* paramv[2] = {&params[0][0], &params[1][0]};
TAOS_STMT2_BINDV bindv = {2, &tbs[0], &tagv[0], &paramv[0]};
TAOS_STMT2* stmt = taos_stmt2_init(taos, &option);
-const char* sql = "insert into db.? using db.stb tags(?, ?) values(?,?)";
-int code = taos_stmt2_prepare(stmt, sql, 0);
+// Equivalent to:
+// const char* sql = "insert into db.? using db.stb tags(?, ?) values(?,?)";
+const char* sql = "insert into db.stb(tbname,ts,b,t1,t2) values(?,?,?,?,?)";
+int code = taos_stmt2_prepare(stmt, sql, 0);
if (code != 0) {
printf("failed to execute taos_stmt2_prepare. error:%s\n", taos_stmt2_error(stmt));
taos_stmt2_close(stmt);
return;
}
int fieldNum = 0;
TAOS_FIELD_STB* pFields = NULL;
code = taos_stmt2_get_stb_fields(stmt, &fieldNum, &pFields);
if (code != 0) {
printf("failed get col,ErrCode: 0x%x, ErrMessage: %s.\n", code, taos_stmt2_error(stmt));
} else {
printf("col nums:%d\n", fieldNum);
for (int i = 0; i < fieldNum; i++) {
printf("field[%d]: %s, data_type:%d, field_type:%d\n", i, pFields[i].name, pFields[i].type,
pFields[i].field_type);
}
}
int64_t ts = 1591060628000;
for (int i = 0; i < 2; ++i) {
-// v.ts[i] = ts++;
-v.ts[i] = ts;
-// t64_len[i] = sizeof(int64_t);
+v.ts[i] = ts++;
+t64_len[i] = sizeof(int64_t);
}
strcpy(v.b, "abcdefg");
b_len[0] = (int)strlen(v.b);
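The fix above supplies explicit length arrays (`t3_len`, `t64_len`) even for fixed-size tag and timestamp buffers. A sketch of that invariant as a standalone check; the dict layout mirrors the TAOS_STMT2_BIND fields, and the rule itself is an assumption drawn from the change, not a documented API contract:

```python
def check_bind(bind):
    """Validate one TAOS_STMT2_BIND-like entry (keys mirror the C struct:
    buffer_type, buffer, length, is_null, num). Assumed rule: every
    non-NULL buffer needs a length array with one entry per row."""
    problems = []
    if bind["buffer"] is not None and bind["length"] is None:
        problems.append("missing length array for non-NULL buffer")
    if bind["length"] is not None and len(bind["length"]) != bind["num"]:
        problems.append("length array size != num")
    return problems

ok = {"buffer_type": 9, "buffer": [1591060628000, 1591060628001],
      "length": [8, 8], "is_null": None, "num": 2}
bad = {"buffer_type": 9, "buffer": [1591060628000], "length": None,
       "is_null": None, "num": 1}
print(check_bind(ok), check_bind(bad))
```

Keeping the check separate from the bind construction makes it easy to run over every tag and column entry before calling the C API.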


@ -23,8 +23,8 @@ void getFields(TAOS *taos, const char *sql) {
} else {
printf("bind nums:%d\n", fieldNum);
for (int i = 0; i < fieldNum; i++) {
-printf("field[%d]: %s, data_type:%d, field_type:%d\n", i, pFields[i].name, pFields[i].type,
-       pFields[i].field_type);
+printf("field[%d]: %s, data_type:%d, field_type:%d, precision:%d\n", i, pFields[i].name, pFields[i].type,
+       pFields[i].field_type, pFields[i].precision);
}
}
printf("====================================\n");
@ -32,6 +32,28 @@ void getFields(TAOS *taos, const char *sql) {
taos_stmt2_close(stmt);
}
void getQueryFields(TAOS *taos, const char *sql) {
TAOS_STMT2_OPTION option = {0};
TAOS_STMT2 *stmt = taos_stmt2_init(taos, &option);
int code = taos_stmt2_prepare(stmt, sql, 0);
if (code != 0) {
printf("failed to execute taos_stmt2_prepare. error:%s\n", taos_stmt2_error(stmt));
taos_stmt2_close(stmt);
return;
}
int fieldNum = 0;
TAOS_FIELD_STB *pFields = NULL;
code = taos_stmt2_get_stb_fields(stmt, &fieldNum, &pFields);
if (code != 0) {
printf("failed get col,ErrCode: 0x%x, ErrMessage: %s.\n", code, taos_stmt2_error(stmt));
} else {
printf("bind nums:%d\n", fieldNum);
}
printf("====================================\n");
taos_stmt2_free_stb_fields(stmt, pFields);
taos_stmt2_close(stmt);
}
void do_query(TAOS *taos, const char *sql) {
TAOS_RES *result = taos_query(taos, sql);
int code = taos_errno(result);
@ -45,7 +67,7 @@ void do_query(TAOS *taos, const char *sql) {
void do_stmt(TAOS *taos) {
do_query(taos, "drop database if exists db");
-do_query(taos, "create database db");
+do_query(taos, "create database db PRECISION 'ns'");
do_query(taos, "use db");
do_query(taos,
"create table db.stb (ts timestamp, b binary(10)) tags(t1 "
@ -143,6 +165,18 @@ void do_stmt(TAOS *taos) {
"v11,v12,v13,v14,v15) values(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)";
printf("case 13 : %s\n", sql);
getFields(taos, sql);
// case 14 : select * from ntb where ts = ?
// query type
sql = "select * from ntb where ts = ?";
printf("case 14 : %s\n", sql);
getQueryFields(taos, sql);
// case 15 : select * from ntb where ts = ? and b = ?
// query type
sql = "select * from ntb where ts = ? and b = ?";
printf("case 15 : %s\n", sql);
getQueryFields(taos, sql);
printf("=================error test===================\n");
// case 14 : INSERT INTO db.d0 using db.stb values(?,?)
@ -199,6 +233,12 @@ void do_stmt(TAOS *taos) {
sql = "insert into db.stb values(?,?)";
printf("case 22 : %s\n", sql);
getFields(taos, sql);
// case 23 : select * from ? where ts = ?
// wrong query type
sql = "select * from ? where ts = ?";
printf("case 23 : %s\n", sql);
getQueryFields(taos, sql);
}
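The cases above route insert statements to getFields and parameterized selects to getQueryFields. A toy classifier by leading keyword (a simplification; the real decision is made by the server-side parser, and `select * from ? ...` in case 23 is rejected there despite starting with `select`):

```python
def stmt_kind(sql):
    """Classify a prepared statement by its first keyword: 'insert',
    'query' for selects, or 'other'. Illustrative only."""
    parts = sql.split(None, 1)
    head = parts[0].lower() if parts else ""
    if head == "insert":
        return "insert"
    if head == "select":
        return "query"
    return "other"

print(stmt_kind("select * from ntb where ts = ?"))   # query
print(stmt_kind("insert into db.stb values(?,?)"))   # insert
```

In the test this distinction decides whether the field list describes bind targets (columns and tags) or query parameters.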
int main() {


@ -1,66 +1,351 @@
import taos
import sys
import time
import os
from util.log import *
from util.sql import *
from util.cases import *
from util.dnodes import *
class TDTestCase:
def init(self, conn, logSql, replicaVar=1):
self.replicaVar = int(replicaVar)
tdLog.debug(f"start to execute {__file__}")
tdSql.init(conn.cursor())
def getBuildPath(self):
selfPath = os.path.dirname(os.path.realpath(__file__))
if ("community" in selfPath):
projPath = selfPath[:selfPath.find("community")]
else:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files or "taosd.exe" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root) - len("/build/bin")]
break
return buildPath
def run(self): # sourcery skip: extract-duplicate-method, remove-redundant-fstring
tdSql.prepare()
# time.sleep(2)
tdSql.query("create user testpy pass 'testpy'")
buildPath = self.getBuildPath()
if (buildPath == ""):
tdLog.exit("taosd not found!")
else:
tdLog.info("taosd found in %s" % buildPath)
logPath = buildPath + "/../sim/dnode1/log"
tdLog.info("log path: %s" % logPath)
tdDnodes.stop(1)
time.sleep(2)
tdSql.error("select * from information_schema.ins_databases")
os.system("rm -rf %s" % logPath)
if os.path.exists(logPath) == True:
tdLog.exit("log path still exists!")
tdDnodes.start(1)
time.sleep(2)
if os.path.exists(logPath) != True:
tdLog.exit("log path is not generated!")
tdSql.query("select * from information_schema.ins_databases")
def stop(self):
tdSql.close()
tdLog.success(f"{__file__} successfully executed")
tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())
import concurrent.futures
import os
import os.path
import platform
import shutil
import subprocess
import time
from util.log import *
from util.sql import *
from util.cases import *
from util.dnodes import *
class TDTestCase:
def init(self, conn, logSql, replicaVar=1):
self.replicaVar = int(replicaVar)
tdLog.debug(f"start to execute {__file__}")
tdSql.init(conn.cursor())
tdSql.prepare()
self.buildPath = self.getBuildPath()
if (self.buildPath == ""):
tdLog.exit("taosd not found!")
else:
tdLog.info("taosd found in %s" % self.buildPath)
self.taosdPath = self.buildPath + "/build/bin/taosd"
self.taosPath = self.buildPath + "/build/bin/taos"
self.logPath = self.buildPath + "/../sim/dnode1/log"
tdLog.info("taosd path: %s" % self.taosdPath)
tdLog.info("taos path: %s" % self.taosPath)
tdLog.info("log path: %s" % self.logPath)
self.commonCfgDict = {}
def getBuildPath(self):
selfPath = os.path.dirname(os.path.realpath(__file__))
if ("community" in selfPath):
projPath = selfPath[:selfPath.find("community")]
else:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files or "taosd.exe" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root) - len("/build/bin")]
break
return buildPath
def prepareCfg(self, cfgPath, cfgDict):
tdLog.info("make dir %s" % cfgPath)
os.makedirs(cfgPath, exist_ok=True)
with open(cfgPath + "/taos.cfg", "w") as f:
for key in self.commonCfgDict:
f.write("%s %s\n" % (key, self.commonCfgDict[key]))
for key in cfgDict:
f.write("%s %s\n" % (key, cfgDict[key]))
if not os.path.exists(cfgPath + "/taos.cfg"):
tdLog.exit("taos.cfg not found in %s" % cfgPath)
else:
tdLog.info("taos.cfg found in %s" % cfgPath)
def closeBin(self, binName):
tdLog.info("Closing %s" % binName)
if platform.system().lower() == 'windows':
psCmd = "for /f %%a in ('wmic process where \"name='%s.exe'\" get processId ^| xargs echo ^| awk ^'{print $2}^' ^&^& echo aa') do @(ps | grep %%a | awk '{print $1}' | xargs)" % binName
else:
psCmd = "ps -ef | grep -w %s | grep -v grep | awk '{print $2}'" % binName
tdLog.info(f"psCmd:{psCmd}")
try:
rem = subprocess.run(psCmd, shell=True, check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
processID = rem.stdout.decode().strip()
tdLog.info(f"processID:{processID}")
except Exception as e:
tdLog.info(f"closeBin error:{e}")
processID = ""
onlyKillOnceWindows = 0
while(processID):
if not platform.system().lower() == 'windows' or (onlyKillOnceWindows == 0 and platform.system().lower() == 'windows'):
killCmd = "kill -9 %s > /dev/null 2>&1" % processID
if platform.system().lower() == 'windows':
killCmd = "kill -9 %s > nul 2>&1" % processID
tdLog.info(f"killCmd:{killCmd}")
os.system(killCmd)
tdLog.info(f"killed {binName} process {processID}")
onlyKillOnceWindows = 1
time.sleep(1)
try:
rem = subprocess.run(psCmd, shell=True, check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
processID = rem.stdout.decode().strip()
except Exception as e:
tdLog.info(f"closeBin error:{e}")
processID = ""
def closeTaosd(self, signal=9):
self.closeBin("taosd")
def closeTaos(self, signal=9):
self.closeBin("taos")
def openBin(self, binPath, waitSec=5):
tdLog.info(f"Opening {binPath}")
try:
process = subprocess.Popen(binPath, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
time.sleep(waitSec)
if process.poll() is None:
tdLog.info(f"{binPath} started successfully")
else:
error = process.stderr.read().decode(encoding="utf-8").strip()
raise Exception(f"Failed to start {binPath}: {error}")
except Exception as e:
raise Exception(f"Failed to start {binPath}: {repr(e)}")
def openTaosd(self, args="", waitSec=8):
self.openBin(f'{self.taosdPath} {args}', waitSec)
def openTaos(self, args="", waitSec=5):
self.openBin(f'{self.taosPath} {args}', waitSec)
def prepare_logoutput(self, desc, port, logOutput, skipOpenBin=False):
tdLog.info("Preparing %s, port:%s" % (desc, port))
dnodePath = self.buildPath + "/../sim/dnode%s" % port
tdLog.info("remove dnodePath:%s" % dnodePath)
try:
if os.path.exists(dnodePath):
shutil.rmtree(dnodePath)
except Exception as e:
raise Exception(f"Failed to remove directory {dnodePath}: {e}")
try:
self.prepareCfg(dnodePath, {"serverPort": port,
"dataDir": dnodePath + os.sep + "data",
"logDir": dnodePath + os.sep + "log"})
except Exception as e:
raise Exception(f"Failed to prepare configuration for {dnodePath}: {e}")
try:
self.openTaosd(f"-c {dnodePath} {logOutput}")
self.openTaos(f"-c {dnodePath} {logOutput}")
except Exception as e:
if(skipOpenBin):
tdLog.info(f"Failed to prepare taosd and taos with log output {logOutput}: {e}")
else:
raise Exception(f"Failed to prepare taosd and taos with log output {logOutput}: {e}")
def prepare_stdout(self):
list = self.prepare_list[0]
self.prepare_logoutput(list[0], list[1], "-o " + list[0])
def check_stdout(self):
tdLog.info("Running check stdout")
dnodePath = self.buildPath + "/../sim/dnode%s" % self.prepare_list[0][1]
tdSql.checkEqual(False, os.path.isfile(f"{dnodePath}/log/taosdlog.0"))
tdSql.checkEqual(False, os.path.isfile(f"{dnodePath}/log/taoslog0.0"))
def prepare_stderr(self):
list = self.prepare_list[1]
self.prepare_logoutput(list[0], list[1], "--log-output " + list[0])
def check_stderr(self):
tdLog.info("Running check stderr")
dnodePath = self.buildPath + "/../sim/dnode%s" % self.prepare_list[1][1]
tdSql.checkEqual(False, os.path.isfile(f"{dnodePath}/log/taosdlog.0"))
tdSql.checkEqual(False, os.path.isfile(f"{dnodePath}/log/taoslog0.0"))
def prepare_dev_null(self):
list = self.prepare_list[2]
# self.prepare_logoutput(list[0], list[1], "--log-output=" + list[0])
def check_dev_null(self):
tdLog.info("Running check /dev/null")
# dnodePath = self.buildPath + "/../sim/dnode%s" % self.prepare_list[2][1]
# tdSql.checkEqual(False, os.path.isfile(f"{dnodePath}/log/taosdlog.0"))
# tdSql.checkEqual(False, os.path.isfile(f"{dnodePath}/log/taoslog0.0"))
def prepare_fullpath(self):
list = self.prepare_list[3]
dnodePath = self.buildPath + "/../sim/dnode%s" % self.prepare_list[3][1]
self.prepare_logoutput(list[0], list[1], "-o " + dnodePath + "/log0/" )
def check_fullpath(self):
tdLog.info("Running check fullpath")
logPath = self.buildPath + "/../sim/dnode%s/log0/" % self.prepare_list[3][1]
tdSql.checkEqual(True, os.path.exists(f"{logPath}taosdlog.0"))
tdSql.checkEqual(True, os.path.exists(f"{logPath}taoslog0.0"))
def prepare_fullname(self):
list = self.prepare_list[4]
dnodePath = self.buildPath + "/../sim/dnode%s" % self.prepare_list[4][1]
self.prepare_logoutput(list[0], list[1], "--log-output " + dnodePath + "/log0/" + list[0])
def check_fullname(self):
tdLog.info("Running check fullname")
logPath = self.buildPath + "/../sim/dnode%s/log0/" % self.prepare_list[4][1]
tdSql.checkEqual(True, os.path.exists(logPath + self.prepare_list[4][0] + ".0"))
tdSql.checkEqual(True, os.path.exists(logPath + self.prepare_list[4][0] + "0.0"))
def prepare_relativepath(self):
list = self.prepare_list[5]
self.prepare_logoutput(list[0], list[1], "--log-output=" + "log0/")
def check_relativepath(self):
tdLog.info("Running check relativepath")
logPath = self.buildPath + "/../sim/dnode%s/log/log0/" % self.prepare_list[5][1]
tdSql.checkEqual(True, os.path.exists(logPath + "taosdlog.0"))
tdSql.checkEqual(True, os.path.exists(logPath + "taoslog0.0"))
def prepare_relativename(self):
list = self.prepare_list[6]
self.prepare_logoutput(list[0], list[1], "-o " + "log0/" + list[0])
def check_relativename(self):
tdLog.info("Running check relativename")
logPath = self.buildPath + "/../sim/dnode%s/log/log0/" % self.prepare_list[6][1]
tdSql.checkEqual(True, os.path.exists(logPath + self.prepare_list[6][0] + ".0"))
tdSql.checkEqual(True, os.path.exists(logPath + self.prepare_list[6][0] + "0.0"))
def prepare_filename(self):
list = self.prepare_list[7]
self.prepare_logoutput(list[0], list[1], "--log-output " + list[0])
def check_filename(self):
tdLog.info("Running check filename")
logPath = self.buildPath + "/../sim/dnode%s/log/" % self.prepare_list[7][1]
tdSql.checkEqual(True, os.path.exists(logPath + self.prepare_list[7][0] + ".0"))
tdSql.checkEqual(True, os.path.exists(logPath + self.prepare_list[7][0] + "0.0"))
def prepare_empty(self):
list = self.prepare_list[8]
self.prepare_logoutput(list[0], list[1], "--log-output=" + list[0], True)
def check_empty(self):
tdLog.info("Running check empty")
logPath = self.buildPath + "/../sim/dnode%s/log" % self.prepare_list[8][1]
tdSql.checkEqual(False, os.path.exists(f"{logPath}/taosdlog.0"))
tdSql.checkEqual(False, os.path.exists(f"{logPath}/taoslog.0"))
def prepare_illegal(self):
list = self.prepare_list[9]
self.prepare_logoutput(list[0], list[1], "--log-output=" + list[0], True)
def check_illegal(self):
tdLog.info("Running check illegal")
logPath = self.buildPath + "/../sim/dnode%s/log" % self.prepare_list[9][1]
tdSql.checkEqual(False, os.path.exists(f"{logPath}/taosdlog.0"))
tdSql.checkEqual(False, os.path.exists(f"{logPath}/taoslog.0"))
def prepareCheckResources(self):
self.prepare_list = [["stdout", "10030"], ["stderr", "10031"], ["/dev/null", "10032"], ["fullpath", "10033"],
["fullname", "10034"], ["relativepath", "10035"], ["relativename", "10036"], ["filename", "10037"],
["empty", "10038"], ["illeg?al", "10039"]]
self.check_functions = {
self.prepare_stdout: self.check_stdout,
self.prepare_stderr: self.check_stderr,
self.prepare_dev_null: self.check_dev_null,
self.prepare_fullpath: self.check_fullpath,
self.prepare_fullname: self.check_fullname,
self.prepare_relativepath: self.check_relativepath,
self.prepare_relativename: self.check_relativename,
self.prepare_filename: self.check_filename,
self.prepare_empty: self.check_empty,
self.prepare_illegal: self.check_illegal,
}
def checkLogOutput(self):
self.closeTaosd()
self.closeTaos()
self.prepareCheckResources()
with concurrent.futures.ThreadPoolExecutor() as executor:
prepare_futures = [executor.submit(prepare_func) for prepare_func, _ in self.check_functions.items()]
for future in concurrent.futures.as_completed(prepare_futures):
try:
future.result()
except Exception as e:
raise Exception(f"Error in prepare function: {e}")
check_futures = [executor.submit(check_func) for _, check_func in self.check_functions.items()]
for future in concurrent.futures.as_completed(check_futures):
try:
future.result()
except Exception as e:
raise Exception(f"Error in check function: {e}")
def checkLogRotate(self):
tdLog.info("Running check log rotate")
dnodePath = self.buildPath + "/../sim/dnode10050"
logRotateAfterBoot = 6 # LOG_ROTATE_BOOT
self.closeTaosd()
self.closeTaos()
try:
if os.path.exists(dnodePath):
shutil.rmtree(dnodePath)
self.prepareCfg(dnodePath, {"serverPort": 10050,
"dataDir": dnodePath + os.sep + "data",
"logDir": dnodePath + os.sep + "log",
"logKeepDays": "3" })
except Exception as e:
raise Exception(f"Failed to prepare configuration for {dnodePath}: {e}")
nowSec = int(time.time())
stubFile99 = f"{dnodePath}/log/taosdlog.99"
stubFile101 = f"{dnodePath}/log/taosdlog.101"
stubGzFile98 = f"{dnodePath}/log/taosdlog.98.gz"
stubGzFile102 = f"{dnodePath}/log/taosdlog.102.gz"
stubFileNow = f"{dnodePath}/log/taosdlog.{nowSec}"
stubGzFileNow = f"{dnodePath}/log/taosdlog.%d.gz" % (nowSec - 1)
stubGzFileKeep = f"{dnodePath}/log/taosdlog.%d.gz" % (nowSec - 86400 * 2)
stubGzFileDel = f"{dnodePath}/log/taosdlog.%d.gz" % (nowSec - 86400 * 3)
stubFiles = [stubFile99, stubFile101, stubGzFile98, stubGzFile102, stubFileNow, stubGzFileNow, stubGzFileKeep, stubGzFileDel]
try:
os.makedirs(f"{dnodePath}/log", exist_ok=True)
for stubFile in stubFiles:
with open(stubFile, "w") as f:
f.write("test log rotate")
except Exception as e:
raise Exception(f"Failed to prepare log files for {dnodePath}: {e}")
tdSql.checkEqual(True, os.path.exists(stubFile101))
tdSql.checkEqual(True, os.path.exists(stubGzFile102))
tdSql.checkEqual(True, os.path.exists(stubFileNow))
tdSql.checkEqual(True, os.path.exists(stubGzFileDel))
self.openTaosd(f"-c {dnodePath}")
self.openTaos(f"-c {dnodePath}")
tdLog.info("wait %d seconds for log rotate" % (logRotateAfterBoot + 2))
time.sleep(logRotateAfterBoot + 2)
tdSql.checkEqual(True, os.path.exists(stubFile99))
tdSql.checkEqual(False, os.path.exists(stubFile101))
tdSql.checkEqual(False, os.path.exists(f'{stubFile101}.gz'))
tdSql.checkEqual(True, os.path.exists(stubGzFile98))
tdSql.checkEqual(True, os.path.exists(f'{stubFileNow}.gz'))
tdSql.checkEqual(True, os.path.exists(stubGzFileNow))
tdSql.checkEqual(True, os.path.exists(stubGzFileKeep))
tdSql.checkEqual(False, os.path.exists(stubGzFile102))
tdSql.checkEqual(False, os.path.exists(stubGzFileDel))
tdSql.checkEqual(True, os.path.exists(f"{dnodePath}/log/taosdlog.0"))
tdSql.checkEqual(True, os.path.exists(f"{dnodePath}/log/taoslog0.0"))
def run(self):
self.checkLogOutput()
self.checkLogRotate()
self.closeTaosd()
self.closeTaos()
def stop(self):
tdSql.close()
tdLog.success(f"{__file__} successfully executed")
tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())

View File

@@ -23,8 +23,8 @@ var adapterLog = log.GetLogger("ADP")
type adapterReqType int
const (
rest adapterReqType = iota // 0 - rest
ws // 1 - ws
rest adapterReqType = iota
ws
)
type Adapter struct {
@@ -210,7 +210,7 @@ var adapterTableSql = "create stable if not exists `adapter_requests` (" +
"`other_fail` int unsigned, " +
"`query_in_process` int unsigned, " +
"`write_in_process` int unsigned ) " +
"tags (`endpoint` varchar(32), `req_type` tinyint unsigned )"
"tags (`endpoint` varchar(255), `req_type` tinyint unsigned )"
func (a *Adapter) createTable() error {
if a.conn == nil {

View File

@@ -2,6 +2,7 @@ package api
import (
"context"
"fmt"
"net/http"
"net/http/httptest"
"strings"
@@ -96,3 +97,38 @@ func TestAdapter2(t *testing.T) {
conn.Exec(context.Background(), "drop database "+c.Metrics.Database.Name, util.GetQidOwn())
}
func Test_adapterTableSql(t *testing.T) {
conn, _ := db.NewConnector("root", "taosdata", "127.0.0.1", 6041, false)
defer conn.Close()
dbName := "db_202412031446"
conn.Exec(context.Background(), "create database "+dbName, util.GetQidOwn())
defer conn.Exec(context.Background(), "drop database "+dbName, util.GetQidOwn())
conn, _ = db.NewConnectorWithDb("root", "taosdata", "127.0.0.1", 6041, dbName, false)
defer conn.Close()
conn.Exec(context.Background(), adapterTableSql, util.GetQidOwn())
testCases := []struct {
ep string
wantErr bool
}{
{"", false},
{"hello", false},
{strings.Repeat("a", 128), false},
{strings.Repeat("a", 255), false},
{strings.Repeat("a", 256), true},
}
for i, tc := range testCases {
sql := fmt.Sprintf("create table d%d using adapter_requests tags ('%s', 0)", i, tc.ep)
_, err := conn.Exec(context.Background(), sql, util.GetQidOwn())
if tc.wantErr {
assert.Error(t, err) // [0x2653] Value too long for column/tag: endpoint
} else {
assert.NoError(t, err)
}
}
}

View File

@@ -17,10 +17,7 @@ var commonLogger = log.GetLogger("CMN")
func CreateDatabase(username string, password string, host string, port int, usessl bool, dbname string, databaseOptions map[string]interface{}) {
qid := util.GetQidOwn()
commonLogger := commonLogger.WithFields(
logrus.Fields{config.ReqIDKey: qid},
)
commonLogger := commonLogger.WithFields(logrus.Fields{config.ReqIDKey: qid})
ctx := context.Background()
@@ -43,7 +40,6 @@ func CreateDatabase(username string, password string, host string, port int, use
}
return
}
panic(err)
}
func generateCreateDBSql(dbname string, databaseOptions map[string]interface{}) string {

View File

@@ -748,20 +748,21 @@ func (gm *GeneralMetric) initColumnSeqMap() error {
}
func (gm *GeneralMetric) createSTables() error {
var createTableSql = "create stable if not exists taosd_cluster_basic " +
"(ts timestamp, first_ep varchar(100), first_ep_dnode_id INT, cluster_version varchar(20)) " +
"tags (cluster_id varchar(50))"
if gm.conn == nil {
return errNoConnection
}
createTableSql := "create stable if not exists taosd_cluster_basic " +
"(ts timestamp, first_ep varchar(255), first_ep_dnode_id INT, cluster_version varchar(20)) " +
"tags (cluster_id varchar(50))"
_, err := gm.conn.Exec(context.Background(), createTableSql, util.GetQidOwn())
if err != nil {
return err
}
createTableSql = "create stable if not exists taos_slow_sql_detail" +
" (start_ts TIMESTAMP, request_id BIGINT UNSIGNED PRIMARY KEY, query_time INT, code INT, error_info varchar(128), " +
createTableSql = "create stable if not exists taos_slow_sql_detail " +
"(start_ts TIMESTAMP, request_id BIGINT UNSIGNED PRIMARY KEY, query_time INT, code INT, error_info varchar(128), " +
"type TINYINT, rows_num BIGINT, sql varchar(16384), process_name varchar(32), process_id varchar(32)) " +
"tags (db varchar(1024), `user` varchar(32), ip varchar(32), cluster_id varchar(32))"

View File

@@ -7,6 +7,7 @@ import (
"net/http/httptest"
"strings"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/taosdata/taoskeeper/db"
@@ -255,6 +256,7 @@ func TestGenMetric(t *testing.T) {
}
})
}
func TestGetSubTableName(t *testing.T) {
tests := []struct {
stbName string
@@ -356,3 +358,42 @@ func TestGetSubTableName(t *testing.T) {
})
}
}
func Test_createSTables(t *testing.T) {
conn, _ := db.NewConnector("root", "taosdata", "127.0.0.1", 6041, false)
defer conn.Close()
dbName := "db_202412031527"
conn.Exec(context.Background(), "create database "+dbName, util.GetQidOwn())
defer conn.Exec(context.Background(), "drop database "+dbName, util.GetQidOwn())
conn, _ = db.NewConnectorWithDb("root", "taosdata", "127.0.0.1", 6041, dbName, false)
defer conn.Close()
gm := GeneralMetric{conn: conn}
gm.createSTables()
testCases := []struct {
ep string
wantErr bool
}{
{"", false},
{"hello", false},
{strings.Repeat("a", 128), false},
{strings.Repeat("a", 255), false},
{strings.Repeat("a", 256), true},
}
conn.Exec(context.Background(),
"create table d0 using taosd_cluster_basic tags('cluster_id')", util.GetQidOwn())
for _, tc := range testCases {
sql := fmt.Sprintf("insert into d0 (ts, first_ep) values(%d, '%s')", time.Now().UnixMilli(), tc.ep)
_, err := conn.Exec(context.Background(), sql, util.GetQidOwn())
if tc.wantErr {
assert.Error(t, err) // [0x2653] Value too long for column/tag: endpoint
} else {
assert.NoError(t, err)
}
}
}

View File

@@ -384,7 +384,7 @@ func insertClusterInfoSql(info ClusterInfo, ClusterID string, protocol int, ts s
sqls = append(sqls, fmt.Sprintf("insert into d_info_%s using d_info tags (%d, '%s', '%s') values ('%s', '%s')",
ClusterID+strconv.Itoa(dnode.DnodeID), dnode.DnodeID, dnode.DnodeEp, ClusterID, ts, dnode.Status))
dtotal++
if "ready" == dnode.Status {
if dnode.Status == "ready" {
dalive++
}
}
@@ -393,8 +393,8 @@
sqls = append(sqls, fmt.Sprintf("insert into m_info_%s using m_info tags (%d, '%s', '%s') values ('%s', '%s')",
ClusterID+strconv.Itoa(mnode.MnodeID), mnode.MnodeID, mnode.MnodeEp, ClusterID, ts, mnode.Role))
mtotal++
//LEADER FOLLOWER CANDIDATE ERROR
if "ERROR" != mnode.Role {
// LEADER FOLLOWER CANDIDATE ERROR
if mnode.Role != "ERROR" {
malive++
}
}

View File

@@ -45,7 +45,7 @@ var dnodeEpLen = strconv.Itoa(255)
var CreateClusterInfoSql = "create table if not exists cluster_info (" +
"ts timestamp, " +
"first_ep binary(134), " +
"first_ep binary(255), " +
"first_ep_dnode_id int, " +
"version binary(12), " +
"master_uptime float, " +

View File

@@ -0,0 +1,51 @@
package api
import (
"context"
"fmt"
"strings"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/taosdata/taoskeeper/db"
"github.com/taosdata/taoskeeper/util"
)
func TestCreateClusterInfoSql(t *testing.T) {
conn, _ := db.NewConnector("root", "taosdata", "127.0.0.1", 6041, false)
defer conn.Close()
dbName := "db_202412031539"
conn.Exec(context.Background(), "create database "+dbName, util.GetQidOwn())
defer conn.Exec(context.Background(), "drop database "+dbName, util.GetQidOwn())
conn, _ = db.NewConnectorWithDb("root", "taosdata", "127.0.0.1", 6041, dbName, false)
defer conn.Close()
conn.Exec(context.Background(), CreateClusterInfoSql, util.GetQidOwn())
testCases := []struct {
ep string
wantErr bool
}{
{"", false},
{"hello", false},
{strings.Repeat("a", 128), false},
{strings.Repeat("a", 255), false},
{strings.Repeat("a", 256), true},
}
conn.Exec(context.Background(),
"create table d0 using cluster_info tags('cluster_id')", util.GetQidOwn())
for _, tc := range testCases {
sql := fmt.Sprintf("insert into d0 (ts, first_ep) values(%d, '%s')", time.Now().UnixMilli(), tc.ep)
_, err := conn.Exec(context.Background(), sql, util.GetQidOwn())
if tc.wantErr {
assert.Error(t, err) // [0x2653] Value too long for column/tag: endpoint
} else {
assert.NoError(t, err)
}
}
}

View File

@@ -315,14 +315,13 @@ func (cmd *Command) TransferDataToDest(data *db.Data, dstTable string, tagNum in
// cluster_info
func (cmd *Command) TransferTaosdClusterBasicInfo() error {
ctx := context.Background()
endTime := time.Now()
delta := time.Hour * 24 * 10
var createTableSql = "create stable if not exists taosd_cluster_basic " +
"(ts timestamp, first_ep varchar(100), first_ep_dnode_id INT, cluster_version varchar(20)) " +
"(ts timestamp, first_ep varchar(255), first_ep_dnode_id INT, cluster_version varchar(20)) " +
"tags (cluster_id varchar(50))"
if _, err := cmd.conn.Exec(ctx, createTableSql, util.GetQidOwn()); err != nil {

View File

@@ -0,0 +1,55 @@
package cmd
import (
"context"
"fmt"
"strings"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/taosdata/taoskeeper/db"
"github.com/taosdata/taoskeeper/infrastructure/config"
"github.com/taosdata/taoskeeper/util"
)
func TestTransferTaosdClusterBasicInfo(t *testing.T) {
config.InitConfig()
conn, _ := db.NewConnector("root", "taosdata", "127.0.0.1", 6041, false)
defer conn.Close()
dbName := "db_202412031539"
conn.Exec(context.Background(), "create database "+dbName, util.GetQidOwn())
defer conn.Exec(context.Background(), "drop database "+dbName, util.GetQidOwn())
conn, _ = db.NewConnectorWithDb("root", "taosdata", "127.0.0.1", 6041, dbName, false)
defer conn.Close()
cmd := Command{conn: conn, fromTime: time.Now().Add(1 * time.Hour)}
cmd.TransferTaosdClusterBasicInfo()
testCases := []struct {
ep string
wantErr bool
}{
{"", false},
{"hello", false},
{strings.Repeat("a", 128), false},
{strings.Repeat("a", 255), false},
{strings.Repeat("a", 256), true},
}
conn.Exec(context.Background(),
"create table d0 using taosd_cluster_basic tags('cluster_id')", util.GetQidOwn())
for _, tc := range testCases {
sql := fmt.Sprintf("insert into d0 (ts, first_ep) values(%d, '%s')", time.Now().UnixMilli(), tc.ep)
_, err := conn.Exec(context.Background(), sql, util.GetQidOwn())
if tc.wantErr {
assert.Error(t, err) // [0x2653] Value too long for column/tag: endpoint
} else {
assert.NoError(t, err)
}
}
}

View File

@@ -1,8 +0,0 @@
package cmd
import (
"testing"
)
func TestEmpty(t *testing.T) {
}

View File

@@ -10,7 +10,6 @@ import (
"time"
"github.com/sirupsen/logrus"
"github.com/taosdata/driver-go/v3/common"
_ "github.com/taosdata/driver-go/v3/taosRestful"
"github.com/taosdata/taoskeeper/infrastructure/config"
@@ -70,9 +69,13 @@ func NewConnectorWithDb(username, password, host string, port int, dbname string
return &Connector{db: db}, nil
}
type ReqIDKeyTy string
const ReqIDKey ReqIDKeyTy = "taos_req_id"
func (c *Connector) Exec(ctx context.Context, sql string, qid uint64) (int64, error) {
dbLogger := dbLogger.WithFields(logrus.Fields{config.ReqIDKey: qid})
ctx = context.WithValue(ctx, common.ReqIDKey, int64(qid))
ctx = context.WithValue(ctx, ReqIDKey, int64(qid))
dbLogger.Tracef("call adapter to execute sql:%s", sql)
startTime := time.Now()
@@ -120,7 +123,7 @@ func logData(data *Data, logger *logrus.Entry) {
func (c *Connector) Query(ctx context.Context, sql string, qid uint64) (*Data, error) {
dbLogger := dbLogger.WithFields(logrus.Fields{config.ReqIDKey: qid})
ctx = context.WithValue(ctx, common.ReqIDKey, int64(qid))
ctx = context.WithValue(ctx, ReqIDKey, int64(qid))
dbLogger.Tracef("call adapter to execute query, sql:%s", sql)

View File

@@ -1,8 +0,0 @@
package log
import (
"testing"
)
func TestEmpty(t *testing.T) {
}

View File

@@ -13,7 +13,6 @@ import (
"github.com/sirupsen/logrus"
rotatelogs "github.com/taosdata/file-rotatelogs/v2"
"github.com/taosdata/taoskeeper/infrastructure/config"
"github.com/taosdata/taoskeeper/version"
)

View File

@@ -1,8 +0,0 @@
package monitor
import (
"testing"
)
func TestEmpty(t *testing.T) {
}

View File

@@ -11,10 +11,9 @@ import (
"github.com/taosdata/go-utils/web"
"github.com/taosdata/taoskeeper/api"
"github.com/taosdata/taoskeeper/db"
"github.com/taosdata/taoskeeper/util"
"github.com/taosdata/taoskeeper/infrastructure/config"
"github.com/taosdata/taoskeeper/infrastructure/log"
"github.com/taosdata/taoskeeper/util"
)
func TestStart(t *testing.T) {
@@ -35,7 +34,7 @@ func TestStart(t *testing.T) {
conf.RotationInterval = "1s"
StartMonitor("", conf, reporter)
time.Sleep(2 * time.Second)
for k, _ := range SysMonitor.outputs {
for k := range SysMonitor.outputs {
SysMonitor.Deregister(k)
}

Some files were not shown because too many files have changed in this diff