Merge branch '3.0' into feature/qnode
Commit 717acef00d
@@ -14,7 +14,7 @@ description: "Data types supported by TDengine: timestamp, float, JSON, and more"

- Epoch time: the timestamp can also be a long integer representing the number of milliseconds since 1970-01-01 00:00:00.000 (UTC/GMT). (Accordingly, if the database's time precision is set to "microseconds", a long-integer timestamp represents the number of microseconds since 1970-01-01 00:00:00.000 (UTC/GMT); the same logic applies to nanosecond precision.)
- Time can be added and subtracted. For example, now-2h moves the query time 2 hours back (the most recent 2 hours). The time unit after the number can be b (nanoseconds), u (microseconds), a (milliseconds), s (seconds), m (minutes), h (hours), d (days), or w (weeks). For example, `select * from t1 where ts > now-2w and ts <= now-1w` queries exactly one week of data from two weeks ago. When specifying the time window (interval) of a down-sampling operation, the units n (calendar month) and y (calendar year) can also be used.

The default timestamp precision in TDengine is milliseconds, but microseconds and nanoseconds are also supported via the PRECISION parameter of `CREATE DATABASE`. (Nanosecond precision is supported since version 2.1.5.0.)
The default timestamp precision in TDengine is milliseconds, but microseconds and nanoseconds are also supported via the PRECISION parameter of `CREATE DATABASE`.

```sql
CREATE DATABASE db_name PRECISION 'ns';
```
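A minimal sketch (assuming table t1 has a numeric column col1) of the relative-time arithmetic and the calendar-month down-sampling window described above:

```sql
-- Data from the most recent 2 hours
SELECT * FROM t1 WHERE ts > now-2h;

-- Down-sample by calendar month (1n): monthly averages over the last year
SELECT AVG(col1) FROM t1
  WHERE ts > now-1y
  INTERVAL(1n);
```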
@@ -25,14 +25,14 @@ CREATE DATABASE db_name PRECISION 'ns';

| # | **Type** | **Bytes** | **Description** |
| --- | :-------: | --------- | --------------- |
| 1 | TIMESTAMP | 8 | Timestamp. Default precision is milliseconds; microseconds and nanoseconds are also supported. Time starts from 1970-01-01 00:00:00.000 (UTC/GMT) and cannot be earlier than that. (This range restriction was removed in version 2.0.18.0; nanosecond precision is supported since version 2.1.5.0.) |
| 1 | TIMESTAMP | 8 | Timestamp. Default precision is milliseconds; microseconds and nanoseconds are also supported. See the previous section for details. |
| 2 | INT | 4 | Integer, range [-2^31, 2^31-1] |
| 3 | INT UNSIGNED | 4 | Unsigned integer, range [0, 2^32-1] |
| 4 | BIGINT | 8 | Long integer, range [-2^63, 2^63-1] |
| 5 | BIGINT UNSIGNED | 8 | Unsigned long integer, range [0, 2^64-1] |
| 6 | FLOAT | 4 | Floating point, 6-7 significant digits, range [-3.4E38, 3.4E38] |
| 7 | DOUBLE | 8 | Double-precision floating point, 15-16 significant digits, range [-1.7E308, 1.7E308] |
| 8 | BINARY | User defined | Single-byte string, recommended only for ASCII visible characters; multi-byte characters such as Chinese must use nchar. In theory it can be up to 16374 bytes long. binary accepts only string input, and the string must be enclosed in single quotes. The size must be specified: binary(20) defines a string of at most 20 single-byte characters, each occupying 1 byte, for a fixed total of 20 bytes; an error is reported if the input exceeds 20 bytes. A single quote inside the string is written as backslash plus single quote, i.e. `\'`. |
| 8 | BINARY | User defined | Single-byte string, recommended only for ASCII visible characters; multi-byte characters such as Chinese must use nchar. |
| 9 | SMALLINT | 2 | Short integer, range [-32768, 32767] |
| 10 | SMALLINT UNSIGNED | 2 | Unsigned short integer, range [0, 65535] |
| 11 | TINYINT | 1 | Single-byte integer, range [-128, 127] |
@@ -42,20 +42,15 @@ CREATE DATABASE db_name PRECISION 'ns';

| 15 | JSON |  | JSON data type; only tags can be of JSON format |
| 16 | VARCHAR | User defined | Alias of the BINARY type |

:::tip
TDengine is case insensitive for English characters in SQL statements and converts them to lower case before execution, so case-sensitive strings and passwords must be enclosed in single quotes.

:::

:::note
Although the BINARY type supports byte-wise binary characters in the underlying storage, different programming languages do not guarantee consistent handling of binary data, so it is recommended to store only ASCII visible characters in BINARY columns and to avoid invisible characters. Multi-byte data such as Chinese characters must be stored with the NCHAR type. If Chinese characters are forced into BINARY, reads and writes may sometimes appear to work, but the data carries no character-set information and can easily become garbled or even corrupted.
- TDengine is case insensitive for English characters in SQL statements and converts them to lower case before execution, so case-sensitive strings and passwords must be enclosed in single quotes.
- Although the BINARY type supports byte-wise binary characters in the underlying storage, different programming languages do not guarantee consistent handling of binary data, so it is recommended to store only ASCII visible characters in BINARY columns and to avoid invisible characters. Multi-byte data such as Chinese characters must be stored with the NCHAR type. If Chinese characters are forced into BINARY, reads and writes may sometimes appear to work, but the data carries no character-set information and can easily become garbled or even corrupted.
- The BINARY type can in theory be up to 16374 bytes long. binary accepts only string input, and the string must be enclosed in single quotes. The size must be specified: binary(20) defines a string of at most 20 single-byte characters, each occupying 1 byte, for a fixed total of 20 bytes; an error is reported if the input exceeds 20 bytes. A single quote inside the string is written as backslash plus single quote, i.e. `\'`. (A table-creation example follows this note.)
- Numeric values in SQL statements are determined to be integer or floating point based on whether a decimal point or scientific notation is present, so beware of the corresponding range limits. For example, 9999999999999999999 is treated as exceeding the upper bound of the long integer type and overflows, while 9999999999999999999.0 is treated as a valid floating point number.

:::

:::note
Numeric values in SQL statements are determined to be integer or floating point based on whether a decimal point or scientific notation is present, so beware of the corresponding range limits. For example, 9999999999999999999 is treated as exceeding the upper bound of the long integer type and overflows, while 9999999999999999999.0 is treated as a valid floating point number.

:::
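A minimal table-creation sketch (table and column names are made up) showing how binary(N) and nchar(N) lengths are declared:

```sql
-- binary(20): at most 20 single-byte characters
-- nchar(10):  at most 10 multi-byte characters, 4 bytes each
CREATE TABLE sensor_info (
  ts   TIMESTAMP,
  sn   BINARY(20),
  note NCHAR(10)
);
```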
## Constants

TDengine supports constants of multiple data types, as detailed in the table below:

@@ -63,10 +58,15 @@ TDengine supports constants of multiple data types, as detailed in the table below:

| # | **Syntax** | **Type** | **Description** |
| --- | :-------: | --------- | -------------------------------------- |
| 1 | [{+ \| -}]123 | BIGINT | Integer literals are of type BIGINT. If the input exceeds the BIGINT range, TDengine truncates the value to BIGINT. |
| 2 | 123.45 | DOUBLE | Floating point literals are of type DOUBLE. TDengine determines whether a value is integer or floating point based on whether a decimal point or scientific notation is present, so beware of the corresponding range limits. For example, 9999999999999999999 is treated as exceeding the upper bound of the long integer type and overflows, while 9999999999999999999.0 is treated as a valid floating point number. |
| 2 | 123.45 | DOUBLE | Floating point literals are of type DOUBLE. TDengine determines whether a value is integer or floating point based on whether a decimal point or scientific notation is present. |
| 3 | 1.2E3 | DOUBLE | Literals in scientific notation are of type DOUBLE. |
| 4 | 'abc' | BINARY | Content enclosed in single quotes is a string literal of type BINARY, whose size is the actual number of characters. A single quote inside the string is written as backslash plus single quote, i.e. \'. |
| 5 | "abc" | BINARY | Content enclosed in double quotes is a string literal of type BINARY, whose size is the actual number of characters. A double quote inside the string is written as backslash plus double quote, i.e. \". |
| 6 | TIMESTAMP {'literal' \| "literal"} | TIMESTAMP | The TIMESTAMP keyword indicates that the following string literal is to be interpreted as the TIMESTAMP type. The string must be in the YYYY-MM-DD HH:mm:ss.MS format, and its time resolution is that of the current database. |
| 7 | {TRUE | FALSE} | BOOL | Boolean literal. |
| 7 | {TRUE \| FALSE} | BOOL | Boolean literal. |
| 8 | {'' \| "" \| '\t' \| "\t" \| ' ' \| " " \| NULL } | -- | Null literal. It can be used with any type. |

:::note

- TDengine determines whether a value is integer or floating point based on whether a decimal point or scientific notation is present, so beware of the corresponding range limits. For example, 9999999999999999999 is treated as exceeding the upper bound of the long integer type and overflows, while 9999999999999999999.0 is treated as a valid floating point number.

:::
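A minimal query sketch (table d1001 and a column location are assumed) showing a TIMESTAMP literal and a single quote escaped inside a string:

```sql
SELECT * FROM d1001
  WHERE ts >= TIMESTAMP '2022-06-01 08:00:00.000'
    AND location = 'Beijing\'s Lab';
```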
@@ -1464,35 +1464,6 @@ SELECT ROUND(field_name) FROM { tb_name | stb_name } [WHERE clause];

- This function can be used in both inner and outer queries.
- Supported since version 2.6.0.x.

### Arithmetic Operations

```
SELECT field_name [+|-|*|/|%][Value|field_name] FROM { tb_name | stb_name } [WHERE clause];
```

**Description**: Addition, subtraction, multiplication, division, and modulo across one or more columns of a table or supertable.

**Return type**: double-precision floating point.

**Applicable fields**: cannot be applied to columns of type timestamp, binary, nchar, or bool.

**Applies to**: tables and supertables.

**Usage notes**:

- Calculations between two or more columns are supported, and parentheses can be used to control precedence;
- NULL fields do not take part in the calculation; if any participating value in a row is NULL, the result for that row is NULL.

```
taos> SELECT current + voltage * phase FROM d1001;
(current+(voltage*phase)) |
============================
             78.190000713 |
             84.540003240 |
             80.810000718 |
Query OK, 3 row(s) in set (0.001046s)
```

### STATECOUNT

```
@@ -86,7 +86,7 @@ title: TDengine Parameter Limits and Reserved Keywords

| CONNS | ID | NOTNULL | STABLE | WAL |
| COPY | IF | NOW | STABLES | WHERE |
| _C0 | _QSTART | _QSTOP | _QDURATION | _WSTART |
| _WSTOP | _WDURATION |
| _WSTOP | _WDURATION | _ROWTS |

## Special Notes

### TBNAME

@@ -119,10 +119,10 @@ taos> SELECT COUNT(tbname) FROM meters WHERE groupId > 2;

Query OK, 1 row(s) in set (0.001091s)
```

### _QSTART/_QSTOP/_QDURATION

The start, end, and duration of the query's filter time window (supported since version 2.6.0.0).
The start, end, and duration of the query's filter time window.

### _WSTART/_WSTOP/_WDURATION

In windowed aggregate queries (e.g. interval, session window, state window), the start, end, and duration of each window (supported since version 2.6.0.0).
In windowed aggregate queries (e.g. interval, session window, state window), the start, end, and duration of each window.

### _c0

The first column of a table or supertable.

### _c0/_ROWTS

_c0 is equivalent to _ROWTS; it denotes the first column of a table or supertable.
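A minimal windowed-query sketch (reusing the meters table above and assuming a column named current) showing the _wstart and _wduration pseudo columns:

```sql
-- One window per 10 minutes: window start time, window duration, average current
SELECT _wstart, _wduration, AVG(current) FROM meters
  WHERE ts > now-1d
  INTERVAL(10m);
```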
@@ -1 +0,0 @@

label: Parameter Limits and Reserved Keywords
@@ -0,0 +1,71 @@

---
sidebar_label: Operators
title: Operators
---

## Arithmetic Operators

| # | **Operator** | **Supported types** | **Description** |
| --- | :--------: | -------------- | -------------------------- |
| 1 | +, - | Numeric types | Positive and negative sign, unary operators |
| 2 | +, - | Numeric types | Addition and subtraction, binary operators |
| 3 | \*, / | Numeric types | Multiplication and division, binary operators |
| 4 | % | Numeric types | Modulo, binary operator |

## Bitwise Operators

| # | **Operator** | **Supported types** | **Description** |
| --- | :--------: | -------------- | ------------------ |
| 1 | & | Numeric types | Bitwise AND, binary operator |
| 2 | \| | Numeric types | Bitwise OR, binary operator |

## JSON Operator

The `->` operator extracts a value from a JSON column by key. The left operand of `->` is a column identifier and the right operand is a string constant naming the key; for example, col->'name' returns the value of the key 'name'.
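A minimal sketch (a supertable sensors with a JSON tag info is assumed) combining `->` extraction with the CONTAINS filter listed under the comparison operators below:

```sql
-- Extract the value of key 'name' from the JSON tag
SELECT tbname, info->'name' FROM sensors
  WHERE info CONTAINS 'name';
```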
## Set Operators

Set operators combine the results of two queries into a single result; a query containing a set operator is called a compound query. The corresponding expressions in the select lists of the queries in a compound query must match in number and must belong to the same data type group (e.g. numeric types or string types).

- For string data, the data type of the result is determined as follows:
  - If both sides have the same type (both BINARY or both NCHAR), that type is returned, with the larger length as the result length.
  - If the types differ, BINARY is returned, with the larger length as the result length (the NCHAR length counted at four times its declared size).
- For numeric data, the result type is the one with the larger numeric range.

TDengine supports the `UNION ALL` operator. UNION ALL merges the result sets of the queries without deduplication. At most 100 UNION ALL operators are supported in a single SQL statement.
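A minimal compound-query sketch (subtable names are assumed) showing UNION ALL:

```sql
-- Merge the results of two subtables without deduplication
SELECT ts, current FROM d1001
UNION ALL
SELECT ts, current FROM d1002;
```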
## Comparison Operators

| # | **Operator** | **Supported types** | **Description** |
| --- | :---------------: | -------------------------------------------------------------------- | -------------------- |
| 1 | = | All types except BLOB, MEDIUMBLOB, and JSON | Equal |
| 2 | <\>, != | All types except BLOB, MEDIUMBLOB, and JSON; not allowed on the table's timestamp primary key column | Not equal |
| 3 | \>, \< | All types except BLOB, MEDIUMBLOB, and JSON | Greater than, less than |
| 4 | \>=, \<= | All types except BLOB, MEDIUMBLOB, and JSON | Greater than or equal, less than or equal |
| 5 | IS [NOT] NULL | All types | Null check |
| 6 | [NOT] BETWEEN AND | All types except BOOL, BLOB, MEDIUMBLOB, and JSON | Closed-interval comparison |
| 7 | IN | All types except BLOB, MEDIUMBLOB, and JSON; not allowed on the table's timestamp primary key column | Equal to any value in the list |
| 8 | LIKE | BINARY, NCHAR, and VARCHAR | Wildcard match |
| 9 | MATCH, NMATCH | BINARY, NCHAR, and VARCHAR | Regular expression match |
| 10 | CONTAINS | JSON | Whether a key exists in the JSON value |

The LIKE condition matches against a wildcard string according to the following rules (an example follows the list):

- '%' (percent sign) matches 0 or more characters; '\_' (underscore) matches any single ASCII character.
- To match a literal \_ (underscore) in the data, write it as \_ in the wildcard string, i.e. escape it with a backslash.
- The wildcard string may not exceed 100 bytes. Very long wildcard strings are discouraged, as they can seriously degrade LIKE performance.
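A minimal LIKE sketch (the supertable meters and a tag named location are assumed):

```sql
-- Subtable names starting with 'd'; '\_' matches a literal underscore
SELECT * FROM meters WHERE tbname LIKE 'd%';
SELECT * FROM meters WHERE location LIKE 'hall\_%';
```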
The MATCH and NMATCH conditions use regular expressions, with the following rules:

- POSIX-compliant regular expressions are supported; see Regular Expressions for the specification.
- Regular-expression filtering applies only to subtable names (i.e. tbname) and string-typed tag values; filtering on regular columns is not supported.
- The regular-expression string may not exceed 128 bytes. The maximum allowed length can be adjusted with the maxRegexStringLen parameter, which is a client-side configuration parameter and requires a client restart to take effect.

## Logical Operators

| # | **Operator** | **Supported types** | **Description** |
| --- | :--------: | -------------- | --------------------------------------------------------------------------- |
| 1 | AND | BOOL | Logical AND; returns TRUE if both conditions are TRUE, and FALSE if either is FALSE |
| 2 | OR | BOOL | Logical OR; returns TRUE if either condition is TRUE, and FALSE if both are FALSE |

When evaluating logical conditions, TDengine applies short-circuit optimization: for AND, if the first condition is FALSE, the second is not evaluated and FALSE is returned directly; for OR, if the first condition is TRUE, the second is not evaluated and TRUE is returned directly.
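A minimal condition-combination sketch (reusing the assumed meters table with groupId and current columns):

```sql
-- If groupId > 2 is FALSE for a row, the BETWEEN condition is short-circuited
SELECT * FROM meters
  WHERE groupId > 2 AND current BETWEEN 10 AND 20;
```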
@@ -38,7 +38,7 @@ taosdump can be installed in two ways:

:::tip
- taosdump 1.4.1 and later provide the `-I` parameter for parsing avro file schema and data; if `-s` is specified, only the schema is parsed.
- Backups in taosdump 1.4.2 and later use the batch size given by the `-B` parameter, default 16384. If network speed or disk performance is insufficient and "Error actual dump .. batch .." occurs, you can try "challenging" `-B` to a smaller value.
- Backups in taosdump 1.4.2 and later use the batch size given by the `-B` parameter, default 16384. If network speed or disk performance is insufficient and "Error actual dump .. batch .." occurs, you can try adjusting `-B` to a smaller value.

:::
@@ -3,6 +3,8 @@ title: Data Types

description: "TDengine supports a variety of data types including timestamp, float, JSON and many others."
---

## TIMESTAMP

When using TDengine to store and query data, the most important part of the data is the timestamp. Timestamp must be specified when creating and inserting data rows. Timestamp must follow the rules below:

- The format must be `YYYY-MM-DD HH:mm:ss.MS`, the default time precision is millisecond (ms), for example `2017-08-12 18:25:58.128`

@@ -17,33 +19,51 @@ Time precision in TDengine can be set by the `PRECISION` parameter when executin

CREATE DATABASE db_name PRECISION 'ns';
```
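A minimal sketch (table d1001 and its columns are assumed for illustration) showing an epoch-style timestamp on insert and relative time arithmetic in a query:

```sql
-- Insert using an epoch timestamp (milliseconds under the default precision)
INSERT INTO d1001 VALUES (1502562358128, 10.3);

-- Query the most recent 2 hours using relative time arithmetic
SELECT * FROM d1001 WHERE ts > now-2h;
```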
## Data Types

In TDengine, the data types below can be used when specifying a column or tag.

| # | **type** | **Bytes** | **Description** |
| --- | :-------: | --------- | ------------------------- |
| 1 | TIMESTAMP | 8 | Default precision is millisecond, microsecond and nanosecond are also supported |
| 2 | INT | 4 | Integer, the value range is [-2^31+1, 2^31-1], while -2^31 is treated as NULL |
| 3 | BIGINT | 8 | Long integer, the value range is [-2^63+1, 2^63-1], while -2^63 is treated as NULL |
| 4 | FLOAT | 4 | Floating point number, the effective number of digits is 6-7, the value range is [-3.4E38, 3.4E38] |
| 5 | DOUBLE | 8 | Double precision floating point number, the effective number of digits is 15-16, the value range is [-1.7E308, 1.7E308] |
| 6 | BINARY | User Defined | Single-byte string for ASCII visible characters. Length must be specified when defining a column or tag of binary type. The string length can be up to 16374 bytes. The string value must be quoted with single quotes. The literal single quote inside the string must be preceded with back slash like `\'` |
| 7 | SMALLINT | 2 | Short integer, the value range is [-32767, 32767], while -32768 is treated as NULL |
| 8 | TINYINT | 1 | Single-byte integer, the value range is [-127, 127], while -128 is treated as NULL |
| 9 | BOOL | 1 | Bool, the value range is {true, false} |
| 10 | NCHAR | User Defined | Multi-byte string that can include multi-byte characters like Chinese characters. Each character of NCHAR type consumes 4 bytes of storage. The string value should be quoted with single quotes. A literal single quote inside the string must be preceded with backslash, like `\'`. The length must be specified when defining a column or tag of NCHAR type, for example nchar(10) means it can store at most 10 characters of nchar type and will consume fixed storage of 40 bytes. An error will be reported if the string value exceeds the length defined. |
| 11 | JSON | | JSON type can only be used on tags. A tag of json type is excluded with any other tags of any other type |

:::tip
TDengine is case insensitive and treats any characters in the SQL command as lower case by default, so case-sensitive strings must be quoted with single quotes.

:::
| 2 | INT | 4 | Integer, the value range is [-2^31, 2^31-1] |
| 3 | INT UNSIGNED | 4 | Unsigned integer, the value range is [0, 2^32-1] |
| 4 | BIGINT | 8 | Long integer, the value range is [-2^63, 2^63-1] |
| 5 | BIGINT UNSIGNED | 8 | Unsigned long integer, the value range is [0, 2^64-1] |
| 6 | FLOAT | 4 | Floating point number, the effective number of digits is 6-7, the value range is [-3.4E38, 3.4E38] |
| 7 | DOUBLE | 8 | Double precision floating point number, the effective number of digits is 15-16, the value range is [-1.7E308, 1.7E308] |
| 8 | BINARY | User Defined | Single-byte string for ASCII visible characters. Length must be specified when defining a column or tag of binary type. The string length can be up to 16374 bytes. The string value must be quoted with single quotes. The literal single quote inside the string must be preceded with back slash like `\'` |
| 9 | SMALLINT | 2 | Short integer, the value range is [-32768, 32767] |
| 10 | SMALLINT UNSIGNED | 2 | Unsigned short integer, the value range is [0, 65535] |
| 11 | TINYINT | 1 | Single-byte integer, the value range is [-128, 127] |
| 12 | TINYINT UNSIGNED | 1 | Unsigned single-byte integer, the value range is [0, 255] |
| 13 | BOOL | 1 | Bool, the value range is {true, false} |
| 14 | NCHAR | User Defined | Multi-byte string that can include multi-byte characters like Chinese characters. Each character of NCHAR type consumes 4 bytes of storage. The string value should be quoted with single quotes. A literal single quote inside the string must be preceded with backslash, like `\'`. The length must be specified when defining a column or tag of NCHAR type, for example nchar(10) means it can store at most 10 characters of nchar type and will consume fixed storage of 40 bytes. An error will be reported if the string value exceeds the length defined. |
| 15 | JSON | | JSON type can only be used on tags. A tag of json type is excluded with any other tags of any other type |
| 16 | VARCHAR | User Defined | Alias of BINARY type |

:::note
Only ASCII visible characters are suggested to be used in a column or tag of BINARY type. Multi-byte characters must be stored in NCHAR type.
- TDengine is case insensitive and treats any characters in the SQL command as lower case by default, so case-sensitive strings must be quoted with single quotes.
- Only ASCII visible characters are suggested to be used in a column or tag of BINARY type. Multi-byte characters must be stored in NCHAR type.
- Numeric values in SQL statements will be determined as integer or float type according to whether there is a decimal point or whether scientific notation is used, so attention must be paid to avoid overflow. For example, 9999999999999999999 will be considered as overflow because it exceeds the upper limit of long integer, but 9999999999999999999.0 will be considered as a legal float number.

:::
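A minimal sketch (table and column names are made up) showing how a few of the types above, including the unsigned variants, are declared:

```sql
CREATE TABLE sensor_data (
  ts   TIMESTAMP,          -- primary timestamp column
  cnt  INT UNSIGNED,       -- [0, 2^32-1]
  raw  BINARY(20),         -- up to 20 single-byte characters
  note NCHAR(10)           -- multi-byte text, 4 bytes per character
);
```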
## Constants

TDengine supports constants of multiple data types.

| # | **Syntax** | **Type** | **Description** |
| --- | :-------: | --------- | -------------------------------------- |
| 1 | [{+ \| -}]123 | BIGINT | Numeric constants are treated as BIGINT type. The value will be truncated if it exceeds the range of BIGINT type. |
| 2 | 123.45 | DOUBLE | Floating number constants are treated as DOUBLE type. TDengine determines whether it's a floating number based on if a decimal point or scientific notation is used. |
| 3 | 1.2E3 | DOUBLE | Constants in scientific notation are treated as DOUBLE type. |
| 4 | 'abc' | BINARY | String constants enclosed by single quotes are treated as BINARY type. Its size is determined as the actual length. The single quote itself can be included by a preceding backslash, i.e. `\'`, in a string constant. |
| 5 | "abc" | BINARY | String constants enclosed by double quotes are treated as BINARY type. Its size is determined as the actual length. The double quote itself can be included by a preceding backslash, i.e. `\"`, in a string constant. |
| 6 | TIMESTAMP {'literal' \| "literal"} | TIMESTAMP | A string constant following the `TIMESTAMP` keyword is treated as TIMESTAMP type. The string should be in the format of "YYYY-MM-DD HH:mm:ss.MS". Its time precision is the same as that of the current database being used. |
| 7 | {TRUE \| FALSE} | BOOL | BOOL type constant. |
| 8 | {'' \| "" \| '\t' \| "\t" \| ' ' \| " " \| NULL } | -- | NULL constant, it can be used for any type. |

:::note
Numeric values in SQL statements will be determined as integer or float type according to whether there is a decimal point or whether scientific notation is used, so attention must be paid to avoid overflow. For example, 9999999999999999999 will be considered as overflow because it exceeds the upper limit of long integer, but 9999999999999999999.0 will be considered as a legal float number.
- TDengine determines whether it's a floating number based on if a decimal point or scientific notation is used. So whether the value is determined as overflow depends on both the value and the determined type. For example, 9999999999999999999 is determined as overflow because it exceeds the upper limit of BIGINT type, while 9999999999999999999.0 is considered as a valid floating number because it is within the range of DOUBLE type.

:::
@@ -47,11 +47,11 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam

| CONNS | ID | NOTNULL | STable | WAL |
| COPY | IF | NOW | STableS | WHERE |
| _C0 | _QSTART | _QSTOP | _QDURATION | _WSTART |
| _WSTOP | _WDURATION |
| _WSTOP | _WDURATION | _ROWTS |

## Explanations

### TBNAME

`TBNAME` can be considered as a special tag, which represents the name of the subtable, in STable.
`TBNAME` can be considered as a special tag, which represents the name of the subtable, in a STable.

Get the table name and tag values of all subtables in a STable.

```mysql

@@ -80,10 +80,10 @@ taos> SELECT COUNT(tbname) FROM meters WHERE groupId > 2;

Query OK, 1 row(s) in set (0.001091s)
```

### _QSTART/_QSTOP/_QDURATION

The start, stop and duration of a query time window (Since version 2.6.0.0).
The start, stop and duration of a query time window.

### _WSTART/_WSTOP/_WDURATION

The start, stop and duration of an aggregate query by time window, like interval, session window, state window (Since version 2.6.0.0).
The start, stop and duration of an aggregate query by time window, like interval, session window, state window.

### _c0

The first column of a table or STable.

### _c0/_ROWTS

_c0 is equivalent to _ROWTS; it means the first column of a table or STable.
@ -1460,8 +1460,10 @@ typedef struct {
|
|||
int32_t code;
|
||||
} STaskDropRsp;
|
||||
|
||||
#define STREAM_TRIGGER_AT_ONCE_SMA 0
|
||||
#define STREAM_TRIGGER_AT_ONCE 1
|
||||
#define STREAM_TRIGGER_WINDOW_CLOSE 2
|
||||
#define STREAM_TRIGGER_WINDOW_CLOSE_SMA 3
|
||||
|
||||
typedef struct {
|
||||
char name[TSDB_TABLE_FNAME_LEN];
|
||||
|
|
|
@ -127,42 +127,42 @@
|
|||
#define TK_BLOB 109
|
||||
#define TK_VARBINARY 110
|
||||
#define TK_DECIMAL 111
|
||||
#define TK_DELAY 112
|
||||
#define TK_FILE_FACTOR 113
|
||||
#define TK_NK_FLOAT 114
|
||||
#define TK_ROLLUP 115
|
||||
#define TK_TTL 116
|
||||
#define TK_SMA 117
|
||||
#define TK_SHOW 118
|
||||
#define TK_DATABASES 119
|
||||
#define TK_TABLES 120
|
||||
#define TK_STABLES 121
|
||||
#define TK_MNODES 122
|
||||
#define TK_MODULES 123
|
||||
#define TK_QNODES 124
|
||||
#define TK_FUNCTIONS 125
|
||||
#define TK_INDEXES 126
|
||||
#define TK_ACCOUNTS 127
|
||||
#define TK_APPS 128
|
||||
#define TK_CONNECTIONS 129
|
||||
#define TK_LICENCE 130
|
||||
#define TK_GRANTS 131
|
||||
#define TK_QUERIES 132
|
||||
#define TK_SCORES 133
|
||||
#define TK_TOPICS 134
|
||||
#define TK_VARIABLES 135
|
||||
#define TK_BNODES 136
|
||||
#define TK_SNODES 137
|
||||
#define TK_CLUSTER 138
|
||||
#define TK_TRANSACTIONS 139
|
||||
#define TK_LIKE 140
|
||||
#define TK_INDEX 141
|
||||
#define TK_FULLTEXT 142
|
||||
#define TK_FUNCTION 143
|
||||
#define TK_INTERVAL 144
|
||||
#define TK_TOPIC 145
|
||||
#define TK_AS 146
|
||||
#define TK_CGROUP 147
|
||||
#define TK_FILE_FACTOR 112
|
||||
#define TK_NK_FLOAT 113
|
||||
#define TK_ROLLUP 114
|
||||
#define TK_TTL 115
|
||||
#define TK_SMA 116
|
||||
#define TK_SHOW 117
|
||||
#define TK_DATABASES 118
|
||||
#define TK_TABLES 119
|
||||
#define TK_STABLES 120
|
||||
#define TK_MNODES 121
|
||||
#define TK_MODULES 122
|
||||
#define TK_QNODES 123
|
||||
#define TK_FUNCTIONS 124
|
||||
#define TK_INDEXES 125
|
||||
#define TK_ACCOUNTS 126
|
||||
#define TK_APPS 127
|
||||
#define TK_CONNECTIONS 128
|
||||
#define TK_LICENCE 129
|
||||
#define TK_GRANTS 130
|
||||
#define TK_QUERIES 131
|
||||
#define TK_SCORES 132
|
||||
#define TK_TOPICS 133
|
||||
#define TK_VARIABLES 134
|
||||
#define TK_BNODES 135
|
||||
#define TK_SNODES 136
|
||||
#define TK_CLUSTER 137
|
||||
#define TK_TRANSACTIONS 138
|
||||
#define TK_LIKE 139
|
||||
#define TK_INDEX 140
|
||||
#define TK_FULLTEXT 141
|
||||
#define TK_FUNCTION 142
|
||||
#define TK_INTERVAL 143
|
||||
#define TK_TOPIC 144
|
||||
#define TK_AS 145
|
||||
#define TK_CONSUMER 146
|
||||
#define TK_GROUP 147
|
||||
#define TK_WITH 148
|
||||
#define TK_SCHEMA 149
|
||||
#define TK_DESC 150
|
||||
|
@ -239,22 +239,21 @@
|
|||
#define TK_PREV 221
|
||||
#define TK_LINEAR 222
|
||||
#define TK_NEXT 223
|
||||
#define TK_GROUP 224
|
||||
#define TK_HAVING 225
|
||||
#define TK_ORDER 226
|
||||
#define TK_SLIMIT 227
|
||||
#define TK_SOFFSET 228
|
||||
#define TK_LIMIT 229
|
||||
#define TK_OFFSET 230
|
||||
#define TK_ASC 231
|
||||
#define TK_NULLS 232
|
||||
#define TK_ID 233
|
||||
#define TK_NK_BITNOT 234
|
||||
#define TK_INSERT 235
|
||||
#define TK_VALUES 236
|
||||
#define TK_IMPORT 237
|
||||
#define TK_NK_SEMI 238
|
||||
#define TK_FILE 239
|
||||
#define TK_HAVING 224
|
||||
#define TK_ORDER 225
|
||||
#define TK_SLIMIT 226
|
||||
#define TK_SOFFSET 227
|
||||
#define TK_LIMIT 228
|
||||
#define TK_OFFSET 229
|
||||
#define TK_ASC 230
|
||||
#define TK_NULLS 231
|
||||
#define TK_ID 232
|
||||
#define TK_NK_BITNOT 233
|
||||
#define TK_INSERT 234
|
||||
#define TK_VALUES 235
|
||||
#define TK_IMPORT 236
|
||||
#define TK_NK_SEMI 237
|
||||
#define TK_FILE 238
|
||||
|
||||
#define TK_NK_SPACE 300
|
||||
#define TK_NK_COMMENT 301
|
||||
|
|
|
@ -80,8 +80,7 @@ typedef struct SAlterDatabaseStmt {
|
|||
typedef struct STableOptions {
|
||||
ENodeType type;
|
||||
char comment[TSDB_TB_COMMENT_LEN];
|
||||
int32_t delay;
|
||||
float filesFactor;
|
||||
double filesFactor;
|
||||
SNodeList* pRollupFuncs;
|
||||
int32_t ttl;
|
||||
SNodeList* pSma;
|
||||
|
|
|
@ -59,6 +59,7 @@ typedef struct SScanLogicNode {
|
|||
int8_t triggerType;
|
||||
int64_t watermark;
|
||||
int16_t tsColId;
|
||||
double filesFactor;
|
||||
} SScanLogicNode;
|
||||
|
||||
typedef struct SJoinLogicNode {
|
||||
|
@ -113,6 +114,7 @@ typedef struct SWindowLogicNode {
|
|||
SNode* pStateExpr;
|
||||
int8_t triggerType;
|
||||
int64_t watermark;
|
||||
double filesFactor;
|
||||
} SWindowLogicNode;
|
||||
|
||||
typedef struct SFillLogicNode {
|
||||
|
@ -222,6 +224,7 @@ typedef struct STableScanPhysiNode {
|
|||
int8_t triggerType;
|
||||
int64_t watermark;
|
||||
int16_t tsColId;
|
||||
double filesFactor;
|
||||
} STableScanPhysiNode;
|
||||
|
||||
typedef STableScanPhysiNode STableSeqScanPhysiNode;
|
||||
|
@ -272,6 +275,7 @@ typedef struct SWinodwPhysiNode {
|
|||
SNode* pTspk; // timestamp primary key
|
||||
int8_t triggerType;
|
||||
int64_t watermark;
|
||||
double filesFactor;
|
||||
} SWinodwPhysiNode;
|
||||
|
||||
typedef struct SIntervalPhysiNode {
|
||||
|
|
|
@ -36,6 +36,7 @@ typedef struct SPlanContext {
|
|||
int64_t watermark;
|
||||
char* pMsg;
|
||||
int32_t msgLen;
|
||||
double filesFactor;
|
||||
} SPlanContext;
|
||||
|
||||
// Create the physical plan for the query, according to the AST.
|
||||
|
|
|
@ -254,6 +254,7 @@ typedef enum ELogicConditionType {
|
|||
#define TSDB_TRANS_STAGE_LEN 12
|
||||
#define TSDB_TRANS_TYPE_LEN 16
|
||||
#define TSDB_TRANS_ERROR_LEN 64
|
||||
#define TSDB_TRANS_DESC_LEN 128
|
||||
|
||||
#define TSDB_STEP_NAME_LEN 32
|
||||
#define TSDB_STEP_DESC_LEN 128
|
||||
|
@ -344,9 +345,6 @@ typedef enum ELogicConditionType {
|
|||
#define TSDB_MIN_ROLLUP_FILE_FACTOR 0
|
||||
#define TSDB_MAX_ROLLUP_FILE_FACTOR 1
|
||||
#define TSDB_DEFAULT_ROLLUP_FILE_FACTOR 0.1
|
||||
#define TSDB_MIN_ROLLUP_DELAY 1
|
||||
#define TSDB_MAX_ROLLUP_DELAY 10
|
||||
#define TSDB_DEFAULT_ROLLUP_DELAY 2
|
||||
#define TSDB_MIN_TABLE_TTL 0
|
||||
#define TSDB_DEFAULT_TABLE_TTL 0
|
||||
|
||||
|
|
|
@ -130,7 +130,7 @@ static void dmProcessRpcMsg(SDnode *pDnode, SRpcMsg *pRpc, SEpSet *pEpSet) {
|
|||
|
||||
_OVER:
|
||||
if (code != 0) {
|
||||
dError("msg:%s, failed to process since %s", TMSG_INFO(pRpc->msgType), terrstr());
|
||||
dTrace("msg:%p, failed to process since %s, type:%s", pMsg, terrstr(), TMSG_INFO(pRpc->msgType));
|
||||
if (terrno != 0) code = terrno;
|
||||
|
||||
if (IsReq(pRpc)) {
|
||||
|
|
|
@ -60,14 +60,12 @@ typedef enum {
|
|||
|
||||
typedef enum {
|
||||
TRN_STAGE_PREPARE = 0,
|
||||
TRN_STAGE_REDO_LOG = 1,
|
||||
TRN_STAGE_REDO_ACTION = 2,
|
||||
TRN_STAGE_ROLLBACK = 3,
|
||||
TRN_STAGE_UNDO_ACTION = 4,
|
||||
TRN_STAGE_UNDO_LOG = 5,
|
||||
TRN_STAGE_COMMIT = 6,
|
||||
TRN_STAGE_COMMIT_LOG = 7,
|
||||
TRN_STAGE_FINISHED = 8
|
||||
TRN_STAGE_REDO_ACTION = 1,
|
||||
TRN_STAGE_ROLLBACK = 2,
|
||||
TRN_STAGE_UNDO_ACTION = 3,
|
||||
TRN_STAGE_COMMIT = 4,
|
||||
TRN_STAGE_COMMIT_ACTION = 5,
|
||||
TRN_STAGE_FINISHED = 6
|
||||
} ETrnStage;
|
||||
|
||||
typedef enum {
|
||||
|
@ -131,7 +129,7 @@ typedef enum {
|
|||
|
||||
typedef enum {
|
||||
TRN_EXEC_PARALLEL = 0,
|
||||
TRN_EXEC_ONE_BY_ONE = 1,
|
||||
TRN_EXEC_NO_PARALLEL = 1,
|
||||
} ETrnExecType;
|
||||
|
||||
typedef enum {
|
||||
|
@ -168,16 +166,16 @@ typedef struct {
|
|||
SRpcHandleInfo rpcInfo;
|
||||
void* rpcRsp;
|
||||
int32_t rpcRspLen;
|
||||
SArray* redoLogs;
|
||||
SArray* undoLogs;
|
||||
SArray* commitLogs;
|
||||
int32_t redoActionPos;
|
||||
SArray* redoActions;
|
||||
SArray* undoActions;
|
||||
SArray* commitActions;
|
||||
int64_t createdTime;
|
||||
int64_t lastExecTime;
|
||||
int64_t dbUid;
|
||||
char dbname[TSDB_DB_FNAME_LEN];
|
||||
char lastError[TSDB_TRANS_ERROR_LEN];
|
||||
char desc[TSDB_TRANS_DESC_LEN];
|
||||
int32_t startFunc;
|
||||
int32_t stopFunc;
|
||||
int32_t paramLen;
|
||||
|
|
|
@ -30,7 +30,7 @@ int32_t mndSchedInitSubEp(SMnode* pMnode, const SMqTopicObj* pTopic, SMqSubscrib
|
|||
int32_t mndScheduleStream(SMnode* pMnode, STrans* pTrans, SStreamObj* pStream);
|
||||
|
||||
int32_t mndConvertRSmaTask(const char* ast, int64_t uid, int8_t triggerType, int64_t watermark, char** pStr,
|
||||
int32_t* pLen);
|
||||
int32_t* pLen, double filesFactor);
|
||||
|
||||
#ifdef __cplusplus
|
||||
}
|
||||
|
|
|
@ -26,31 +26,24 @@ typedef enum {
|
|||
TRANS_START_FUNC_TEST = 1,
|
||||
TRANS_STOP_FUNC_TEST = 2,
|
||||
TRANS_START_FUNC_MQ_REB = 3,
|
||||
TRANS_STOP_FUNC_TEST_MQ_REB = 4,
|
||||
TRANS_STOP_FUNC_MQ_REB = 4,
|
||||
} ETrnFunc;
|
||||
|
||||
typedef struct {
|
||||
SEpSet epSet;
|
||||
tmsg_t msgType;
|
||||
int8_t msgSent;
|
||||
int8_t msgReceived;
|
||||
int32_t id;
|
||||
int32_t errCode;
|
||||
int32_t acceptableCode;
|
||||
int8_t stage;
|
||||
int8_t isRaw;
|
||||
int8_t rawWritten;
|
||||
int8_t msgSent;
|
||||
int8_t msgReceived;
|
||||
tmsg_t msgType;
|
||||
SEpSet epSet;
|
||||
int32_t contLen;
|
||||
void *pCont;
|
||||
} STransAction;
|
||||
|
||||
typedef struct {
|
||||
SSdbRaw *pRaw;
|
||||
} STransLog;
|
||||
|
||||
typedef struct {
|
||||
ETrnStep stepType;
|
||||
STransAction redoAction;
|
||||
STransAction undoAction;
|
||||
STransLog redoLog;
|
||||
STransLog undoLog;
|
||||
} STransStep;
|
||||
} STransAction;
|
||||
|
||||
typedef void (*TransCbFp)(SMnode *pMnode, void *param, int32_t paramLen);
|
||||
|
||||
|
@ -69,7 +62,7 @@ int32_t mndTransAppendUndoAction(STrans *pTrans, STransAction *pAction);
|
|||
void mndTransSetRpcRsp(STrans *pTrans, void *pCont, int32_t contLen);
|
||||
void mndTransSetCb(STrans *pTrans, ETrnFunc startFunc, ETrnFunc stopFunc, void *param, int32_t paramLen);
|
||||
void mndTransSetDbInfo(STrans *pTrans, SDbObj *pDb);
|
||||
void mndTransSetExecOneByOne(STrans *pTrans);
|
||||
void mndTransSetNoParallel(STrans *pTrans);
|
||||
|
||||
int32_t mndTransPrepare(SMnode *pMnode, STrans *pTrans);
|
||||
void mndTransProcessRsp(SRpcMsg *pRsp);
|
||||
|
|
|
@ -78,10 +78,8 @@ static int32_t mndCreateDefaultAcct(SMnode *pMnode) {
|
|||
if (pRaw == NULL) return -1;
|
||||
sdbSetRawStatus(pRaw, SDB_STATUS_READY);
|
||||
|
||||
mDebug("acct:%s, will be created while deploy sdb, raw:%p", acctObj.acct, pRaw);
|
||||
#if 0
|
||||
return sdbWrite(pMnode->pSdb, pRaw);
|
||||
#else
|
||||
mDebug("acct:%s, will be created when deploying, raw:%p", acctObj.acct, pRaw);
|
||||
|
||||
STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_TYPE_CREATE_ACCT, NULL);
|
||||
if (pTrans == NULL) {
|
||||
mError("acct:%s, failed to create since %s", acctObj.acct, terrstr());
|
||||
|
@ -94,7 +92,6 @@ static int32_t mndCreateDefaultAcct(SMnode *pMnode) {
|
|||
mndTransDrop(pTrans);
|
||||
return -1;
|
||||
}
|
||||
sdbSetRawStatus(pRaw, SDB_STATUS_READY);
|
||||
|
||||
if (mndTransPrepare(pMnode, pTrans) != 0) {
|
||||
mError("trans:%d, failed to prepare since %s", pTrans->id, terrstr());
|
||||
|
@ -104,7 +101,6 @@ static int32_t mndCreateDefaultAcct(SMnode *pMnode) {
|
|||
|
||||
mndTransDrop(pTrans);
|
||||
return 0;
|
||||
#endif
|
||||
}
|
||||
|
||||
static SSdbRaw *mndAcctActionEncode(SAcctObj *pAcct) {
|
||||
|
|
|
@ -172,13 +172,13 @@ static int32_t mndCreateDefaultCluster(SMnode *pMnode) {
|
|||
clusterObj.id = mndGenerateUid(clusterObj.name, TSDB_CLUSTER_ID_LEN);
|
||||
clusterObj.id = (clusterObj.id >= 0 ? clusterObj.id : -clusterObj.id);
|
||||
pMnode->clusterId = clusterObj.id;
|
||||
mDebug("cluster:%" PRId64 ", name is %s", clusterObj.id, clusterObj.name);
|
||||
mInfo("cluster:%" PRId64 ", name is %s", clusterObj.id, clusterObj.name);
|
||||
|
||||
SSdbRaw *pRaw = mndClusterActionEncode(&clusterObj);
|
||||
if (pRaw == NULL) return -1;
|
||||
sdbSetRawStatus(pRaw, SDB_STATUS_READY);
|
||||
|
||||
mDebug("cluster:%" PRId64 ", will be created while deploy sdb, raw:%p", clusterObj.id, pRaw);
|
||||
mDebug("cluster:%" PRId64 ", will be created when deploying, raw:%p", clusterObj.id, pRaw);
|
||||
#if 0
|
||||
return sdbWrite(pMnode->pSdb, pRaw);
|
||||
#else
|
||||
|
|
|
@ -1314,7 +1314,7 @@ int32_t mndValidateDbInfo(SMnode *pMnode, SDbVgVersion *pDbs, int32_t numOfDbs,
|
|||
|
||||
SDbObj *pDb = mndAcquireDb(pMnode, pDbVgVersion->dbFName);
|
||||
if (pDb == NULL) {
|
||||
mDebug("db:%s, no exist", pDbVgVersion->dbFName);
|
||||
mTrace("db:%s, no exist", pDbVgVersion->dbFName);
|
||||
memcpy(usedbRsp.db, pDbVgVersion->dbFName, TSDB_DB_FNAME_LEN);
|
||||
usedbRsp.uid = pDbVgVersion->dbId;
|
||||
usedbRsp.vgVersion = -1;
|
||||
|
|
|
@ -99,7 +99,7 @@ static int32_t mndCreateDefaultDnode(SMnode *pMnode) {
|
|||
if (pRaw == NULL) return -1;
|
||||
if (sdbSetRawStatus(pRaw, SDB_STATUS_READY) != 0) return -1;
|
||||
|
||||
mDebug("dnode:%d, will be created while deploy sdb, raw:%p", dnodeObj.id, pRaw);
|
||||
mDebug("dnode:%d, will be created when deploying, raw:%p", dnodeObj.id, pRaw);
|
||||
|
||||
#if 0
|
||||
return sdbWrite(pMnode->pSdb, pRaw);
|
||||
|
@ -395,9 +395,10 @@ static int32_t mndProcessStatusReq(SRpcMsg *pReq) {
|
|||
mndReleaseQnode(pMnode, pQnode);
|
||||
}
|
||||
|
||||
int64_t dnodeVer = sdbGetTableVer(pMnode->pSdb, SDB_DNODE) + sdbGetTableVer(pMnode->pSdb, SDB_MNODE);
|
||||
int64_t curMs = taosGetTimestampMs();
|
||||
bool online = mndIsDnodeOnline(pMnode, pDnode, curMs);
|
||||
bool dnodeChanged = (statusReq.dnodeVer != sdbGetTableVer(pMnode->pSdb, SDB_DNODE));
|
||||
bool dnodeChanged = (statusReq.dnodeVer != dnodeVer);
|
||||
bool reboot = (pDnode->rebootTime != statusReq.rebootTime);
|
||||
bool needCheck = !online || dnodeChanged || reboot;
|
||||
|
||||
|
@ -440,7 +441,8 @@ static int32_t mndProcessStatusReq(SRpcMsg *pReq) {
|
|||
if (!online) {
|
||||
mInfo("dnode:%d, from offline to online", pDnode->id);
|
||||
} else {
|
||||
mDebug("dnode:%d, send dnode eps", pDnode->id);
|
||||
mDebug("dnode:%d, send dnode epset, online:%d ver:% " PRId64 ":%" PRId64 " reboot:%d", pDnode->id, online,
|
||||
statusReq.dnodeVer, dnodeVer, reboot);
|
||||
}
|
||||
|
||||
pDnode->rebootTime = statusReq.rebootTime;
|
||||
|
@ -448,7 +450,7 @@ static int32_t mndProcessStatusReq(SRpcMsg *pReq) {
|
|||
pDnode->numOfSupportVnodes = statusReq.numOfSupportVnodes;
|
||||
|
||||
SStatusRsp statusRsp = {0};
|
||||
statusRsp.dnodeVer = sdbGetTableVer(pMnode->pSdb, SDB_DNODE) + sdbGetTableVer(pMnode->pSdb, SDB_MNODE);
|
||||
statusRsp.dnodeVer = dnodeVer;
|
||||
statusRsp.dnodeCfg.dnodeId = pDnode->id;
|
||||
statusRsp.dnodeCfg.clusterId = pMnode->clusterId;
|
||||
statusRsp.pDnodeEps = taosArrayInit(mndGetDnodeSize(pMnode), sizeof(SDnodeEp));
|
||||
|
|
|
@ -472,7 +472,7 @@ int32_t mndProcessRpcMsg(SRpcMsg *pMsg) {
|
|||
} else if (code == 0) {
|
||||
mTrace("msg:%p, successfully processed and response", pMsg);
|
||||
} else {
|
||||
mError("msg:%p, failed to process since %s, app:%p type:%s", pMsg, terrstr(), pMsg->info.ahandle,
|
||||
mDebug("msg:%p, failed to process since %s, app:%p type:%s", pMsg, terrstr(), pMsg->info.ahandle,
|
||||
TMSG_INFO(pMsg->msgType));
|
||||
}
|
||||
|
||||
|
|
|
@ -90,7 +90,7 @@ static int32_t mndCreateDefaultMnode(SMnode *pMnode) {
|
|||
if (pRaw == NULL) return -1;
|
||||
sdbSetRawStatus(pRaw, SDB_STATUS_READY);
|
||||
|
||||
mDebug("mnode:%d, will be created while deploy sdb, raw:%p", mnodeObj.id, pRaw);
|
||||
mDebug("mnode:%d, will be created when deploying, raw:%p", mnodeObj.id, pRaw);
|
||||
|
||||
#if 0
|
||||
return sdbWrite(pMnode->pSdb, pRaw);
|
||||
|
@ -367,7 +367,7 @@ static int32_t mndCreateMnode(SMnode *pMnode, SRpcMsg *pReq, SDnodeObj *pDnode,
|
|||
if (pTrans == NULL) goto _OVER;
|
||||
|
||||
mDebug("trans:%d, used to create mnode:%d", pTrans->id, pCreate->dnodeId);
|
||||
mndTransSetExecOneByOne(pTrans);
|
||||
mndTransSetNoParallel(pTrans);
|
||||
if (mndSetCreateMnodeRedoLogs(pMnode, pTrans, &mnodeObj) != 0) goto _OVER;
|
||||
if (mndSetCreateMnodeCommitLogs(pMnode, pTrans, &mnodeObj) != 0) goto _OVER;
|
||||
if (mndSetCreateMnodeRedoActions(pMnode, pTrans, pDnode, &mnodeObj) != 0) goto _OVER;
|
||||
|
@ -539,7 +539,7 @@ static int32_t mndDropMnode(SMnode *pMnode, SRpcMsg *pReq, SMnodeObj *pObj) {
|
|||
if (pTrans == NULL) goto _OVER;
|
||||
|
||||
mDebug("trans:%d, used to drop mnode:%d", pTrans->id, pObj->id);
|
||||
mndTransSetExecOneByOne(pTrans);
|
||||
mndTransSetNoParallel(pTrans);
|
||||
if (mndSetDropMnodeRedoLogs(pMnode, pTrans, pObj) != 0) goto _OVER;
|
||||
if (mndSetDropMnodeCommitLogs(pMnode, pTrans, pObj) != 0) goto _OVER;
|
||||
if (mndSetDropMnodeRedoActions(pMnode, pTrans, pObj->pDnode, pObj) != 0) goto _OVER;
|
||||
|
|
|
@ -36,7 +36,7 @@
|
|||
extern bool tsStreamSchedV;
|
||||
|
||||
int32_t mndConvertRSmaTask(const char* ast, int64_t uid, int8_t triggerType, int64_t watermark, char** pStr,
|
||||
int32_t* pLen) {
|
||||
int32_t* pLen, double filesFactor) {
|
||||
SNode* pAst = NULL;
|
||||
SQueryPlan* pPlan = NULL;
|
||||
terrno = TSDB_CODE_SUCCESS;
|
||||
|
@ -58,6 +58,7 @@ int32_t mndConvertRSmaTask(const char* ast, int64_t uid, int8_t triggerType, int
|
|||
.rSmaQuery = true,
|
||||
.triggerType = triggerType,
|
||||
.watermark = watermark,
|
||||
.filesFactor = filesFactor,
|
||||
};
|
||||
if (qCreateQueryPlan(&cxt, &pPlan, NULL) < 0) {
|
||||
terrno = TSDB_CODE_QRY_INVALID_INPUT;
|
||||
|
|
|
@ -507,7 +507,7 @@ static int32_t mndCreateSma(SMnode *pMnode, SRpcMsg *pReq, SMCreateSmaReq *pCrea
|
|||
|
||||
mDebug("trans:%d, used to create sma:%s", pTrans->id, pCreate->name);
|
||||
mndTransSetDbInfo(pTrans, pDb);
|
||||
mndTransSetExecOneByOne(pTrans);
|
||||
mndTransSetNoParallel(pTrans);
|
||||
|
||||
if (mndSetCreateSmaRedoLogs(pMnode, pTrans, &smaObj) != 0) goto _OVER;
|
||||
if (mndSetCreateSmaVgroupRedoLogs(pMnode, pTrans, &streamObj.fixedSinkVg) != 0) goto _OVER;
|
||||
|
|
|
@ -397,13 +397,13 @@ static void *mndBuildVCreateStbReq(SMnode *pMnode, SVgObj *pVgroup, SStbObj *pSt
|
|||
req.pRSmaParam.xFilesFactor = pStb->xFilesFactor;
|
||||
req.pRSmaParam.delay = pStb->delay;
|
||||
if (pStb->ast1Len > 0) {
|
||||
if (mndConvertRSmaTask(pStb->pAst1, pStb->uid, 0, 0, &req.pRSmaParam.qmsg1, &req.pRSmaParam.qmsg1Len) !=
|
||||
if (mndConvertRSmaTask(pStb->pAst1, pStb->uid, 0, 0, &req.pRSmaParam.qmsg1, &req.pRSmaParam.qmsg1Len, req.pRSmaParam.xFilesFactor) !=
|
||||
TSDB_CODE_SUCCESS) {
|
||||
return NULL;
|
||||
}
|
||||
}
|
||||
if (pStb->ast2Len > 0) {
|
||||
if (mndConvertRSmaTask(pStb->pAst2, pStb->uid, 0, 0, &req.pRSmaParam.qmsg2, &req.pRSmaParam.qmsg2Len) !=
|
||||
if (mndConvertRSmaTask(pStb->pAst2, pStb->uid, 0, 0, &req.pRSmaParam.qmsg2, &req.pRSmaParam.qmsg2Len, req.pRSmaParam.xFilesFactor) !=
|
||||
TSDB_CODE_SUCCESS) {
|
||||
return NULL;
|
||||
}
|
||||
|
@ -1597,7 +1597,7 @@ static int32_t mndProcessTableMetaReq(SRpcMsg *pReq) {
|
|||
pReq->info.rspLen = rspLen;
|
||||
code = 0;
|
||||
|
||||
mDebug("stb:%s.%s, meta is retrieved", infoReq.dbFName, infoReq.tbName);
|
||||
mTrace("%s.%s, meta is retrieved", infoReq.dbFName, infoReq.tbName);
|
||||
|
||||
_OVER:
|
||||
if (code != 0) {
|
||||
|
|
|
@ -501,7 +501,7 @@ static int32_t mndPersistRebResult(SMnode *pMnode, SRpcMsg *pMsg, const SMqRebOu
|
|||
// 4. TODO commit log: modification log
|
||||
|
||||
// 5. set cb
|
||||
mndTransSetCb(pTrans, TRANS_START_FUNC_MQ_REB, TRANS_STOP_FUNC_TEST_MQ_REB, NULL, 0);
|
||||
mndTransSetCb(pTrans, TRANS_START_FUNC_MQ_REB, TRANS_STOP_FUNC_MQ_REB, NULL, 0);
|
||||
|
||||
// 6. execution
|
||||
if (mndTransPrepare(pMnode, pTrans) != 0) {
|
||||
|
|
|
@ -65,7 +65,7 @@ int32_t mndSyncGetSnapshot(struct SSyncFSM *pFsm, SSnapshot *pSnapshot) {
|
|||
void mndRestoreFinish(struct SSyncFSM *pFsm) {
|
||||
SMnode *pMnode = pFsm->data;
|
||||
if (!pMnode->deploy) {
|
||||
mInfo("mnode sync restore finished");
|
||||
mInfo("mnode sync restore finished, and will handle outstanding transactions");
|
||||
mndTransPullup(pMnode);
|
||||
mndSetRestore(pMnode, true);
|
||||
} else {
|
||||
|
@ -244,7 +244,7 @@ void mndSyncStart(SMnode *pMnode) {
|
|||
} else {
|
||||
syncStart(pMgmt->sync);
|
||||
}
|
||||
mDebug("sync:%" PRId64 " is started, standby:%d", pMgmt->sync, pMgmt->standby);
|
||||
mDebug("mnode sync started, id:%" PRId64 " standby:%d", pMgmt->sync, pMgmt->standby);
|
||||
}
|
||||
|
||||
void mndSyncStop(SMnode *pMnode) {}
|
||||
|
|
File diff suppressed because it is too large
|
@ -77,7 +77,7 @@ static int32_t mndCreateDefaultUser(SMnode *pMnode, char *acct, char *user, char
|
|||
if (pRaw == NULL) return -1;
|
||||
sdbSetRawStatus(pRaw, SDB_STATUS_READY);
|
||||
|
||||
mDebug("user:%s, will be created while deploy sdb, raw:%p", userObj.user, pRaw);
|
||||
mDebug("user:%s, will be created when deploying, raw:%p", userObj.user, pRaw);
|
||||
|
||||
#if 0
|
||||
return sdbWrite(pMnode->pSdb, pRaw);
|
||||
|
|
|
@ -501,7 +501,7 @@ int32_t mndAllocVgroup(SMnode *pMnode, SDbObj *pDb, SVgObj **ppVgroups) {
|
|||
*ppVgroups = pVgroups;
|
||||
code = 0;
|
||||
|
||||
mInfo("db:%s, %d vgroups is alloced, replica:%d", pDb->name, pDb->cfg.numOfVgroups, pDb->cfg.replications);
|
||||
mInfo("db:%s, total %d vgroups is alloced, replica:%d", pDb->name, pDb->cfg.numOfVgroups, pDb->cfg.replications);
|
||||
|
||||
_OVER:
|
||||
if (code != 0) taosMemoryFree(pVgroups);
|
||||
|
@ -539,7 +539,7 @@ int32_t mndAddVnodeToVgroup(SMnode *pMnode, SVgObj *pVgroup, SArray *pArray) {
|
|||
pVgid->role = TAOS_SYNC_STATE_FOLLOWER;
|
||||
pDnode->numOfVnodes++;
|
||||
|
||||
mInfo("db:%s, vgId:%d, vn:%d dnode:%d is added", pVgroup->dbName, pVgroup->vgId, maxPos, pVgid->dnodeId);
|
||||
mInfo("db:%s, vgId:%d, vnode_index:%d dnode:%d is added", pVgroup->dbName, pVgroup->vgId, maxPos, pVgid->dnodeId);
|
||||
maxPos++;
|
||||
if (maxPos == 3) return 0;
|
||||
}
|
||||
|
|
|
@ -5,7 +5,9 @@ target_link_libraries(
|
|||
PUBLIC sut
|
||||
)
|
||||
|
||||
if(NOT TD_WINDOWS)
|
||||
add_test(
|
||||
NAME dbTest
|
||||
COMMAND dbTest
|
||||
)
|
||||
endif(NOT TD_WINDOWS)
|
||||
|
|
|
@ -5,7 +5,9 @@ target_link_libraries(
|
|||
PUBLIC sut
|
||||
)
|
||||
|
||||
if(NOT TD_WINDOWS)
|
||||
add_test(
|
||||
NAME smaTest
|
||||
COMMAND smaTest
|
||||
)
|
||||
endif(NOT TD_WINDOWS)
|
||||
|
|
|
@ -5,7 +5,9 @@ target_link_libraries(
|
|||
PUBLIC sut
|
||||
)
|
||||
|
||||
if(NOT TD_WINDOWS)
|
||||
add_test(
|
||||
NAME stbTest
|
||||
COMMAND stbTest
|
||||
)
|
||||
endif(NOT TD_WINDOWS)
|
|
@ -168,6 +168,7 @@ typedef struct SSdb {
|
|||
char *currDir;
|
||||
char *tmpDir;
|
||||
int64_t lastCommitVer;
|
||||
int64_t lastCommitTerm;
|
||||
int64_t curVer;
|
||||
int64_t curTerm;
|
||||
int64_t tableVer[SDB_MAX];
|
||||
|
|
|
@ -55,6 +55,7 @@ SSdb *sdbInit(SSdbOpt *pOption) {
|
|||
pSdb->curVer = -1;
|
||||
pSdb->curTerm = -1;
|
||||
pSdb->lastCommitVer = -1;
|
||||
pSdb->lastCommitTerm = -1;
|
||||
pSdb->pMnode = pOption->pMnode;
|
||||
taosThreadMutexInit(&pSdb->filelock, NULL);
|
||||
mDebug("sdb init successfully");
|
||||
|
|
|
@ -70,6 +70,7 @@ static void sdbResetData(SSdb *pSdb) {
|
|||
pSdb->curVer = -1;
|
||||
pSdb->curTerm = -1;
|
||||
pSdb->lastCommitVer = -1;
|
||||
pSdb->lastCommitTerm = -1;
|
||||
mDebug("sdb reset successfully");
|
||||
}
|
||||
|
||||
|
@ -211,12 +212,12 @@ static int32_t sdbReadFileImp(SSdb *pSdb) {
|
|||
char file[PATH_MAX] = {0};
|
||||
|
||||
snprintf(file, sizeof(file), "%s%ssdb.data", pSdb->currDir, TD_DIRSEP);
|
||||
mDebug("start to read file:%s", file);
|
||||
mDebug("start to read sdb file:%s", file);
|
||||
|
||||
SSdbRaw *pRaw = taosMemoryMalloc(WAL_MAX_SIZE + 100);
|
||||
if (pRaw == NULL) {
|
||||
terrno = TSDB_CODE_OUT_OF_MEMORY;
|
||||
mError("failed read file since %s", terrstr());
|
||||
mError("failed read sdb file since %s", terrstr());
|
||||
return -1;
|
||||
}
|
||||
|
||||
|
@ -224,12 +225,12 @@ static int32_t sdbReadFileImp(SSdb *pSdb) {
|
|||
if (pFile == NULL) {
|
||||
taosMemoryFree(pRaw);
|
||||
terrno = TAOS_SYSTEM_ERROR(errno);
|
||||
mError("failed to read file:%s since %s", file, terrstr());
|
||||
mError("failed to read sdb file:%s since %s", file, terrstr());
|
||||
return 0;
|
||||
}
|
||||
|
||||
if (sdbReadFileHead(pSdb, pFile) != 0) {
|
||||
mError("failed to read file:%s head since %s", file, terrstr());
|
||||
mError("failed to read sdb file:%s head since %s", file, terrstr());
|
||||
taosMemoryFree(pRaw);
|
||||
taosCloseFile(&pFile);
|
||||
return -1;
|
||||
|
@ -245,13 +246,13 @@ static int32_t sdbReadFileImp(SSdb *pSdb) {
|
|||
|
||||
if (ret < 0) {
|
||||
code = TAOS_SYSTEM_ERROR(errno);
|
||||
mError("failed to read file:%s since %s", file, tstrerror(code));
|
||||
mError("failed to read sdb file:%s since %s", file, tstrerror(code));
|
||||
break;
|
||||
}
|
||||
|
||||
if (ret != readLen) {
|
||||
code = TSDB_CODE_FILE_CORRUPTED;
|
||||
mError("failed to read file:%s since %s", file, tstrerror(code));
|
||||
mError("failed to read sdb file:%s since %s", file, tstrerror(code));
|
||||
break;
|
||||
}
|
||||
|
||||
|
@ -259,34 +260,36 @@ static int32_t sdbReadFileImp(SSdb *pSdb) {
|
|||
ret = taosReadFile(pFile, pRaw->pData, readLen);
|
||||
if (ret < 0) {
|
||||
code = TAOS_SYSTEM_ERROR(errno);
|
||||
mError("failed to read file:%s since %s", file, tstrerror(code));
|
||||
mError("failed to read sdb file:%s since %s", file, tstrerror(code));
|
||||
break;
|
||||
}
|
||||
|
||||
if (ret != readLen) {
|
||||
code = TSDB_CODE_FILE_CORRUPTED;
|
||||
mError("failed to read file:%s since %s", file, tstrerror(code));
|
||||
mError("failed to read sdb file:%s since %s", file, tstrerror(code));
|
||||
break;
|
||||
}
|
||||
|
||||
int32_t totalLen = sizeof(SSdbRaw) + pRaw->dataLen + sizeof(int32_t);
|
||||
if ((!taosCheckChecksumWhole((const uint8_t *)pRaw, totalLen)) != 0) {
|
||||
code = TSDB_CODE_CHECKSUM_ERROR;
|
||||
mError("failed to read file:%s since %s", file, tstrerror(code));
|
||||
mError("failed to read sdb file:%s since %s", file, tstrerror(code));
|
||||
break;
|
||||
}
|
||||
|
||||
code = sdbWriteWithoutFree(pSdb, pRaw);
|
||||
if (code != 0) {
|
||||
mError("failed to read file:%s since %s", file, terrstr());
|
||||
mError("failed to read sdb file:%s since %s", file, terrstr());
|
||||
goto _OVER;
|
||||
}
|
||||
}
|
||||
|
||||
code = 0;
|
||||
pSdb->lastCommitVer = pSdb->curVer;
|
||||
pSdb->lastCommitTerm = pSdb->curTerm;
|
||||
memcpy(pSdb->tableVer, tableVer, sizeof(tableVer));
|
||||
mDebug("read file:%s successfully, ver:%" PRId64, file, pSdb->lastCommitVer);
|
||||
mDebug("read sdb file:%s successfully, ver:%" PRId64 " term:%" PRId64, file, pSdb->lastCommitVer,
|
||||
pSdb->lastCommitTerm);
|
||||
|
||||
_OVER:
|
||||
taosCloseFile(&pFile);
|
||||
|
@ -302,7 +305,7 @@ int32_t sdbReadFile(SSdb *pSdb) {
|
|||
sdbResetData(pSdb);
|
||||
int32_t code = sdbReadFileImp(pSdb);
|
||||
if (code != 0) {
|
||||
mError("failed to read sdb since %s", terrstr());
|
||||
mError("failed to read sdb file since %s", terrstr());
|
||||
sdbResetData(pSdb);
|
||||
}
|
||||
|
||||
|
@ -318,18 +321,19 @@ static int32_t sdbWriteFileImp(SSdb *pSdb) {
|
|||
char curfile[PATH_MAX] = {0};
|
||||
snprintf(curfile, sizeof(curfile), "%s%ssdb.data", pSdb->currDir, TD_DIRSEP);
|
||||
|
||||
mDebug("start to write file:%s, current ver:%" PRId64 " term:%" PRId64 ", commit ver:%" PRId64, curfile, pSdb->curVer,
|
||||
pSdb->curTerm, pSdb->lastCommitVer);
|
||||
mDebug("start to write sdb file, current ver:%" PRId64 " term:%" PRId64 ", commit ver:%" PRId64 " term:%" PRId64
|
||||
" file:%s",
|
||||
pSdb->curVer, pSdb->curTerm, pSdb->lastCommitVer, pSdb->lastCommitTerm, curfile);
|
||||
|
||||
TdFilePtr pFile = taosOpenFile(tmpfile, TD_FILE_CREATE | TD_FILE_WRITE | TD_FILE_TRUNC);
|
||||
if (pFile == NULL) {
|
||||
terrno = TAOS_SYSTEM_ERROR(errno);
|
||||
mError("failed to open file:%s for write since %s", tmpfile, terrstr());
|
||||
mError("failed to open sdb file:%s for write since %s", tmpfile, terrstr());
|
||||
return -1;
|
||||
}
|
||||
|
||||
if (sdbWriteFileHead(pSdb, pFile) != 0) {
|
||||
mError("failed to write file:%s head since %s", tmpfile, terrstr());
|
||||
mError("failed to write sdb file:%s head since %s", tmpfile, terrstr());
|
||||
taosCloseFile(&pFile);
|
||||
return -1;
|
||||
}
|
||||
|
@ -338,7 +342,7 @@ static int32_t sdbWriteFileImp(SSdb *pSdb) {
|
|||
SdbEncodeFp encodeFp = pSdb->encodeFps[i];
|
||||
if (encodeFp == NULL) continue;
|
||||
|
||||
mTrace("write %s to file, total %d rows", sdbTableName(i), sdbGetSize(pSdb, i));
|
||||
mTrace("write %s to sdb file, total %d rows", sdbTableName(i), sdbGetSize(pSdb, i));
|
||||
|
||||
SHashObj *hash = pSdb->hashObjs[i];
|
||||
TdThreadRwlock *pLock = &pSdb->locks[i];
|
||||
|
@ -394,7 +398,7 @@ static int32_t sdbWriteFileImp(SSdb *pSdb) {
|
|||
code = taosFsyncFile(pFile);
|
||||
if (code != 0) {
|
||||
code = TAOS_SYSTEM_ERROR(errno);
|
||||
mError("failed to sync file:%s since %s", tmpfile, tstrerror(code));
|
||||
mError("failed to sync sdb file:%s since %s", tmpfile, tstrerror(code));
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -404,15 +408,17 @@ static int32_t sdbWriteFileImp(SSdb *pSdb) {
|
|||
code = taosRenameFile(tmpfile, curfile);
|
||||
if (code != 0) {
|
||||
code = TAOS_SYSTEM_ERROR(errno);
|
||||
mError("failed to write file:%s since %s", curfile, tstrerror(code));
|
||||
mError("failed to write sdb file:%s since %s", curfile, tstrerror(code));
|
||||
}
|
||||
}
|
||||
|
||||
if (code != 0) {
|
||||
mError("failed to write file:%s since %s", curfile, tstrerror(code));
|
||||
mError("failed to write sdb file:%s since %s", curfile, tstrerror(code));
|
||||
} else {
|
||||
pSdb->lastCommitVer = pSdb->curVer;
|
||||
mDebug("write file:%s successfully, ver:%" PRId64 " term:%" PRId64, curfile, pSdb->lastCommitVer, pSdb->curTerm);
|
||||
pSdb->lastCommitTerm = pSdb->curTerm;
|
||||
mDebug("write sdb file successfully, ver:%" PRId64 " term:%" PRId64 " file:%s", pSdb->lastCommitVer,
|
||||
pSdb->lastCommitTerm, curfile);
|
||||
}
|
||||
|
||||
terrno = code;
|
||||
|
@ -427,7 +433,7 @@ int32_t sdbWriteFile(SSdb *pSdb) {
|
|||
taosThreadMutexLock(&pSdb->filelock);
|
||||
int32_t code = sdbWriteFileImp(pSdb);
|
||||
if (code != 0) {
|
||||
mError("failed to write sdb since %s", terrstr());
|
||||
mError("failed to write sdb file since %s", terrstr());
|
||||
}
|
||||
taosThreadMutexUnlock(&pSdb->filelock);
|
||||
return code;
|
||||
|
@ -493,7 +499,7 @@ int32_t sdbStartRead(SSdb *pSdb, SSdbIter **ppIter) {
|
|||
if (taosCopyFile(datafile, pIter->name) < 0) {
|
||||
taosThreadMutexUnlock(&pSdb->filelock);
|
||||
terrno = TAOS_SYSTEM_ERROR(errno);
|
||||
mError("failed to copy file %s to %s since %s", datafile, pIter->name, terrstr());
|
||||
mError("failed to copy sdb file %s to %s since %s", datafile, pIter->name, terrstr());
|
||||
sdbCloseIter(pIter);
|
||||
return -1;
|
||||
}
|
||||
|
@ -502,7 +508,7 @@ int32_t sdbStartRead(SSdb *pSdb, SSdbIter **ppIter) {
|
|||
pIter->file = taosOpenFile(pIter->name, TD_FILE_READ);
|
||||
if (pIter->file == NULL) {
|
||||
terrno = TAOS_SYSTEM_ERROR(errno);
|
||||
mError("failed to open file:%s since %s", pIter->name, terrstr());
|
||||
mError("failed to open sdb file:%s since %s", pIter->name, terrstr());
|
||||
sdbCloseIter(pIter);
|
||||
return -1;
|
||||
}
|
||||
|
|
|
@ -149,7 +149,7 @@ int32_t tdUpdateExpireWindow(SSma* pSma, SSubmitReq* pMsg, int64_t version);
|
|||
int32_t tdProcessTSmaCreate(SSma* pSma, int64_t version, const char* msg);
|
||||
int32_t tdProcessTSmaInsert(SSma* pSma, int64_t indexUid, const char* msg);
|
||||
|
||||
int32_t tdProcessRSmaCreate(SSma* pSma, SMeta* pMeta, SVCreateStbReq* pReq, SMsgCb* pMsgCb);
|
||||
int32_t tdProcessRSmaCreate(SVnode *pVnode, SVCreateStbReq* pReq);
|
||||
int32_t tdProcessRSmaSubmit(SSma* pSma, void* pMsg, int32_t inputType);
|
||||
int32_t tdFetchTbUidList(SSma* pSma, STbUidStore** ppStore, tb_uid_t suid, tb_uid_t uid);
|
||||
int32_t tdUpdateTbUidList(SSma* pSma, STbUidStore* pUidStore);
|
||||
|
|
|
@ -165,7 +165,10 @@ int32_t tdFetchTbUidList(SSma *pSma, STbUidStore **ppStore, tb_uid_t suid, tb_ui
|
|||
* @param pReq
|
||||
* @return int32_t
|
||||
*/
|
||||
int32_t tdProcessRSmaCreate(SSma *pSma, SMeta *pMeta, SVCreateStbReq *pReq, SMsgCb *pMsgCb) {
|
||||
int32_t tdProcessRSmaCreate(SVnode *pVnode, SVCreateStbReq *pReq) {
|
||||
SSma *pSma = pVnode->pSma;
|
||||
SMeta *pMeta = pVnode->pMeta;
|
||||
SMsgCb *pMsgCb = &pVnode->msgCb;
|
||||
if (!pReq->rollup) {
|
||||
smaTrace("vgId:%d return directly since no rollup for stable %s %" PRIi64, SMA_VID(pSma), pReq->name, pReq->suid);
|
||||
return TSDB_CODE_SUCCESS;
|
||||
|
@ -210,6 +213,7 @@ int32_t tdProcessRSmaCreate(SSma *pSma, SMeta *pMeta, SVCreateStbReq *pReq, SMsg
|
|||
.reader = pReadHandle,
|
||||
.meta = pMeta,
|
||||
.pMsgCb = pMsgCb,
|
||||
.vnode = pVnode,
|
||||
};
|
||||
|
||||
if (param->qmsg1) {
|
||||
|
|
|
@ -360,7 +360,7 @@ static int vnodeProcessCreateStbReq(SVnode *pVnode, int64_t version, void *pReq,
|
|||
goto _err;
|
||||
}
|
||||
|
||||
tdProcessRSmaCreate(pVnode->pSma, pVnode->pMeta, &req, &pVnode->msgCb);
|
||||
tdProcessRSmaCreate(pVnode, &req);
|
||||
|
||||
tDecoderClear(&coder);
|
||||
return 0;
|
||||
|
|
|
@ -440,6 +440,7 @@ typedef struct STimeWindowSupp {
|
|||
int64_t waterMark;
|
||||
TSKEY maxTs;
|
||||
SColumnInfoData timeWindowData; // query time window info for scalar function execution.
|
||||
SHashObj *winMap;
|
||||
} STimeWindowAggSupp;
|
||||
|
||||
typedef struct SIntervalAggOperatorInfo {
|
||||
|
@ -758,7 +759,7 @@ SOperatorInfo* createDataBlockInfoScanOperator(void* dataReader, SExecTaskInfo*
|
|||
|
||||
SOperatorInfo* createStreamScanOperatorInfo(void* pDataReader, SReadHandle* pHandle,
|
||||
SArray* pTableIdList, STableScanPhysiNode* pTableScanNode, SExecTaskInfo* pTaskInfo,
|
||||
STimeWindowAggSupp* pTwSup, int16_t tsColId);
|
||||
STimeWindowAggSupp* pTwSup);
|
||||
|
||||
|
||||
SOperatorInfo* createFillOperatorInfo(SOperatorInfo* downstream, SExprInfo* pExpr, int32_t numOfCols,
|
||||
|
@ -837,6 +838,8 @@ SResultWindowInfo* getSessionTimeWindow(SArray* pWinInfos, TSKEY ts, int64_t gap
|
|||
int32_t updateSessionWindowInfo(SResultWindowInfo* pWinInfo, TSKEY* pTs, int32_t rows,
|
||||
int32_t start, int64_t gap, SHashObj* pStDeleted);
|
||||
bool functionNeedToExecute(SqlFunctionCtx* pCtx);
|
||||
int64_t getSmaWaterMark(int64_t interval, double filesFactor);
|
||||
bool isSmaStream(int8_t triggerType);
|
||||
|
||||
int32_t compareTimeWindow(const void* p1, const void* p2, const void* param);
|
||||
#ifdef __cplusplus
|
||||
|
|
|
@ -4529,8 +4529,8 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo
|
|||
qDebug("%s pDataReader is not NULL", GET_TASKID(pTaskInfo));
|
||||
}
|
||||
SArray* tableIdList = extractTableIdList(pTableListInfo);
|
||||
SOperatorInfo* pOperator = createStreamScanOperatorInfo(pDataReader, pHandle, tableIdList, pTableScanNode,
|
||||
pTaskInfo, &twSup, pTableScanNode->tsColId);
|
||||
SOperatorInfo* pOperator = createStreamScanOperatorInfo(pDataReader, pHandle,
|
||||
tableIdList, pTableScanNode, pTaskInfo, &twSup);
|
||||
|
||||
taosArrayDestroy(tableIdList);
|
||||
return pOperator;
|
||||
|
@ -4631,7 +4631,19 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo
|
|||
|
||||
STimeWindowAggSupp as = {.waterMark = pIntervalPhyNode->window.watermark,
|
||||
.calTrigger = pIntervalPhyNode->window.triggerType,
|
||||
.maxTs = INT64_MIN};
|
||||
.maxTs = INT64_MIN,
|
||||
.winMap = NULL,};
|
||||
if (isSmaStream(pIntervalPhyNode->window.triggerType)) {
|
||||
if (FLT_LESS(pIntervalPhyNode->window.filesFactor, 1.000000)) {
|
||||
as.calTrigger = STREAM_TRIGGER_AT_ONCE_SMA;
|
||||
} else {
|
||||
_hash_fn_t hashFn = taosGetDefaultHashFunction(TSDB_DATA_TYPE_TIMESTAMP);
|
||||
as.winMap = taosHashInit(64, hashFn, true, HASH_NO_LOCK);
|
||||
as.waterMark = getSmaWaterMark(interval.interval,
|
||||
pIntervalPhyNode->window.filesFactor);
|
||||
as.calTrigger = STREAM_TRIGGER_WINDOW_CLOSE_SMA;
|
||||
}
|
||||
}
|
||||
|
||||
int32_t tsSlotId = ((SColumnNode*)pIntervalPhyNode->window.pTspk)->slotId;
|
||||
pOptr = createIntervalOperatorInfo(ops[0], pExprInfo, num, pResBlock, &interval, tsSlotId, &as, pTaskInfo);
|
||||
|
@ -5300,3 +5312,18 @@ int32_t initStreamAggSupporter(SStreamAggSupporter* pSup, const char* pKey) {
|
|||
}
|
||||
return createDiskbasedBuf(&pSup->pResultBuf, pageSize, bufSize, pKey, TD_TMP_DIR_PATH);
|
||||
}
|
||||
|
||||
int64_t getSmaWaterMark(int64_t interval, double filesFactor) {
|
||||
int64_t waterMark = 0;
|
||||
ASSERT(FLT_GREATEREQUAL(filesFactor,0.000000));
|
||||
waterMark = -1 * filesFactor;
|
||||
return waterMark;
|
||||
}
|
||||
|
||||
bool isSmaStream(int8_t triggerType) {
|
||||
if (triggerType == STREAM_TRIGGER_AT_ONCE ||
|
||||
triggerType == STREAM_TRIGGER_WINDOW_CLOSE) {
|
||||
return false;
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
|
|
@ -877,7 +877,7 @@ static SSDataBlock* doStreamBlockScan(SOperatorInfo* pOperator) {
|
|||
if (rows == 0) {
|
||||
pOperator->status = OP_EXEC_DONE;
|
||||
} else if (pInfo->pUpdateInfo) {
|
||||
SSDataBlock* upRes = getUpdateDataBlock(pInfo, true); // TODO(liuyao) get invertible from plan
|
||||
SSDataBlock* upRes = getUpdateDataBlock(pInfo, true);
|
||||
if (upRes) {
|
||||
pInfo->pUpdateRes = upRes;
|
||||
if (upRes->info.type == STREAM_REPROCESS) {
|
||||
|
@ -896,7 +896,7 @@ static SSDataBlock* doStreamBlockScan(SOperatorInfo* pOperator) {
|
|||
|
||||
SOperatorInfo* createStreamScanOperatorInfo(void* pDataReader, SReadHandle* pHandle,
|
||||
SArray* pTableIdList, STableScanPhysiNode* pTableScanNode, SExecTaskInfo* pTaskInfo,
|
||||
STimeWindowAggSupp* pTwSup, int16_t tsColId) {
|
||||
STimeWindowAggSupp* pTwSup) {
|
||||
SStreamBlockScanInfo* pInfo = taosMemoryCalloc(1, sizeof(SStreamBlockScanInfo));
|
||||
SOperatorInfo* pOperator = taosMemoryCalloc(1, sizeof(SOperatorInfo));
|
||||
if (pInfo == NULL || pOperator == NULL) {
|
||||
|
@ -941,8 +941,12 @@ SOperatorInfo* createStreamScanOperatorInfo(void* pDataReader, SReadHandle* pHan
|
|||
goto _error;
|
||||
}
|
||||
|
||||
pInfo->primaryTsIndex = tsColId;
|
||||
if (pSTInfo->interval.interval > 0) {
|
||||
if (isSmaStream(pTableScanNode->triggerType)) {
|
||||
pTwSup->waterMark = getSmaWaterMark(pSTInfo->interval.interval,
|
||||
pTableScanNode->filesFactor);
|
||||
}
|
||||
pInfo->primaryTsIndex = 0; // pTableScanNode->tsColId;
|
||||
if (pSTInfo->interval.interval > 0 && pDataReader) {
|
||||
pInfo->pUpdateInfo = updateInfoInitP(&pSTInfo->interval, pTwSup->waterMark);
|
||||
} else {
|
||||
pInfo->pUpdateInfo = NULL;
|
||||
|
|
|
@ -748,11 +748,15 @@ static void hashIntervalAgg(SOperatorInfo* pOperatorInfo, SResultRowInfo* pResul
|
|||
longjmp(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY);
|
||||
}
|
||||
|
||||
if (pInfo->execModel == OPTR_EXEC_MODEL_STREAM &&
|
||||
(pInfo->twAggSup.calTrigger == STREAM_TRIGGER_AT_ONCE ||
|
||||
pInfo->twAggSup.calTrigger == 0) ) {
|
||||
if (pInfo->execModel == OPTR_EXEC_MODEL_STREAM) {
|
||||
if (pInfo->twAggSup.calTrigger == STREAM_TRIGGER_AT_ONCE ||
|
||||
pInfo->twAggSup.calTrigger == STREAM_TRIGGER_AT_ONCE_SMA) {
|
||||
saveResult(pResult, tableGroupId, pUpdated);
|
||||
}
|
||||
if (pInfo->twAggSup.winMap) {
|
||||
taosHashRemove(pInfo->twAggSup.winMap, &win.skey, sizeof(TSKEY));
|
||||
}
|
||||
}
|
||||
|
||||
int32_t forwardStep = 0;
|
||||
TSKEY ekey = ascScan? win.ekey:win.skey;
|
||||
|
@ -824,11 +828,15 @@ static void hashIntervalAgg(SOperatorInfo* pOperatorInfo, SResultRowInfo* pResul
|
|||
longjmp(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY);
|
||||
}
|
||||
|
||||
if (pInfo->execModel == OPTR_EXEC_MODEL_STREAM &&
|
||||
(pInfo->twAggSup.calTrigger == STREAM_TRIGGER_AT_ONCE ||
|
||||
pInfo->twAggSup.calTrigger == 0) ) {
|
||||
if (pInfo->execModel == OPTR_EXEC_MODEL_STREAM) {
|
||||
if (pInfo->twAggSup.calTrigger == STREAM_TRIGGER_AT_ONCE ||
|
||||
pInfo->twAggSup.calTrigger == STREAM_TRIGGER_AT_ONCE_SMA) {
|
||||
saveResult(pResult, tableGroupId, pUpdated);
|
||||
}
|
||||
if (pInfo->twAggSup.winMap) {
|
||||
taosHashRemove(pInfo->twAggSup.winMap, &win.skey, sizeof(TSKEY));
|
||||
}
|
||||
}
|
||||
|
||||
ekey = ascScan? nextWin.ekey:nextWin.skey;
|
||||
forwardStep =
|
||||
|
@ -1172,15 +1180,23 @@ static int32_t closeIntervalWindow(SHashObj *pHashMap, STimeWindowAggSupp *pSup,
|
|||
void* key = taosHashGetKey(pIte, &keyLen);
|
||||
uint64_t groupId = *(uint64_t*) key;
|
||||
ASSERT(keyLen == GET_RES_WINDOW_KEY_LEN(sizeof(TSKEY)));
|
||||
TSKEY ts = *(uint64_t*) ((char*)key + sizeof(uint64_t));
|
||||
TSKEY ts = *(int64_t*) ((char*)key + sizeof(uint64_t));
|
||||
SResultRowInfo dumyInfo;
|
||||
dumyInfo.cur.pageId = -1;
|
||||
STimeWindow win = getActiveTimeWindow(NULL, &dumyInfo, ts, pInterval,
|
||||
pInterval->precision, NULL);
|
||||
if (win.ekey < pSup->maxTs - pSup->waterMark) {
|
||||
if (pSup->calTrigger == STREAM_TRIGGER_WINDOW_CLOSE_SMA) {
|
||||
if (taosHashGet(pSup->winMap, &win.skey, sizeof(TSKEY))) {
|
||||
continue;
|
||||
}
|
||||
}
|
||||
char keyBuf[GET_RES_WINDOW_KEY_LEN(sizeof(TSKEY))];
|
||||
SET_RES_WINDOW_KEY(keyBuf, &ts, sizeof(TSKEY), groupId);
|
||||
if (pSup->calTrigger != STREAM_TRIGGER_AT_ONCE_SMA &&
|
||||
pSup->calTrigger != STREAM_TRIGGER_WINDOW_CLOSE_SMA) {
|
||||
taosHashRemove(pHashMap, keyBuf, keyLen);
|
||||
}
|
||||
SResKeyPos* pos = taosMemoryMalloc(sizeof(SResKeyPos) + sizeof(uint64_t));
|
||||
if (pos == NULL) {
|
||||
return TSDB_CODE_OUT_OF_MEMORY;
|
||||
|
@ -1192,6 +1208,7 @@ static int32_t closeIntervalWindow(SHashObj *pHashMap, STimeWindowAggSupp *pSup,
|
|||
taosMemoryFree(pos);
|
||||
return TSDB_CODE_OUT_OF_MEMORY;
|
||||
}
|
||||
taosHashPut(pSup->winMap, &win.skey, sizeof(TSKEY), NULL, 0);
|
||||
}
|
||||
}
|
||||
return TSDB_CODE_SUCCESS;
|
||||
|
@ -1248,7 +1265,8 @@ static SSDataBlock* doStreamIntervalAgg(SOperatorInfo* pOperator) {
|
|||
&pInfo->interval, pClosed);
|
||||
finalizeUpdatedResult(pOperator->numOfExprs, pInfo->aggSup.pResultBuf, pClosed,
|
||||
pInfo->binfo.rowCellInfoOffset);
|
||||
if (pInfo->twAggSup.calTrigger == STREAM_TRIGGER__WINDOW_CLOSE) {
|
||||
if (pInfo->twAggSup.calTrigger == STREAM_TRIGGER_WINDOW_CLOSE ||
|
||||
pInfo->twAggSup.calTrigger == STREAM_TRIGGER_WINDOW_CLOSE_SMA) {
|
||||
taosArrayAddAll(pUpdated, pClosed);
|
||||
}
|
||||
taosArrayDestroy(pClosed);
|
||||
|
@ -2412,7 +2430,7 @@ int32_t closeSessionWindow(SArray *pWins, STimeWindowAggSupp *pTwSup, SArray *pC
|
|||
return TSDB_CODE_OUT_OF_MEMORY;
|
||||
}
|
||||
pSeWin->isClosed = true;
|
||||
if (calTrigger == STREAM_TRIGGER__WINDOW_CLOSE) {
|
||||
if (calTrigger == STREAM_TRIGGER_WINDOW_CLOSE) {
|
||||
pSeWin->isOutput = true;
|
||||
}
|
||||
}
|
||||
|
@ -2486,7 +2504,7 @@ static SSDataBlock* doStreamSessionWindowAgg(SOperatorInfo* pOperator) {
|
|||
SArray* pUpdated = taosArrayInit(16, POINTER_BYTES);
|
||||
copyUpdateResult(pStUpdated, pUpdated, pBInfo->pRes->info.groupId);
|
||||
taosHashCleanup(pStUpdated);
|
||||
if (pInfo->twAggSup.calTrigger == STREAM_TRIGGER__WINDOW_CLOSE) {
|
||||
if (pInfo->twAggSup.calTrigger == STREAM_TRIGGER_WINDOW_CLOSE) {
|
||||
taosArrayAddAll(pUpdated, pClosed);
|
||||
}
|
||||
|
||||
|
|
|
@ -305,6 +305,7 @@ static SNode* logicNodeCopy(const SLogicNode* pSrc, SLogicNode* pDst) {
|
|||
CLONE_NODE_FIELD(pConditions);
|
||||
CLONE_NODE_LIST_FIELD(pChildren);
|
||||
COPY_SCALAR_FIELD(optimizedFlag);
|
||||
COPY_SCALAR_FIELD(precision);
|
||||
return (SNode*)pDst;
|
||||
}
|
||||
|
||||
|
@ -328,6 +329,10 @@ static SNode* logicScanCopy(const SScanLogicNode* pSrc, SScanLogicNode* pDst) {
|
|||
COPY_SCALAR_FIELD(intervalUnit);
|
||||
COPY_SCALAR_FIELD(slidingUnit);
|
||||
CLONE_NODE_FIELD(pTagCond);
|
||||
COPY_SCALAR_FIELD(triggerType);
|
||||
COPY_SCALAR_FIELD(watermark);
|
||||
COPY_SCALAR_FIELD(tsColId);
|
||||
COPY_SCALAR_FIELD(filesFactor);
|
||||
return (SNode*)pDst;
|
||||
}
|
||||
|
||||
|
@ -384,6 +389,7 @@ static SNode* logicWindowCopy(const SWindowLogicNode* pSrc, SWindowLogicNode* pD
|
|||
CLONE_NODE_FIELD(pStateExpr);
|
||||
COPY_SCALAR_FIELD(triggerType);
|
||||
COPY_SCALAR_FIELD(watermark);
|
||||
COPY_SCALAR_FIELD(filesFactor);
|
||||
return (SNode*)pDst;
|
||||
}
|
||||
|
||||
|
|
|
@ -1133,6 +1133,7 @@ static const char* jkTableScanPhysiPlanSlidingUnit = "slidingUnit";
|
|||
static const char* jkTableScanPhysiPlanTriggerType = "triggerType";
|
||||
static const char* jkTableScanPhysiPlanWatermark = "watermark";
|
||||
static const char* jkTableScanPhysiPlanTsColId = "tsColId";
|
||||
static const char* jkTableScanPhysiPlanFilesFactor = "FilesFactor";
|
||||
|
||||
static int32_t physiTableScanNodeToJson(const void* pObj, SJson* pJson) {
|
||||
const STableScanPhysiNode* pNode = (const STableScanPhysiNode*)pObj;
|
||||
|
@ -1183,6 +1184,9 @@ static int32_t physiTableScanNodeToJson(const void* pObj, SJson* pJson) {
|
|||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code = tjsonAddIntegerToObject(pJson, jkTableScanPhysiPlanTsColId, pNode->tsColId);
|
||||
}
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code = tjsonAddDoubleToObject(pJson, jkTableScanPhysiPlanFilesFactor, pNode->filesFactor);
|
||||
}
|
||||
|
||||
return code;
|
||||
}
|
||||
|
@ -1242,7 +1246,9 @@ static int32_t jsonToPhysiTableScanNode(const SJson* pJson, void* pObj) {
|
|||
if (TSDB_CODE_SUCCESS == code) {
|
||||
tjsonGetNumberValue(pJson, jkTableScanPhysiPlanTsColId, pNode->tsColId, code);
|
||||
}
|
||||
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code = tjsonGetDoubleValue(pJson, jkTableScanPhysiPlanFilesFactor, &pNode->filesFactor);
|
||||
}
|
||||
return code;
|
||||
}
|
||||
|
||||
|
@ -1496,6 +1502,7 @@ static const char* jkWindowPhysiPlanFuncs = "Funcs";
|
|||
static const char* jkWindowPhysiPlanTsPk = "TsPk";
|
||||
static const char* jkWindowPhysiPlanTriggerType = "TriggerType";
|
||||
static const char* jkWindowPhysiPlanWatermark = "Watermark";
|
||||
static const char* jkWindowPhysiPlanFilesFactor = "FilesFactor";
|
||||
|
||||
static int32_t physiWindowNodeToJson(const void* pObj, SJson* pJson) {
|
||||
const SWinodwPhysiNode* pNode = (const SWinodwPhysiNode*)pObj;
|
||||
|
@ -1516,6 +1523,9 @@ static int32_t physiWindowNodeToJson(const void* pObj, SJson* pJson) {
|
|||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code = tjsonAddIntegerToObject(pJson, jkWindowPhysiPlanWatermark, pNode->watermark);
|
||||
}
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code = tjsonAddDoubleToObject(pJson, jkWindowPhysiPlanFilesFactor, pNode->filesFactor);
|
||||
}
|
||||
|
||||
return code;
|
||||
}
|
||||
|
@ -1541,6 +1551,9 @@ static int32_t jsonToPhysiWindowNode(const SJson* pJson, void* pObj) {
|
|||
tjsonGetNumberValue(pJson, jkWindowPhysiPlanWatermark, pNode->watermark, code);
|
||||
;
|
||||
}
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code = tjsonGetDoubleValue(pJson, jkWindowPhysiPlanFilesFactor, &pNode->filesFactor);
|
||||
}
|
||||
|
||||
return code;
|
||||
}
|
||||
|
|
|
@ -59,7 +59,6 @@ typedef enum EDatabaseOptionType {
|
|||
|
||||
typedef enum ETableOptionType {
|
||||
TABLE_OPTION_COMMENT = 1,
|
||||
TABLE_OPTION_DELAY,
|
||||
TABLE_OPTION_FILE_FACTOR,
|
||||
TABLE_OPTION_ROLLUP,
|
||||
TABLE_OPTION_TTL,
|
||||
|
|
|
@ -313,7 +313,6 @@ tags_def(A) ::= TAGS NK_LP column_def_list(B) NK_RP.
|
|||
|
||||
table_options(A) ::= . { A = createDefaultTableOptions(pCxt); }
|
||||
table_options(A) ::= table_options(B) COMMENT NK_STRING(C). { A = setTableOption(pCxt, B, TABLE_OPTION_COMMENT, &C); }
|
||||
table_options(A) ::= table_options(B) DELAY NK_INTEGER(C). { A = setTableOption(pCxt, B, TABLE_OPTION_DELAY, &C); }
|
||||
table_options(A) ::= table_options(B) FILE_FACTOR NK_FLOAT(C). { A = setTableOption(pCxt, B, TABLE_OPTION_FILE_FACTOR, &C); }
|
||||
table_options(A) ::= table_options(B) ROLLUP NK_LP func_name_list(C) NK_RP. { A = setTableOption(pCxt, B, TABLE_OPTION_ROLLUP, C); }
|
||||
table_options(A) ::= table_options(B) TTL NK_INTEGER(C). { A = setTableOption(pCxt, B, TABLE_OPTION_TTL, &C); }
|
||||
|
@ -408,7 +407,7 @@ cmd ::= CREATE TOPIC not_exists_opt(A)
|
|||
cmd ::= CREATE TOPIC not_exists_opt(A)
|
||||
topic_name(B) topic_options(D) AS db_name(C). { pCxt->pRootNode = createCreateTopicStmt(pCxt, A, &B, NULL, &C, D); }
|
||||
cmd ::= DROP TOPIC exists_opt(A) topic_name(B). { pCxt->pRootNode = createDropTopicStmt(pCxt, A, &B); }
|
||||
cmd ::= DROP CGROUP exists_opt(A) cgroup_name(B) ON topic_name(C). { pCxt->pRootNode = createDropCGroupStmt(pCxt, A, &B, &C); }
|
||||
cmd ::= DROP CONSUMER GROUP exists_opt(A) cgroup_name(B) ON topic_name(C). { pCxt->pRootNode = createDropCGroupStmt(pCxt, A, &B, &C); }
|
||||
|
||||
topic_options(A) ::= . { A = createTopicOptions(pCxt); }
|
||||
topic_options(A) ::= topic_options(B) WITH TABLE. { ((STopicOptions*)B)->withTable = true; A = B; }
|
||||
|
|
|
@ -857,7 +857,6 @@ SNode* createDefaultTableOptions(SAstCreateContext* pCxt) {
|
|||
CHECK_PARSER_STATUS(pCxt);
|
||||
STableOptions* pOptions = nodesMakeNode(QUERY_NODE_TABLE_OPTIONS);
|
||||
CHECK_OUT_OF_MEM(pOptions);
|
||||
pOptions->delay = TSDB_DEFAULT_ROLLUP_DELAY;
|
||||
pOptions->filesFactor = TSDB_DEFAULT_ROLLUP_FILE_FACTOR;
|
||||
pOptions->ttl = TSDB_DEFAULT_TABLE_TTL;
|
||||
return (SNode*)pOptions;
|
||||
|
@ -867,7 +866,6 @@ SNode* createAlterTableOptions(SAstCreateContext* pCxt) {
|
|||
CHECK_PARSER_STATUS(pCxt);
|
||||
STableOptions* pOptions = nodesMakeNode(QUERY_NODE_TABLE_OPTIONS);
|
||||
CHECK_OUT_OF_MEM(pOptions);
|
||||
pOptions->delay = -1;
|
||||
pOptions->filesFactor = -1;
|
||||
pOptions->ttl = -1;
|
||||
return (SNode*)pOptions;
|
||||
|
@ -882,11 +880,8 @@ SNode* setTableOption(SAstCreateContext* pCxt, SNode* pOptions, ETableOptionType
|
|||
sizeof(((STableOptions*)pOptions)->comment));
|
||||
}
|
||||
break;
|
||||
case TABLE_OPTION_DELAY:
|
||||
((STableOptions*)pOptions)->delay = taosStr2Int32(((SToken*)pVal)->z, NULL, 10);
|
||||
break;
|
||||
case TABLE_OPTION_FILE_FACTOR:
|
||||
((STableOptions*)pOptions)->filesFactor = taosStr2Float(((SToken*)pVal)->z, NULL);
|
||||
((STableOptions*)pOptions)->filesFactor = taosStr2Double(((SToken*)pVal)->z, NULL);
|
||||
break;
|
||||
case TABLE_OPTION_ROLLUP:
|
||||
((STableOptions*)pOptions)->pRollupFuncs = pVal;
|
||||
|
|
|
@ -53,7 +53,6 @@ static SKeyword keywordTable[] = {
|
|||
{"CACHE", TK_CACHE},
|
||||
{"CACHELAST", TK_CACHELAST},
|
||||
{"CAST", TK_CAST},
|
||||
{"CGROUP", TK_CGROUP},
|
||||
{"CLUSTER", TK_CLUSTER},
|
||||
{"COLUMN", TK_COLUMN},
|
||||
{"COMMENT", TK_COMMENT},
|
||||
|
@ -62,13 +61,13 @@ static SKeyword keywordTable[] = {
|
|||
{"CONNS", TK_CONNS},
|
||||
{"CONNECTION", TK_CONNECTION},
|
||||
{"CONNECTIONS", TK_CONNECTIONS},
|
||||
{"CONSUMER", TK_CONSUMER},
|
||||
{"COUNT", TK_COUNT},
|
||||
{"CREATE", TK_CREATE},
|
||||
{"DATABASE", TK_DATABASE},
|
||||
{"DATABASES", TK_DATABASES},
|
||||
{"DAYS", TK_DAYS},
|
||||
{"DBS", TK_DBS},
|
||||
{"DELAY", TK_DELAY},
|
||||
{"DESC", TK_DESC},
|
||||
{"DESCRIBE", TK_DESCRIBE},
|
||||
{"DISTINCT", TK_DISTINCT},
|
||||
|
|
|
@ -465,20 +465,22 @@ static bool isPrimaryKey(STempTableNode* pTable, SNode* pExpr) {
|
|||
return isPrimaryKeyImpl(pTable, pExpr);
|
||||
}
|
||||
|
||||
static bool findAndSetColumn(SColumnNode** pColRef, const STableNode* pTable) {
|
||||
static int32_t findAndSetColumn(STranslateContext* pCxt, SColumnNode** pColRef, const STableNode* pTable,
|
||||
bool* pFound) {
|
||||
SColumnNode* pCol = *pColRef;
|
||||
bool found = false;
|
||||
*pFound = false;
|
||||
if (QUERY_NODE_REAL_TABLE == nodeType(pTable)) {
|
||||
const STableMeta* pMeta = ((SRealTableNode*)pTable)->pMeta;
|
||||
if (isInternalPrimaryKey(pCol)) {
|
||||
setColumnInfoBySchema((SRealTableNode*)pTable, pMeta->schema, false, pCol);
|
||||
return true;
|
||||
*pFound = true;
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
int32_t nums = pMeta->tableInfo.numOfTags + pMeta->tableInfo.numOfColumns;
|
||||
for (int32_t i = 0; i < nums; ++i) {
|
||||
if (0 == strcmp(pCol->colName, pMeta->schema[i].name)) {
|
||||
setColumnInfoBySchema((SRealTableNode*)pTable, pMeta->schema + i, (i >= pMeta->tableInfo.numOfColumns), pCol);
|
||||
found = true;
|
||||
*pFound = true;
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
@ -489,13 +491,15 @@ static bool findAndSetColumn(SColumnNode** pColRef, const STableNode* pTable) {
|
|||
SExprNode* pExpr = (SExprNode*)pNode;
|
||||
if (0 == strcmp(pCol->colName, pExpr->aliasName) ||
|
||||
(isPrimaryKey((STempTableNode*)pTable, pNode) && isInternalPrimaryKey(pCol))) {
|
||||
if (*pFound) {
|
||||
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_AMBIGUOUS_COLUMN, pCol->colName);
|
||||
}
|
||||
setColumnInfoByExpr(pTable, pExpr, pColRef);
|
||||
found = true;
|
||||
break;
|
||||
*pFound = true;
|
||||
}
|
||||
}
|
||||
}
|
||||
return found;
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
static EDealRes translateColumnWithPrefix(STranslateContext* pCxt, SColumnNode** pCol) {
|
||||
|
@ -506,7 +510,12 @@ static EDealRes translateColumnWithPrefix(STranslateContext* pCxt, SColumnNode**
|
|||
STableNode* pTable = taosArrayGetP(pTables, i);
|
||||
if (belongTable(pCxt->pParseCxt->db, (*pCol), pTable)) {
|
||||
foundTable = true;
|
||||
if (findAndSetColumn(pCol, pTable)) {
|
||||
bool foundCol = false;
|
||||
pCxt->errCode = findAndSetColumn(pCxt, pCol, pTable, &foundCol);
|
||||
if (TSDB_CODE_SUCCESS != pCxt->errCode) {
|
||||
return DEAL_RES_ERROR;
|
||||
}
|
||||
if (foundCol) {
|
||||
break;
|
||||
}
|
||||
return generateDealNodeErrMsg(pCxt, TSDB_CODE_PAR_INVALID_COLUMN, (*pCol)->colName);
|
||||
|
@ -525,16 +534,21 @@ static EDealRes translateColumnWithoutPrefix(STranslateContext* pCxt, SColumnNod
|
|||
bool isInternalPk = isInternalPrimaryKey(*pCol);
|
||||
for (size_t i = 0; i < nums; ++i) {
|
||||
STableNode* pTable = taosArrayGetP(pTables, i);
|
||||
if (findAndSetColumn(pCol, pTable)) {
|
||||
bool foundCol = false;
|
||||
pCxt->errCode = findAndSetColumn(pCxt, pCol, pTable, &foundCol);
|
||||
if (TSDB_CODE_SUCCESS != pCxt->errCode) {
|
||||
return DEAL_RES_ERROR;
|
||||
}
|
||||
if (foundCol) {
|
||||
if (found) {
|
||||
return generateDealNodeErrMsg(pCxt, TSDB_CODE_PAR_AMBIGUOUS_COLUMN, (*pCol)->colName);
|
||||
}
|
||||
found = true;
|
||||
}
|
||||
if (isInternalPk) {
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
if (!found) {
|
||||
if (isInternalPk) {
|
||||
if (NULL != pCxt->pCurrStmt->pWindow) {
|
||||
|
@ -1939,7 +1953,9 @@ static int32_t createPrimaryKeyColByTable(STranslateContext* pCxt, STableNode* p
|
|||
}
|
||||
pCol->colId = PRIMARYKEY_TIMESTAMP_COL_ID;
|
||||
strcpy(pCol->colName, PK_TS_COL_INTERNAL_NAME);
|
||||
if (!findAndSetColumn(&pCol, pTable)) {
|
||||
bool found = false;
|
||||
int32_t code = findAndSetColumn(pCxt, &pCol, pTable, &found);
|
||||
if (TSDB_CODE_SUCCESS != code || !found) {
|
||||
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_TIMELINE_FUNC);
|
||||
}
|
||||
*pPrimaryKey = (SNode*)pCol;
|
||||
|
@ -2617,10 +2633,7 @@ static int32_t checkTableSchema(STranslateContext* pCxt, SCreateTableStmt* pStmt
|
|||
}
|
||||
|
||||
static int32_t checkCreateTable(STranslateContext* pCxt, SCreateTableStmt* pStmt) {
|
||||
int32_t code = checkRangeOption(pCxt, "delay", pStmt->pOptions->delay, TSDB_MIN_ROLLUP_DELAY, TSDB_MAX_ROLLUP_DELAY);
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code = checTableFactorOption(pCxt, pStmt->pOptions->filesFactor);
|
||||
}
|
||||
int32_t code = checTableFactorOption(pCxt, pStmt->pOptions->filesFactor);
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code = checkTableRollupOption(pCxt, pStmt->pOptions->pRollupFuncs);
|
||||
}
|
||||
|
@ -2861,7 +2874,6 @@ static int32_t buildRollupAst(STranslateContext* pCxt, SCreateTableStmt* pStmt,
|
|||
static int32_t buildCreateStbReq(STranslateContext* pCxt, SCreateTableStmt* pStmt, SMCreateStbReq* pReq) {
|
||||
pReq->igExists = pStmt->ignoreExists;
|
||||
pReq->xFilesFactor = pStmt->pOptions->filesFactor;
|
||||
pReq->delay = pStmt->pOptions->delay;
|
||||
pReq->ttl = pStmt->pOptions->ttl;
|
||||
columnDefNodeToField(pStmt->pCols, &pReq->pColumns);
|
||||
columnDefNodeToField(pStmt->pTags, &pReq->pTags);
|
||||
|
|
File diff suppressed because it is too large
Load Diff
|
@ -298,14 +298,12 @@ TEST_F(ParserInitialCTest, createStable) {
|
|||
|
||||
auto setCreateStbReqFunc = [&](const char* pTbname, int8_t igExists = 0,
|
||||
float xFilesFactor = TSDB_DEFAULT_ROLLUP_FILE_FACTOR,
|
||||
int32_t delay = TSDB_DEFAULT_ROLLUP_DELAY, int32_t ttl = TSDB_DEFAULT_TABLE_TTL,
|
||||
const char* pComment = nullptr) {
|
||||
int32_t ttl = TSDB_DEFAULT_TABLE_TTL, const char* pComment = nullptr) {
|
||||
memset(&expect, 0, sizeof(SMCreateStbReq));
|
||||
int32_t len = snprintf(expect.name, sizeof(expect.name), "0.test.%s", pTbname);
|
||||
expect.name[len] = '\0';
|
||||
expect.igExists = igExists;
|
||||
expect.xFilesFactor = xFilesFactor;
|
||||
expect.delay = delay;
|
||||
expect.ttl = ttl;
|
||||
if (nullptr != pComment) {
|
||||
expect.comment = strdup(pComment);
|
||||
|
@ -393,7 +391,7 @@ TEST_F(ParserInitialCTest, createStable) {
|
|||
addFieldToCreateStbReqFunc(false, "id", TSDB_DATA_TYPE_INT);
|
||||
run("CREATE STABLE t1(ts TIMESTAMP, c1 INT) TAGS(id INT)");
|
||||
|
||||
setCreateStbReqFunc("t1", 1, 0.1, 2, 100, "test create table");
|
||||
setCreateStbReqFunc("t1", 1, 0.1, 100, "test create table");
|
||||
addFieldToCreateStbReqFunc(true, "ts", TSDB_DATA_TYPE_TIMESTAMP, 0, 0);
|
||||
addFieldToCreateStbReqFunc(true, "c1", TSDB_DATA_TYPE_INT);
|
||||
addFieldToCreateStbReqFunc(true, "c2", TSDB_DATA_TYPE_UINT);
|
||||
|
@ -431,7 +429,7 @@ TEST_F(ParserInitialCTest, createStable) {
|
|||
"TAGS (a1 TIMESTAMP, a2 INT, a3 INT UNSIGNED, a4 BIGINT, a5 BIGINT UNSIGNED, a6 FLOAT, a7 DOUBLE, "
|
||||
"a8 BINARY(20), a9 SMALLINT, a10 SMALLINT UNSIGNED COMMENT 'test column comment', a11 TINYINT, "
|
||||
"a12 TINYINT UNSIGNED, a13 BOOL, a14 NCHAR(30), a15 VARCHAR(50)) "
|
||||
"TTL 100 COMMENT 'test create table' SMA(c1, c2, c3) ROLLUP (MIN) FILE_FACTOR 0.1 DELAY 2");
|
||||
"TTL 100 COMMENT 'test create table' SMA(c1, c2, c3) ROLLUP (MIN) FILE_FACTOR 0.1");
|
||||
}
|
||||
|
||||
TEST_F(ParserInitialCTest, createStream) {
|
||||
|
@ -464,7 +462,7 @@ TEST_F(ParserInitialCTest, createTable) {
|
|||
"TAGS (a1 TIMESTAMP, a2 INT, a3 INT UNSIGNED, a4 BIGINT, a5 BIGINT UNSIGNED, a6 FLOAT, a7 DOUBLE, a8 BINARY(20), "
|
||||
"a9 SMALLINT, a10 SMALLINT UNSIGNED COMMENT 'test column comment', a11 TINYINT, a12 TINYINT UNSIGNED, a13 BOOL, "
|
||||
"a14 NCHAR(30), a15 VARCHAR(50)) "
|
||||
"TTL 100 COMMENT 'test create table' SMA(c1, c2, c3) ROLLUP (MIN) FILE_FACTOR 0.1 DELAY 2");
|
||||
"TTL 100 COMMENT 'test create table' SMA(c1, c2, c3) ROLLUP (MIN) FILE_FACTOR 0.1");
|
||||
|
||||
run("CREATE TABLE IF NOT EXISTS t1 USING st1 TAGS(1, 'wxy')");
|
||||
|
||||
|
|
|
@ -32,7 +32,7 @@ TEST_F(ParserInitialDTest, dropBnode) {
|
|||
run("DROP BNODE ON DNODE 1");
|
||||
}
|
||||
|
||||
// DROP CGROUP [ IF EXISTS ] cgroup_name ON topic_name
|
||||
// DROP CONSUMER GROUP [ IF EXISTS ] cgroup_name ON topic_name
|
||||
TEST_F(ParserInitialDTest, dropCGroup) {
|
||||
useDb("root", "test");
|
||||
|
||||
|
@ -56,10 +56,10 @@ TEST_F(ParserInitialDTest, dropCGroup) {
|
|||
});
|
||||
|
||||
setDropCgroupReqFunc("tp1", "cg1");
|
||||
run("DROP CGROUP cg1 ON tp1");
|
||||
run("DROP CONSUMER GROUP cg1 ON tp1");
|
||||
|
||||
setDropCgroupReqFunc("tp1", "cg1", 1);
|
||||
run("DROP CGROUP IF EXISTS cg1 ON tp1");
|
||||
run("DROP CONSUMER GROUP IF EXISTS cg1 ON tp1");
|
||||
}
|
||||
|
||||
// todo drop database
|
||||
|
|
|
@ -252,6 +252,8 @@ TEST_F(ParserSelectTest, semanticError) {
|
|||
// TSDB_CODE_PAR_AMBIGUOUS_COLUMN
|
||||
run("SELECT c2 FROM t1 tt1, t1 tt2 WHERE tt1.c1 = tt2.c1", TSDB_CODE_PAR_AMBIGUOUS_COLUMN, PARSER_STAGE_TRANSLATE);
|
||||
|
||||
run("SELECT c2 FROM (SELECT c1 c2, c2 FROM t1)", TSDB_CODE_PAR_AMBIGUOUS_COLUMN, PARSER_STAGE_TRANSLATE);
|
||||
|
||||
// TSDB_CODE_PAR_WRONG_VALUE_TYPE
|
||||
run("SELECT timestamp '2010a' FROM t1", TSDB_CODE_PAR_WRONG_VALUE_TYPE, PARSER_STAGE_TRANSLATE);
|
||||
|
||||
|
|
|
@ -124,6 +124,7 @@ static int32_t createChildLogicNode(SLogicPlanContext* pCxt, SSelectStmt* pSelec
|
|||
SLogicNode* pNode = NULL;
|
||||
int32_t code = func(pCxt, pSelect, &pNode);
|
||||
if (TSDB_CODE_SUCCESS == code && NULL != pNode) {
|
||||
pNode->precision = pSelect->precision;
|
||||
code = pushLogicNode(pCxt, pRoot, pNode);
|
||||
}
|
||||
if (TSDB_CODE_SUCCESS != code) {
|
||||
|
@ -400,6 +401,7 @@ static int32_t createLogicNodeByTable(SLogicPlanContext* pCxt, SSelectStmt* pSel
|
|||
nodesDestroyNode(pNode);
|
||||
return TSDB_CODE_OUT_OF_MEMORY;
|
||||
}
|
||||
pNode->precision = pSelect->precision;
|
||||
*pLogicNode = pNode;
|
||||
}
|
||||
return code;
|
||||
|
@ -485,6 +487,10 @@ static int32_t createWindowLogicNodeFinalize(SLogicPlanContext* pCxt, SSelectStm
|
|||
pWindow->watermark = pCxt->pPlanCxt->watermark;
|
||||
}
|
||||
|
||||
if (pCxt->pPlanCxt->rSmaQuery) {
|
||||
pWindow->filesFactor = pCxt->pPlanCxt->filesFactor;
|
||||
}
|
||||
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code = rewriteExprForSelect(pWindow->pFuncs, pSelect, SQL_CLAUSE_WINDOW);
|
||||
}
|
||||
|
|
|
@ -99,7 +99,8 @@ static bool osdMayBeOptimized(SLogicNode* pNode) {
|
|||
return false;
|
||||
}
|
||||
// todo: release after function splitting
|
||||
if (TSDB_SUPER_TABLE == ((SScanLogicNode*)pNode)->pMeta->tableType) {
|
||||
if (TSDB_SUPER_TABLE == ((SScanLogicNode*)pNode)->pMeta->tableType &&
|
||||
SCAN_TYPE_STREAM != ((SScanLogicNode*)pNode)->scanType) {
|
||||
return false;
|
||||
}
|
||||
if (NULL == pNode->pParent || (QUERY_NODE_LOGIC_PLAN_WINDOW != nodeType(pNode->pParent) &&
|
||||
|
@ -226,6 +227,7 @@ static void setScanWindowInfo(SScanLogicNode* pScan) {
|
|||
pScan->triggerType = ((SWindowLogicNode*)pScan->node.pParent)->triggerType;
|
||||
pScan->watermark = ((SWindowLogicNode*)pScan->node.pParent)->watermark;
|
||||
pScan->tsColId = ((SColumnNode*)((SWindowLogicNode*)pScan->node.pParent)->pTspk)->colId;
|
||||
pScan->filesFactor = ((SWindowLogicNode*)pScan->node.pParent)->filesFactor;
|
||||
}
|
||||
}
|
||||
|
||||
|
|
|
@ -508,6 +508,7 @@ static int32_t createTableScanPhysiNode(SPhysiPlanContext* pCxt, SSubplan* pSubp
|
|||
pTableScan->triggerType = pScanLogicNode->triggerType;
|
||||
pTableScan->watermark = pScanLogicNode->watermark;
|
||||
pTableScan->tsColId = pScanLogicNode->tsColId;
|
||||
pTableScan->filesFactor = pScanLogicNode->filesFactor;
|
||||
|
||||
return createScanPhysiNodeFinalize(pCxt, pSubplan, pScanLogicNode, (SScanPhysiNode*)pTableScan, pPhyNode);
|
||||
}
|
||||
|
@ -920,6 +921,7 @@ static int32_t createWindowPhysiNodeFinalize(SPhysiPlanContext* pCxt, SNodeList*
|
|||
|
||||
pWindow->triggerType = pWindowLogicNode->triggerType;
|
||||
pWindow->watermark = pWindowLogicNode->watermark;
|
||||
pWindow->filesFactor = pWindowLogicNode->filesFactor;
|
||||
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
*pPhyNode = (SPhysiNode*)pWindow;
|
||||
|
|
|
@ -79,7 +79,11 @@ int32_t qwExecTask(QW_FPARAMS_DEF, SQWTaskCtx *ctx, bool *queryEnd) {
|
|||
if (taskHandle) {
|
||||
code = qExecTask(taskHandle, &pRes, &useconds);
|
||||
if (code) {
|
||||
if (code != TSDB_CODE_OPS_NOT_SUPPORT) {
|
||||
QW_TASK_ELOG("qExecTask failed, code:%x - %s", code, tstrerror(code));
|
||||
} else {
|
||||
QW_TASK_DLOG("qExecTask failed, code:%x - %s", code, tstrerror(code));
|
||||
}
|
||||
QW_ERR_RET(code);
|
||||
}
|
||||
}
|
||||
|
|
|
@ -17,9 +17,7 @@ TARGET_INCLUDE_DIRECTORIES(
|
|||
PUBLIC "${TD_SOURCE_DIR}/source/libs/parser/inc"
|
||||
PRIVATE "${TD_SOURCE_DIR}/source/libs/scalar/inc"
|
||||
)
|
||||
if(NOT TD_WINDOWS)
|
||||
add_test(
|
||||
NAME scalarTest
|
||||
COMMAND scalarTest
|
||||
)
|
||||
endif(NOT TD_WINDOWS)
|
||||
|
|
|
@ -2498,7 +2498,7 @@ TEST(ScalarFunctionTest, tanFunction_column) {
|
|||
code = tanFunction(pInput, 1, pOutput);
|
||||
ASSERT_EQ(code, TSDB_CODE_SUCCESS);
|
||||
for (int32_t i = 0; i < rowNum; ++i) {
|
||||
ASSERT_EQ(*((double *)colDataGetData(pOutput->columnData, i)), result[i]);
|
||||
ASSERT_NEAR(*((double *)colDataGetData(pOutput->columnData, i)), result[i], 1e-15);
|
||||
PRINTF("tiny_int after TAN:%f\n", *((double *)colDataGetData(pOutput->columnData, i)));
|
||||
}
|
||||
scltDestroyDataBlock(pInput);
|
||||
|
@ -2517,7 +2517,7 @@ TEST(ScalarFunctionTest, tanFunction_column) {
|
|||
code = tanFunction(pInput, 1, pOutput);
|
||||
ASSERT_EQ(code, TSDB_CODE_SUCCESS);
|
||||
for (int32_t i = 0; i < rowNum; ++i) {
|
||||
ASSERT_EQ(*((double *)colDataGetData(pOutput->columnData, i)), result[i]);
|
||||
ASSERT_NEAR(*((double *)colDataGetData(pOutput->columnData, i)), result[i], 1e-15);
|
||||
PRINTF("float after TAN:%f\n", *((double *)colDataGetData(pOutput->columnData, i)));
|
||||
}
|
||||
|
||||
|
|
|
@ -42,7 +42,7 @@ static void windowSBfAdd(SUpdateInfo *pInfo, uint64_t count) {
|
|||
}
|
||||
|
||||
static void windowSBfDelete(SUpdateInfo *pInfo, uint64_t count) {
|
||||
if (count < pInfo->numSBFs - 1) {
|
||||
if (count < pInfo->numSBFs) {
|
||||
for (uint64_t i = 0; i < count; ++i) {
|
||||
SScalableBf *pTsSBFs = taosArrayGetP(pInfo->pTsSBFs, 0);
|
||||
tScalableBfDestroy(pTsSBFs);
|
||||
|
@ -73,8 +73,8 @@ static int64_t adjustInterval(int64_t interval, int32_t precision) {
|
|||
}
|
||||
|
||||
static int64_t adjustWatermark(int64_t adjInterval, int64_t originInt, int64_t watermark) {
|
||||
if (watermark <= 0) {
|
||||
watermark = TMIN(originInt/adjInterval, 1) * adjInterval;
|
||||
if (watermark <= adjInterval) {
|
||||
watermark = TMAX(originInt/adjInterval, 1) * adjInterval;
|
||||
} else if (watermark > MAX_NUM_SCALABLE_BF * adjInterval) {
|
||||
watermark = MAX_NUM_SCALABLE_BF * adjInterval;
|
||||
}/* else if (watermark < MIN_NUM_SCALABLE_BF * adjInterval) {
|
||||
|
|
|
@ -494,6 +494,7 @@ class TDDnodes:
|
|||
self.simDeployed = False
|
||||
self.testCluster = False
|
||||
self.valgrind = 0
|
||||
self.killValgrind = 1
|
||||
|
||||
def init(self, path, remoteIP = ""):
|
||||
psCmd = "ps -ef|grep -w taosd| grep -v grep| grep -v defunct | awk '{print $2}'"
|
||||
|
@ -505,6 +506,7 @@ class TDDnodes:
|
|||
processID = subprocess.check_output(
|
||||
psCmd, shell=True).decode("utf-8")
|
||||
|
||||
if self.killValgrind == 1:
|
||||
psCmd = "ps -ef|grep -w valgrind.bin| grep -v grep | awk '{print $2}'"
|
||||
processID = subprocess.check_output(psCmd, shell=True).decode("utf-8")
|
||||
while(processID):
|
||||
|
@ -549,6 +551,9 @@ class TDDnodes:
|
|||
def setValgrind(self, value):
|
||||
self.valgrind = value
|
||||
|
||||
def setKillValgrind(self, value):
|
||||
self.killValgrind = value
|
||||
|
||||
def deploy(self, index, *updatecfgDict):
|
||||
self.sim.setTestCluster(self.testCluster)
|
||||
|
||||
|
@ -622,6 +627,7 @@ class TDDnodes:
|
|||
processID = subprocess.check_output(
|
||||
psCmd, shell=True).decode("utf-8")
|
||||
|
||||
if self.killValgrind == 1:
|
||||
psCmd = "ps -ef|grep -w valgrind.bin| grep -v grep | awk '{print $2}'"
|
||||
processID = subprocess.check_output(psCmd, shell=True).decode("utf-8")
|
||||
while(processID):
|
||||
|
|
|
@ -9,7 +9,7 @@ sql create database d0 keep 365000d,365000d,365000d
|
|||
sql use d0
|
||||
|
||||
print =============== create super table and register rsma
|
||||
sql create table if not exists stb (ts timestamp, c1 int) tags (city binary(20),district binary(20)) rollup(min) file_factor 0.1 delay 2;
|
||||
sql create table if not exists stb (ts timestamp, c1 int) tags (city binary(20),district binary(20)) rollup(min) file_factor 0.1;
|
||||
|
||||
sql show stables
|
||||
if $rows != 1 then
|
||||
|
|
|
@ -9,7 +9,7 @@ sql create database d0 retentions 15s:7d,1m:21d,15m:365d;
|
|||
sql use d0
|
||||
|
||||
print =============== create super table and register rsma
|
||||
sql create table if not exists stb (ts timestamp, c1 int) tags (city binary(20),district binary(20)) rollup(min) file_factor 0.1 delay 2;
|
||||
sql create table if not exists stb (ts timestamp, c1 int) tags (city binary(20),district binary(20)) rollup(min) file_factor 0.1;
|
||||
|
||||
sql show stables
|
||||
if $rows != 1 then
|
||||
|
|
|
@ -5,7 +5,7 @@ sleep 50
|
|||
sql connect
|
||||
|
||||
print =============== create database
|
||||
sql create database d1
|
||||
sql create database d1 vgroups 1
|
||||
sql use d1
|
||||
|
||||
print =============== create super table, include column type for count/sum/min/max/first
|
||||
|
|
|
@ -23,7 +23,7 @@ sql insert into t1 values(1648791223001,10,2,3,1.1,2);
|
|||
sql insert into t1 values(1648791233002,3,2,3,2.1,3);
|
||||
sql insert into t1 values(1648791243003,NULL,NULL,NULL,NULL,4);
|
||||
sql insert into t1 values(1648791213002,NULL,NULL,NULL,NULL,5) (1648791233012,NULL,NULL,NULL,NULL,6);
|
||||
|
||||
sleep 300
|
||||
sql select * from streamt order by s desc;
|
||||
|
||||
# row 0
|
||||
|
@ -115,7 +115,7 @@ sql insert into t1 values(1648791233002,3,2,3,2.1,9);
|
|||
sql insert into t1 values(1648791243003,4,2,3,3.1,10);
|
||||
sql insert into t1 values(1648791213002,4,2,3,4.1,11) ;
|
||||
sql insert into t1 values(1648791213002,4,2,3,4.1,12) (1648791223009,4,2,3,4.1,13);
|
||||
|
||||
sleep 300
|
||||
sql select * from streamt order by s desc ;
|
||||
|
||||
# row 0
|
||||
|
|
|
@ -22,7 +22,7 @@ sql insert into t1 values(1648791210000,1,1,1,1.1,1);
|
|||
sql insert into t1 values(1648791220000,2,2,2,2.1,2);
|
||||
sql insert into t1 values(1648791230000,3,3,3,3.1,3);
|
||||
sql insert into t1 values(1648791240000,4,4,4,4.1,4);
|
||||
|
||||
sleep 300
|
||||
sql select * from streamt order by s desc;
|
||||
|
||||
# row 0
|
||||
|
@ -50,7 +50,7 @@ sql insert into t1 values(1648791250005,5,5,5,5.1,5);
|
|||
sql insert into t1 values(1648791260006,6,6,6,6.1,6);
|
||||
sql insert into t1 values(1648791270007,7,7,7,7.1,7);
|
||||
sql insert into t1 values(1648791240005,5,5,5,5.1,8) (1648791250006,6,6,6,6.1,9);
|
||||
|
||||
sleep 300
|
||||
sql select * from streamt order by s desc;
|
||||
|
||||
# row 0
|
||||
|
@ -100,7 +100,7 @@ sql insert into t1 values(1648791260007,7,7,7,7.1,12) (1648791290008,7,7,7,7.1,1
|
|||
sql insert into t1 values(1648791500000,7,7,7,7.1,15) (1648791520000,8,8,8,8.1,16) (1648791540000,8,8,8,8.1,17);
|
||||
sql insert into t1 values(1648791530000,8,8,8,8.1,18);
|
||||
sql insert into t1 values(1648791220000,10,10,10,10.1,19) (1648791290008,2,2,2,2.1,20) (1648791540000,17,17,17,17.1,21) (1648791500001,22,22,22,22.1,22);
|
||||
|
||||
sleep 300
|
||||
sql select * from streamt order by s desc;
|
||||
|
||||
# row 0
|
||||
|
|
|
@ -94,92 +94,4 @@ if $data11 != 5 then
|
|||
return -1
|
||||
endi
|
||||
|
||||
sql create table t2(ts timestamp, a int, b int , c int, d double);
|
||||
sql create stream streams2 trigger window_close watermark 20s into streamt2 as select _wstartts, count(*) c1, count(d) c2 , sum(a) c3 , max(b) c4, min(c) c5 from t2 interval(10s);
|
||||
sql insert into t2 values(1648791213000,1,2,3,1.0);
|
||||
sql insert into t2 values(1648791239999,1,2,3,1.0);
|
||||
sleep 300
|
||||
sql select * from streamt2;
|
||||
if $rows != 0 then
|
||||
print ======$rows
|
||||
return -1
|
||||
endi
|
||||
|
||||
sql insert into t2 values(1648791240000,1,2,3,1.0);
|
||||
sleep 300
|
||||
sql select * from streamt2;
|
||||
if $rows != 1 then
|
||||
print ======$rows
|
||||
return -1
|
||||
endi
|
||||
if $data01 != 1 then
|
||||
print ======$data01
|
||||
return -1
|
||||
endi
|
||||
|
||||
sql insert into t2 values(1648791250001,1,2,3,1.0) (1648791250002,1,2,3,1.0) (1648791250003,1,2,3,1.0) (1648791240000,1,2,3,1.0);
|
||||
sleep 300
|
||||
sql select * from streamt2;
|
||||
if $rows != 1 then
|
||||
print ======$rows
|
||||
return -1
|
||||
endi
|
||||
if $data01 != 1 then
|
||||
print ======$data01
|
||||
return -1
|
||||
endi
|
||||
|
||||
sql insert into t2 values(1648791280000,1,2,3,1.0);
|
||||
sleep 300
|
||||
sql select * from streamt2;
|
||||
if $rows != 4 then
|
||||
print ======$rows
|
||||
return -1
|
||||
endi
|
||||
if $data01 != 1 then
|
||||
print ======$data01
|
||||
return -1
|
||||
endi
|
||||
if $data11 != 1 then
|
||||
print ======$data11
|
||||
return -1
|
||||
endi
|
||||
if $data21 != 1 then
|
||||
print ======$data21
|
||||
return -1
|
||||
endi
|
||||
if $data31 != 3 then
|
||||
print ======$data31
|
||||
return -1
|
||||
endi
|
||||
|
||||
sql insert into t2 values(1648791250001,1,2,3,1.0) (1648791250002,1,2,3,1.0) (1648791250003,1,2,3,1.0) (1648791280000,1,2,3,1.0) (1648791280001,1,2,3,1.0) (1648791280002,1,2,3,1.0) (1648791310000,1,2,3,1.0) (1648791280001,1,2,3,1.0);
|
||||
sleep 300
|
||||
sql select * from streamt2;
|
||||
|
||||
if $rows != 5 then
|
||||
print ======$rows
|
||||
return -1
|
||||
endi
|
||||
if $data01 != 1 then
|
||||
print ======$data01
|
||||
return -1
|
||||
endi
|
||||
if $data11 != 1 then
|
||||
print ======$data11
|
||||
return -1
|
||||
endi
|
||||
if $data21 != 1 then
|
||||
print ======$data21
|
||||
return -1
|
||||
endi
|
||||
if $data31 != 3 then
|
||||
print ======$data31
|
||||
return -1
|
||||
endi
|
||||
if $data41 != 3 then
|
||||
print ======$data31
|
||||
return -1
|
||||
endi
|
||||
|
||||
system sh/exec.sh -n dnode1 -s stop -x SIGINT
|
|
@ -0,0 +1,79 @@
|
|||
{
|
||||
"filetype": "insert",
|
||||
"cfgdir": "/etc/taos/",
|
||||
"host": "test216",
|
||||
"port": 6030,
|
||||
"user": "root",
|
||||
"password": "taosdata",
|
||||
"thread_count": 8,
|
||||
"thread_count_create_tbl": 8,
|
||||
"result_file": "./insert_res.txt",
|
||||
"confirm_parameter_prompt": "no",
|
||||
"insert_interval": 0,
|
||||
"interlace_rows": 1000,
|
||||
"num_of_records_per_req": 100000,
|
||||
"databases": [
|
||||
{
|
||||
"dbinfo": {
|
||||
"name": "db",
|
||||
"drop": "yes",
|
||||
"vgroups": 24
|
||||
},
|
||||
"super_tables": [
|
||||
{
|
||||
"name": "stb",
|
||||
"child_table_exists": "no",
|
||||
"childtable_count": 100000,
|
||||
"childtable_prefix": "stb_",
|
||||
"auto_create_table": "no",
|
||||
"batch_create_tbl_num": 50000,
|
||||
"data_source": "rand",
|
||||
"insert_mode": "taosc",
|
||||
"insert_rows": 5,
|
||||
"interlace_rows": 100000,
|
||||
"insert_interval": 0,
|
||||
"max_sql_len": 10000000,
|
||||
"disorder_ratio": 0,
|
||||
"disorder_range": 1000,
|
||||
"timestamp_step": 10,
|
||||
"start_timestamp": "2022-05-01 00:00:00.000",
|
||||
"sample_format": "csv",
|
||||
"use_sample_ts": "no",
|
||||
"tags_file": "",
|
||||
"columns": [
|
||||
{
|
||||
"type": "INT"
|
||||
},
|
||||
{
|
||||
"type": "TINYINT",
|
||||
"count": 1
|
||||
},
|
||||
{"type": "DOUBLE"},
|
||||
|
||||
{
|
||||
"type": "BINARY",
|
||||
"len": 40,
|
||||
"count": 1
|
||||
},
|
||||
{
|
||||
"type": "nchar",
|
||||
"len": 20,
|
||||
"count": 1
|
||||
}
|
||||
],
|
||||
"tags": [
|
||||
{
|
||||
"type": "TINYINT",
|
||||
"count": 1
|
||||
},
|
||||
{
|
||||
"type": "BINARY",
|
||||
"len": 16,
|
||||
"count": 1
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
|
@ -0,0 +1,42 @@
|
|||
{
|
||||
"filetype": "query",
|
||||
"cfgdir": "/etc/taos",
|
||||
"host": "test216",
|
||||
"port": 6030,
|
||||
"user": "root",
|
||||
"password": "taosdata",
|
||||
"confirm_parameter_prompt": "no",
|
||||
"databases": "db",
|
||||
"query_times": 100,
|
||||
"query_mode": "taosc",
|
||||
"specified_table_query": {
|
||||
"query_interval": 0,
|
||||
"threads": 8,
|
||||
"sqls": [
|
||||
{
|
||||
"sql": "select count(*) from stb_0 ",
|
||||
"result": "./query_res0.txt"
|
||||
},
|
||||
{
|
||||
"sql": "select last_row(*) from stb_1 ",
|
||||
"result": "./query_res1.txt"
|
||||
},
|
||||
{
|
||||
"sql": "select last(*) from stb_2 ",
|
||||
"result": "./query_res2.txt"
|
||||
},
|
||||
{
|
||||
"sql": "select first(*) from stb_3 ",
|
||||
"result": "./query_res3.txt"
|
||||
},
|
||||
{
|
||||
"sql": "select avg(c0),min(c2),max(c1) from stb_4",
|
||||
"result": "./query_res4.txt"
|
||||
},
|
||||
{
|
||||
"sql": "select avg(c0),min(c2),max(c1) from stb_5 where ts <= '2022-05-01 20:00:00.500' and ts >= '2022-05-01 00:00:00.000' ",
|
||||
"result": "./query_res5.txt"
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
|
@ -132,11 +132,11 @@ class TDTestCase:
|
|||
querystmt.bind_param(queryparam)
|
||||
querystmt.execute()
|
||||
result=querystmt.use_result()
|
||||
rows=result.fetch_all()
|
||||
print( querystmt.use_result())
|
||||
# rows=result.fetch_all()
|
||||
# print( querystmt.use_result())
|
||||
|
||||
# result = conn.query("select * from log")
|
||||
# rows=result.fetch_all()
|
||||
rows=result.fetch_all()
|
||||
# rows=result.fetch_all()
|
||||
print(rows)
|
||||
assert rows[1][0] == "ts"
|
||||
|
@ -213,7 +213,7 @@ class TDTestCase:
|
|||
params[11].float([3, None, 1])
|
||||
params[12].double([3, None, 1.2])
|
||||
params[13].binary(["abc", "dddafadfadfadfadfa", None])
|
||||
params[14].nchar(["涛思数据", None, "a long string with 中文字符"])
|
||||
params[14].nchar(["涛思数据", None, "a? long string with 中文字符"])
|
||||
params[15].timestamp([None, None, 1626861392591])
|
||||
|
||||
stmt.bind_param_batch(params)
|
||||
|
@ -230,9 +230,31 @@ class TDTestCase:
|
|||
querystmt1.execute()
|
||||
result1=querystmt1.use_result()
|
||||
rows1=result1.fetch_all()
|
||||
assert str(rows1[0][0]) == "2021-07-21 17:56:32.589111"
|
||||
assert rows1[0][10] == 3
|
||||
assert rows1[1][10] == 4
|
||||
print("1",rows1)
|
||||
|
||||
querystmt2=conn.statement("select abs(?) from log where bu < ?")
|
||||
queryparam2=new_bind_params(2)
|
||||
print(type(queryparam2))
|
||||
queryparam2[0].int(5)
|
||||
queryparam2[1].int(5)
|
||||
querystmt2.bind_param(queryparam2)
|
||||
querystmt2.execute()
|
||||
result2=querystmt2.use_result()
|
||||
rows2=result2.fetch_all()
|
||||
print("2",rows2)
|
||||
|
||||
querystmt3=conn.statement("select abs(?) from log where nn= 'a? long string with 中文字符' ")
|
||||
queryparam3=new_bind_params(1)
|
||||
print(type(queryparam3))
|
||||
queryparam3[0].int(5)
|
||||
querystmt3.bind_param(queryparam3)
|
||||
querystmt3.execute()
|
||||
result3=querystmt3.use_result()
|
||||
rows3=result3.fetch_all()
|
||||
print("3",rows3)
|
||||
# assert str(rows1[0][0]) == "2021-07-21 17:56:32.589111"
|
||||
# assert rows1[0][10] == 3
|
||||
# assert rows1[1][10] == 4
|
||||
|
||||
# conn.execute("drop database if exists %s" % dbname)
|
||||
conn.close()
|
||||
|
@ -247,7 +269,6 @@ class TDTestCase:
|
|||
config = buildPath+ "../sim/dnode1/cfg/"
|
||||
host="localhost"
|
||||
connectstmt=self.newcon(host,config)
|
||||
print(connectstmt)
|
||||
self.test_stmt_insert_multi(connectstmt)
|
||||
connectstmt=self.newcon(host,config)
|
||||
self.test_stmt_set_tbname_tag(connectstmt)
|
|
@ -0,0 +1,181 @@
|
|||
###################################################################
|
||||
# Copyright (c) 2016 by TAOS Technologies, Inc.
|
||||
# All rights reserved.
|
||||
#
|
||||
# This file is proprietary and confidential to TAOS Technologies.
|
||||
# No part of this file may be reproduced, stored, transmitted,
|
||||
# disclosed or used in any form or by any means other than as
|
||||
# expressly provided by the written permission from Jianhui Tao
|
||||
#
|
||||
###################################################################
|
||||
|
||||
# -*- coding: utf-8 -*-
|
||||
|
||||
import sys
|
||||
import os
|
||||
import threading as thd
|
||||
import multiprocessing as mp
|
||||
from numpy.lib.function_base import insert
|
||||
import taos
|
||||
from taos import *
|
||||
from util.log import *
|
||||
from util.cases import *
|
||||
from util.sql import *
|
||||
import numpy as np
|
||||
import datetime as dt
|
||||
from datetime import datetime
|
||||
from ctypes import *
|
||||
import time
|
||||
# constant define
|
||||
WAITS = 5 # wait seconds
|
||||
|
||||
class TDTestCase:
|
||||
#
|
||||
# --------------- main frame -------------------
|
||||
def caseDescription(self):
|
||||
'''
|
||||
limit and offset keyword function test cases;
|
||||
case1: limit offset base function test
|
||||
case2: offset return valid
|
||||
'''
|
||||
return
|
||||
|
||||
def getBuildPath(self):
|
||||
selfPath = os.path.dirname(os.path.realpath(__file__))
|
||||
|
||||
if ("community" in selfPath):
|
||||
projPath = selfPath[:selfPath.find("community")]
|
||||
else:
|
||||
projPath = selfPath[:selfPath.find("tests")]
|
||||
|
||||
for root, dirs, files in os.walk(projPath):
|
||||
if ("taosd" in files):
|
||||
rootRealPath = os.path.dirname(os.path.realpath(root))
|
||||
if ("packaging" not in rootRealPath):
|
||||
buildPath = root[:len(root)-len("/build/bin")]
|
||||
break
|
||||
return buildPath
|
||||
|
||||
# init
|
||||
def init(self, conn, logSql):
|
||||
tdLog.debug("start to execute %s" % __file__)
|
||||
tdSql.init(conn.cursor())
|
||||
# tdSql.prepare()
|
||||
# self.create_tables();
|
||||
self.ts = 1500000000000
|
||||
|
||||
# stop
|
||||
def stop(self):
|
||||
tdSql.close()
|
||||
tdLog.success("%s successfully executed" % __file__)
|
||||
|
||||
|
||||
# --------------- case -------------------
|
||||
|
||||
|
||||
def newcon(self,host,cfg):
|
||||
user = "root"
|
||||
password = "taosdata"
|
||||
port =6030
|
||||
con=taos.connect(host=host, user=user, password=password, config=cfg ,port=port)
|
||||
print(con)
|
||||
return con
|
||||
|
||||
def test_stmt_insert_multi(self,conn):
|
||||
# type: (TaosConnection) -> None
|
||||
|
||||
dbname = "pytest_taos_stmt_multi"
|
||||
try:
|
||||
conn.execute("drop database if exists %s" % dbname)
|
||||
conn.execute("create database if not exists %s" % dbname)
|
||||
conn.select_db(dbname)
|
||||
|
||||
conn.execute(
|
||||
"create table if not exists log(ts timestamp, bo bool, nil tinyint, ti tinyint, si smallint, ii int,\
|
||||
bi bigint, tu tinyint unsigned, su smallint unsigned, iu int unsigned, bu bigint unsigned, \
|
||||
ff float, dd double, bb binary(100), nn nchar(100), tt timestamp)",
|
||||
)
|
||||
# conn.load_table_info("log")
|
||||
|
||||
start = datetime.now()
|
||||
stmt = conn.statement("insert into log values(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)")
|
||||
|
||||
params = new_multi_binds(16)
|
||||
params[0].timestamp((1626861392589, 1626861392590, 1626861392591))
|
||||
params[1].bool((True, None, False))
|
||||
params[2].tinyint([-128, -128, None]) # -128 is tinyint null
|
||||
params[3].tinyint([0, 127, None])
|
||||
params[4].smallint([3, None, 2])
|
||||
params[5].int([3, 4, None])
|
||||
params[6].bigint([3, 4, None])
|
||||
params[7].tinyint_unsigned([3, 4, None])
|
||||
params[8].smallint_unsigned([3, 4, None])
|
||||
params[9].int_unsigned([3, 4, None])
|
||||
params[10].bigint_unsigned([3, 4, None])
|
||||
params[11].float([3, None, 1])
|
||||
params[12].double([3, None, 1.2])
|
||||
params[13].binary(["abc", "dddafadfadfadfadfa", None])
|
||||
params[14].nchar(["涛思数据", None, "a long string with 中文字符"])
|
||||
params[15].timestamp([None, None, 1626861392591])
|
||||
# print(type(stmt))
|
||||
stmt.bind_param_batch(params)
|
||||
stmt.execute()
|
||||
end = datetime.now()
|
||||
print("elapsed time: ", end - start)
|
||||
assert stmt.affected_rows == 3
|
||||
|
||||
#query
|
||||
querystmt=conn.statement("select ?,bu from log")
|
||||
queryparam=new_bind_params(1)
|
||||
print(type(queryparam))
|
||||
queryparam[0].binary("ts")
|
||||
querystmt.bind_param(queryparam)
|
||||
querystmt.execute()
|
||||
result=querystmt.use_result()
|
||||
# rows=result.fetch_all()
|
||||
# print( querystmt.use_result())
|
||||
|
||||
# result = conn.query("select * from log")
|
||||
rows=result.fetch_all()
|
||||
# rows=result.fetch_all()
|
||||
print(rows)
|
||||
assert rows[1][0] == "ts"
|
||||
assert rows[0][1] == 3
|
||||
|
||||
#query
|
||||
querystmt1=conn.statement("select * from log where bu < ?")
|
||||
queryparam1=new_bind_params(1)
|
||||
print(type(queryparam1))
|
||||
queryparam1[0].int(4)
|
||||
querystmt1.bind_param(queryparam1)
|
||||
querystmt1.execute()
|
||||
result1=querystmt1.use_result()
|
||||
rows1=result1.fetch_all()
|
||||
print(rows1)
|
||||
assert str(rows1[0][0]) == "2021-07-21 17:56:32.589000"
|
||||
assert rows1[0][10] == 3
|
||||
|
||||
|
||||
stmt.close()
|
||||
|
||||
# conn.execute("drop database if exists %s" % dbname)
|
||||
conn.close()
|
||||
|
||||
except Exception as err:
|
||||
# conn.execute("drop database if exists %s" % dbname)
|
||||
conn.close()
|
||||
raise err
|
||||
|
||||
def run(self):
|
||||
buildPath = self.getBuildPath()
|
||||
config = buildPath+ "../sim/dnode1/cfg/"
|
||||
host="localhost"
|
||||
connectstmt=self.newcon(host,config)
|
||||
self.test_stmt_insert_multi(connectstmt)
|
||||
return
|
||||
|
||||
|
||||
# add case with filename
|
||||
#
|
||||
tdCases.addWindows(__file__, TDTestCase())
|
||||
tdCases.addLinux(__file__, TDTestCase())
|
|
@ -0,0 +1,176 @@
|
|||
###################################################################
|
||||
# Copyright (c) 2016 by TAOS Technologies, Inc.
|
||||
# All rights reserved.
|
||||
#
|
||||
# This file is proprietary and confidential to TAOS Technologies.
|
||||
# No part of this file may be reproduced, stored, transmitted,
|
||||
# disclosed or used in any form or by any means other than as
|
||||
# expressly provided by the written permission from Jianhui Tao
|
||||
#
|
||||
###################################################################
|
||||
|
||||
# -*- coding: utf-8 -*-
|
||||
|
||||
import sys
|
||||
import os
|
||||
import threading as thd
|
||||
import multiprocessing as mp
|
||||
from numpy.lib.function_base import insert
|
||||
import taos
|
||||
from taos import *
|
||||
from util.log import *
|
||||
from util.cases import *
|
||||
from util.sql import *
|
||||
import numpy as np
|
||||
import datetime as dt
|
||||
from datetime import datetime
|
||||
from ctypes import *
|
||||
import time
|
||||
# constant define
|
||||
WAITS = 5 # wait seconds
|
||||
|
||||
class TDTestCase:
|
||||
#
|
||||
# --------------- main frame -------------------
|
||||
def caseDescription(self):
|
||||
'''
|
||||
limit and offset keyword function test cases;
|
||||
case1: limit offset base function test
|
||||
case2: offset return valid
|
||||
'''
|
||||
return
|
||||
|
||||
def getBuildPath(self):
|
||||
selfPath = os.path.dirname(os.path.realpath(__file__))
|
||||
|
||||
if ("community" in selfPath):
|
||||
projPath = selfPath[:selfPath.find("community")]
|
||||
else:
|
||||
projPath = selfPath[:selfPath.find("tests")]
|
||||
|
||||
for root, dirs, files in os.walk(projPath):
|
||||
if ("taosd" in files):
|
||||
rootRealPath = os.path.dirname(os.path.realpath(root))
|
||||
if ("packaging" not in rootRealPath):
|
||||
buildPath = root[:len(root)-len("/build/bin")]
|
||||
break
|
||||
return buildPath
|
||||
|
||||
# init
|
||||
def init(self, conn, logSql):
|
||||
tdLog.debug("start to execute %s" % __file__)
|
||||
tdSql.init(conn.cursor())
|
||||
# tdSql.prepare()
|
||||
# self.create_tables();
|
||||
self.ts = 1500000000000
|
||||
|
||||
# stop
|
||||
def stop(self):
|
||||
tdSql.close()
|
||||
tdLog.success("%s successfully executed" % __file__)
|
||||
|
||||
|
||||
# --------------- case -------------------
|
||||
|
||||
|
||||
def newcon(self,host,cfg):
|
||||
user = "root"
|
||||
password = "taosdata"
|
||||
port =6030
|
||||
con=taos.connect(host=host, user=user, password=password, config=cfg ,port=port)
|
||||
print(con)
|
||||
return con
|
||||
|
||||
def test_stmt_set_tbname_tag(self,conn):
|
||||
dbname = "pytest_taos_stmt_set_tbname_tag"
|
||||
|
||||
try:
|
||||
conn.execute("drop database if exists %s" % dbname)
|
||||
conn.execute("create database if not exists %s PRECISION 'us' " % dbname)
|
||||
conn.select_db(dbname)
|
||||
conn.execute("create table if not exists log(ts timestamp, bo bool, nil tinyint, ti tinyint, si smallint, ii int,\
|
||||
bi bigint, tu tinyint unsigned, su smallint unsigned, iu int unsigned, bu bigint unsigned, \
|
||||
ff float, dd double, bb binary(100), nn nchar(100), tt timestamp , vc varchar(100)) tags (t1 timestamp, t2 bool,\
|
||||
t3 tinyint, t4 tinyint, t5 smallint, t6 int, t7 bigint, t8 tinyint unsigned, t9 smallint unsigned, \
|
||||
t10 int unsigned, t11 bigint unsigned, t12 float, t13 double, t14 binary(100), t15 nchar(100), t16 timestamp)")
|
||||
|
||||
stmt = conn.statement("insert into ? using log tags (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?) \
|
||||
values (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)")
|
||||
tags = new_bind_params(16)
|
||||
tags[0].timestamp(1626861392589123, PrecisionEnum.Microseconds)
|
||||
tags[1].bool(True)
|
||||
tags[2].null()
|
||||
tags[3].tinyint(2)
|
||||
tags[4].smallint(3)
|
||||
tags[5].int(4)
|
||||
tags[6].bigint(5)
|
||||
tags[7].tinyint_unsigned(6)
|
||||
tags[8].smallint_unsigned(7)
|
||||
tags[9].int_unsigned(8)
|
||||
tags[10].bigint_unsigned(9)
|
||||
tags[11].float(10.1)
|
||||
tags[12].double(10.11)
|
||||
tags[13].binary("hello")
|
||||
tags[14].nchar("stmt")
|
||||
tags[15].timestamp(1626861392589, PrecisionEnum.Milliseconds)
|
||||
stmt.set_tbname_tags("tb1", tags)
|
||||
params = new_multi_binds(16)
|
||||
params[0].timestamp((1626861392589111, 1626861392590111, 1626861392591111))
|
||||
params[1].bool((True, None, False))
|
||||
params[2].tinyint([-128, -128, None]) # -128 is tinyint null
|
||||
params[3].tinyint([0, 127, None])
|
||||
params[4].smallint([3, None, 2])
|
||||
params[5].int([3, 4, None])
|
||||
params[6].bigint([3, 4, None])
|
||||
params[7].tinyint_unsigned([3, 4, None])
|
||||
params[8].smallint_unsigned([3, 4, None])
|
||||
params[9].int_unsigned([3, 4, None])
|
||||
params[10].bigint_unsigned([3, 4, 5])
|
||||
params[11].float([3, None, 1])
|
||||
params[12].double([3, None, 1.2])
|
||||
params[13].binary(["abc", "dddafadfadfadfadfa", None])
|
||||
params[14].nchar(["涛思数据", None, "a long string with 中文字符"])
|
||||
params[15].timestamp([None, None, 1626861392591])
|
||||
params[16].binary(["涛思数据16", None, "a long string with 中文-字符"])
|
||||
|
||||
stmt.bind_param_batch(params)
|
||||
stmt.execute()
|
||||
|
||||
assert stmt.affected_rows == 3
|
||||
|
||||
#query
|
||||
querystmt1=conn.statement("select * from log where bu < ?")
|
||||
queryparam1=new_bind_params(1)
|
||||
print(type(queryparam1))
|
||||
queryparam1[0].int(5)
|
||||
querystmt1.bind_param(queryparam1)
|
||||
querystmt1.execute()
|
||||
result1=querystmt1.use_result()
|
||||
rows1=result1.fetch_all()
|
||||
print(rows1)
|
||||
# assert str(rows1[0][0]) == "2021-07-21 17:56:32.589111"
|
||||
# assert rows1[0][10] == 3
|
||||
# assert rows1[1][10] == 4
|
||||
|
||||
# conn.execute("drop database if exists %s" % dbname)
|
||||
conn.close()
|
||||
|
||||
except Exception as err:
|
||||
# conn.execute("drop database if exists %s" % dbname)
|
||||
conn.close()
|
||||
raise err
|
||||
|
||||
def run(self):
|
||||
buildPath = self.getBuildPath()
|
||||
config = buildPath+ "../sim/dnode1/cfg/"
|
||||
host="localhost"
|
||||
connectstmt=self.newcon(host,config)
|
||||
self.test_stmt_set_tbname_tag(connectstmt)
|
||||
|
||||
return
|
||||
|
||||
|
||||
# add case with filename
|
||||
#
|
||||
tdCases.addWindows(__file__, TDTestCase())
|
||||
tdCases.addLinux(__file__, TDTestCase())
|
|
@@ -0,0 +1,265 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

from util.log import *
from util.cases import *
from util.sql import *

class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor())
        self.ts = 1537146000000
        self.param_list = ['LT','lt','Lt','lT','GT','gt','Gt','gT','LE','le','Le','lE','GE','ge','Ge','gE','NE','ne','Ne','nE','EQ','eq','Eq','eQ']
        self.row_num = 10
    def run(self):
        tdSql.prepare()
        # timestamp = 1ms , time_unit = 1s
        tdSql.execute('''create table test(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
            col7 bool, col8 binary(20), col9 nchar(20), col11 tinyint unsigned, col12 smallint unsigned, col13 int unsigned, col14 bigint unsigned)''')
        for i in range(self.row_num):
            tdSql.execute("insert into test values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
                % (self.ts + i, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))
        integer_list = [1,2,3,4,11,12,13,14]
        float_list = [5,6]

        for i in integer_list:
            for j in self.param_list:
                tdSql.query(f"select stateduration(col{i},'{j}',5) from test")
                tdSql.checkRows(10)
                if j in ['LT' ,'lt','Lt','lT']:
                    tdSql.checkEqual(tdSql.queryResult,[(0,), (0,), (0,), (0,), (-1,), (-1,), (-1,), (-1,), (-1,), (-1,)])
                elif j in ['GT','gt', 'Gt','gT']:
                    tdSql.checkEqual(tdSql.queryResult,[(-1,), (-1,), (-1,), (-1,), (-1,), (0,), (0,), (0,), (0,), (0,)])
                elif j in ['LE','le','Le','lE']:
                    tdSql.checkEqual(tdSql.queryResult,[(0,), (0,), (0,), (0,), (0,), (-1,), (-1,), (-1,), (-1,), (-1,)])
                elif j in [ 'GE','ge','Ge','gE']:
                    tdSql.checkEqual(tdSql.queryResult,[(-1,), (-1,), (-1,), (-1,), (0,), (0,), (0,), (0,), (0,), (0,)])
                elif j in ['NE','ne','Ne','nE']:
                    tdSql.checkEqual(tdSql.queryResult,[(0,), (0,), (0,), (0,), (-1,), (0,), (0,), (0,), (0,), (0,)])
                elif j in ['EQ','eq','Eq','eQ']:
                    tdSql.checkEqual(tdSql.queryResult,[(-1,), (-1,), (-1,), (-1,), (0,), (-1,), (-1,), (-1,), (-1,), (-1,)])
        for i in float_list:
            for j in self.param_list:
                tdSql.query(f"select stateduration(col{i},'{j}',5) from test")
                tdSql.checkRows(10)
                if j in ['LT','lt','Lt','lT','LE','le','Le','lE']:
                    tdSql.checkEqual(tdSql.queryResult,[(0,), (0,), (0,), (0,), (0,), (-1,), (-1,), (-1,), (-1,), (-1,)])
                elif j in ['GE','ge','Ge','gE','GT','gt','Gt','gT']:
                    tdSql.checkEqual(tdSql.queryResult,[(-1,), (-1,), (-1,), (-1,), (-1,), (0,), (0,), (0,), (0,), (0,)])
                elif j in ['NE','ne','Ne','nE']:
                    tdSql.checkEqual(tdSql.queryResult,[(0,), (0,), (0,), (0,), (0,), (0,), (0,), (0,), (0,), (0,)])
                elif j in ['EQ','eq','Eq','eQ']:
                    tdSql.checkEqual(tdSql.queryResult,[(-1,), (-1,), (-1,), (-1,), (-1,), (-1,), (-1,), (-1,), (-1,), (-1,)])

        error_column_list = ['ts','col7','col8','col9','a',1]
        for i in error_column_list:
            for j in self.param_list:
                tdSql.error(f"select stateduration({i},{j},5) from test")

        error_param_list = ['a',1]
        for i in error_param_list:
            tdSql.error(f"select stateduration(col1,{i},5) from test")

        # timestamp = 1s, time_unit =1s
        tdSql.execute('''create table test1(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
            col7 bool, col8 binary(20), col9 nchar(20), col11 tinyint unsigned, col12 smallint unsigned, col13 int unsigned, col14 bigint unsigned)''')
        for i in range(self.row_num):
            tdSql.execute("insert into test1 values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
                % (self.ts + i*1000, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))

        for i in integer_list:
            for j in self.param_list:
                tdSql.query(f"select stateduration(col{i},'{j}',5) from test1")
                tdSql.checkRows(10)
                # print(tdSql.queryResult)
                if j in ['LT' ,'lt','Lt','lT']:
                    tdSql.checkEqual(tdSql.queryResult,[(0,), (1,), (2,), (3,), (-1,), (-1,), (-1,), (-1,), (-1,), (-1,)])
                elif j in ['GT','gt', 'Gt','gT']:
                    tdSql.checkEqual(tdSql.queryResult,[(-1,), (-1,), (-1,), (-1,), (-1,), (0,), (1,), (2,), (3,), (4,)])
                elif j in ['LE','le','Le','lE']:
                    tdSql.checkEqual(tdSql.queryResult,[(0,), (1,), (2,), (3,), (4,), (-1,), (-1,), (-1,), (-1,), (-1,)])
                elif j in [ 'GE','ge','Ge','gE']:
                    tdSql.checkEqual(tdSql.queryResult,[(-1,), (-1,), (-1,), (-1,), (0,), (1,), (2,), (3,), (4,), (5,)])
                elif j in ['NE','ne','Ne','nE']:
                    tdSql.checkEqual(tdSql.queryResult,[(0,), (1,), (2,), (3,), (-1,), (0,), (1,), (2,), (3,), (4,)])
                elif j in ['EQ','eq','Eq','eQ']:
                    tdSql.checkEqual(tdSql.queryResult,[(-1,), (-1,), (-1,), (-1,), (0,), (-1,), (-1,), (-1,), (-1,), (-1,)])
        for i in float_list:
            for j in self.param_list:
                tdSql.query(f"select stateduration(col{i},'{j}',5) from test1")
                tdSql.checkRows(10)
                print(tdSql.queryResult)
                if j in ['LT','lt','Lt','lT','LE','le','Le','lE']:
                    tdSql.checkEqual(tdSql.queryResult,[(0,), (1,), (2,), (3,), (4,), (-1,), (-1,), (-1,), (-1,), (-1,)])
                elif j in ['GE','ge','Ge','gE','GT','gt','Gt','gT']:
                    tdSql.checkEqual(tdSql.queryResult,[(-1,), (-1,), (-1,), (-1,), (-1,), (0,), (1,), (2,), (3,), (4,)])
                elif j in ['NE','ne','Ne','nE']:
                    tdSql.checkEqual(tdSql.queryResult,[(0,), (1,), (2,), (3,), (4,), (5,), (6,), (7,), (8,), (9,)])
                elif j in ['EQ','eq','Eq','eQ']:
                    tdSql.checkEqual(tdSql.queryResult,[(-1,), (-1,), (-1,), (-1,), (-1,), (-1,), (-1,), (-1,), (-1,), (-1,)])


        # timestamp = 1m, time_unit =1m
        tdSql.execute('''create table test2(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
            col7 bool, col8 binary(20), col9 nchar(20), col11 tinyint unsigned, col12 smallint unsigned, col13 int unsigned, col14 bigint unsigned)''')
        for i in range(self.row_num):
            tdSql.execute("insert into test2 values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
                % (self.ts + i*1000*60, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))

        for i in integer_list:
            for j in self.param_list:
                tdSql.query(f"select stateduration(col{i},'{j}',5,1m) from test2")
                tdSql.checkRows(10)
                # print(tdSql.queryResult)
                if j in ['LT' ,'lt','Lt','lT']:
                    tdSql.checkEqual(tdSql.queryResult,[(0,), (1,), (2,), (3,), (-1,), (-1,), (-1,), (-1,), (-1,), (-1,)])
                elif j in ['GT','gt', 'Gt','gT']:
                    tdSql.checkEqual(tdSql.queryResult,[(-1,), (-1,), (-1,), (-1,), (-1,), (0,), (1,), (2,), (3,), (4,)])
                elif j in ['LE','le','Le','lE']:
                    tdSql.checkEqual(tdSql.queryResult,[(0,), (1,), (2,), (3,), (4,), (-1,), (-1,), (-1,), (-1,), (-1,)])
                elif j in [ 'GE','ge','Ge','gE']:
                    tdSql.checkEqual(tdSql.queryResult,[(-1,), (-1,), (-1,), (-1,), (0,), (1,), (2,), (3,), (4,), (5,)])
                elif j in ['NE','ne','Ne','nE']:
                    tdSql.checkEqual(tdSql.queryResult,[(0,), (1,), (2,), (3,), (-1,), (0,), (1,), (2,), (3,), (4,)])
                elif j in ['EQ','eq','Eq','eQ']:
                    tdSql.checkEqual(tdSql.queryResult,[(-1,), (-1,), (-1,), (-1,), (0,), (-1,), (-1,), (-1,), (-1,), (-1,)])
        for i in float_list:
            for j in self.param_list:
                tdSql.query(f"select stateduration(col{i},'{j}',5,1m) from test2")
                tdSql.checkRows(10)
                print(tdSql.queryResult)
                if j in ['LT','lt','Lt','lT','LE','le','Le','lE']:
                    tdSql.checkEqual(tdSql.queryResult,[(0,), (1,), (2,), (3,), (4,), (-1,), (-1,), (-1,), (-1,), (-1,)])
                elif j in ['GE','ge','Ge','gE','GT','gt','Gt','gT']:
                    tdSql.checkEqual(tdSql.queryResult,[(-1,), (-1,), (-1,), (-1,), (-1,), (0,), (1,), (2,), (3,), (4,)])
                elif j in ['NE','ne','Ne','nE']:
                    tdSql.checkEqual(tdSql.queryResult,[(0,), (1,), (2,), (3,), (4,), (5,), (6,), (7,), (8,), (9,)])
                elif j in ['EQ','eq','Eq','eQ']:
                    tdSql.checkEqual(tdSql.queryResult,[(-1,), (-1,), (-1,), (-1,), (-1,), (-1,), (-1,), (-1,), (-1,), (-1,)])

        # timestamp = 1h, time_unit =1h
        tdSql.execute('''create table test3(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
            col7 bool, col8 binary(20), col9 nchar(20), col11 tinyint unsigned, col12 smallint unsigned, col13 int unsigned, col14 bigint unsigned)''')
        for i in range(self.row_num):
            tdSql.execute("insert into test3 values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
                % (self.ts + i*1000*60*60, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))

        for i in integer_list:
            for j in self.param_list:
                tdSql.query(f"select stateduration(col{i},'{j}',5,1h) from test3")
                tdSql.checkRows(10)
                # print(tdSql.queryResult)
                if j in ['LT' ,'lt','Lt','lT']:
                    tdSql.checkEqual(tdSql.queryResult,[(0,), (1,), (2,), (3,), (-1,), (-1,), (-1,), (-1,), (-1,), (-1,)])
                elif j in ['GT','gt', 'Gt','gT']:
                    tdSql.checkEqual(tdSql.queryResult,[(-1,), (-1,), (-1,), (-1,), (-1,), (0,), (1,), (2,), (3,), (4,)])
                elif j in ['LE','le','Le','lE']:
                    tdSql.checkEqual(tdSql.queryResult,[(0,), (1,), (2,), (3,), (4,), (-1,), (-1,), (-1,), (-1,), (-1,)])
                elif j in [ 'GE','ge','Ge','gE']:
                    tdSql.checkEqual(tdSql.queryResult,[(-1,), (-1,), (-1,), (-1,), (0,), (1,), (2,), (3,), (4,), (5,)])
                elif j in ['NE','ne','Ne','nE']:
                    tdSql.checkEqual(tdSql.queryResult,[(0,), (1,), (2,), (3,), (-1,), (0,), (1,), (2,), (3,), (4,)])
                elif j in ['EQ','eq','Eq','eQ']:
                    tdSql.checkEqual(tdSql.queryResult,[(-1,), (-1,), (-1,), (-1,), (0,), (-1,), (-1,), (-1,), (-1,), (-1,)])
        for i in float_list:
            for j in self.param_list:
                tdSql.query(f"select stateduration(col{i},'{j}',5,1h) from test3")
                tdSql.checkRows(10)
                print(tdSql.queryResult)
                if j in ['LT','lt','Lt','lT','LE','le','Le','lE']:
                    tdSql.checkEqual(tdSql.queryResult,[(0,), (1,), (2,), (3,), (4,), (-1,), (-1,), (-1,), (-1,), (-1,)])
                elif j in ['GE','ge','Ge','gE','GT','gt','Gt','gT']:
                    tdSql.checkEqual(tdSql.queryResult,[(-1,), (-1,), (-1,), (-1,), (-1,), (0,), (1,), (2,), (3,), (4,)])
                elif j in ['NE','ne','Ne','nE']:
                    tdSql.checkEqual(tdSql.queryResult,[(0,), (1,), (2,), (3,), (4,), (5,), (6,), (7,), (8,), (9,)])
                elif j in ['EQ','eq','Eq','eQ']:
                    tdSql.checkEqual(tdSql.queryResult,[(-1,), (-1,), (-1,), (-1,), (-1,), (-1,), (-1,), (-1,), (-1,), (-1,)])

        # timestamp = 1h,time_unit =1m
        for i in integer_list:
            for j in self.param_list:
                tdSql.query(f"select stateduration(col{i},'{j}',5,1m) from test3")
                tdSql.checkRows(10)
                # print(tdSql.queryResult)
                if j in ['LT' ,'lt','Lt','lT']:
                    tdSql.checkEqual(tdSql.queryResult,[(0,), (60,), (120,), (180,), (-1,), (-1,), (-1,), (-1,), (-1,), (-1,)])
                elif j in ['GT','gt', 'Gt','gT']:
                    tdSql.checkEqual(tdSql.queryResult,[(-1,), (-1,), (-1,), (-1,), (-1,), (0,), (60,), (120,), (180,), (240,)])
                elif j in ['LE','le','Le','lE']:
                    tdSql.checkEqual(tdSql.queryResult,[(0,), (60,), (120,), (180,), (240,), (-1,), (-1,), (-1,), (-1,), (-1,)])
                elif j in [ 'GE','ge','Ge','gE']:
                    tdSql.checkEqual(tdSql.queryResult,[(-1,), (-1,), (-1,), (-1,), (0,), (60,), (120,), (180,), (240,), (300,)])
                elif j in ['NE','ne','Ne','nE']:
                    tdSql.checkEqual(tdSql.queryResult,[(0,), (60,), (120,), (180,), (-1,), (0,), (60,), (120,), (180,), (240,)])
                elif j in ['EQ','eq','Eq','eQ']:
                    tdSql.checkEqual(tdSql.queryResult,[(-1,), (-1,), (-1,), (-1,), (0,), (-1,), (-1,), (-1,), (-1,), (-1,)])
        for i in float_list:
            for j in self.param_list:
                tdSql.query(f"select stateduration(col{i},'{j}',5,1m) from test3")
                tdSql.checkRows(10)
                print(tdSql.queryResult)
                if j in ['LT','lt','Lt','lT','LE','le','Le','lE']:
                    tdSql.checkEqual(tdSql.queryResult,[(0,), (60,), (120,), (180,), (240,), (-1,), (-1,), (-1,), (-1,), (-1,)])
                elif j in ['GE','ge','Ge','gE','GT','gt','Gt','gT']:
                    tdSql.checkEqual(tdSql.queryResult,[(-1,), (-1,), (-1,), (-1,), (-1,), (0,), (60,), (120,), (180,), (240,)])
                elif j in ['NE','ne','Ne','nE']:
                    tdSql.checkEqual(tdSql.queryResult,[(0,), (60,), (120,), (180,), (240,), (300,), (360,), (420,), (480,), (540,)])
                elif j in ['EQ','eq','Eq','eQ']:
                    tdSql.checkEqual(tdSql.queryResult,[(-1,), (-1,), (-1,), (-1,), (-1,), (-1,), (-1,), (-1,), (-1,), (-1,)])

        # for stb
        tdSql.execute('''create table stb(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
            col7 bool, col8 binary(20), col9 nchar(20), col11 tinyint unsigned, col12 smallint unsigned, col13 int unsigned, col14 bigint unsigned) tags(t0 int)''')
        tdSql.execute('create table stb_1 using stb tags(1)')
        for i in range(self.row_num):
            tdSql.execute("insert into stb_1 values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
                % (self.ts + i*1000*60*60, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))

        for i in integer_list:
            for j in self.param_list:
                tdSql.query(f"select stateduration(col{i},'{j}',5,1h) from stb")
                tdSql.checkRows(10)
                # print(tdSql.queryResult)
                if j in ['LT' ,'lt','Lt','lT']:
                    tdSql.checkEqual(tdSql.queryResult,[(0,), (1,), (2,), (3,), (-1,), (-1,), (-1,), (-1,), (-1,), (-1,)])
                elif j in ['GT','gt', 'Gt','gT']:
                    tdSql.checkEqual(tdSql.queryResult,[(-1,), (-1,), (-1,), (-1,), (-1,), (0,), (1,), (2,), (3,), (4,)])
                elif j in ['LE','le','Le','lE']:
                    tdSql.checkEqual(tdSql.queryResult,[(0,), (1,), (2,), (3,), (4,), (-1,), (-1,), (-1,), (-1,), (-1,)])
                elif j in [ 'GE','ge','Ge','gE']:
                    tdSql.checkEqual(tdSql.queryResult,[(-1,), (-1,), (-1,), (-1,), (0,), (1,), (2,), (3,), (4,), (5,)])
                elif j in ['NE','ne','Ne','nE']:
                    tdSql.checkEqual(tdSql.queryResult,[(0,), (1,), (2,), (3,), (-1,), (0,), (1,), (2,), (3,), (4,)])
                elif j in ['EQ','eq','Eq','eQ']:
                    tdSql.checkEqual(tdSql.queryResult,[(-1,), (-1,), (-1,), (-1,), (0,), (-1,), (-1,), (-1,), (-1,), (-1,)])
        for i in float_list:
            for j in self.param_list:
                tdSql.query(f"select stateduration(col{i},'{j}',5,1h) from stb")
                tdSql.checkRows(10)
                print(tdSql.queryResult)
                if j in ['LT','lt','Lt','lT','LE','le','Le','lE']:
                    tdSql.checkEqual(tdSql.queryResult,[(0,), (1,), (2,), (3,), (4,), (-1,), (-1,), (-1,), (-1,), (-1,)])
                elif j in ['GE','ge','Ge','gE','GT','gt','Gt','gT']:
                    tdSql.checkEqual(tdSql.queryResult,[(-1,), (-1,), (-1,), (-1,), (-1,), (0,), (1,), (2,), (3,), (4,)])
                elif j in ['NE','ne','Ne','nE']:
                    tdSql.checkEqual(tdSql.queryResult,[(0,), (1,), (2,), (3,), (4,), (5,), (6,), (7,), (8,), (9,)])
                elif j in ['EQ','eq','Eq','eQ']:
                    tdSql.checkEqual(tdSql.queryResult,[(-1,), (-1,), (-1,), (-1,), (-1,), (-1,), (-1,), (-1,), (-1,), (-1,)])

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)

tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
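The expected-value lists in this new case all encode the same rule: `stateduration(col, oper, val[, unit])` reports, per row, how long the comparison has been continuously true, counted in `unit` from the first row of the current streak (so the first matching row yields 0), and returns -1 for rows where the condition does not hold. The sketch below is not the server implementation, just a small Python model of that rule that reproduces the assertions above for the 1-second-step table; with the 1-hour-step table and `1m` as the unit, the same rule gives the 0/60/120/... sequences checked in the later blocks.

```python
# Minimal model of the behaviour asserted above (not TDengine's implementation).
import operator

OPS = {"LT": operator.lt, "GT": operator.gt, "LE": operator.le,
       "GE": operator.ge, "NE": operator.ne, "EQ": operator.eq}

def stateduration(rows, oper, val, unit_ms):
    """rows: list of (ts_ms, value); returns one duration (or -1) per row."""
    out, streak_start = [], None
    for ts, v in rows:
        if OPS[oper.upper()](v, val):
            if streak_start is None:
                streak_start = ts                     # streak begins here -> 0
            out.append((ts - streak_start) // unit_ms)
        else:
            streak_start = None                       # condition broken
            out.append(-1)
    return out

# 10 rows one second apart with column values 1..10, threshold 5, unit 1s:
rows = [(1537146000000 + i * 1000, i + 1) for i in range(10)]
print(stateduration(rows, "GE", 5, 1000))  # [-1, -1, -1, -1, 0, 1, 2, 3, 4, 5]
print(stateduration(rows, "NE", 5, 1000))  # [0, 1, 2, 3, -1, 0, 1, 2, 3, 4]
```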
@@ -15,6 +15,7 @@ python3 ./test.py -f 0-others/user_control.py
python3 ./test.py -f 0-others/fsync.py

python3 ./test.py -f 1-insert/opentsdb_telnet_line_taosc_insert.py
python3 ./test.py -f 1-insert/test_stmt_muti_insert_query.py

python3 ./test.py -f 2-query/between.py
python3 ./test.py -f 2-query/distinct.py

@@ -55,8 +56,8 @@ python3 ./test.py -f 2-query/Timediff.py

python3 ./test.py -f 2-query/top.py
python3 ./test.py -f 2-query/bottom.py


python3 ./test.py -f 2-query/percentile.py
python3 ./test.py -f 2-query/apercentile.py
python3 ./test.py -f 2-query/abs.py
python3 ./test.py -f 2-query/ceil.py
python3 ./test.py -f 2-query/floor.py

@@ -72,7 +73,9 @@ python3 ./test.py -f 2-query/arccos.py
python3 ./test.py -f 2-query/arctan.py
python3 ./test.py -f 2-query/query_cols_tags_and_or.py
# python3 ./test.py -f 2-query/nestedQuery.py
python3 ./test.py -f 2-query/nestedQuery_str.py
# TD-15983 subquery output duplicate name column.
# Please Xiangyang Guo modify the following script
# python3 ./test.py -f 2-query/nestedQuery_str.py
python3 ./test.py -f 2-query/avg.py
python3 ./test.py -f 2-query/elapsed.py
python3 ./test.py -f 2-query/csum.py

@@ -81,6 +84,7 @@ python3 ./test.py -f 2-query/diff.py
python3 ./test.py -f 2-query/sample.py
python3 ./test.py -f 2-query/function_diff.py
python3 ./test.py -f 2-query/unique.py
python3 ./test.py -f 2-query/stateduration.py

python3 ./test.py -f 7-tmq/basic5.py
python3 ./test.py -f 7-tmq/subscribeDb.py

@@ -92,4 +96,3 @@ python3 ./test.py -f 7-tmq/subscribeStb1.py
python3 ./test.py -f 7-tmq/subscribeStb2.py
python3 ./test.py -f 7-tmq/subscribeStb3.py
python3 ./test.py -f 7-tmq/subscribeStb4.py
python3 ./test.py -f 7-tmq/subscribeStb2.py
@@ -1,76 +0,0 @@
{
    "filetype": "insert",
    "cfgdir": "/etc/taos",
    "host": "127.0.0.1",
    "port": 6030,
    "user": "root",
    "password": "taosdata",
    "thread_count": 16,
    "create_table_thread_count": 1,
    "result_file": "./insert_res.txt",
    "confirm_parameter_prompt": "no",
    "insert_interval": 0,
    "interlace_rows": 0,
    "num_of_records_per_req": 10000,
    "prepared_rand": 10000,
    "chinese": "no",
    "databases": [
        {
            "dbinfo": {
                "name": "db",
                "drop": "yes",
                "vgroups": 4,
                "replica": 1,
                "precision": "ms"
            },
            "super_tables": [
                {
                    "name": "stb",
                    "child_table_exists": "no",
                    "childtable_count": 1000,
                    "childtable_prefix": "stb_",
                    "escape_character": "no",
                    "auto_create_table": "no",
                    "batch_create_tbl_num": 10,
                    "data_source": "rand",
                    "insert_mode": "taosc",
                    "non_stop_mode": "no",
                    "line_protocol": "line",
                    "insert_rows": 100000,
                    "interlace_rows": 0,
                    "insert_interval": 0,
                    "disorder_ratio": 0,
                    "timestamp_step": 1,
                    "start_timestamp": "2020-10-01 00:00:00.000",
                    "use_sample_ts": "no",
                    "tags_file": "",
                    "columns": [
                        {
                            "type": "FLOAT",
                            "name": "current",
                            "count": 4,
                            "max": 12,
                            "min": 8
                        },
                        { "type": "INT", "name": "voltage", "max": 225, "min": 215 },
                        { "type": "FLOAT", "name": "phase", "max": 1, "min": 0 }
                    ],
                    "tags": [
                        {
                            "type": "TINYINT",
                            "name": "groupid",
                            "max": 10,
                            "min": 1
                        },
                        {
                            "name": "location",
                            "type": "BINARY",
                            "len": 16,
                            "values": ["beijing", "shanghai"]
                        }
                    ]
                }
            ]
        }
    ]
}
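The removed file is an ordinary taosBenchmark-style insert configuration; configs of this shape are normally handed to the benchmark tool via its JSON-config option. Purely for illustration, a short sketch of reading such a file back in Python (the filename is hypothetical, the keys are the ones shown above):

```python
# Sketch: load a taosBenchmark-style insert config and inspect a few fields.
import json

with open("manual_insert.json") as f:       # hypothetical filename
    cfg = json.load(f)

db = cfg["databases"][0]
stb = db["super_tables"][0]
print(db["dbinfo"]["name"])                           # db
print(stb["childtable_count"], stb["insert_rows"])    # 1000 100000
print([c["name"] for c in stb["columns"]])            # ['current', 'voltage', 'phase']
```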
@@ -37,6 +37,7 @@ if __name__ == "__main__":
    masterIp = ""
    testCluster = False
    valgrind = 0
    killValgrind = 1
    logSql = True
    stop = 0
    restart = False

@@ -45,8 +46,8 @@ if __name__ == "__main__":
    windows = 1
    updateCfgDict = {}
    execCmd = ""
    opts, args = getopt.gnu_getopt(sys.argv[1:], 'f:p:m:l:scghrd:e:', [
        'file=', 'path=', 'master', 'logSql', 'stop', 'cluster', 'valgrind', 'help', 'restart', 'updateCfgDict', 'execCmd'])
    opts, args = getopt.gnu_getopt(sys.argv[1:], 'f:p:m:l:scghrd:k:e:', [
        'file=', 'path=', 'master', 'logSql', 'stop', 'cluster', 'valgrind', 'help', 'restart', 'updateCfgDict', 'killv', 'execCmd'])
    for key, value in opts:
        if key in ['-h', '--help']:
            tdLog.printNoPrefix(

@@ -60,6 +61,7 @@ if __name__ == "__main__":
            tdLog.printNoPrefix('-g valgrind Test Flag')
            tdLog.printNoPrefix('-r taosd restart test')
            tdLog.printNoPrefix('-d update cfg dict, base64 json str')
            tdLog.printNoPrefix('-k not kill valgrind processer')
            tdLog.printNoPrefix('-e eval str to run')
            sys.exit(0)

@@ -100,6 +102,9 @@ if __name__ == "__main__":
                print('updateCfgDict convert fail.')
                sys.exit(0)

        if key in ['-k', '--killValgrind']:
            killValgrind = 0

        if key in ['-e', '--execCmd']:
            try:
                execCmd = base64.b64decode(value.encode()).decode()

@@ -189,6 +194,7 @@ if __name__ == "__main__":
        else:
            tdCases.runAllWindows(conn)
    else:
        tdDnodes.setKillValgrind(killValgrind)
        tdDnodes.init(deployPath, masterIp)
        tdDnodes.setTestCluster(testCluster)
        tdDnodes.setValgrind(valgrind)
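The test.py change above threads a new kill-valgrind switch from argument parsing into tdDnodes. A stripped-down sketch of just that flag handling is shown below; the long-option spelling is simplified here, and the real script of course mixes this with many other options.

```python
# Sketch of the -k flag added above: when present, killValgrind becomes 0,
# which the runner later passes to tdDnodes.setKillValgrind(killValgrind).
import getopt
import sys

killValgrind = 1                               # default: kill valgrind processes
opts, _args = getopt.gnu_getopt(sys.argv[1:], "k", ["killValgrind"])
for key, _value in opts:
    if key in ("-k", "--killValgrind"):
        killValgrind = 0                       # -k means: do not kill valgrind
print("killValgrind =", killValgrind)
```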
@@ -283,9 +283,7 @@ void dumpTrans(SSdb *pSdb, SJson *json) {
    tjsonAddStringToObject(item, "createdTime", i642str(pObj->createdTime));
    tjsonAddStringToObject(item, "dbUid", i642str(pObj->dbUid));
    tjsonAddStringToObject(item, "dbname", pObj->dbname);
    tjsonAddIntegerToObject(item, "redoLogNum", taosArrayGetSize(pObj->redoLogs));
    tjsonAddIntegerToObject(item, "undoLogNum", taosArrayGetSize(pObj->undoLogs));
    tjsonAddIntegerToObject(item, "commitLogNum", taosArrayGetSize(pObj->commitLogs));
    tjsonAddIntegerToObject(item, "commitLogNum", taosArrayGetSize(pObj->commitActions));
    tjsonAddIntegerToObject(item, "redoActionNum", taosArrayGetSize(pObj->redoActions));
    tjsonAddIntegerToObject(item, "undoActionNum", taosArrayGetSize(pObj->undoActions));