commit 8927040032

Merge branch '3.0' of github.com:taosdata/TDengine into szhou/fix/udf

@@ -111,7 +111,7 @@ For details on the TDengine REST API, see the [official documentation](/reference/rest-api/).

 This command quickly completes the insertion of 100 million records; the exact time depends on the hardware.

-The taosBenchmark command itself offers many options for configuring the number of tables, the number of records per table, and so on. You can experiment with different settings; run `taosBenchmark --help` for the full list. For detailed usage, see the [taosBenchmark reference manual](../reference/taosbenchmark).
+The taosBenchmark command itself offers many options for configuring the number of tables, the number of records per table, and so on. You can experiment with different settings; run `taosBenchmark --help` for the full list. For detailed usage, see the [taosBenchmark reference manual](../../reference/taosbenchmark).

 ## Try Out Queries
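To get a feel for query speed over the data taosBenchmark just wrote, a couple of aggregations can be run from the TDengine CLI. A minimal sketch, assuming taosBenchmark was run with its defaults and therefore created the `test` database with the `meters` supertable and its `current`, `voltage`, and `phase` columns:

```sql
-- Total number of generated rows
SELECT COUNT(*) FROM test.meters;

-- Simple aggregations across all devices
SELECT AVG(current), MAX(voltage), MIN(phase) FROM test.meters;
```
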
@@ -245,7 +245,7 @@ select * from t;

 Query OK, 2 row(s) in set (0.003128s)
 ```

-Besides executing SQL statements, a system administrator can use the TDengine CLI to check system status, add or delete user accounts, and perform other maintenance tasks. The TDengine CLI, together with the client driver, can also be installed and run standalone on Linux or Windows machines; see [here](../reference/taos-shell/) for details.
+Besides executing SQL statements, a system administrator can use the TDengine CLI to check system status, add or delete user accounts, and perform other maintenance tasks. The TDengine CLI, together with the client driver, can also be installed and run standalone on Linux or Windows machines; see [here](../../reference/taos-shell/) for details.
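A short sketch of the kind of administrative statements mentioned above, runnable from the TDengine CLI; the user name and password are illustrative:

```sql
-- Check cluster status
SHOW DATABASES;
SHOW DNODES;

-- Add and remove a user account (illustrative credentials)
CREATE USER test_user PASS 'Test_1234';
DROP USER test_user;
```
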

 ## Experience Insert Speed with taosBenchmark

@@ -18,7 +18,7 @@ database_option: {
 | CACHESIZE value
 | COMP {0 | 1 | 2}
 | DURATION value
-| FSYNC value
+| WAL_FSYNC_PERIOD value
 | MAXROWS value
 | MINROWS value
 | KEEP value
@@ -28,7 +28,7 @@ database_option: {
 | REPLICA value
 | RETENTIONS ingestion_duration:keep_duration ...
 | STRICT {'off' | 'on'}
-| WAL {1 | 2}
+| WAL_LEVEL {1 | 2}
 | VGROUPS value
 | SINGLE_STABLE {0 | 1}
 | WAL_RETENTION_PERIOD value
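Under the renamed options shown above, a database creation statement might look like the following sketch; the database name and values are illustrative, not recommendations:

```sql
-- Illustrative use of the renamed WAL options
CREATE DATABASE power
  DURATION 10
  KEEP 3650
  VGROUPS 4
  REPLICA 1
  WAL_LEVEL 1
  WAL_FSYNC_PERIOD 3000;
```
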
@@ -218,7 +218,7 @@ Expressions in the GROUP BY clause can include any column of a table or view; these

 The PARTITION BY clause is TDengine-specific syntax: it slices the data according to part_list and performs the computation separately within each slice.

-For details, see [TDengine Distinguished Queries](./distinguished).
+For details, see [TDengine Distinguished Queries](../distinguished).
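A minimal sketch of PARTITION BY, assuming a supertable `meters` with a numeric `voltage` column (illustrative names); `tbname` puts every child table in its own partition:

```sql
-- 10-minute averages computed separately per child table
SELECT _wstart, tbname, AVG(voltage)
FROM meters
PARTITION BY tbname
INTERVAL(10m);
```
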

 ## ORDER BY

@@ -16,15 +16,15 @@ toc_max_heading_level: 4
 SELECT ABS(field_name) FROM { tb_name | stb_name } [WHERE clause]
 ```

-**Description**: Returns the absolute value of the specified column
+**Description**: Returns the absolute value of the specified field.

-**Return type**: UBIGINT if the input value is an integer; DOUBLE if the input value is of FLOAT or DOUBLE type.
+**Return type**: Same as the original data type of the specified field.

 **Applicable data types**: Numeric types.

 **Nested subquery support**: Supported in both inner and outer queries.

-**Applicable to**: Tables and supertables
+**Applicable to**: Tables and supertables.

 **Usage notes**: Can only be used together with regular columns and selection or projection functions; cannot be used together with aggregation functions.
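A quick usage sketch for ABS, assuming a table `meters` with a numeric `current` column (illustrative names), including the nested-query case noted above:

```sql
-- Absolute value of a regular column
SELECT ABS(current) FROM meters;

-- Also usable in the outer level of a nested query
SELECT ABS(c) FROM (SELECT current AS c FROM meters);
```
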
@ -34,15 +34,15 @@ SELECT ABS(field_name) FROM { tb_name | stb_name } [WHERE clause]
|
||||||
SELECT ACOS(field_name) FROM { tb_name | stb_name } [WHERE clause]
|
SELECT ACOS(field_name) FROM { tb_name | stb_name } [WHERE clause]
|
||||||
```
|
```
|
||||||
|
|
||||||
**功能说明**:获得指定列的反余弦结果
|
**功能说明**:获得指定字段的反余弦结果。
|
||||||
|
|
||||||
**返回结果类型**:DOUBLE。如果输入值为 NULL,输出值也为 NULL
|
**返回结果类型**:DOUBLE。
|
||||||
|
|
||||||
**适用数据类型**:数值类型。
|
**适用数据类型**:数值类型。
|
||||||
|
|
||||||
**嵌套子查询支持**:适用于内层查询和外层查询。
|
**嵌套子查询支持**:适用于内层查询和外层查询。
|
||||||
|
|
||||||
**适用于**: 表和超级表
|
**适用于**: 表和超级表。
|
||||||
|
|
||||||
**使用说明**:只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。
|
**使用说明**:只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。
|
||||||
|
|
||||||
|
@ -52,15 +52,15 @@ SELECT ACOS(field_name) FROM { tb_name | stb_name } [WHERE clause]
|
||||||
SELECT ASIN(field_name) FROM { tb_name | stb_name } [WHERE clause]
|
SELECT ASIN(field_name) FROM { tb_name | stb_name } [WHERE clause]
|
||||||
```
|
```
|
||||||
|
|
||||||
**功能说明**:获得指定列的反正弦结果
|
**功能说明**:获得指定字段的反正弦结果。
|
||||||
|
|
||||||
**返回结果类型**:DOUBLE。如果输入值为 NULL,输出值也为 NULL
|
**返回结果类型**:DOUBLE。
|
||||||
|
|
||||||
**适用数据类型**:数值类型。
|
**适用数据类型**:数值类型。
|
||||||
|
|
||||||
**嵌套子查询支持**:适用于内层查询和外层查询。
|
**嵌套子查询支持**:适用于内层查询和外层查询。
|
||||||
|
|
||||||
**适用于**: 表和超级表
|
**适用于**: 表和超级表。
|
||||||
|
|
||||||
**使用说明**:只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。
|
**使用说明**:只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。
|
||||||
|
|
||||||
|
@ -71,15 +71,15 @@ SELECT ASIN(field_name) FROM { tb_name | stb_name } [WHERE clause]
|
||||||
SELECT ATAN(field_name) FROM { tb_name | stb_name } [WHERE clause]
|
SELECT ATAN(field_name) FROM { tb_name | stb_name } [WHERE clause]
|
||||||
```
|
```
|
||||||
|
|
||||||
**功能说明**:获得指定列的反正切结果
|
**功能说明**:获得指定字段的反正切结果。
|
||||||
|
|
||||||
**返回结果类型**:DOUBLE。如果输入值为 NULL,输出值也为 NULL
|
**返回结果类型**:DOUBLE。
|
||||||
|
|
||||||
**适用数据类型**:数值类型。
|
**适用数据类型**:数值类型。
|
||||||
|
|
||||||
**嵌套子查询支持**:适用于内层查询和外层查询。
|
**嵌套子查询支持**:适用于内层查询和外层查询。
|
||||||
|
|
||||||
**适用于**: 表和超级表
|
**适用于**: 表和超级表。
|
||||||
|
|
||||||
**使用说明**:只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。
|
**使用说明**:只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。
|
||||||
|
|
||||||
|
@ -90,20 +90,17 @@ SELECT ATAN(field_name) FROM { tb_name | stb_name } [WHERE clause]
|
||||||
SELECT CEIL(field_name) FROM { tb_name | stb_name } [WHERE clause];
|
SELECT CEIL(field_name) FROM { tb_name | stb_name } [WHERE clause];
|
||||||
```
|
```
|
||||||
|
|
||||||
**功能说明**:获得指定列的向上取整数的结果。
|
**功能说明**:获得指定字段的向上取整数的结果。
|
||||||
|
|
||||||
**返回结果类型**:与指定列的原始数据类型一致。例如,如果指定列的原始数据类型为 Float,那么返回的数据类型也为 Float;如果指定列的原始数据类型为 Double,那么返回的数据类型也为 Double。
|
**返回结果类型**:与指定字段的原始数据类型一致。
|
||||||
|
|
||||||
**适用数据类型**:数值类型。
|
**适用数据类型**:数值类型。
|
||||||
|
|
||||||
**适用于**: 普通表、超级表。
|
**适用于**: 表和超级表。
|
||||||
|
|
||||||
**嵌套子查询支持**:适用于内层查询和外层查询。
|
**嵌套子查询支持**:适用于内层查询和外层查询。
|
||||||
|
|
||||||
**使用说明**:
|
**使用说明**: 只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。
|
||||||
|
|
||||||
- 支持 +、-、\*、/ 运算,如 ceil(col1) + ceil(col2)。
|
|
||||||
- 只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。
|
|
||||||
|
|
||||||
#### COS
|
#### COS
|
||||||
|
|
||||||
|
@ -111,15 +108,15 @@ SELECT CEIL(field_name) FROM { tb_name | stb_name } [WHERE clause];
|
||||||
SELECT COS(field_name) FROM { tb_name | stb_name } [WHERE clause]
|
SELECT COS(field_name) FROM { tb_name | stb_name } [WHERE clause]
|
||||||
```
|
```
|
||||||
|
|
||||||
**功能说明**:获得指定列的余弦结果
|
**功能说明**:获得指定字段的余弦结果。
|
||||||
|
|
||||||
**返回结果类型**:DOUBLE。如果输入值为 NULL,输出值也为 NULL
|
**返回结果类型**:DOUBLE。
|
||||||
|
|
||||||
**适用数据类型**:数值类型。
|
**适用数据类型**:数值类型。
|
||||||
|
|
||||||
**嵌套子查询支持**:适用于内层查询和外层查询。
|
**嵌套子查询支持**:适用于内层查询和外层查询。
|
||||||
|
|
||||||
**适用于**: 表和超级表
|
**适用于**: 表和超级表。
|
||||||
|
|
||||||
**使用说明**:只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。
|
**使用说明**:只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。
|
||||||
|
|
||||||
|
@ -129,24 +126,24 @@ SELECT COS(field_name) FROM { tb_name | stb_name } [WHERE clause]
|
||||||
SELECT FLOOR(field_name) FROM { tb_name | stb_name } [WHERE clause];
|
SELECT FLOOR(field_name) FROM { tb_name | stb_name } [WHERE clause];
|
||||||
```
|
```
|
||||||
|
|
||||||
**功能说明**:获得指定列的向下取整数的结果。
|
**功能说明**:获得指定字段的向下取整数的结果。
|
||||||
其他使用说明参见 CEIL 函数描述。
|
其他使用说明参见 CEIL 函数描述。
|
||||||
|
|
||||||
#### LOG
|
#### LOG
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
SELECT LOG(field_name, base) FROM { tb_name | stb_name } [WHERE clause]
|
SELECT LOG(field_name[, base]) FROM { tb_name | stb_name } [WHERE clause]
|
||||||
```
|
```
|
||||||
|
|
||||||
**功能说明**:获得指定列对于底数 base 的对数
|
**功能说明**:获得指定字段对于底数 base 的对数。如果 base 参数省略,则返回指定字段的自然对数值。
|
||||||
|
|
||||||
**返回结果类型**:DOUBLE。如果输入值为 NULL,输出值也为 NULL
|
**返回结果类型**:DOUBLE。
|
||||||
|
|
||||||
**适用数据类型**:数值类型。
|
**适用数据类型**:数值类型。
|
||||||
|
|
||||||
**嵌套子查询支持**:适用于内层查询和外层查询。
|
**嵌套子查询支持**:适用于内层查询和外层查询。
|
||||||
|
|
||||||
**适用于**: 表和超级表
|
**适用于**: 表和超级表。
|
||||||
|
|
||||||
**使用说明**:只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。
|
**使用说明**:只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。
|
||||||
|
|
||||||
|
@ -157,15 +154,15 @@ SELECT LOG(field_name, base) FROM { tb_name | stb_name } [WHERE clause]
|
||||||
SELECT POW(field_name, power) FROM { tb_name | stb_name } [WHERE clause]
|
SELECT POW(field_name, power) FROM { tb_name | stb_name } [WHERE clause]
|
||||||
```
|
```
|
||||||
|
|
||||||
**功能说明**:获得指定列的指数为 power 的幂
|
**功能说明**:获得指定字段的指数为 power 的幂。
|
||||||
|
|
||||||
**返回结果类型**:DOUBLE。如果输入值为 NULL,输出值也为 NULL
|
**返回结果类型**:DOUBLE。
|
||||||
|
|
||||||
**适用数据类型**:数值类型。
|
**适用数据类型**:数值类型。
|
||||||
|
|
||||||
**嵌套子查询支持**:适用于内层查询和外层查询。
|
**嵌套子查询支持**:适用于内层查询和外层查询。
|
||||||
|
|
||||||
**适用于**: 表和超级表
|
**适用于**: 表和超级表。
|
||||||
|
|
||||||
**使用说明**:只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。
|
**使用说明**:只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。
|
||||||
|
|
||||||
|
@ -176,7 +173,7 @@ SELECT POW(field_name, power) FROM { tb_name | stb_name } [WHERE clause]
|
||||||
SELECT ROUND(field_name) FROM { tb_name | stb_name } [WHERE clause];
|
SELECT ROUND(field_name) FROM { tb_name | stb_name } [WHERE clause];
|
||||||
```
|
```
|
||||||
|
|
||||||
**功能说明**:获得指定列的四舍五入的结果。
|
**功能说明**:获得指定字段的四舍五入的结果。
|
||||||
其他使用说明参见 CEIL 函数描述。
|
其他使用说明参见 CEIL 函数描述。
|
||||||
|
|
||||||
|
|
||||||
|
@ -186,15 +183,15 @@ SELECT ROUND(field_name) FROM { tb_name | stb_name } [WHERE clause];
|
||||||
SELECT SIN(field_name) FROM { tb_name | stb_name } [WHERE clause]
|
SELECT SIN(field_name) FROM { tb_name | stb_name } [WHERE clause]
|
||||||
```
|
```
|
||||||
|
|
||||||
**功能说明**:获得指定列的正弦结果
|
**功能说明**:获得指定字段的正弦结果。
|
||||||
|
|
||||||
**返回结果类型**:DOUBLE。如果输入值为 NULL,输出值也为 NULL
|
**返回结果类型**:DOUBLE。
|
||||||
|
|
||||||
**适用数据类型**:数值类型。
|
**适用数据类型**:数值类型。
|
||||||
|
|
||||||
**嵌套子查询支持**:适用于内层查询和外层查询。
|
**嵌套子查询支持**:适用于内层查询和外层查询。
|
||||||
|
|
||||||
**适用于**: 表和超级表
|
**适用于**: 表和超级表。
|
||||||
|
|
||||||
**使用说明**:只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。
|
**使用说明**:只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。
|
||||||
|
|
||||||
|
@ -204,15 +201,15 @@ SELECT SIN(field_name) FROM { tb_name | stb_name } [WHERE clause]
|
||||||
SELECT SQRT(field_name) FROM { tb_name | stb_name } [WHERE clause]
|
SELECT SQRT(field_name) FROM { tb_name | stb_name } [WHERE clause]
|
||||||
```
|
```
|
||||||
|
|
||||||
**功能说明**:获得指定列的平方根
|
**功能说明**:获得指定字段的平方根。
|
||||||
|
|
||||||
**返回结果类型**:DOUBLE。如果输入值为 NULL,输出值也为 NULL
|
**返回结果类型**:DOUBLE。
|
||||||
|
|
||||||
**适用数据类型**:数值类型。
|
**适用数据类型**:数值类型。
|
||||||
|
|
||||||
**嵌套子查询支持**:适用于内层查询和外层查询。
|
**嵌套子查询支持**:适用于内层查询和外层查询。
|
||||||
|
|
||||||
**适用于**: 表和超级表
|
**适用于**: 表和超级表。
|
||||||
|
|
||||||
**使用说明**:只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。
|
**使用说明**:只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。
|
||||||
|
|
||||||
|
@ -222,15 +219,15 @@ SELECT SQRT(field_name) FROM { tb_name | stb_name } [WHERE clause]
|
||||||
SELECT TAN(field_name) FROM { tb_name | stb_name } [WHERE clause]
|
SELECT TAN(field_name) FROM { tb_name | stb_name } [WHERE clause]
|
||||||
```
|
```
|
||||||
|
|
||||||
**功能说明**:获得指定列的正切结果
|
**功能说明**:获得指定字段的正切结果。
|
||||||
|
|
||||||
**返回结果类型**:DOUBLE。如果输入值为 NULL,输出值也为 NULL
|
**返回结果类型**:DOUBLE。
|
||||||
|
|
||||||
**适用数据类型**:数值类型。
|
**适用数据类型**:数值类型。
|
||||||
|
|
||||||
**嵌套子查询支持**:适用于内层查询和外层查询。
|
**嵌套子查询支持**:适用于内层查询和外层查询。
|
||||||
|
|
||||||
**适用于**: 表和超级表
|
**适用于**: 表和超级表。
|
||||||
|
|
||||||
**使用说明**:只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。
|
**使用说明**:只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用。
|
||||||
|
|
||||||
|
@ -246,13 +243,13 @@ SELECT CHAR_LENGTH(str|column) FROM { tb_name | stb_name } [WHERE clause]
|
||||||
|
|
||||||
**功能说明**:以字符计数的字符串长度。
|
**功能说明**:以字符计数的字符串长度。
|
||||||
|
|
||||||
**返回结果类型**:INT。如果输入值为NULL,输出值为NULL。
|
**返回结果类型**:BIGINT。
|
||||||
|
|
||||||
**适用数据类型**:VARCHAR, NCHAR
|
**适用数据类型**:VARCHAR, NCHAR。
|
||||||
|
|
||||||
**嵌套子查询支持**:适用于内层查询和外层查询。
|
**嵌套子查询支持**:适用于内层查询和外层查询。
|
||||||
|
|
||||||
**适用于**: 表和超级表
|
**适用于**: 表和超级表。
|
||||||
|
|
||||||
#### CONCAT
|
#### CONCAT
|
||||||
|
|
||||||
|
@ -262,13 +259,13 @@ SELECT CONCAT(str1|column1, str2|column2, ...) FROM { tb_name | stb_name } [WHER
|
||||||
|
|
||||||
**功能说明**:字符串连接函数。
|
**功能说明**:字符串连接函数。
|
||||||
|
|
||||||
**返回结果类型**:如果所有参数均为 VARCHAR 类型,则结果类型为 VARCHAR。如果参数包含NCHAR类型,则结果类型为NCHAR。如果输入值为NULL,输出值为NULL。
|
**返回结果类型**:如果所有参数均为 VARCHAR 类型,则结果类型为 VARCHAR。如果参数包含NCHAR类型,则结果类型为NCHAR。如果参数包含NULL值,则输出值为NULL。
|
||||||
|
|
||||||
**适用数据类型**:VARCHAR, NCHAR。 该函数最小参数个数为2个,最大参数个数为8个。
|
**适用数据类型**:VARCHAR, NCHAR。 该函数最小参数个数为2个,最大参数个数为8个。
|
||||||
|
|
||||||
**嵌套子查询支持**:适用于内层查询和外层查询。
|
**嵌套子查询支持**:适用于内层查询和外层查询。
|
||||||
|
|
||||||
**适用于**: 表和超级表
|
**适用于**: 表和超级表。
|
||||||
|
|
||||||
|
|
||||||
#### CONCAT_WS
|
#### CONCAT_WS
|
||||||
|
@ -279,13 +276,13 @@ SELECT CONCAT_WS(separator, str1|column1, str2|column2, ...) FROM { tb_name | st
|
||||||
|
|
||||||
**功能说明**:带分隔符的字符串连接函数。
|
**功能说明**:带分隔符的字符串连接函数。
|
||||||
|
|
||||||
**返回结果类型**:如果所有参数均为VARCHAR类型,则结果类型为VARCHAR。如果参数包含NCHAR类型,则结果类型为NCHAR。如果输入值为NULL,输出值为NULL。如果separator值不为NULL,其他输入为NULL,输出为空串。
|
**返回结果类型**:如果所有参数均为VARCHAR类型,则结果类型为VARCHAR。如果参数包含NCHAR类型,则结果类型为NCHAR。如果参数包含NULL值,则输出值为NULL。
|
||||||
|
|
||||||
**适用数据类型**:VARCHAR, NCHAR。 该函数最小参数个数为3个,最大参数个数为9个。
|
**适用数据类型**:VARCHAR, NCHAR。 该函数最小参数个数为3个,最大参数个数为9个。
|
||||||
|
|
||||||
**嵌套子查询支持**:适用于内层查询和外层查询。
|
**嵌套子查询支持**:适用于内层查询和外层查询。
|
||||||
|
|
||||||
**适用于**: 表和超级表
|
**适用于**: 表和超级表。
|
||||||
|
|
||||||
|
|
||||||
#### LENGTH
|
#### LENGTH
|
||||||
|
@ -296,13 +293,13 @@ SELECT LENGTH(str|column) FROM { tb_name | stb_name } [WHERE clause]
|
||||||
|
|
||||||
**功能说明**:以字节计数的字符串长度。
|
**功能说明**:以字节计数的字符串长度。
|
||||||
|
|
||||||
**返回结果类型**:INT。
|
**返回结果类型**:BIGINT。
|
||||||
|
|
||||||
**适用数据类型**:输入参数是 VARCHAR 类型或者 NCHAR 类型的字符串或者列。
|
**适用数据类型**:输入参数是 VARCHAR 类型或者 NCHAR 类型的字符串或者列。
|
||||||
|
|
||||||
**嵌套子查询支持**:适用于内层查询和外层查询。
|
**嵌套子查询支持**:适用于内层查询和外层查询。
|
||||||
|
|
||||||
**适用于**: 表和超级表
|
**适用于**: 表和超级表。
|
||||||
|
|
||||||
|
|
||||||
#### LOWER
|
#### LOWER
|
||||||
|
@ -313,13 +310,13 @@ SELECT LOWER(str|column) FROM { tb_name | stb_name } [WHERE clause]
|
||||||
|
|
||||||
**功能说明**:将字符串参数值转换为全小写字母。
|
**功能说明**:将字符串参数值转换为全小写字母。
|
||||||
|
|
||||||
**返回结果类型**:同输入类型。如果输入值为NULL,输出值为NULL。
|
**返回结果类型**:与输入字段的原始类型相同。
|
||||||
|
|
||||||
**适用数据类型**:输入参数是 VARCHAR 类型或者 NCHAR 类型的字符串或者列。
|
**适用数据类型**:VARCHAR, NCHAR。
|
||||||
|
|
||||||
**嵌套子查询支持**:适用于内层查询和外层查询。
|
**嵌套子查询支持**:适用于内层查询和外层查询。
|
||||||
|
|
||||||
**适用于**: 表和超级表
|
**适用于**: 表和超级表。
|
||||||
|
|
||||||
|
|
||||||
#### LTRIM
|
#### LTRIM
|
||||||
|
@ -330,13 +327,13 @@ SELECT LTRIM(str|column) FROM { tb_name | stb_name } [WHERE clause]
|
||||||
|
|
||||||
**功能说明**:返回清除左边空格后的字符串。
|
**功能说明**:返回清除左边空格后的字符串。
|
||||||
|
|
||||||
**返回结果类型**:同输入类型。如果输入值为NULL,输出值为NULL。
|
**返回结果类型**:与输入字段的原始类型相同。
|
||||||
|
|
||||||
**适用数据类型**:输入参数是 VARCHAR 类型或者 NCHAR 类型的字符串或者列。
|
**适用数据类型**:VARCHAR, NCHAR。
|
||||||
|
|
||||||
**嵌套子查询支持**:适用于内层查询和外层查询。
|
**嵌套子查询支持**:适用于内层查询和外层查询。
|
||||||
|
|
||||||
**适用于**: 表和超级表
|
**适用于**: 表和超级表。
|
||||||
|
|
||||||
|
|
||||||
#### RTRIM
|
#### RTRIM
|
||||||
|
@ -347,13 +344,13 @@ SELECT LTRIM(str|column) FROM { tb_name | stb_name } [WHERE clause]
|
||||||
|
|
||||||
**功能说明**:返回清除右边空格后的字符串。
|
**功能说明**:返回清除右边空格后的字符串。
|
||||||
|
|
||||||
**返回结果类型**:同输入类型。如果输入值为NULL,输出值为NULL。
|
**返回结果类型**:与输入字段的原始类型相同。
|
||||||
|
|
||||||
**适用数据类型**:输入参数是 VARCHAR 类型或者 NCHAR 类型的字符串或者列。
|
**适用数据类型**:VARCHAR, NCHAR。
|
||||||
|
|
||||||
**嵌套子查询支持**:适用于内层查询和外层查询。
|
**嵌套子查询支持**:适用于内层查询和外层查询。
|
||||||
|
|
||||||
**适用于**: 表和超级表
|
**适用于**: 表和超级表。
|
||||||
|
|
||||||
|
|
||||||
#### SUBSTR
|
#### SUBSTR
|
||||||
|
@ -362,15 +359,15 @@ SELECT LTRIM(str|column) FROM { tb_name | stb_name } [WHERE clause]
|
||||||
SELECT SUBSTR(str,pos[,len]) FROM { tb_name | stb_name } [WHERE clause]
|
SELECT SUBSTR(str,pos[,len]) FROM { tb_name | stb_name } [WHERE clause]
|
||||||
```
|
```
|
||||||
|
|
||||||
**功能说明**:从源字符串 str 中的指定位置 pos 开始取一个长度为 len 的子串并返回。
|
**功能说明**:从源字符串 str 中的指定位置 pos 开始取一个长度为 len 的子串并返回。如果输入参数 len 被忽略,返回的子串包含从 pos 开始的整个字串。
|
||||||
|
|
||||||
**返回结果类型**:同输入类型。如果输入值为NULL,输出值为NULL。
|
**返回结果类型**:与输入字段的原始类型相同。
|
||||||
|
|
||||||
**适用数据类型**:输入参数是 VARCHAR 类型或者 NCHAR 类型的字符串或者列。输入参数pos可以为正数,也可以为负数。如果pos是正数,表示开始位置从字符串开头正数计算。如果pos为负数,表示开始位置从字符串结尾倒数计算。如果输入参数len被忽略,返回的子串包含从pos开始的整个字串。
|
**适用数据类型**:VARCHAR, NCHAR。输入参数 pos 可以为正数,也可以为负数。如果 pos 是正数,表示开始位置从字符串开头正数计算。如果 pos 为负数,表示开始位置从字符串结尾倒数计算。
|
||||||
|
|
||||||
**嵌套子查询支持**:适用于内层查询和外层查询。
|
**嵌套子查询支持**:适用于内层查询和外层查询。
|
||||||
|
|
||||||
**适用于**: 表和超级表
|
**适用于**: 表和超级表。
|
||||||
|
|
||||||
|
|
||||||
#### UPPER
|
#### UPPER
|
||||||
|
@ -381,13 +378,13 @@ SELECT UPPER(str|column) FROM { tb_name | stb_name } [WHERE clause]
|
||||||
|
|
||||||
**功能说明**:将字符串参数值转换为全大写字母。
|
**功能说明**:将字符串参数值转换为全大写字母。
|
||||||
|
|
||||||
**返回结果类型**:同输入类型。如果输入值为NULL,输出值为NULL。
|
**返回结果类型**:与输入字段的原始类型相同。
|
||||||
|
|
||||||
**适用数据类型**:输入参数是 VARCHAR 类型或者 NCHAR 类型的字符串或者列。
|
**适用数据类型**:VARCHAR, NCHAR。
|
||||||
|
|
||||||
**嵌套子查询支持**:适用于内层查询和外层查询。
|
**嵌套子查询支持**:适用于内层查询和外层查询。
|
||||||
|
|
||||||
**适用于**: 表和超级表
|
**适用于**: 表和超级表。
|
||||||
|
|
||||||
|
|
||||||
### Conversion Functions

@@ -400,16 +397,19 @@ SELECT UPPER(str|column) FROM { tb_name | stb_name } [WHERE clause]
 SELECT CAST(expression AS type_name) FROM { tb_name | stb_name } [WHERE clause]
 ```

-**Description**: Data type conversion function. The input parameter expression can be a regular column, a constant, a scalar function, or an arithmetic combination of these; only valid in the select clause.
+**Description**: Data type conversion function. Returns the result of converting expression to the type specified by type_name; only valid in the select clause.

-**Return type**: The type specified in CAST (type_name), which can be BIGINT, BIGINT UNSIGNED, BINARY, VARCHAR, NCHAR, or TIMESTAMP.
+**Return type**: The type specified in CAST (type_name).

-**Applicable data types**: The input parameter expression can be of any type except BLOB, MEDIUMBLOB, and JSON
+**Applicable data types**: The input parameter expression can be of any type except BLOB, MEDIUMBLOB, and JSON.

+**Nested subquery support**: Supported in both inner and outer queries.
+
+**Applicable to**: Tables and supertables.
+
 **Usage notes**:

 - Unsupported type conversions raise an error directly.
-- If the input value is NULL, the output value is also NULL.
 - For supported types where certain values cannot be converted correctly, the output of the conversion function prevails. Cases that can currently occur:
 1) Invalid characters when converting a string to a numeric type: for example, "a" may convert to 0 without raising an error.
 2) When converting to a numeric type, a value beyond the range representable by type_name overflows without raising an error.
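A short sketch of CAST; the table `tb1` and its columns `str_col` (VARCHAR) and `ts` (TIMESTAMP) are hypothetical:

```sql
-- String to integer; invalid characters convert to 0 rather than raising an error
SELECT CAST(str_col AS BIGINT) FROM tb1;

-- Timestamp to its integer epoch representation
SELECT CAST(ts AS BIGINT) FROM tb1;
```
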
@@ -418,20 +418,23 @@ SELECT CAST(expression AS type_name) FROM { tb_name | stb_name } [WHERE clause]

 #### TO_ISO8601

 ```sql
-SELECT TO_ISO8601(ts_val | ts_col) FROM { tb_name | stb_name } [WHERE clause];
+SELECT TO_ISO8601(ts[, timezone]) FROM { tb_name | stb_name } [WHERE clause];
 ```

-**Description**: Converts a UNIX timestamp to the ISO8601 date/time format, appending the client's timezone information.
+**Description**: Converts a UNIX timestamp to the ISO8601 date/time format, appending timezone information. The timezone parameter lets the user attach an arbitrary timezone to the output; if it is omitted, the output carries the current client system timezone.

 **Return type**: VARCHAR.

-**Applicable data types**: UNIX timestamp constants or columns of TIMESTAMP type
+**Applicable data types**: INTEGER, TIMESTAMP.

-**Applicable to**: Tables and supertables.
+**Nested subquery support**: Supported in both inner and outer queries.
+
+**Applicable to**: Tables and supertables.

 **Usage notes**:

-- If the input is a UNIX timestamp constant, the precision of the returned format is determined by the number of digits of the timestamp;
+- The accepted formats for the timezone parameter are: [z/Z, +/-hhmm, +/-hh, +/-hh:mm]. For example, TO_ISO8601(1, "+00:00").
+- If the input is an integer representing a UNIX timestamp, the precision of the returned format is determined by the number of digits of the timestamp;
 - If the input is a column of TIMESTAMP type, the timestamp precision of the returned format matches the precision configured for the current DATABASE.
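A sketch of TO_ISO8601 with and without the timezone parameter; `tb1` and its TIMESTAMP column `ts` are hypothetical:

```sql
-- Constant UNIX timestamp rendered in UTC, as in the note above
SELECT TO_ISO8601(1, "+00:00") FROM tb1;

-- TIMESTAMP column rendered with the client's system timezone
SELECT TO_ISO8601(ts) FROM tb1;
```
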
@ -443,32 +446,34 @@ SELECT TO_JSON(str_literal) FROM { tb_name | stb_name } [WHERE clause];
|
||||||
|
|
||||||
**功能说明**: 将字符串常量转换为 JSON 类型。
|
**功能说明**: 将字符串常量转换为 JSON 类型。
|
||||||
|
|
||||||
**返回结果数据类型**: JSON
|
**返回结果数据类型**: JSON。
|
||||||
|
|
||||||
**适用数据类型**: JSON 字符串,形如 '{ "literal" : literal }'。'{}'表示空值。键必须为字符串字面量,值可以为数值字面量、字符串字面量、布尔字面量或空值字面量。str_literal中不支持转义符。
|
**适用数据类型**: JSON 字符串,形如 '{ "literal" : literal }'。'{}'表示空值。键必须为字符串字面量,值可以为数值字面量、字符串字面量、布尔字面量或空值字面量。str_literal中不支持转义符。
|
||||||
|
|
||||||
**适用于**: 表和超级表
|
|
||||||
|
|
||||||
**嵌套子查询支持**:适用于内层查询和外层查询。
|
**嵌套子查询支持**:适用于内层查询和外层查询。
|
||||||
|
|
||||||
|
**适用于**: 表和超级表。
|
||||||
|
|
||||||
|
|
||||||
#### TO_UNIXTIMESTAMP
|
#### TO_UNIXTIMESTAMP
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
SELECT TO_UNIXTIMESTAMP(datetime_string | ts_col) FROM { tb_name | stb_name } [WHERE clause];
|
SELECT TO_UNIXTIMESTAMP(datetime_string) FROM { tb_name | stb_name } [WHERE clause];
|
||||||
```
|
```
|
||||||
|
|
||||||
**功能说明**:将日期时间格式的字符串转换成为 UNIX 时间戳。
|
**功能说明**:将日期时间格式的字符串转换成为 UNIX 时间戳。
|
||||||
|
|
||||||
**返回结果数据类型**:长整型 INT64。
|
**返回结果数据类型**:BIGINT。
|
||||||
|
|
||||||
**应用字段**:字符串常量或是 VARCHAR/NCHAR 类型的列。
|
**应用字段**:VARCHAR, NCHAR。
|
||||||
|
|
||||||
**适用于**:表、超级表。
|
**嵌套子查询支持**:适用于内层查询和外层查询。
|
||||||
|
|
||||||
|
**适用于**:表和超级表。
|
||||||
|
|
||||||
**使用说明**:
|
**使用说明**:
|
||||||
|
|
||||||
- 输入的日期时间字符串须符合 ISO8601/RFC3339 标准,无法转换的字符串格式将返回 0。
|
- 输入的日期时间字符串须符合 ISO8601/RFC3339 标准,无法转换的字符串格式将返回 NULL。
|
||||||
- 返回的时间戳精度与当前 DATABASE 设置的时间精度一致。
|
- 返回的时间戳精度与当前 DATABASE 设置的时间精度一致。
|
||||||
|
|
||||||
|
|
||||||
|
@ -488,11 +493,13 @@ INSERT INTO tb_name VALUES (NOW(), ...);
|
||||||
|
|
||||||
**功能说明**:返回客户端当前系统时间。
|
**功能说明**:返回客户端当前系统时间。
|
||||||
|
|
||||||
**返回结果数据类型**:TIMESTAMP 时间戳类型。
|
**返回结果数据类型**:TIMESTAMP。
|
||||||
|
|
||||||
**应用字段**:在 WHERE 或 INSERT 语句中使用时只能作用于 TIMESTAMP 类型的字段。
|
**应用字段**:在 WHERE 或 INSERT 语句中使用时只能作用于 TIMESTAMP 类型的字段。
|
||||||
|
|
||||||
**适用于**:表、超级表。
|
**适用于**:表和超级表。
|
||||||
|
|
||||||
|
**嵌套子查询支持**:适用于内层查询和外层查询。
|
||||||
|
|
||||||
**使用说明**:
|
**使用说明**:
|
||||||
|
|
||||||
|
@ -504,40 +511,42 @@ INSERT INTO tb_name VALUES (NOW(), ...);
|
||||||
#### TIMEDIFF
|
#### TIMEDIFF
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
SELECT TIMEDIFF(ts_val1 | datetime_string1 | ts_col1, ts_val2 | datetime_string2 | ts_col2 [, time_unit]) FROM { tb_name | stb_name } [WHERE clause];
|
SELECT TIMEDIFF(ts | datetime_string1, ts | datetime_string2 [, time_unit]) FROM { tb_name | stb_name } [WHERE clause];
|
||||||
```
|
```
|
||||||
|
|
||||||
**功能说明**:计算两个时间戳之间的差值,并近似到时间单位 time_unit 指定的精度。
|
**功能说明**:计算两个时间戳之间的差值,并近似到时间单位 time_unit 指定的精度。
|
||||||
|
|
||||||
**返回结果数据类型**:长整型 INT64。
|
**返回结果数据类型**:BIGINT。输入包含不符合时间日期格式字符串则返回 NULL。
|
||||||
|
|
||||||
**应用字段**:UNIX 时间戳,日期时间格式的字符串,或者 TIMESTAMP 类型的列。
|
**应用字段**:表示 UNIX 时间戳的 BIGINT, TIMESTAMP 类型,或符合日期时间格式的 VARCHAR, NCHAR 类型。
|
||||||
|
|
||||||
**适用于**:表、超级表。
|
**适用于**:表和超级表。
|
||||||
|
|
||||||
|
**嵌套子查询支持**:适用于内层查询和外层查询。
|
||||||
|
|
||||||
**使用说明**:
|
**使用说明**:
|
||||||
- 支持的时间单位 time_unit 如下:
|
- 支持的时间单位 time_unit 如下:
|
||||||
1u(微秒),1a(毫秒),1s(秒),1m(分),1h(小时),1d(天)。
|
1b(纳秒), 1u(微秒),1a(毫秒),1s(秒),1m(分),1h(小时),1d(天), 1w(周)。
|
||||||
- 如果时间单位 time_unit 未指定, 返回的时间差值精度与当前 DATABASE 设置的时间精度一致。
|
- 如果时间单位 time_unit 未指定, 返回的时间差值精度与当前 DATABASE 设置的时间精度一致。
|
||||||
|
|
||||||
|
|
||||||
#### TIMETRUNCATE
|
#### TIMETRUNCATE
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
SELECT TIMETRUNCATE(ts_val | datetime_string | ts_col, time_unit) FROM { tb_name | stb_name } [WHERE clause];
|
SELECT TIMETRUNCATE(ts | datetime_string , time_unit) FROM { tb_name | stb_name } [WHERE clause];
|
||||||
```
|
```
|
||||||
|
|
||||||
**功能说明**:将时间戳按照指定时间单位 time_unit 进行截断。
|
**功能说明**:将时间戳按照指定时间单位 time_unit 进行截断。
|
||||||
|
|
||||||
**返回结果数据类型**:TIMESTAMP 时间戳类型。
|
**返回结果数据类型**:TIMESTAMP。
|
||||||
|
|
||||||
**应用字段**:UNIX 时间戳,日期时间格式的字符串,或者 TIMESTAMP 类型的列。
|
**应用字段**:表示 UNIX 时间戳的 BIGINT, TIMESTAMP 类型,或符合日期时间格式的 VARCHAR, NCHAR 类型。
|
||||||
|
|
||||||
**适用于**:表、超级表。
|
**适用于**:表和超级表。
|
||||||
|
|
||||||
**使用说明**:
|
**使用说明**:
|
||||||
- 支持的时间单位 time_unit 如下:
|
- 支持的时间单位 time_unit 如下:
|
||||||
1u(微秒),1a(毫秒),1s(秒),1m(分),1h(小时),1d(天)。
|
1b(纳秒), 1u(微秒),1a(毫秒),1s(秒),1m(分),1h(小时),1d(天), 1w(周)。
|
||||||
- 返回的时间戳精度与当前 DATABASE 设置的时间精度一致。
|
- 返回的时间戳精度与当前 DATABASE 设置的时间精度一致。
|
||||||
|
|
||||||
|
|
||||||
|
@ -549,11 +558,11 @@ SELECT TIMEZONE() FROM { tb_name | stb_name } [WHERE clause];
|
||||||
|
|
||||||
**功能说明**:返回客户端当前时区信息。
|
**功能说明**:返回客户端当前时区信息。
|
||||||
|
|
||||||
**返回结果数据类型**:VARCHAR 类型。
|
**返回结果数据类型**:VARCHAR。
|
||||||
|
|
||||||
**应用字段**:无
|
**应用字段**:无
|
||||||
|
|
||||||
**适用于**:表、超级表。
|
**适用于**:表和超级表。
|
||||||
|
|
||||||
|
|
||||||
#### TODAY
|
#### TODAY
|
||||||
|
@ -566,11 +575,11 @@ INSERT INTO tb_name VALUES (TODAY(), ...);
|
||||||
|
|
||||||
**功能说明**:返回客户端当日零时的系统时间。
|
**功能说明**:返回客户端当日零时的系统时间。
|
||||||
|
|
||||||
**返回结果数据类型**:TIMESTAMP 时间戳类型。
|
**返回结果数据类型**:TIMESTAMP。
|
||||||
|
|
||||||
**应用字段**:在 WHERE 或 INSERT 语句中使用时只能作用于 TIMESTAMP 类型的字段。
|
**应用字段**:在 WHERE 或 INSERT 语句中使用时只能作用于 TIMESTAMP 类型的字段。
|
||||||
|
|
||||||
**适用于**:表、超级表。
|
**适用于**:表和超级表。
|
||||||
|
|
||||||
**使用说明**:
|
**使用说明**:
|
||||||
|
|
||||||
|
@ -591,13 +600,13 @@ TDengine 支持针对数据的聚合查询。提供如下聚合函数。
|
||||||
SELECT AVG(field_name) FROM tb_name [WHERE clause];
|
SELECT AVG(field_name) FROM tb_name [WHERE clause];
|
||||||
```
|
```
|
||||||
|
|
||||||
**功能说明**:统计表/超级表中某列的平均值。
|
**功能说明**:统计指定字段的平均值。
|
||||||
|
|
||||||
**返回数据类型**:双精度浮点数 Double。
|
**返回数据类型**:DOUBLE。
|
||||||
|
|
||||||
**适用数据类型**:数值类型。
|
**适用数据类型**:数值类型。
|
||||||
|
|
||||||
**适用于**:表、超级表。
|
**适用于**:表和超级表。
|
||||||
|
|
||||||
|
|
||||||
### COUNT
|
### COUNT
|
||||||
|
@ -606,19 +615,18 @@ SELECT AVG(field_name) FROM tb_name [WHERE clause];
|
||||||
SELECT COUNT([*|field_name]) FROM tb_name [WHERE clause];
|
SELECT COUNT([*|field_name]) FROM tb_name [WHERE clause];
|
||||||
```
|
```
|
||||||
|
|
||||||
**功能说明**:统计表/超级表中记录行数或某列的非空值个数。
|
**功能说明**:统计指定字段的记录行数。
|
||||||
|
|
||||||
**返回数据类型**:长整型 INT64。
|
**返回数据类型**:BIGINT。
|
||||||
|
|
||||||
**适用数据类型**:应用全部字段。
|
**适用数据类型**:全部类型字段。
|
||||||
|
|
||||||
**适用于**:表、超级表。
|
**适用于**:表和超级表。
|
||||||
|
|
||||||
**使用说明**:
|
**使用说明**:
|
||||||
|
|
||||||
- 可以使用星号(\*)来替代具体的字段,使用星号(\*)返回全部记录数量。
|
- 可以使用星号(\*)来替代具体的字段,使用星号(\*)返回全部记录数量。
|
||||||
- 针对同一表的(不包含 NULL 值)字段查询结果均相同。
|
- 如果统计字段是具体的列,则返回该列中非 NULL 值的记录数量。
|
||||||
- 如果统计对象是具体的列,则返回该列中非 NULL 值的记录数量。
|
|
||||||
|
|
||||||
|
|
||||||
### ELAPSED
|
### ELAPSED
|
||||||
|
@ -629,17 +637,18 @@ SELECT ELAPSED(ts_primary_key [, time_unit]) FROM { tb_name | stb_name } [WHERE
|
||||||
|
|
||||||
**功能说明**:elapsed函数表达了统计周期内连续的时间长度,和twa函数配合使用可以计算统计曲线下的面积。在通过INTERVAL子句指定窗口的情况下,统计在给定时间范围内的每个窗口内有数据覆盖的时间范围;如果没有INTERVAL子句,则返回整个给定时间范围内的有数据覆盖的时间范围。注意,ELAPSED返回的并不是时间范围的绝对值,而是绝对值除以time_unit所得到的单位个数。
|
**功能说明**:elapsed函数表达了统计周期内连续的时间长度,和twa函数配合使用可以计算统计曲线下的面积。在通过INTERVAL子句指定窗口的情况下,统计在给定时间范围内的每个窗口内有数据覆盖的时间范围;如果没有INTERVAL子句,则返回整个给定时间范围内的有数据覆盖的时间范围。注意,ELAPSED返回的并不是时间范围的绝对值,而是绝对值除以time_unit所得到的单位个数。
|
||||||
|
|
||||||
**返回结果类型**:Double
|
**返回结果类型**:DOUBLE。
|
||||||
|
|
||||||
**适用数据类型**:Timestamp类型
|
**适用数据类型**:TIMESTAMP。
|
||||||
|
|
||||||
**支持的版本**:2.6.0.0 及以后的版本。
|
**支持的版本**:2.6.0.0 及以后的版本。
|
||||||
|
|
||||||
**适用于**: 表,超级表,嵌套查询的外层查询
|
**适用于**: 表,超级表,嵌套查询的外层查询
|
||||||
|
|
||||||
**说明**:
|
**说明**:
|
||||||
- field_name参数只能是表的第一列,即timestamp主键列。
|
- field_name参数只能是表的第一列,即 TIMESTAMP 类型的主键列。
|
||||||
- 按time_unit参数指定的时间单位返回,最小是数据库的时间分辨率。time_unit参数未指定时,以数据库的时间分辨率为时间单位。
|
- 按time_unit参数指定的时间单位返回,最小是数据库的时间分辨率。time_unit 参数未指定时,以数据库的时间分辨率为时间单位。支持的时间单位 time_unit 如下:
|
||||||
|
1b(纳秒), 1u(微秒),1a(毫秒),1s(秒),1m(分),1h(小时),1d(天), 1w(周)。
|
||||||
- 可以和interval组合使用,返回每个时间窗口的时间戳差值。需要特别注意的是,除第一个时间窗口和最后一个时间窗口外,中间窗口的时间戳差值均为窗口长度。
|
- 可以和interval组合使用,返回每个时间窗口的时间戳差值。需要特别注意的是,除第一个时间窗口和最后一个时间窗口外,中间窗口的时间戳差值均为窗口长度。
|
||||||
- order by asc/desc不影响差值的计算结果。
|
- order by asc/desc不影响差值的计算结果。
|
||||||
- 对于超级表,需要和group by tbname子句组合使用,不可以直接使用。
|
- 对于超级表,需要和group by tbname子句组合使用,不可以直接使用。
|
||||||
|
@ -668,11 +677,11 @@ SELECT LEASTSQUARES(field_name, start_val, step_val) FROM tb_name [WHERE clause]
|
||||||
SELECT MODE(field_name) FROM tb_name [WHERE clause];
|
SELECT MODE(field_name) FROM tb_name [WHERE clause];
|
||||||
```
|
```
|
||||||
|
|
||||||
**功能说明**:返回出现频率最高的值,若存在多个频率相同的最高值,输出空。
|
**功能说明**:返回出现频率最高的值,若存在多个频率相同的最高值,输出NULL。
|
||||||
|
|
||||||
**返回数据类型**:同应用的字段。
|
**返回数据类型**:与输入数据类型一致。
|
||||||
|
|
||||||
**适用数据类型**: 数值类型。
|
**适用数据类型**:全部类型字段。
|
||||||
|
|
||||||
**适用于**:表和超级表。
|
**适用于**:表和超级表。
|
||||||
|
|
||||||
|
@ -683,11 +692,11 @@ SELECT MODE(field_name) FROM tb_name [WHERE clause];
|
||||||
SELECT SPREAD(field_name) FROM { tb_name | stb_name } [WHERE clause];
|
SELECT SPREAD(field_name) FROM { tb_name | stb_name } [WHERE clause];
|
||||||
```
|
```
|
||||||
|
|
||||||
**功能说明**:统计表/超级表中某列的最大值和最小值之差。
|
**功能说明**:统计表中某列的最大值和最小值之差。
|
||||||
|
|
||||||
**返回数据类型**:双精度浮点数。
|
**返回数据类型**:DOUBLE。
|
||||||
|
|
||||||
**适用数据类型**:数值类型或TIMESTAMP类型。
|
**适用数据类型**:INTEGER, TIMESTAMP。
|
||||||
|
|
||||||
**适用于**:表和超级表。
|
**适用于**:表和超级表。
|
||||||
|
|
||||||
|
@ -700,7 +709,7 @@ SELECT STDDEV(field_name) FROM tb_name [WHERE clause];
|
||||||
|
|
||||||
**功能说明**:统计表中某列的均方差。
|
**功能说明**:统计表中某列的均方差。
|
||||||
|
|
||||||
**返回数据类型**:双精度浮点数 Double。
|
**返回数据类型**:DOUBLE。
|
||||||
|
|
||||||
**适用数据类型**:数值类型。
|
**适用数据类型**:数值类型。
|
||||||
|
|
||||||
|
@ -715,7 +724,7 @@ SELECT SUM(field_name) FROM tb_name [WHERE clause];
|
||||||
|
|
||||||
**功能说明**:统计表/超级表中某列的和。
|
**功能说明**:统计表/超级表中某列的和。
|
||||||
|
|
||||||
**返回数据类型**:双精度浮点数 Double 和长整型 INT64。
|
**返回数据类型**:DOUBLE, BIGINT。
|
||||||
|
|
||||||
**适用数据类型**:数值类型。
|
**适用数据类型**:数值类型。
|
||||||
|
|
||||||
|
@ -729,10 +738,10 @@ SELECT HYPERLOGLOG(field_name) FROM { tb_name | stb_name } [WHERE clause];
|
||||||
```
|
```
|
||||||
|
|
||||||
**功能说明**:
|
**功能说明**:
|
||||||
- 采用 hyperloglog 算法,返回某列的基数。该算法在数据量很大的情况下,可以明显降低内存的占用,但是求出来的基数是个估算值,标准误差(标准误差是多次实验,每次的平均数的标准差,不是与真实结果的误差)为 0.81%。
|
- 采用 hyperloglog 算法,返回某列的基数。该算法在数据量很大的情况下,可以明显降低内存的占用,求出来的基数是个估算值,标准误差(标准误差是多次实验,每次的平均数的标准差,不是与真实结果的误差)为 0.81%。
|
||||||
- 在数据量较少的时候该算法不是很准确,可以使用 select count(data) from (select unique(col) as data from table) 的方法。
|
- 在数据量较少的时候该算法不是很准确,可以使用 select count(data) from (select unique(col) as data from table) 的方法。
|
||||||
|
|
||||||
**返回结果类型**:整形。
|
**返回结果类型**:INTEGER。
|
||||||
|
|
||||||
**适用数据类型**:任何类型。
|
**适用数据类型**:任何类型。
|
||||||
|
|
||||||
|
@ -747,7 +756,7 @@ SELECT HISTOGRAM(field_name,bin_type, bin_description, normalized) FROM tb_nam
|
||||||
|
|
||||||
**功能说明**:统计数据按照用户指定区间的分布。
|
**功能说明**:统计数据按照用户指定区间的分布。
|
||||||
|
|
||||||
**返回结果类型**:如归一化参数 normalized 设置为 1,返回结果为双精度浮点类型 DOUBLE,否则为长整形 INT64。
|
**返回结果类型**:如归一化参数 normalized 设置为 1,返回结果为 DOUBLE 类型,否则为 BIGINT 类型。
|
||||||
|
|
||||||
**适用数据类型**:数值型字段。
|
**适用数据类型**:数值型字段。
|
||||||
|
|
||||||
|
@ -782,11 +791,15 @@ FROM { tb_name | stb_name } [WHERE clause]
|
||||||
|
|
||||||
**功能说明**:统计表/超级表中指定列的值的近似百分比分位数,与 PERCENTILE 函数相似,但是返回近似结果。
|
**功能说明**:统计表/超级表中指定列的值的近似百分比分位数,与 PERCENTILE 函数相似,但是返回近似结果。
|
||||||
|
|
||||||
**返回数据类型**: 双精度浮点数 Double。
|
**返回数据类型**: DOUBLE。
|
||||||
|
|
||||||
**适用数据类型**:数值类型。P值范围是[0,100],当为0时等同于MIN,为100时等同于MAX。如果不指定 algo_type 则使用默认算法 。
|
**适用数据类型**:数值类型。
|
||||||
|
|
||||||
**适用于**:表、超级表。
|
**适用于**:表和超级表。
|
||||||
|
|
||||||
|
**说明**:
|
||||||
|
- P值范围是[0,100],当为0时等同于MIN,为100时等同于MAX。
|
||||||
|
- algo_type 取值为 "default" 或 "t-digest"。 输入为 "default" 时函数使用基于直方图算法进行计算。输入为 "t-digest" 时使用t-digest算法计算分位数的近似结果。如果不指定 algo_type 则使用 "default" 算法。
|
||||||
|
|
||||||
### BOTTOM
|
### BOTTOM
|
||||||
|
|
||||||
|
@ -930,7 +943,7 @@ SELECT PERCENTILE(field_name, P) FROM { tb_name } [WHERE clause];
|
||||||
|
|
||||||
**功能说明**:统计表中某列的值百分比分位数。
|
**功能说明**:统计表中某列的值百分比分位数。
|
||||||
|
|
||||||
**返回数据类型**: 双精度浮点数 Double。
|
**返回数据类型**: DOUBLE。
|
||||||
|
|
||||||
**应用字段**:数值类型。
|
**应用字段**:数值类型。
|
||||||
|
|
||||||
|
@ -951,7 +964,7 @@ SELECT TAIL(field_name, k, offset_val) FROM {tb_name | stb_name} [WHERE clause];
|
||||||
|
|
||||||
**返回数据类型**:同应用的字段。
|
**返回数据类型**:同应用的字段。
|
||||||
|
|
||||||
**适用数据类型**:适合于除时间主列外的任何类型。
|
**适用数据类型**:适合于除时间主键列外的任何类型。
|
||||||
|
|
||||||
**适用于**:表、超级表。
|
**适用于**:表、超级表。
|
||||||
|
|
||||||
|
@ -968,7 +981,7 @@ SELECT TOP(field_name, K) FROM { tb_name | stb_name } [WHERE clause];
|
||||||
|
|
||||||
**适用数据类型**:数值类型。
|
**适用数据类型**:数值类型。
|
||||||
|
|
||||||
**适用于**:表、超级表。
|
**适用于**:表和超级表。
|
||||||
|
|
||||||
**使用说明**:
|
**使用说明**:
|
||||||
|
|
||||||
|
@ -1009,13 +1022,13 @@ SELECT CSUM(field_name) FROM { tb_name | stb_name } [WHERE clause]
|
||||||
|
|
||||||
**嵌套子查询支持**: 适用于内层查询和外层查询。
|
**嵌套子查询支持**: 适用于内层查询和外层查询。
|
||||||
|
|
||||||
**适用于**:表和超级表
|
**适用于**:表和超级表。
|
||||||
|
|
||||||
**使用说明**:
|
**使用说明**:
|
||||||
|
|
||||||
- 不支持 +、-、*、/ 运算,如 csum(col1) + csum(col2)。
|
- 不支持 +、-、*、/ 运算,如 csum(col1) + csum(col2)。
|
||||||
- 只能与聚合(Aggregation)函数一起使用。 该函数可以应用在普通表和超级表上。
|
- 只能与聚合(Aggregation)函数一起使用。 该函数可以应用在普通表和超级表上。
|
||||||
- 使用在超级表上的时候,需要搭配 Group by tbname使用,将结果强制规约到单个时间线。
|
- 使用在超级表上的时候,需要搭配 PARTITION BY tbname使用,将结果强制规约到单个时间线。
|
||||||
|
|
||||||
|
|
||||||
### DERIVATIVE
|
### DERIVATIVE
|
||||||
|
@ -1026,13 +1039,13 @@ SELECT DERIVATIVE(field_name, time_interval, ignore_negative) FROM tb_name [WHER
|
||||||
|
|
||||||
**功能说明**:统计表中某列数值的单位变化率。其中单位时间区间的长度可以通过 time_interval 参数指定,最小可以是 1 秒(1s);ignore_negative 参数的值可以是 0 或 1,为 1 时表示忽略负值。
|
**功能说明**:统计表中某列数值的单位变化率。其中单位时间区间的长度可以通过 time_interval 参数指定,最小可以是 1 秒(1s);ignore_negative 参数的值可以是 0 或 1,为 1 时表示忽略负值。
|
||||||
|
|
||||||
**返回数据类型**:双精度浮点数。
|
**返回数据类型**:DOUBLE。
|
||||||
|
|
||||||
**适用数据类型**:数值类型。
|
**适用数据类型**:数值类型。
|
||||||
|
|
||||||
**适用于**:表、超级表
|
**适用于**:表和超级表。
|
||||||
|
|
||||||
**使用说明**: DERIVATIVE 函数可以在由 GROUP BY 划分出单独时间线的情况下用于超级表(也即 GROUP BY tbname)。
|
**使用说明**: DERIVATIVE 函数可以在由 PARTITION BY 划分出单独时间线的情况下用于超级表(也即 PARTITION BY tbname)。
|
||||||
|
|
||||||
|
|
||||||
### DIFF
|
### DIFF
|
||||||
|
@ -1047,7 +1060,7 @@ SELECT {DIFF(field_name, ignore_negative) | DIFF(field_name)} FROM tb_name [WHER
|
||||||
|
|
||||||
**适用数据类型**:数值类型。
|
**适用数据类型**:数值类型。
|
||||||
|
|
||||||
**适用于**:表、超级表。
|
**适用于**:表和超级表。
|
||||||
|
|
||||||
**使用说明**: 输出结果行数是范围内总行数减一,第一行没有结果输出。
|
**使用说明**: 输出结果行数是范围内总行数减一,第一行没有结果输出。
|
||||||
|
|
||||||
|
@ -1060,11 +1073,12 @@ SELECT IRATE(field_name) FROM tb_name WHERE clause;
|
||||||
|
|
||||||
**功能说明**:计算瞬时增长率。使用时间区间中最后两个样本数据来计算瞬时增长速率;如果这两个值呈递减关系,那么只取最后一个数用于计算,而不是使用二者差值。
|
**功能说明**:计算瞬时增长率。使用时间区间中最后两个样本数据来计算瞬时增长速率;如果这两个值呈递减关系,那么只取最后一个数用于计算,而不是使用二者差值。
|
||||||
|
|
||||||
**返回数据类型**:双精度浮点数 Double。
|
**返回数据类型**:DOUBLE。
|
||||||
|
|
||||||
**适用数据类型**:数值类型。
|
**适用数据类型**:数值类型。
|
||||||
|
|
||||||
**适用于**:表、超级表。
|
**适用于**:表和超级表。
|
||||||
|
|
||||||
|
|
||||||
### MAVG
|
### MAVG
|
||||||
|
|
||||||
|
@ -1074,19 +1088,19 @@ SELECT MAVG(field_name, K) FROM { tb_name | stb_name } [WHERE clause]
|
||||||
|
|
||||||
**功能说明**: 计算连续 k 个值的移动平均数(moving average)。如果输入行数小于 k,则无结果输出。参数 k 的合法输入范围是 1≤ k ≤ 1000。
|
**功能说明**: 计算连续 k 个值的移动平均数(moving average)。如果输入行数小于 k,则无结果输出。参数 k 的合法输入范围是 1≤ k ≤ 1000。
|
||||||
|
|
||||||
**返回结果类型**: 返回双精度浮点数类型。
|
**返回结果类型**: DOUBLE。
|
||||||
|
|
||||||
**适用数据类型**: 数值类型。
|
**适用数据类型**: 数值类型。
|
||||||
|
|
||||||
**嵌套子查询支持**: 适用于内层查询和外层查询。
|
**嵌套子查询支持**: 适用于内层查询和外层查询。
|
||||||
|
|
||||||
**适用于**:表和超级表
|
**适用于**:表和超级表。
|
||||||
|
|
||||||
**使用说明**:
|
**使用说明**:
|
||||||
|
|
||||||
- 不支持 +、-、*、/ 运算,如 mavg(col1, k1) + mavg(col2, k1);
|
- 不支持 +、-、*、/ 运算,如 mavg(col1, k1) + mavg(col2, k1);
|
||||||
- 只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用;
|
- 只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用;
|
||||||
- 使用在超级表上的时候,需要搭配 Group by tbname使用,将结果强制规约到单个时间线。
|
- 使用在超级表上的时候,需要搭配 PARTITION BY tbname使用,将结果强制规约到单个时间线。
|
||||||
|
|
||||||
### SAMPLE
|
### SAMPLE
|
||||||
|
|
||||||
|
@ -1102,12 +1116,12 @@ SELECT SAMPLE(field_name, K) FROM { tb_name | stb_name } [WHERE clause]
|
||||||
|
|
||||||
**嵌套子查询支持**: 适用于内层查询和外层查询。
|
**嵌套子查询支持**: 适用于内层查询和外层查询。
|
||||||
|
|
||||||
**适用于**:表和超级表
|
**适用于**:表和超级表。
|
||||||
|
|
||||||
**使用说明**:
|
**使用说明**:
|
||||||
|
|
||||||
- 不能参与表达式计算;该函数可以应用在普通表和超级表上;
|
- 不能参与表达式计算;该函数可以应用在普通表和超级表上;
|
||||||
- 使用在超级表上的时候,需要搭配 Group by tbname 使用,将结果强制规约到单个时间线。
|
- 使用在超级表上的时候,需要搭配 PARTITION by tbname 使用,将结果强制规约到单个时间线。
|
||||||
|
|
||||||
### STATECOUNT
|
### STATECOUNT
|
||||||
|
|
||||||
|
@ -1119,10 +1133,10 @@ SELECT STATECOUNT(field_name, oper, val) FROM { tb_name | stb_name } [WHERE clau
|
||||||
|
|
||||||
**参数范围**:
|
**参数范围**:
|
||||||
|
|
||||||
- oper : LT (小于)、GT(大于)、LE(小于等于)、GE(大于等于)、NE(不等于)、EQ(等于),不区分大小写。
|
- oper : "LT" (小于)、"GT"(大于)、"LE"(小于等于)、"GE"(大于等于)、"NE"(不等于)、"EQ"(等于),不区分大小写。
|
||||||
- val : 数值型
|
- val : 数值型
|
||||||
|
|
||||||
**返回结果类型**:整形。
|
**返回结果类型**:INTEGER。
|
||||||
|
|
||||||
**适用数据类型**:数值类型。
|
**适用数据类型**:数值类型。
|
||||||
|
|
||||||
|
@ -1132,7 +1146,7 @@ SELECT STATECOUNT(field_name, oper, val) FROM { tb_name | stb_name } [WHERE clau
|
||||||
|
|
||||||
**使用说明**:
|
**使用说明**:
|
||||||
|
|
||||||
- 该函数可以应用在普通表上,在由 GROUP BY 划分出单独时间线的情况下用于超级表(也即 GROUP BY tbname)
|
- 该函数可以应用在普通表上,在由 PARTITION BY 划分出单独时间线的情况下用于超级表(也即 PARTITION BY tbname)
|
||||||
- 不能和窗口操作一起使用,例如 interval/state_window/session_window。
|
- 不能和窗口操作一起使用,例如 interval/state_window/session_window。
|
||||||
|
|
||||||
|
|
||||||
|
@ -1146,11 +1160,11 @@ SELECT stateDuration(field_name, oper, val, unit) FROM { tb_name | stb_name } [W
|
||||||
|
|
||||||
**参数范围**:
|
**参数范围**:
|
||||||
|
|
||||||
- oper : LT (小于)、GT(大于)、LE(小于等于)、GE(大于等于)、NE(不等于)、EQ(等于),不区分大小写。
|
- oper : "LT" (小于)、"GT"(大于)、"LE"(小于等于)、"GE"(大于等于)、"NE"(不等于)、"EQ"(等于),不区分大小写。
|
||||||
- val : 数值型
|
- val : 数值型
|
||||||
- unit : 时间长度的单位,范围[1s、1m、1h ],不足一个单位舍去。默认为 1s。
|
- unit : 时间长度的单位,范围[1s、1m、1h ],不足一个单位舍去。默认为 1s。
|
||||||
|
|
||||||
**返回结果类型**:整形。
|
**返回结果类型**:INTEGER。
|
||||||
|
|
||||||
**适用数据类型**:数值类型。
|
**适用数据类型**:数值类型。
|
||||||
|
|
||||||
|
@ -1160,7 +1174,7 @@ SELECT stateDuration(field_name, oper, val, unit) FROM { tb_name | stb_name } [W
|
||||||
|
|
||||||
**使用说明**:
|
**使用说明**:
|
||||||
|
|
||||||
- 该函数可以应用在普通表上,在由 GROUP BY 划分出单独时间线的情况下用于超级表(也即 GROUP BY tbname)
|
- 该函数可以应用在普通表上,在由 PARTITION BY 划分出单独时间线的情况下用于超级表(也即 PARTITION BY tbname)
|
||||||
- 不能和窗口操作一起使用,例如 interval/state_window/session_window。
|
- 不能和窗口操作一起使用,例如 interval/state_window/session_window。
|
||||||
|
|
||||||
|
|
||||||
|
@ -1172,13 +1186,13 @@ SELECT TWA(field_name) FROM tb_name WHERE clause;
|
||||||
|
|
||||||
**功能说明**:时间加权平均函数。统计表中某列在一段时间内的时间加权平均。
|
**功能说明**:时间加权平均函数。统计表中某列在一段时间内的时间加权平均。
|
||||||
|
|
||||||
**返回数据类型**:双精度浮点数 Double。
|
**返回数据类型**:DOUBLE。
|
||||||
|
|
||||||
**适用数据类型**:数值类型。
|
**适用数据类型**:数值类型。
|
||||||
|
|
||||||
**适用于**:表、超级表。
|
**适用于**:表和超级表。
|
||||||
|
|
||||||
**使用说明**: TWA 函数可以在由 GROUP BY 划分出单独时间线的情况下用于超级表(也即 GROUP BY tbname)。
|
**使用说明**: TWA 函数可以在由 PARTITION BY 划分出单独时间线的情况下用于超级表(也即 PARTITION BY tbname)。
|
||||||
|
|
||||||
|
|
||||||
## 系统信息函数
|
## 系统信息函数
|
||||||
|
|
|
@@ -40,7 +40,6 @@ ALTER ALL DNODES dnode_option

 dnode_option: {
 'resetLog'
-| 'resetQueryCache'
 | 'balance' value
 | 'monitor' value
 | 'debugFlag' value

@@ -19,7 +19,7 @@ library_path: the absolute path of the dynamic library containing the UDF implementation; it is
 OUTPUTTYPE: identifies the return type of the function.
 BUFSIZE: buffer size for intermediate results, in bytes. Defaults to 0 if not set and must not exceed 512 bytes.

-For how to develop user-defined functions, see the [UDF guide](../develop/udf).
+For how to develop user-defined functions, see the [UDF guide](../../develop/udf).
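A minimal sketch of registering UDFs with these options; the function names and library paths are illustrative and may not exist on your system:

```sql
-- Scalar UDF: no intermediate buffer needed
CREATE FUNCTION bit_and AS '/usr/local/taos/udf/libbitand.so' OUTPUTTYPE INT;

-- Aggregate UDF: reserve an 8-byte buffer for intermediate results
CREATE AGGREGATE FUNCTION l2norm AS '/usr/local/taos/udf/libl2norm.so' OUTPUTTYPE DOUBLE BUFSIZE 8;

-- Once created, a UDF is called like a built-in function
SELECT bit_and(col1, col2) FROM tb1;
```
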

 ## Drop a User-Defined Function

@@ -80,21 +80,16 @@ taos --dump-config
 | Note | Before 2.4.0.0 (exclusive) the RESTful service was provided by taosd with default port 6041; from 2.4.0.0 onward it is provided by taosAdapter, also with default port 6041 |

 :::note
-Make sure the TCP/UDP ports 6030-6042 are open between all hosts in the cluster. (See the table below for port details.)
+Make sure TCP port 6030 is open between all hosts in the cluster. (See the table below for port details.)
 :::
 | Protocol | Default Port | Purpose | How to Change |
 | :--- | :-------- | :---------------------------------- | :------------- |
-| TCP | 6030 | Communication between client and server. | Determined by serverPort in the configuration file. |
+| TCP | 6030 | Communication between client and server, and between nodes of a multi-node cluster. | Determined by serverPort in the configuration file. |
-| TCP | 6035 | Communication between nodes of a multi-node cluster. | Changes with serverPort. |
-| TCP | 6040 | Data synchronization between nodes of a multi-node cluster. | Changes with serverPort. |
 | TCP | 6041 | RESTful communication between client and server. | Changes with serverPort. Note that taosAdapter may be configured differently; see the corresponding [documentation](/reference/taosadapter/). |
-| TCP | 6042 | Service port of the Arbitrator. | Changes with the Arbitrator startup parameters. |
 | TCP | 6043 | TaosKeeper monitoring service port. | Changes with the TaosKeeper startup parameters. |
 | TCP | 6044 | Ingestion port for StatsD. | Changes with the taosAdapter startup parameters (2.3.0.1 and later). |
 | UDP | 6045 | Ingestion port for collectd. | Changes with the taosAdapter startup parameters (2.3.0.1 and later). |
 | TCP | 6060 | Network port of the Monitor service in the enterprise edition. | |
-| UDP | 6030-6034 | Communication between client and server. | Changes with serverPort. |
-| UDP | 6035-6039 | Communication between nodes of a multi-node cluster. | Changes with serverPort. |

 ### maxShellConns

@ -105,26 +100,6 @@ taos --dump-config
|
||||||
| 取值范围 | 10-50000000 |
|
| 取值范围 | 10-50000000 |
|
||||||
| 缺省值 | 5000 |
|
| 缺省值 | 5000 |
|
||||||
|
|
||||||
### maxConnections
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | ------------------------------------------------------------------------------- |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 一个数据库连接所容许的 dnode 连接数 |
|
|
||||||
| 取值范围 | 1-100000 |
|
|
||||||
| 缺省值 | 5000 |
|
|
||||||
| 补充说明 | 实际测试下来,如果默认没有配,选 50 个 worker thread 会产生 Network unavailable |
|
|
||||||
|
|
||||||
### rpcForceTcp
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | --------------------------------------------------- |
|
|
||||||
| 适用范围 | 服务端和客户端均适用 |
|
|
||||||
| 含义 | 强制使用 TCP 传输 |
|
|
||||||
| 取值范围 | 0: 不开启 1: 开启 |
|
|
||||||
| 缺省值 | 0 |
|
|
||||||
| 补充说明 | 在网络比较差的环境中,建议开启。<br/>2.0 版本新增。 |
|
|
||||||
|
|
||||||
## 监控相关
|
## 监控相关
|
||||||
|
|
||||||
### monitor
|
### monitor
|
||||||
|
@@ -132,10 +107,26 @@ taos --dump-config
 | Attribute | Description |
 | -------- | ----------- |
 | Applicable to | Server only |
-| Meaning | Switch for the server's internal system monitoring, which collects the load of the physical node, including CPU, memory, disk, network bandwidth, and HTTP request counts; the records are stored in the `LOG` database. |
+| Meaning | Switch for the server's internal system monitoring, which collects the load of the physical node, including CPU, memory, disk, and network bandwidth; the monitoring data is sent over HTTP to the TaosKeeper monitoring service specified by `monitorFqdn` and `monitorPort` |
 | Value range | 0: disable the monitoring service; 1: enable the monitoring service. |
 | Default | 1 |

+### monitorFqdn
+
+| Attribute | Description |
+| -------- | ----------- |
+| Applicable to | Server only |
+| Meaning | FQDN of the TaosKeeper monitoring service |
+| Default | None |
+
+### monitorPort
+
+| Attribute | Description |
+| -------- | ----------- |
+| Applicable to | Server only |
+| Meaning | Port number of the TaosKeeper monitoring service |
+| Default | 6043 |
+
 ### monitorInterval

 | Attribute | Description |
@@ -143,9 +134,10 @@ taos --dump-config
 | Applicable to | Server only |
 | Meaning | Interval at which the monitoring service records system parameters (CPU/memory) |
 | Unit | seconds |
-| Value range | 1-600 |
+| Value range | 1-200000 |
 | Default | 30 |

 ### telemetryReporting

 | Attribute | Description |
@ -167,19 +159,10 @@ taos --dump-config
|
||||||
| 缺省值 | 无 |
|
| 缺省值 | 无 |
|
||||||
| 补充说明 | 计算规则可以根据实际应用可能的最大并发数和表的数字相乘,再乘 170 。<br/>(2.0.15 以前的版本中,此参数的单位是字节) |
|
| 补充说明 | 计算规则可以根据实际应用可能的最大并发数和表的数字相乘,再乘 170 。<br/>(2.0.15 以前的版本中,此参数的单位是字节) |
|
||||||
|
|
||||||
### ratioOfQueryCores
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 设置查询线程的最大数量。 |
|
|
||||||
| 缺省值 | 1 |
|
|
||||||
| 补充说明 | 最小值 0 表示只有 1 个查询线程 <br/> 最大值 2 表示最大建立 2 倍 CPU 核数的查询线程。<br/>默认为 1,表示最大和 CPU 核数相等的查询线程。<br/>该值可以为小数,即 0.5 表示最大建立 CPU 核数一半的查询线程。 |
|
|
||||||
|
|
||||||
### maxNumOfDistinctRes
|
### maxNumOfDistinctRes
|
||||||
|
|
||||||
| 属性 | 说明 |
|
| 属性 | 说明 |
|
||||||
| -------- | -------------------------------- | --- |
|
| -------- | -------------------------------- |
|
||||||
| 适用范围 | 仅服务端适用 |
|
| 适用范围 | 仅服务端适用 |
|
||||||
| 含义 | 允许返回的 distinct 结果最大行数 |
|
| 含义 | 允许返回的 distinct 结果最大行数 |
|
||||||
| 取值范围 | 默认值为 10 万,最大值 1 亿 |
|
| 取值范围 | 默认值为 10 万,最大值 1 亿 |
|
||||||
|
@ -301,96 +284,6 @@ charset 的有效值是 UTF-8。
|
||||||
| 含义 | 数据文件目录,所有的数据文件都将写入该目录 |
|
| 含义 | 数据文件目录,所有的数据文件都将写入该目录 |
|
||||||
| 缺省值 | /var/lib/taos |
|
| 缺省值 | /var/lib/taos |
|
||||||
|
|
||||||
### cache
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | ------------ |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 内存块的大小 |
|
|
||||||
| 单位 | MB |
|
|
||||||
| 缺省值 | 16 |
|
|
||||||
|
|
||||||
### blocks
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | ----------------------------------------------------------------------------------------------------- |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 每个 vnode(tsdb)中有多少 cache 大小的内存块。因此一个 vnode 的用的内存大小粗略为(cache \* blocks) |
|
|
||||||
| 缺省值 | 6 |
|
|
||||||
|
|
||||||
### days
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | -------------------------- |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 数据文件存储数据的时间跨度 |
|
|
||||||
| 单位 | 天 |
|
|
||||||
| 缺省值 | 10 |
|
|
||||||
|
|
||||||
### keep
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | -------------- |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 数据保留的天数 |
|
|
||||||
| 单位 | 天 |
|
|
||||||
| 缺省值 | 3650 |
|
|
||||||
|
|
||||||
### minRows
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | ---------------------- |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 文件块中记录的最小条数 |
|
|
||||||
| 缺省值 | 100 |
|
|
||||||
|
|
||||||
### maxRows
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | ---------------------- |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 文件块中记录的最大条数 |
|
|
||||||
| 缺省值 | 4096 |
|
|
||||||
|
|
||||||
### walLevel
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | --------------------------------------------------------- |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | WAL 级别 |
|
|
||||||
| 取值范围 | 0: 不写WAL; <br/> 1:写 WAL, 但不执行 fsync <br/> 2:写 WAL, 而且执行 fsync |
|
|
||||||
| 缺省值 | 1 |
|
|
||||||
|
|
||||||
### fsync
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | -------------------------------------------------------------------- |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 当 WAL 设置为 2 时,执行 fsync 的周期 |
|
|
||||||
| 单位 | 毫秒 |
|
|
||||||
| 取值范围 | 最小为 0,表示每次写入,立即执行 fsync <br/> 最大为 180000(三分钟) |
|
|
||||||
| 缺省值 | 3000 |
|
|
||||||
|
|
||||||
### update
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | ---------------------------------------------------------------------------------------------------------------------------- |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 允许更新已存在的数据行 |
|
|
||||||
| 取值范围 | 0:不允许更新 <br/> 1:允许整行更新 <br/> 2:允许部分列更新。(2.1.7.0 版本开始此参数支持设为 2,在此之前取值只能是 [0, 1]) |
|
|
||||||
| 缺省值 | 0 |
|
|
||||||
| 补充说明 | 2.0.8.0 版本之前,不支持此参数。 |
|
|
||||||
|
|
||||||
### cacheLast
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 是否在内存中缓存子表的最近数据 |
|
|
||||||
| 取值范围 | 0:关闭 <br/> 1:缓存子表最近一行数据 <br/> 2:缓存子表每一列的最近的非 NULL 值 <br/> 3:同时打开缓存最近行和列功能。(2.1.2.0 版本开始此参数支持 0 ~ 3 的取值范围,在此之前取值只能是 [0, 1]) |
|
|
||||||
| 缺省值 | 0 |
|
|
||||||
| 补充说明 | 2.1.2.0 版本之前、2.0.20.7 版本之前在 taos.cfg 文件中不支持此参数。 |
|
|
||||||
|
|
||||||
### minimalTmpDirGB
|
### minimalTmpDirGB
|
||||||
|
|
||||||
| 属性 | 说明 |
|
| 属性 | 说明 |
|
||||||
|
@ -409,110 +302,19 @@ charset 的有效值是 UTF-8。
|
||||||
| 单位 | GB |
|
| 单位 | GB |
|
||||||
| 缺省值 | 2.0 |
|
| 缺省值 | 2.0 |
|
||||||
|
|
||||||
### vnodeBak
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | -------------------------------- |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 删除 vnode 时是否备份 vnode 目录 |
|
|
||||||
| 取值范围 | 0:否,1:是 |
|
|
||||||
| 缺省值 | 1 |
|
|
||||||
|
|
||||||
## 集群相关
|
## 集群相关
|
||||||
|
|
||||||
### numOfMnodes
|
### supportVnodes
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | ------------------ |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 系统中管理节点个数 |
|
|
||||||
| 缺省值 | 3 |
|
|
||||||
|
|
||||||
### replica
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | ------------ |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 副本个数 |
|
|
||||||
| 取值范围 | 1-3 |
|
|
||||||
| 缺省值 | 1 |
|
|
||||||
|
|
||||||
### quorum
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | -------------------------------- |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 多副本环境下指令执行的确认数要求 |
|
|
||||||
| 取值范围 | 1,2 |
|
|
||||||
| 缺省值 | 1 |
|
|
||||||
|
|
||||||
### role
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
| 属性 | 说明 |
|
||||||
| -------- | ----------------------------------------------------------------------------------------------------------------------------------------- |
|
| -------- | ----------------------------------------------------------------------------------------------------------------------------------------- |
|
||||||
| 适用范围 | 仅服务端适用 |
|
| 适用范围 | 仅服务端适用 |
|
||||||
| 含义 | dnode 的可选角色 |
|
| 含义 | dnode 支持的最大 vnode 数目 |
|
||||||
| 取值范围 | 0:any(既可作为 mnode,也可分配 vnode) <br/> 1:mgmt(只能作为 mnode,不能分配 vnode) <br/> 2:dnode(不能作为 mnode,只能分配 vnode) |
|
| 取值范围 | 0-4096 |
|
||||||
| 缺省值 | 0 |
|
| 缺省值 | 256 |
|
||||||
|
|
||||||
### balance
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | ---------------- |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 是否启动负载均衡 |
|
|
||||||
| 取值范围 | 0,1 |
|
|
||||||
| 缺省值 | 1 |
|
|
||||||
|
|
||||||
### balanceInterval
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | ------------------------------------------------ |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 管理节点在正常运行状态下,检查负载均衡的时间间隔 |
|
|
||||||
| 单位 | 秒 |
|
|
||||||
| 取值范围 | 1-30000 |
|
|
||||||
| 缺省值 | 300 |
|
|
||||||
|
|
||||||
### arbitrator
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | ------------------------------------------ |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 系统中裁决器的 endpoint,其格式如 firstEp |
|
|
||||||
| 缺省值 | 空 |
|
|
||||||
|
|
||||||
## 时间相关
|
## 时间相关
|
||||||
|
|
||||||
### precision
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | ------------------------------------------------- |
|
|
||||||
| 适用范围 | 仅服务端 |
|
|
||||||
| 含义 | 创建数据库时使用的时间精度 |
|
|
||||||
| 取值范围 | ms: millisecond; us: microsecond ; ns: nanosecond |
|
|
||||||
| 缺省值 | ms |
|
|
||||||
|
|
||||||
### rpcTimer
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | -------------------- |
|
|
||||||
| 适用范围 | 服务端和客户端均适用 |
|
|
||||||
| 含义 | rpc 重试时长 |
|
|
||||||
| 单位 | 毫秒 |
|
|
||||||
| 取值范围 | 100-3000 |
|
|
||||||
| 缺省值 | 300 |
|
|
||||||
|
|
||||||
### rpcMaxTime
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | -------------------- |
|
|
||||||
| 适用范围 | 服务端和客户端均适用 |
|
|
||||||
| 含义 | rpc 等待应答最大时长 |
|
|
||||||
| 单位 | 秒 |
|
|
||||||
| 取值范围 | 100-7200 |
|
|
||||||
| 缺省值 | 600 |
|
|
||||||
|
|
||||||
### statusInterval
|
### statusInterval
|
||||||
|
|
||||||
| 属性 | 说明 |
|
| 属性 | 说明 |
|
||||||
|
@ -533,105 +335,8 @@ charset 的有效值是 UTF-8。
|
||||||
| 取值范围 | 1-120 |
|
| 取值范围 | 1-120 |
|
||||||
| 缺省值 | 3 |
|
| 缺省值 | 3 |
|
||||||
|
|
||||||
### tableMetaKeepTimer
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | --------------------- |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 表的元数据 cache 时长 |
|
|
||||||
| 单位 | 秒 |
|
|
||||||
| 取值范围 | 1-8640000 |
|
|
||||||
| 缺省值 | 7200 |
|
|
||||||
|
|
||||||
### maxTmrCtrl
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | -------------------- |
|
|
||||||
| 适用范围 | 服务端和客户端均适用 |
|
|
||||||
| 含义 | 定时器个数 |
|
|
||||||
| 单位 | 个 |
|
|
||||||
| 取值范围 | 8-2048 |
|
|
||||||
| 缺省值 | 512 |
|
|
||||||
|
|
||||||
### offlineThreshold
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | ------------------------------------------- |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | dnode 离线阈值,超过该时间将导致 dnode 离线 |
|
|
||||||
| 单位 | 秒 |
|
|
||||||
| 取值范围 | 5-7200000 |
|
|
||||||
| 缺省值 | 86400\*10(10 天) |
|
|
||||||
|
|
||||||
## 性能调优
|
## 性能调优
|
||||||
|
|
||||||
### numOfThreadsPerCore
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | ----------------------------------- |
|
|
||||||
| 适用范围 | 服务端和客户端均适用 |
|
|
||||||
| 含义 | 每个 CPU 核生成的队列消费者线程数量 |
|
|
||||||
| 缺省值 | 1.0 |
|
|
||||||
|
|
||||||
### ratioOfQueryThreads
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | ------------------------------------------------------------------------------------------------------------------- |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 设置查询线程的最大数量 |
|
|
||||||
| 取值范围 | 0:表示只有 1 个查询线程 <br/> 1:表示最大和 CPU 核数相等的查询线程 <br/> 2:表示最大建立 2 倍 CPU 核数的查询线程。 |
|
|
||||||
| 缺省值 | 1 |
|
|
||||||
| 补充说明 | 该值可以为小数,即 0.5 表示最大建立 CPU 核数一半的查询线程。 |
|
|
||||||
|
|
||||||
### maxVgroupsPerDb
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | ------------------------------------ |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 每个 DB 中 能够使用的最大 vnode 个数 |
|
|
||||||
| 取值范围 | 0-8192 |
|
|
||||||
| 缺省值 | 0 |
|
|
||||||
|
|
||||||
### maxTablesPerVnode
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | --------------------------------- |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 每个 vnode 中能够创建的最大表个数 |
|
|
||||||
| 缺省值 | 1000000 |
|
|
||||||
|
|
||||||
### minTablesPerVnode
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | --------------------------------- |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 每个 vnode 中必须创建表的最小数量 |
|
|
||||||
| 缺省值 | 1000 |
|
|
||||||
|
|
||||||
### tableIncStepPerVnode
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | ------------------------------------------------------------- |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 每个 vnode 中超过最小表数,i.e. minTablesPerVnode, 后递增步长 |
|
|
||||||
| 缺省值 | 1000 |
|
|
||||||
|
|
||||||
### maxNumOfOrderedRes
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | -------------------------------------- |
|
|
||||||
| 适用范围 | 服务端和客户端均适用 |
|
|
||||||
| 含义 | 支持超级表时间排序允许的最多记录数限制 |
|
|
||||||
| 缺省值 | 10 万 |
|
|
||||||
|
|
||||||
### mnodeEqualVnodeNum
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | ------------------------------------ |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 将一个 mnode 等同于 vnode 消耗的个数 |
|
|
||||||
| 缺省值 | 4 |
|
|
||||||
|
|
||||||
### numOfCommitThreads
|
### numOfCommitThreads
|
||||||
|
|
||||||
| 属性 | 说明 |
|
| 属性 | 说明 |
|
||||||
|
@ -642,23 +347,6 @@ charset 的有效值是 UTF-8。
|
||||||
|
|
||||||
## 压缩相关
|
## 压缩相关
|
||||||
|
|
||||||
### comp
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | ----------------------------------- |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 文件压缩标志位 |
|
|
||||||
| 取值范围 | 0:关闭,1:一阶段压缩,2:两阶段压缩 |
|
|
||||||
| 缺省值 | 2 |
|
|
||||||
|
|
||||||
### tsdbMetaCompactRatio
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | -------------------------------------------------------------- |
|
|
||||||
| 含义 | tsdb meta 文件中冗余数据超过多少阈值,开启 meta 文件的压缩功能 |
|
|
||||||
| 取值范围 | 0:不开启,[1-100]:冗余数据比例 |
|
|
||||||
| 缺省值 | 0 |
|
|
||||||
|
|
||||||
### compressMsgSize
|
### compressMsgSize
|
||||||
|
|
||||||
| 属性 | 说明 |
|
| 属性 | 说明 |
|
||||||
|
@ -710,135 +398,6 @@ charset 的有效值是 UTF-8。
|
||||||
| 缺省值 | 0.0000000000000001 |
|
| 缺省值 | 0.0000000000000001 |
|
||||||
| 补充说明 | 小于此值的浮点数尾数部分将被截取 |
|
| 补充说明 | 小于此值的浮点数尾数部分将被截取 |
|
||||||
|
|
||||||
## 连续查询相关
|
|
||||||
|
|
||||||
### stream
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | ------------------------------ |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 是否启用连续查询(流计算功能) |
|
|
||||||
| 取值范围 | 0:不允许 <br/> 1:允许 |
|
|
||||||
| 缺省值 | 1 |
|
|
||||||
|
|
||||||
### minSlidingTime
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | ----------------------------------- |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 最小滑动窗口时长 |
|
|
||||||
| 单位 | 毫秒 |
|
|
||||||
| 取值范围 | 10-1000000 |
|
|
||||||
| 缺省值 | 10 |
|
|
||||||
| 补充说明 | 数据库时间精度为 us 时,该最小值相应为 1us。 |
|
|
||||||
|
|
||||||
### minIntervalTime
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | -------------- |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 时间窗口最小值 |
|
|
||||||
| 单位 | 毫秒 |
|
|
||||||
| 取值范围 | 1-1000000 |
|
|
||||||
| 缺省值 | 10 |
|
|
||||||
|
|
||||||
### maxStreamCompDelay
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | -------------------- |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 连续查询启动最大延迟 |
|
|
||||||
| 单位 | 毫秒 |
|
|
||||||
| 取值范围 | 10-1000000000 |
|
|
||||||
| 缺省值 | 20000 |
|
|
||||||
|
|
||||||
### maxFirstStreamCompDelay
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | -------------------------- |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 第一次连续查询启动最大延迟 |
|
|
||||||
| 单位 | 毫秒 |
|
|
||||||
| 取值范围 | 10-1000000000 |
|
|
||||||
| 缺省值 | 10000 |
|
|
||||||
|
|
||||||
### retryStreamCompDelay
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | -------------------- |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 连续查询重试等待间隔 |
|
|
||||||
| 单位 | 毫秒 |
|
|
||||||
| 取值范围 | 10-1000000000 |
|
|
||||||
| 缺省值 | 10 |
|
|
||||||
|
|
||||||
### streamCompDelayRatio
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | ---------------------------------------------------------------- |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 连续查询的延迟时间计算系数,实际延迟时间为本参数乘以计算时间窗口 |
|
|
||||||
| 取值范围 | 0.1-0.9 |
|
|
||||||
| 缺省值 | 0.1 |
|
|
||||||
|
|
||||||
:::info
|
|
||||||
为避免多个 stream 同时执行占用太多系统资源,程序中对 stream 的执行时间人为增加了一些随机的延时。<br/>maxFirstStreamCompDelay 是 stream 第一次执行前最少要等待的时间。<br/>streamCompDelayRatio 是延迟时间的计算系数,它乘以查询的 interval 后为延迟时间基准。<br/>maxStreamCompDelay 是延迟时间基准的上限。<br/>实际延迟时间为一个不超过延迟时间基准的随机值。<br/>stream 某次计算失败后需要重试,retryStreamCompDelay 是重试的等待时间基准。<br/>实际重试等待时间为不超过等待时间基准的随机值。
|
|
||||||
|
|
||||||
:::
|
|
||||||
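按照上面的说明可以做一个简单的推算示例(假设某个 stream 的 interval 为 60 秒,其余参数取上述缺省值;随机延时的具体分布以实现为准):

$$
\text{延迟时间基准} = \text{streamCompDelayRatio} \times \text{interval} = 0.1 \times 60000\ \text{ms} = 6000\ \text{ms}
$$

由于 6000 ms 未超过 maxStreamCompDelay(20000 ms),实际延迟即为一个不超过 6000 ms 的随机值;若某次计算失败,重试等待时间则为不超过 retryStreamCompDelay(缺省 10 ms)的随机值。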
|
|
||||||
## HTTP 相关
|
|
||||||
|
|
||||||
:::note
|
|
||||||
HTTP 服务在 2.4.0.0(不含)以前的版本中由 taosd 提供,在 2.4.0.0 以后(含)由 taosAdapter 提供。
|
|
||||||
本节的配置参数仅在 2.4.0.0(不含)以前的版本中生效。如果您使用的是 2.4.0.0(含)及以后的版本请参考[文档](/reference/taosadapter/)。
|
|
||||||
|
|
||||||
:::
|
|
||||||
|
|
||||||
### http
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | --------------------------------------- |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 服务器内部的 http 服务开关。 |
|
|
||||||
| 取值范围 | 0:关闭 http 服务, 1:激活 http 服务。 |
|
|
||||||
| 缺省值 | 1 |
|
|
||||||
|
|
||||||
### httpEnableRecordSql
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | --------------------------------------------------------------- |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 记录通过 RESTful 接口产生的 SQL 调用。 |
|
|
||||||
| 缺省值 | 0 |
|
|
||||||
| 补充说明 | 生成的文件(httpnote.0/httpnote.1)与服务端日志位于相同目录。 |
|
|
||||||
|
|
||||||
### httpMaxThreads
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | ------------------------------------------------------------------------------------------- |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | RESTful 接口的线程数。taosAdapter 配置或有不同,请参考相应[文档](/reference/taosadapter/)。 |
|
|
||||||
| 缺省值 | 2 |
|
|
||||||
|
|
||||||
### restfulRowLimit
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | ----------------------------------------------------------------------------------------------------- |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | RESTful 接口单次返回的记录条数。taosAdapter 配置或有不同,请参考相应[文档](/reference/taosadapter/)。 |
|
|
||||||
| 缺省值 | 10240 |
|
|
||||||
| 补充说明 | 最大 10,000,000 |
|
|
||||||
|
|
||||||
### httpDBNameMandatory
|
|
||||||
|
|
||||||
| 属性 | 说明 |
|
|
||||||
| -------- | ---------------------------- |
|
|
||||||
| 适用范围 | 仅服务端适用 |
|
|
||||||
| 含义 | 是否必须在 URL 中指定数据库名称 |
|
|
||||||
| 取值范围 | 0:不开启,1:开启 |
|
|
||||||
| 缺省值 | 0 |
|
|
||||||
| 补充说明 | 2.3 版本新增。 |
|
|
||||||
|
|
||||||
## 日志相关
|
## 日志相关
|
||||||
|
|
||||||
### logDir
|
### logDir
|
||||||
|
|
|
@ -10,7 +10,7 @@ import TabItem from "@theme/TabItem";
|
||||||
|
|
||||||
## 安装
|
## 安装
|
||||||
|
|
||||||
关于安装,请参考 [使用安装包立即开始](../get-started/package)
|
关于安装,请参考 [使用安装包立即开始](../../get-started/package)
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
|
@ -7,7 +7,7 @@ title: 容量规划
|
||||||
|
|
||||||
## 服务端内存需求
|
## 服务端内存需求
|
||||||
|
|
||||||
每个 Database 可以创建固定数目的 vgroup,默认 2 个 vgroup,在创建数据库时可以通过`vgroups <num>`参数来指定,其副本数由参数`replica <num>`指定。vgroup 中的每个副本会是一个 vnode;所以每个 vnode 占用的内存由以下几个参数决定:
|
每个 Database 可以创建固定数目的 vgroup,默认 2 个 vgroup,在创建数据库时可以通过`vgroups <num>`参数来指定,其副本数由参数`replica <num>`指定。vgroup 中的每个副本会是一个 vnode;所以每个数据库占用的内存由以下几个参数决定:
|
||||||
|
|
||||||
- vgroups
|
- vgroups
|
||||||
- replica
|
- replica
|
||||||
|
@ -16,12 +16,15 @@ title: 容量规划
|
||||||
- pagesize
|
- pagesize
|
||||||
- cachesize
|
- cachesize
|
||||||
|
|
||||||
关于这些参数的详细说明请参考 [数据库管理](../taos-sql/database)。
|
关于这些参数的详细说明请参考 [数据库管理](../../taos-sql/database)。
|
||||||
|
|
||||||
所以一个数据库所需要的内存大小等于 `vgroups * replica * (buffer + pages * pagesize + cachesize)`。
|
一个数据库所需要的内存大小等于
|
||||||
|
|
||||||
但要注意的是这些内存并不需要由单一服务器提供,而是由整个集群中所有数据节点共同负担,相当于由这些数据节点所在的服务器共同负担。如果集群中有不止一个数据库,则所需内存要累加,并由集群中所有服务器共同负担。更复杂的场景是如果集群中的数据节点并非在最初就一次性全部建立,而是随着使用中系统负载的增加服务器并增加数据节点,则新创建的数据库会导致新旧数据节点上的负载并不均衡,此时简单的理论计算并不能直接地简单使用,要结合各数据节点的负载情况。
|
```
|
||||||
|
vgroups * replica * (buffer + pages * pagesize + cachesize)
|
||||||
|
```
|
||||||
|
|
||||||
|
但要注意的是这些内存并不需要由单一服务器提供,而是由整个集群中所有数据节点共同负担,相当于由这些数据节点所在的服务器共同负担。如果集群中有不止一个数据库,则所需内存要累加。更复杂的场景是如果集群中的数据节点并非在最初就一次性全部建立,而是随着使用中系统负载的增加逐步增加服务器并增加数据节点,则新创建的数据库会导致新旧数据节点上的负载并不均衡,此时简单的理论计算并不能直接使用,要结合各数据节点的负载情况。
|
||||||
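下面给出一个简单的估算示例(其中的取值仅为示意,并非各参数的缺省值,实际请按建库时指定的参数代入):假设 vgroups = 2、replica = 3、buffer = 96 MB、pages = 256、pagesize = 4 KB、cachesize = 1 MB,则该数据库在整个集群范围内大约需要:

$$
2 \times 3 \times \left(96 + \frac{256 \times 4}{1024} + 1\right)\ \text{MB} = 588\ \text{MB}
$$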
|
|
||||||
## 客户端内存需求
|
## 客户端内存需求
|
||||||
|
|
||||||
|
@ -48,7 +51,7 @@ CPU 的需求取决于如下两方面:
|
||||||
- **数据插入** TDengine 单核每秒能至少处理一万个插入请求。每个插入请求可以带多条记录,一次插入一条记录与插入 10 条记录,消耗的计算资源差别很小。因此每次插入的记录条数越多,插入效率越高。如果一个插入请求带 200 条以上记录,单核就能达到每秒插入 100 万条记录的速度。但这对前端数据采集的要求也就越高,因为需要先缓存记录,然后批量插入。
|
- **数据插入** TDengine 单核每秒能至少处理一万个插入请求。每个插入请求可以带多条记录,一次插入一条记录与插入 10 条记录,消耗的计算资源差别很小。因此每次插入的记录条数越多,插入效率越高。如果一个插入请求带 200 条以上记录,单核就能达到每秒插入 100 万条记录的速度。但这对前端数据采集的要求也就越高,因为需要先缓存记录,然后批量插入。
|
||||||
- **查询需求** TDengine 提供高效的查询,但是每个场景的查询差异很大,查询频次变化也很大,难以给出客观数字。需要用户针对自己的场景,写一些查询语句,才能确定。
|
- **查询需求** TDengine 提供高效的查询,但是每个场景的查询差异很大,查询频次变化也很大,难以给出客观数字。需要用户针对自己的场景,写一些查询语句,才能确定。
|
||||||
|
|
||||||
因此仅对数据插入而言,CPU 是可以估算出来的,但查询所耗的计算资源无法估算。在实际运营过程中,不建议 CPU 使用率超过 50%,超过后,需要增加新的节点,以获得更多计算资源。
|
因此仅对数据插入而言,CPU 是可以估算出来的,但查询所耗的计算资源无法估算。在实际运行过程中,不建议 CPU 使用率超过 50%,超过后,需要增加新的节点,以获得更多计算资源。
|
||||||
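这里给出一个粗略的估算示例(假设写入速率为每秒 100 万条记录、每个插入请求携带 200 条记录,结果仅为数量级示意):

$$
\frac{1\,000\,000 \div 200}{10\,000} = 0.5\ \text{核}
$$

再结合上文 CPU 使用率不超过 50% 的建议,仅写入部分大约需要预留 1 个核。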
|
|
||||||
## 存储需求
|
## 存储需求
|
||||||
|
|
||||||
|
|
|
@ -4,16 +4,16 @@ title: 容错和灾备
|
||||||
|
|
||||||
## 容错
|
## 容错
|
||||||
|
|
||||||
TDengine 支持**WAL**(Write Ahead Log)机制,实现数据的容错能力,保证数据的高可用。
|
TDengine 支持 **WAL**(Write Ahead Log)机制,实现数据的容错能力,保证数据的高可用。
|
||||||
|
|
||||||
TDengine 接收到应用的请求数据包时,先将请求的原始数据包写入数据库日志文件,等数据成功写入数据库数据文件后,再删除相应的 WAL。这样保证了 TDengine 能够在断电等因素导致的服务重启时从数据库日志文件中恢复数据,避免数据的丢失。
|
TDengine 接收到应用的请求数据包时,先将请求的原始数据包写入数据库日志文件,等数据成功写入数据库数据文件后,再删除相应的 WAL。这样保证了 TDengine 能够在断电等因素导致的服务重启时从数据库日志文件中恢复数据,避免数据的丢失。
|
||||||
|
|
||||||
涉及的系统配置参数有两个:
|
涉及的系统配置参数有两个:
|
||||||
|
|
||||||
- wal_level:WAL 级别,1:写WAL,但不执行fsync。2:写WAL,而且执行fsync。默认值为 1。
|
- wal_level:WAL 级别,1:写 WAL,但不执行 fsync。2:写 WAL,而且执行 fsync。默认值为 1。
|
||||||
- wal_fsync_period:当 wal_evel 设置为 2 时,执行 fsync 的周期。设置为 0,表示每次写入,立即执行 fsync。
|
- wal_fsync_period:当 wal_level 设置为 2 时,执行 fsync 的周期。设置为 0,表示每次写入,立即执行 fsync。
|
||||||
|
|
||||||
如果要 100%的保证数据不丢失,需要将 wal_level 设置为 2,fsync 设置为 0。这时写入速度将会下降。但如果应用侧启动的写数据的线程数达到一定的数量(超过 50),那么写入数据的性能也会很不错,只会比 fsync 设置为 3000 毫秒下降 30%左右。
|
如果要 100%的保证数据不丢失,需要将 wal_level 设置为 2,wal_fsync_period 设置为 0。这时写入速度将会下降。但如果应用侧启动的写数据的线程数达到一定的数量(超过 50),那么写入数据的性能也会很不错,只会比 wal_fsync_period 设置为 3000 毫秒下降 30%左右。
|
||||||
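下面是一个简化的示意代码(基于 TDengine 的 C 连接器;其中的数据库名 power、账号和连接参数均为假设值,重点是 WAL_LEVEL 与 WAL_FSYNC_PERIOD 两个建库参数的用法):

```c
#include <stdio.h>
#include <stdlib.h>
#include "taos.h"

int main() {
  // 连接参数仅为示例,请按实际环境修改
  TAOS *conn = taos_connect("localhost", "root", "taosdata", NULL, 6030);
  if (conn == NULL) {
    fprintf(stderr, "failed to connect to TDengine\n");
    return EXIT_FAILURE;
  }

  // wal_level 设为 2 且 wal_fsync_period 设为 0:
  // 每次写入都先写 WAL 并立即执行 fsync,以最大程度保证数据不丢失
  TAOS_RES *res = taos_query(conn, "CREATE DATABASE IF NOT EXISTS power WAL_LEVEL 2 WAL_FSYNC_PERIOD 0");
  if (taos_errno(res) != 0) {
    fprintf(stderr, "create database failed: %s\n", taos_errstr(res));
  }
  taos_free_result(res);
  taos_close(conn);
  return 0;
}
```

如对写入吞吐更敏感,可将 WAL_FSYNC_PERIOD 改为一个较大的周期(例如 3000 毫秒),在数据安全与性能之间折中。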
|
|
||||||
## 灾备
|
## 灾备
|
||||||
|
|
||||||
|
@ -27,4 +27,4 @@ TDengine 集群的节点数必须大于等于副本数,否则创建表时将
|
||||||
|
|
||||||
当 TDengine 集群中的节点部署在不同的物理机上,并设置多个副本数时,就实现了系统的高可靠性,无需再使用其他软件或工具。TDengine 企业版还可以将副本部署在不同机房,从而实现异地容灾。
|
当 TDengine 集群中的节点部署在不同的物理机上,并设置多个副本数时,就实现了系统的高可靠性,无需再使用其他软件或工具。TDengine 企业版还可以将副本部署在不同机房,从而实现异地容灾。
|
||||||
|
|
||||||
另外一种灾备方式是通过 `taosX` 将一个 TDengine 集群的数据同步复制到物理上位于不同数据中心的另一个 TDengine 集群。其详细使用方法请参考 [taosX 参考手册](../reference/taosX)
|
另外一种灾备方式是通过 `taosX` 将一个 TDengine 集群的数据同步复制到物理上位于不同数据中心的另一个 TDengine 集群。其详细使用方法请参考 [taosX 参考手册](../../reference/taosX)
|
||||||
|
|
|
@ -102,11 +102,6 @@ extern int32_t tsQuerySmaOptimize;
|
||||||
// client
|
// client
|
||||||
extern int32_t tsMinSlidingTime;
|
extern int32_t tsMinSlidingTime;
|
||||||
extern int32_t tsMinIntervalTime;
|
extern int32_t tsMinIntervalTime;
|
||||||
extern int32_t tsMaxStreamComputDelay;
|
|
||||||
extern int32_t tsStreamCompStartDelay;
|
|
||||||
extern int32_t tsRetryStreamCompDelay;
|
|
||||||
extern float tsStreamComputDelayRatio; // the delayed computing ration of the whole time window
|
|
||||||
extern int64_t tsMaxRetentWindow;
|
|
||||||
|
|
||||||
// build info
|
// build info
|
||||||
extern char version[];
|
extern char version[];
|
||||||
|
|
|
@ -77,11 +77,11 @@ typedef struct {
|
||||||
} SWalSyncInfo;
|
} SWalSyncInfo;
|
||||||
|
|
||||||
typedef struct {
|
typedef struct {
|
||||||
int8_t protoVer;
|
|
||||||
int64_t version;
|
int64_t version;
|
||||||
int16_t msgType;
|
int64_t ingestTs;
|
||||||
int32_t bodyLen;
|
int32_t bodyLen;
|
||||||
int64_t ingestTs; // not implemented
|
int16_t msgType;
|
||||||
|
int8_t protoVer;
|
||||||
|
|
||||||
// sync meta
|
// sync meta
|
||||||
SWalSyncInfo syncMeta;
|
SWalSyncInfo syncMeta;
|
||||||
|
|
|
@ -118,20 +118,6 @@ int32_t tsMaxNumOfDistinctResults = 1000 * 10000;
|
||||||
// 1 database precision unit for interval time range, changed accordingly
|
// 1 database precision unit for interval time range, changed accordingly
|
||||||
int32_t tsMinIntervalTime = 1;
|
int32_t tsMinIntervalTime = 1;
|
||||||
|
|
||||||
// 20sec, the maximum value of stream computing delay, changed accordingly
|
|
||||||
int32_t tsMaxStreamComputDelay = 20000;
|
|
||||||
|
|
||||||
// 10sec, the first stream computing delay time after system launched successfully, changed accordingly
|
|
||||||
int32_t tsStreamCompStartDelay = 10000;
|
|
||||||
|
|
||||||
// the stream computing delay time after executing failed, change accordingly
|
|
||||||
int32_t tsRetryStreamCompDelay = 10 * 1000;
|
|
||||||
|
|
||||||
// The delayed computing ration. 10% of the whole computing time window by default.
|
|
||||||
float tsStreamComputDelayRatio = 0.1f;
|
|
||||||
|
|
||||||
int64_t tsMaxRetentWindow = 24 * 3600L; // maximum time window tolerance
|
|
||||||
|
|
||||||
// the maximum allowed query buffer size during query processing for each data node.
|
// the maximum allowed query buffer size during query processing for each data node.
|
||||||
// -1 no limit (default)
|
// -1 no limit (default)
|
||||||
// 0 no query allowed, queries are disabled
|
// 0 no query allowed, queries are disabled
|
||||||
|
@ -330,7 +316,7 @@ static int32_t taosAddClientCfg(SConfig *pCfg) {
|
||||||
if (cfgAddString(pCfg, "fqdn", defaultFqdn, 1) != 0) return -1;
|
if (cfgAddString(pCfg, "fqdn", defaultFqdn, 1) != 0) return -1;
|
||||||
if (cfgAddInt32(pCfg, "serverPort", defaultServerPort, 1, 65056, 1) != 0) return -1;
|
if (cfgAddInt32(pCfg, "serverPort", defaultServerPort, 1, 65056, 1) != 0) return -1;
|
||||||
if (cfgAddDir(pCfg, "tempDir", tsTempDir, 1) != 0) return -1;
|
if (cfgAddDir(pCfg, "tempDir", tsTempDir, 1) != 0) return -1;
|
||||||
if (cfgAddFloat(pCfg, "minimalTempDirGB", 1.0f, 0.001f, 10000000, 1) != 0) return -1;
|
if (cfgAddFloat(pCfg, "minimalTmpDirGB", 1.0f, 0.001f, 10000000, 1) != 0) return -1;
|
||||||
if (cfgAddInt32(pCfg, "shellActivityTimer", tsShellActivityTimer, 1, 120, 1) != 0) return -1;
|
if (cfgAddInt32(pCfg, "shellActivityTimer", tsShellActivityTimer, 1, 120, 1) != 0) return -1;
|
||||||
if (cfgAddInt32(pCfg, "compressMsgSize", tsCompressMsgSize, -1, 100000000, 1) != 0) return -1;
|
if (cfgAddInt32(pCfg, "compressMsgSize", tsCompressMsgSize, -1, 100000000, 1) != 0) return -1;
|
||||||
if (cfgAddInt32(pCfg, "compressColData", tsCompressColData, -1, 100000000, 1) != 0) return -1;
|
if (cfgAddInt32(pCfg, "compressColData", tsCompressColData, -1, 100000000, 1) != 0) return -1;
|
||||||
|
@ -383,10 +369,6 @@ static int32_t taosAddServerCfg(SConfig *pCfg) {
|
||||||
if (cfgAddInt32(pCfg, "minIntervalTime", tsMinIntervalTime, 1, 1000000, 0) != 0) return -1;
|
if (cfgAddInt32(pCfg, "minIntervalTime", tsMinIntervalTime, 1, 1000000, 0) != 0) return -1;
|
||||||
if (cfgAddInt32(pCfg, "maxNumOfDistinctRes", tsMaxNumOfDistinctResults, 10 * 10000, 10000 * 10000, 0) != 0) return -1;
|
if (cfgAddInt32(pCfg, "maxNumOfDistinctRes", tsMaxNumOfDistinctResults, 10 * 10000, 10000 * 10000, 0) != 0) return -1;
|
||||||
if (cfgAddInt32(pCfg, "countAlwaysReturnValue", tsCountAlwaysReturnValue, 0, 1, 0) != 0) return -1;
|
if (cfgAddInt32(pCfg, "countAlwaysReturnValue", tsCountAlwaysReturnValue, 0, 1, 0) != 0) return -1;
|
||||||
if (cfgAddInt32(pCfg, "maxStreamCompDelay", tsMaxStreamComputDelay, 10, 1000000000, 0) != 0) return -1;
|
|
||||||
if (cfgAddInt32(pCfg, "maxFirstStreamCompDelay", tsStreamCompStartDelay, 1000, 1000000000, 0) != 0) return -1;
|
|
||||||
if (cfgAddInt32(pCfg, "retryStreamCompDelay", tsRetryStreamCompDelay, 10, 1000000000, 0) != 0) return -1;
|
|
||||||
if (cfgAddFloat(pCfg, "streamCompDelayRatio", tsStreamComputDelayRatio, 0.1, 0.9, 0) != 0) return -1;
|
|
||||||
if (cfgAddInt32(pCfg, "queryBufferSize", tsQueryBufferSize, -1, 500000000000, 0) != 0) return -1;
|
if (cfgAddInt32(pCfg, "queryBufferSize", tsQueryBufferSize, -1, 500000000000, 0) != 0) return -1;
|
||||||
if (cfgAddBool(pCfg, "retrieveBlockingModel", tsRetrieveBlockingModel, 0) != 0) return -1;
|
if (cfgAddBool(pCfg, "retrieveBlockingModel", tsRetrieveBlockingModel, 0) != 0) return -1;
|
||||||
if (cfgAddBool(pCfg, "printAuth", tsPrintAuth, 0) != 0) return -1;
|
if (cfgAddBool(pCfg, "printAuth", tsPrintAuth, 0) != 0) return -1;
|
||||||
|
@ -532,7 +514,7 @@ static int32_t taosSetClientCfg(SConfig *pCfg) {
|
||||||
|
|
||||||
tstrncpy(tsTempDir, cfgGetItem(pCfg, "tempDir")->str, PATH_MAX);
|
tstrncpy(tsTempDir, cfgGetItem(pCfg, "tempDir")->str, PATH_MAX);
|
||||||
taosExpandDir(tsTempDir, tsTempDir, PATH_MAX);
|
taosExpandDir(tsTempDir, tsTempDir, PATH_MAX);
|
||||||
tsTempSpace.reserved = cfgGetItem(pCfg, "minimalTempDirGB")->fval;
|
tsTempSpace.reserved = cfgGetItem(pCfg, "minimalTmpDirGB")->fval;
|
||||||
if (taosMulMkDir(tsTempDir) != 0) {
|
if (taosMulMkDir(tsTempDir) != 0) {
|
||||||
uError("failed to create tempDir:%s since %s", tsTempDir, terrstr());
|
uError("failed to create tempDir:%s since %s", tsTempDir, terrstr());
|
||||||
return -1;
|
return -1;
|
||||||
|
@ -579,10 +561,6 @@ static int32_t taosSetServerCfg(SConfig *pCfg) {
|
||||||
tsMinIntervalTime = cfgGetItem(pCfg, "minIntervalTime")->i32;
|
tsMinIntervalTime = cfgGetItem(pCfg, "minIntervalTime")->i32;
|
||||||
tsMaxNumOfDistinctResults = cfgGetItem(pCfg, "maxNumOfDistinctRes")->i32;
|
tsMaxNumOfDistinctResults = cfgGetItem(pCfg, "maxNumOfDistinctRes")->i32;
|
||||||
tsCountAlwaysReturnValue = cfgGetItem(pCfg, "countAlwaysReturnValue")->i32;
|
tsCountAlwaysReturnValue = cfgGetItem(pCfg, "countAlwaysReturnValue")->i32;
|
||||||
tsMaxStreamComputDelay = cfgGetItem(pCfg, "maxStreamCompDelay")->i32;
|
|
||||||
tsStreamCompStartDelay = cfgGetItem(pCfg, "maxFirstStreamCompDelay")->i32;
|
|
||||||
tsRetryStreamCompDelay = cfgGetItem(pCfg, "retryStreamCompDelay")->i32;
|
|
||||||
tsStreamComputDelayRatio = cfgGetItem(pCfg, "streamCompDelayRatio")->fval;
|
|
||||||
tsQueryBufferSize = cfgGetItem(pCfg, "queryBufferSize")->i32;
|
tsQueryBufferSize = cfgGetItem(pCfg, "queryBufferSize")->i32;
|
||||||
tsRetrieveBlockingModel = cfgGetItem(pCfg, "retrieveBlockingModel")->bval;
|
tsRetrieveBlockingModel = cfgGetItem(pCfg, "retrieveBlockingModel")->bval;
|
||||||
tsPrintAuth = cfgGetItem(pCfg, "printAuth")->bval;
|
tsPrintAuth = cfgGetItem(pCfg, "printAuth")->bval;
|
||||||
|
@ -758,10 +736,6 @@ int32_t taosSetCfg(SConfig *pCfg, char *name) {
|
||||||
tsMaxShellConns = cfgGetItem(pCfg, "maxShellConns")->i32;
|
tsMaxShellConns = cfgGetItem(pCfg, "maxShellConns")->i32;
|
||||||
} else if (strcasecmp("maxNumOfDistinctRes", name) == 0) {
|
} else if (strcasecmp("maxNumOfDistinctRes", name) == 0) {
|
||||||
tsMaxNumOfDistinctResults = cfgGetItem(pCfg, "maxNumOfDistinctRes")->i32;
|
tsMaxNumOfDistinctResults = cfgGetItem(pCfg, "maxNumOfDistinctRes")->i32;
|
||||||
} else if (strcasecmp("maxStreamCompDelay", name) == 0) {
|
|
||||||
tsMaxStreamComputDelay = cfgGetItem(pCfg, "maxStreamCompDelay")->i32;
|
|
||||||
} else if (strcasecmp("maxFirstStreamCompDelay", name) == 0) {
|
|
||||||
tsStreamCompStartDelay = cfgGetItem(pCfg, "maxFirstStreamCompDelay")->i32;
|
|
||||||
}
|
}
|
||||||
break;
|
break;
|
||||||
}
|
}
|
||||||
|
@ -772,8 +746,8 @@ int32_t taosSetCfg(SConfig *pCfg, char *name) {
|
||||||
break;
|
break;
|
||||||
}
|
}
|
||||||
case 'i': {
|
case 'i': {
|
||||||
if (strcasecmp("minimalTempDirGB", name) == 0) {
|
if (strcasecmp("minimalTmpDirGB", name) == 0) {
|
||||||
tsTempSpace.reserved = cfgGetItem(pCfg, "minimalTempDirGB")->fval;
|
tsTempSpace.reserved = cfgGetItem(pCfg, "minimalTmpDirGB")->fval;
|
||||||
} else if (strcasecmp("minimalDataDirGB", name) == 0) {
|
} else if (strcasecmp("minimalDataDirGB", name) == 0) {
|
||||||
tsDataSpace.reserved = cfgGetItem(pCfg, "minimalDataDirGB")->fval;
|
tsDataSpace.reserved = cfgGetItem(pCfg, "minimalDataDirGB")->fval;
|
||||||
} else if (strcasecmp("minSlidingTime", name) == 0) {
|
} else if (strcasecmp("minSlidingTime", name) == 0) {
|
||||||
|
@ -883,9 +857,7 @@ int32_t taosSetCfg(SConfig *pCfg, char *name) {
|
||||||
break;
|
break;
|
||||||
}
|
}
|
||||||
case 'r': {
|
case 'r': {
|
||||||
if (strcasecmp("retryStreamCompDelay", name) == 0) {
|
if (strcasecmp("retrieveBlockingModel", name) == 0) {
|
||||||
tsRetryStreamCompDelay = cfgGetItem(pCfg, "retryStreamCompDelay")->i32;
|
|
||||||
} else if (strcasecmp("retrieveBlockingModel", name) == 0) {
|
|
||||||
tsRetrieveBlockingModel = cfgGetItem(pCfg, "retrieveBlockingModel")->bval;
|
tsRetrieveBlockingModel = cfgGetItem(pCfg, "retrieveBlockingModel")->bval;
|
||||||
} else if (strcasecmp("rpcQueueMemoryAllowed", name) == 0) {
|
} else if (strcasecmp("rpcQueueMemoryAllowed", name) == 0) {
|
||||||
tsRpcQueueMemoryAllowed = cfgGetItem(pCfg, "rpcQueueMemoryAllowed")->i64;
|
tsRpcQueueMemoryAllowed = cfgGetItem(pCfg, "rpcQueueMemoryAllowed")->i64;
|
||||||
|
@ -913,8 +885,6 @@ int32_t taosSetCfg(SConfig *pCfg, char *name) {
|
||||||
tsNumOfSupportVnodes = cfgGetItem(pCfg, "supportVnodes")->i32;
|
tsNumOfSupportVnodes = cfgGetItem(pCfg, "supportVnodes")->i32;
|
||||||
} else if (strcasecmp("statusInterval", name) == 0) {
|
} else if (strcasecmp("statusInterval", name) == 0) {
|
||||||
tsStatusInterval = cfgGetItem(pCfg, "statusInterval")->i32;
|
tsStatusInterval = cfgGetItem(pCfg, "statusInterval")->i32;
|
||||||
} else if (strcasecmp("streamCompDelayRatio", name) == 0) {
|
|
||||||
tsStreamComputDelayRatio = cfgGetItem(pCfg, "streamCompDelayRatio")->fval;
|
|
||||||
} else if (strcasecmp("slaveQuery", name) == 0) {
|
} else if (strcasecmp("slaveQuery", name) == 0) {
|
||||||
tsEnableSlaveQuery = cfgGetItem(pCfg, "slaveQuery")->bval;
|
tsEnableSlaveQuery = cfgGetItem(pCfg, "slaveQuery")->bval;
|
||||||
} else if (strcasecmp("snodeShmSize", name) == 0) {
|
} else if (strcasecmp("snodeShmSize", name) == 0) {
|
||||||
|
|
|
@ -805,14 +805,6 @@ static int32_t mndProcessConfigDnodeReq(SRpcMsg *pReq) {
|
||||||
return -1;
|
return -1;
|
||||||
}
|
}
|
||||||
|
|
||||||
SDnodeObj *pDnode = mndAcquireDnode(pMnode, cfgReq.dnodeId);
|
|
||||||
if (pDnode == NULL) {
|
|
||||||
mError("dnode:%d, failed to config since %s ", cfgReq.dnodeId, terrstr());
|
|
||||||
return -1;
|
|
||||||
}
|
|
||||||
SEpSet epSet = mndGetDnodeEpset(pDnode);
|
|
||||||
mndReleaseDnode(pMnode, pDnode);
|
|
||||||
|
|
||||||
SDCfgDnodeReq dcfgReq = {0};
|
SDCfgDnodeReq dcfgReq = {0};
|
||||||
if (strcasecmp(cfgReq.config, "resetlog") == 0) {
|
if (strcasecmp(cfgReq.config, "resetlog") == 0) {
|
||||||
strcpy(dcfgReq.config, "resetlog");
|
strcpy(dcfgReq.config, "resetlog");
|
||||||
|
@ -860,16 +852,36 @@ static int32_t mndProcessConfigDnodeReq(SRpcMsg *pReq) {
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
int32_t bufLen = tSerializeSDCfgDnodeReq(NULL, 0, &dcfgReq);
|
int32_t code = -1;
|
||||||
void *pBuf = rpcMallocCont(bufLen);
|
SSdb *pSdb = pMnode->pSdb;
|
||||||
|
void *pIter = NULL;
|
||||||
|
while (1) {
|
||||||
|
SDnodeObj *pDnode = NULL;
|
||||||
|
pIter = sdbFetch(pSdb, SDB_DNODE, pIter, (void **)&pDnode);
|
||||||
|
if (pIter == NULL) break;
|
||||||
|
|
||||||
if (pBuf == NULL) return -1;
|
if (pDnode->id == cfgReq.dnodeId || cfgReq.dnodeId == -1 || cfgReq.dnodeId == 0) {
|
||||||
tSerializeSDCfgDnodeReq(pBuf, bufLen, &dcfgReq);
|
SEpSet epSet = mndGetDnodeEpset(pDnode);
|
||||||
|
int32_t bufLen = tSerializeSDCfgDnodeReq(NULL, 0, &dcfgReq);
|
||||||
|
void *pBuf = rpcMallocCont(bufLen);
|
||||||
|
|
||||||
mInfo("dnode:%d, send config req to dnode, app:%p config:%s value:%s", cfgReq.dnodeId, pReq->info.ahandle,
|
if (pBuf != NULL) {
|
||||||
dcfgReq.config, dcfgReq.value);
|
tSerializeSDCfgDnodeReq(pBuf, bufLen, &dcfgReq);
|
||||||
SRpcMsg rpcMsg = {.msgType = TDMT_DND_CONFIG_DNODE, .pCont = pBuf, .contLen = bufLen};
|
mInfo("dnode:%d, send config req to dnode, app:%p config:%s value:%s", cfgReq.dnodeId, pReq->info.ahandle,
|
||||||
return tmsgSendReq(&epSet, &rpcMsg);
|
dcfgReq.config, dcfgReq.value);
|
||||||
|
SRpcMsg rpcMsg = {.msgType = TDMT_DND_CONFIG_DNODE, .pCont = pBuf, .contLen = bufLen};
|
||||||
|
tmsgSendReq(&epSet, &rpcMsg);
|
||||||
|
code = 0;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
sdbRelease(pSdb, pDnode);
|
||||||
|
}
|
||||||
|
|
||||||
|
if (code == -1) {
|
||||||
|
terrno = TSDB_CODE_MND_DNODE_NOT_EXIST;
|
||||||
|
}
|
||||||
|
return code;
|
||||||
}
|
}
|
||||||
|
|
||||||
static int32_t mndProcessConfigDnodeRsp(SRpcMsg *pRsp) {
|
static int32_t mndProcessConfigDnodeRsp(SRpcMsg *pRsp) {
|
||||||
|
|
|
@ -1287,6 +1287,7 @@ static bool mndTransPerformRedoActionStage(SMnode *pMnode, STrans *pTrans) {
|
||||||
mDebug("trans:%d, stage keep on redoAction since %s", pTrans->id, tstrerror(code));
|
mDebug("trans:%d, stage keep on redoAction since %s", pTrans->id, tstrerror(code));
|
||||||
continueExec = false;
|
continueExec = false;
|
||||||
} else {
|
} else {
|
||||||
|
pTrans->failedTimes++;
|
||||||
pTrans->code = terrno;
|
pTrans->code = terrno;
|
||||||
if (pTrans->policy == TRN_POLICY_ROLLBACK) {
|
if (pTrans->policy == TRN_POLICY_ROLLBACK) {
|
||||||
if (pTrans->lastAction != 0) {
|
if (pTrans->lastAction != 0) {
|
||||||
|
@ -1306,7 +1307,6 @@ static bool mndTransPerformRedoActionStage(SMnode *pMnode, STrans *pTrans) {
|
||||||
mError("trans:%d, stage from redoAction to rollback since %s", pTrans->id, terrstr());
|
mError("trans:%d, stage from redoAction to rollback since %s", pTrans->id, terrstr());
|
||||||
continueExec = true;
|
continueExec = true;
|
||||||
} else {
|
} else {
|
||||||
pTrans->failedTimes++;
|
|
||||||
mError("trans:%d, stage keep on redoAction since %s, failedTimes:%d", pTrans->id, terrstr(), pTrans->failedTimes);
|
mError("trans:%d, stage keep on redoAction since %s, failedTimes:%d", pTrans->id, terrstr(), pTrans->failedTimes);
|
||||||
continueExec = false;
|
continueExec = false;
|
||||||
}
|
}
|
||||||
|
|
|
@ -46,11 +46,6 @@ void tsdbCloseCache(SLRUCache *pCache) {
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
static void getTableCacheKeyS(tb_uid_t uid, const char *cacheType, char *key, int *len) {
|
|
||||||
snprintf(key, 30, "%" PRIi64 "%s", uid, cacheType);
|
|
||||||
*len = strlen(key);
|
|
||||||
}
|
|
||||||
|
|
||||||
static void getTableCacheKey(tb_uid_t uid, int cacheType, char *key, int *len) {
|
static void getTableCacheKey(tb_uid_t uid, int cacheType, char *key, int *len) {
|
||||||
if (cacheType == 0) { // last_row
|
if (cacheType == 0) { // last_row
|
||||||
*(uint64_t *)key = (uint64_t)uid;
|
*(uint64_t *)key = (uint64_t)uid;
|
||||||
|
@ -245,8 +240,6 @@ int32_t tsdbCacheInsertLast(SLRUCache *pCache, tb_uid_t uid, STSRow *row, STsdb
|
||||||
char key[32] = {0};
|
char key[32] = {0};
|
||||||
int keyLen = 0;
|
int keyLen = 0;
|
||||||
|
|
||||||
// ((void)(row));
|
|
||||||
|
|
||||||
// getTableCacheKey(uid, "l", key, &keyLen);
|
// getTableCacheKey(uid, "l", key, &keyLen);
|
||||||
getTableCacheKey(uid, 1, key, &keyLen);
|
getTableCacheKey(uid, 1, key, &keyLen);
|
||||||
LRUHandle *h = taosLRUCacheLookup(pCache, key, keyLen);
|
LRUHandle *h = taosLRUCacheLookup(pCache, key, keyLen);
|
||||||
|
@ -323,26 +316,10 @@ static tb_uid_t getTableSuidByUid(tb_uid_t uid, STsdb *pTsdb) {
|
||||||
static int32_t getTableDelDataFromDelIdx(SDelFReader *pDelReader, SDelIdx *pDelIdx, SArray *aDelData) {
|
static int32_t getTableDelDataFromDelIdx(SDelFReader *pDelReader, SDelIdx *pDelIdx, SArray *aDelData) {
|
||||||
int32_t code = 0;
|
int32_t code = 0;
|
||||||
|
|
||||||
// SMapData delDataMap;
|
|
||||||
// SDelData delData;
|
|
||||||
|
|
||||||
if (pDelIdx) {
|
if (pDelIdx) {
|
||||||
// tMapDataReset(&delDataMap);
|
|
||||||
|
|
||||||
// code = tsdbReadDelData(pDelReader, pDelIdx, &delDataMap, NULL);
|
|
||||||
code = tsdbReadDelData(pDelReader, pDelIdx, aDelData, NULL);
|
code = tsdbReadDelData(pDelReader, pDelIdx, aDelData, NULL);
|
||||||
if (code) goto _err;
|
|
||||||
/*
|
|
||||||
for (int32_t iDelData = 0; iDelData < delDataMap.nItem; ++iDelData) {
|
|
||||||
code = tMapDataGetItemByIdx(&delDataMap, iDelData, &delData, tGetDelData);
|
|
||||||
if (code) goto _err;
|
|
||||||
|
|
||||||
taosArrayPush(aDelData, &delData);
|
|
||||||
}
|
|
||||||
*/
|
|
||||||
}
|
}
|
||||||
|
|
||||||
_err:
|
|
||||||
return code;
|
return code;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -444,18 +421,16 @@ typedef struct SFSNextRowIter {
|
||||||
SArray *aDFileSet;
|
SArray *aDFileSet;
|
||||||
SDataFReader *pDataFReader;
|
SDataFReader *pDataFReader;
|
||||||
SArray *aBlockIdx;
|
SArray *aBlockIdx;
|
||||||
// SMapData blockIdxMap;
|
SBlockIdx *pBlockIdx;
|
||||||
// SBlockIdx blockIdx;
|
SMapData blockMap;
|
||||||
SBlockIdx *pBlockIdx;
|
int32_t nBlock;
|
||||||
SMapData blockMap;
|
int32_t iBlock;
|
||||||
int32_t nBlock;
|
SBlock block;
|
||||||
int32_t iBlock;
|
SBlockData blockData;
|
||||||
SBlock block;
|
SBlockData *pBlockData;
|
||||||
SBlockData blockData;
|
int32_t nRow;
|
||||||
SBlockData *pBlockData;
|
int32_t iRow;
|
||||||
int32_t nRow;
|
TSDBROW row;
|
||||||
int32_t iRow;
|
|
||||||
TSDBROW row;
|
|
||||||
} SFSNextRowIter;
|
} SFSNextRowIter;
|
||||||
|
|
||||||
static int32_t getNextRowFromFS(void *iter, TSDBROW **ppRow) {
|
static int32_t getNextRowFromFS(void *iter, TSDBROW **ppRow) {
|
||||||
|
@ -629,41 +604,8 @@ typedef struct SMemNextRowIter {
|
||||||
} SMemNextRowIter;
|
} SMemNextRowIter;
|
||||||
|
|
||||||
static int32_t getNextRowFromMem(void *iter, TSDBROW **ppRow) {
|
static int32_t getNextRowFromMem(void *iter, TSDBROW **ppRow) {
|
||||||
// static int32_t getNextRowFromMem(void *iter, SArray *pRowArray) {
|
|
||||||
SMemNextRowIter *state = (SMemNextRowIter *)iter;
|
SMemNextRowIter *state = (SMemNextRowIter *)iter;
|
||||||
int32_t code = 0;
|
int32_t code = 0;
|
||||||
/*
|
|
||||||
if (!state->iterOpened) {
|
|
||||||
if (state->pMem != NULL) {
|
|
||||||
tsdbTbDataIterOpen(state->pMem, NULL, 1, &state->iter);
|
|
||||||
|
|
||||||
state->iterOpened = true;
|
|
||||||
|
|
||||||
TSDBROW *pMemRow = tsdbTbDataIterGet(&state->iter);
|
|
||||||
if (pMemRow) {
|
|
||||||
state->curRow = pMemRow;
|
|
||||||
} else {
|
|
||||||
return code;
|
|
||||||
}
|
|
||||||
} else {
|
|
||||||
return code;
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
taosArrayPush(pRowArray, state->curRow);
|
|
||||||
while (tsdbTbDataIterNext(&state->iter)) {
|
|
||||||
TSDBROW *row = tsdbTbDataIterGet(&state->iter);
|
|
||||||
|
|
||||||
if (TSDBROW_TS(row) < TSDBROW_TS(state->curRow)) {
|
|
||||||
state->curRow = row;
|
|
||||||
break;
|
|
||||||
} else {
|
|
||||||
taosArrayPush(pRowArray, row);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
return code;
|
|
||||||
*/
|
|
||||||
switch (state->state) {
|
switch (state->state) {
|
||||||
case SMEMNEXTROW_ENTER: {
|
case SMEMNEXTROW_ENTER: {
|
||||||
if (state->pMem != NULL) {
|
if (state->pMem != NULL) {
|
||||||
|
@ -702,44 +644,44 @@ _err:
|
||||||
return code;
|
return code;
|
||||||
}
|
}
|
||||||
|
|
||||||
static int32_t tsRowFromTsdbRow(STSchema *pTSchema, TSDBROW *pRow, STSRow **ppRow) {
|
/* static int32_t tsRowFromTsdbRow(STSchema *pTSchema, TSDBROW *pRow, STSRow **ppRow) { */
|
||||||
int32_t code = 0;
|
/* int32_t code = 0; */
|
||||||
|
|
||||||
SColVal *pColVal = &(SColVal){0};
|
/* SColVal *pColVal = &(SColVal){0}; */
|
||||||
|
|
||||||
if (pRow->type == 0) {
|
/* if (pRow->type == 0) { */
|
||||||
*ppRow = tdRowDup(pRow->pTSRow);
|
/* *ppRow = tdRowDup(pRow->pTSRow); */
|
||||||
} else {
|
/* } else { */
|
||||||
SArray *pArray = taosArrayInit(pTSchema->numOfCols, sizeof(SColVal));
|
/* SArray *pArray = taosArrayInit(pTSchema->numOfCols, sizeof(SColVal)); */
|
||||||
if (pArray == NULL) {
|
/* if (pArray == NULL) { */
|
||||||
code = TSDB_CODE_OUT_OF_MEMORY;
|
/* code = TSDB_CODE_OUT_OF_MEMORY; */
|
||||||
goto _exit;
|
/* goto _exit; */
|
||||||
}
|
/* } */
|
||||||
|
|
||||||
TSDBKEY key = TSDBROW_KEY(pRow);
|
/* TSDBKEY key = TSDBROW_KEY(pRow); */
|
||||||
STColumn *pTColumn = &pTSchema->columns[0];
|
/* STColumn *pTColumn = &pTSchema->columns[0]; */
|
||||||
*pColVal = COL_VAL_VALUE(pTColumn->colId, pTColumn->type, (SValue){.ts = key.ts});
|
/* *pColVal = COL_VAL_VALUE(pTColumn->colId, pTColumn->type, (SValue){.ts = key.ts}); */
|
||||||
|
|
||||||
if (taosArrayPush(pArray, pColVal) == NULL) {
|
/* if (taosArrayPush(pArray, pColVal) == NULL) { */
|
||||||
code = TSDB_CODE_OUT_OF_MEMORY;
|
/* code = TSDB_CODE_OUT_OF_MEMORY; */
|
||||||
goto _exit;
|
/* goto _exit; */
|
||||||
}
|
/* } */
|
||||||
|
|
||||||
for (int16_t iCol = 1; iCol < pTSchema->numOfCols; iCol++) {
|
/* for (int16_t iCol = 1; iCol < pTSchema->numOfCols; iCol++) { */
|
||||||
tsdbRowGetColVal(pRow, pTSchema, iCol, pColVal);
|
/* tsdbRowGetColVal(pRow, pTSchema, iCol, pColVal); */
|
||||||
if (taosArrayPush(pArray, pColVal) == NULL) {
|
/* if (taosArrayPush(pArray, pColVal) == NULL) { */
|
||||||
code = TSDB_CODE_OUT_OF_MEMORY;
|
/* code = TSDB_CODE_OUT_OF_MEMORY; */
|
||||||
goto _exit;
|
/* goto _exit; */
|
||||||
}
|
/* } */
|
||||||
}
|
/* } */
|
||||||
|
|
||||||
code = tdSTSRowNew(pArray, pTSchema, ppRow);
|
/* code = tdSTSRowNew(pArray, pTSchema, ppRow); */
|
||||||
if (code) goto _exit;
|
/* if (code) goto _exit; */
|
||||||
}
|
/* } */
|
||||||
|
|
||||||
_exit:
|
/* _exit: */
|
||||||
return code;
|
/* return code; */
|
||||||
}
|
/* } */
|
||||||
|
|
||||||
static bool tsdbKeyDeleted(TSDBKEY *key, SArray *pSkyline, int64_t *iSkyline) {
|
static bool tsdbKeyDeleted(TSDBKEY *key, SArray *pSkyline, int64_t *iSkyline) {
|
||||||
bool deleted = false;
|
bool deleted = false;
|
||||||
|
@ -768,10 +710,8 @@ static bool tsdbKeyDeleted(TSDBKEY *key, SArray *pSkyline, int64_t *iSkyline) {
|
||||||
}
|
}
|
||||||
|
|
||||||
typedef int32_t (*_next_row_fn_t)(void *iter, TSDBROW **ppRow);
|
typedef int32_t (*_next_row_fn_t)(void *iter, TSDBROW **ppRow);
|
||||||
// typedef int32_t (*_next_row_fn_t)(void *iter, SArray *pRowArray);
|
|
||||||
typedef int32_t (*_next_row_clear_fn_t)(void *iter);
|
typedef int32_t (*_next_row_clear_fn_t)(void *iter);
|
||||||
|
|
||||||
// typedef struct TsdbNextRowState {
|
|
||||||
typedef struct {
|
typedef struct {
|
||||||
TSDBROW *pRow;
|
TSDBROW *pRow;
|
||||||
bool stop;
|
bool stop;
|
||||||
|
@ -782,7 +722,6 @@ typedef struct {
|
||||||
} TsdbNextRowState;
|
} TsdbNextRowState;
|
||||||
|
|
||||||
typedef struct {
|
typedef struct {
|
||||||
// STsdb *pTsdb;
|
|
||||||
SArray *pSkyline;
|
SArray *pSkyline;
|
||||||
int64_t iSkyline;
|
int64_t iSkyline;
|
||||||
|
|
||||||
|
@ -793,10 +732,8 @@ typedef struct {
|
||||||
TSDBROW memRow, imemRow, fsRow;
|
TSDBROW memRow, imemRow, fsRow;
|
||||||
|
|
||||||
TsdbNextRowState input[3];
|
TsdbNextRowState input[3];
|
||||||
// SMemTable *pMemTable;
|
STsdbReadSnap *pReadSnap;
|
||||||
// SMemTable *pIMemTable;
|
STsdb *pTsdb;
|
||||||
STsdbReadSnap *pReadSnap;
|
|
||||||
STsdb *pTsdb;
|
|
||||||
} CacheNextRowIter;
|
} CacheNextRowIter;
|
||||||
|
|
||||||
static int32_t nextRowIterOpen(CacheNextRowIter *pIter, tb_uid_t uid, STsdb *pTsdb) {
|
static int32_t nextRowIterOpen(CacheNextRowIter *pIter, tb_uid_t uid, STsdb *pTsdb) {
|
||||||
|
@ -967,7 +904,7 @@ _err:
|
||||||
return code;
|
return code;
|
||||||
}
|
}
|
||||||
|
|
||||||
static int32_t mergeLastRow2(tb_uid_t uid, STsdb *pTsdb, bool *dup, STSRow **ppRow) {
|
static int32_t mergeLastRow(tb_uid_t uid, STsdb *pTsdb, bool *dup, STSRow **ppRow) {
|
||||||
int32_t code = 0;
|
int32_t code = 0;
|
||||||
|
|
||||||
STSchema *pTSchema = metaGetTbTSchema(pTsdb->pVnode->pMeta, uid, -1);
|
STSchema *pTSchema = metaGetTbTSchema(pTsdb->pVnode->pMeta, uid, -1);
|
||||||
|
@ -978,8 +915,6 @@ static int32_t mergeLastRow2(tb_uid_t uid, STsdb *pTsdb, bool *dup, STSRow **ppR
|
||||||
SArray *pColArray = taosArrayInit(nCol, sizeof(SColVal));
|
SArray *pColArray = taosArrayInit(nCol, sizeof(SColVal));
|
||||||
SColVal *pColVal = &(SColVal){0};
|
SColVal *pColVal = &(SColVal){0};
|
||||||
|
|
||||||
// tb_uid_t suid = getTableSuidByUid(uid, pTsdb);
|
|
||||||
|
|
||||||
TSKEY lastRowTs = TSKEY_MAX;
|
TSKEY lastRowTs = TSKEY_MAX;
|
||||||
|
|
||||||
CacheNextRowIter iter = {0};
|
CacheNextRowIter iter = {0};
|
||||||
|
@ -1066,7 +1001,7 @@ _err:
|
||||||
return code;
|
return code;
|
||||||
}
|
}
|
||||||
|
|
||||||
static int32_t mergeLast2(tb_uid_t uid, STsdb *pTsdb, SArray **ppLastArray) {
|
static int32_t mergeLast(tb_uid_t uid, STsdb *pTsdb, SArray **ppLastArray) {
|
||||||
int32_t code = 0;
|
int32_t code = 0;
|
||||||
|
|
||||||
STSchema *pTSchema = metaGetTbTSchema(pTsdb->pVnode->pMeta, uid, -1);
|
STSchema *pTSchema = metaGetTbTSchema(pTsdb->pVnode->pMeta, uid, -1);
|
||||||
|
@ -1077,8 +1012,6 @@ static int32_t mergeLast2(tb_uid_t uid, STsdb *pTsdb, SArray **ppLastArray) {
|
||||||
SArray *pColArray = taosArrayInit(nCol, sizeof(SLastCol));
|
SArray *pColArray = taosArrayInit(nCol, sizeof(SLastCol));
|
||||||
SColVal *pColVal = &(SColVal){0};
|
SColVal *pColVal = &(SColVal){0};
|
||||||
|
|
||||||
// tb_uid_t suid = getTableSuidByUid(uid, pTsdb);
|
|
||||||
|
|
||||||
TSKEY lastRowTs = TSKEY_MAX;
|
TSKEY lastRowTs = TSKEY_MAX;
|
||||||
|
|
||||||
CacheNextRowIter iter = {0};
|
CacheNextRowIter iter = {0};
|
||||||
|
@ -1124,12 +1057,7 @@ static int32_t mergeLast2(tb_uid_t uid, STsdb *pTsdb, SArray **ppLastArray) {
|
||||||
continue;
|
continue;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
/*
|
|
||||||
if ((TSDBROW_TS(pRow) < lastRowTs)) {
|
|
||||||
// goto build the result ts row
|
|
||||||
break;
|
|
||||||
}
|
|
||||||
*/
|
|
||||||
// merge into pColArray
|
// merge into pColArray
|
||||||
setNoneCol = false;
|
setNoneCol = false;
|
||||||
for (iCol = noneCol; iCol < nCol; ++iCol) {
|
for (iCol = noneCol; iCol < nCol; ++iCol) {
|
||||||
|
@ -1139,7 +1067,6 @@ static int32_t mergeLast2(tb_uid_t uid, STsdb *pTsdb, SArray **ppLastArray) {
|
||||||
tsdbRowGetColVal(pRow, pTSchema, iCol, pColVal);
|
tsdbRowGetColVal(pRow, pTSchema, iCol, pColVal);
|
||||||
if ((tColVal->isNone || tColVal->isNull) && (!pColVal->isNone && !pColVal->isNull)) {
|
if ((tColVal->isNone || tColVal->isNull) && (!pColVal->isNone && !pColVal->isNull)) {
|
||||||
taosArraySet(pColArray, iCol, &(SLastCol){.ts = rowTs, .colVal = *pColVal});
|
taosArraySet(pColArray, iCol, &(SLastCol){.ts = rowTs, .colVal = *pColVal});
|
||||||
//} else if (tColVal->isNone && pColVal->isNone && !setNoneCol) {
|
|
||||||
} else if ((tColVal->isNone || tColVal->isNull) && (pColVal->isNone || pColVal->isNull) && !setNoneCol) {
|
} else if ((tColVal->isNone || tColVal->isNull) && (pColVal->isNone || pColVal->isNull) && !setNoneCol) {
|
||||||
noneCol = iCol;
|
noneCol = iCol;
|
||||||
setNoneCol = true;
|
setNoneCol = true;
|
||||||
|
@ -1148,521 +1075,36 @@ static int32_t mergeLast2(tb_uid_t uid, STsdb *pTsdb, SArray **ppLastArray) {
|
||||||
} while (setNoneCol);
|
} while (setNoneCol);
|
||||||
|
|
||||||
// build the result ts row here
|
// build the result ts row here
|
||||||
//*dup = false;
|
|
||||||
if (taosArrayGetSize(pColArray) <= 0) {
|
if (taosArrayGetSize(pColArray) <= 0) {
|
||||||
*ppLastArray = NULL;
|
*ppLastArray = NULL;
|
||||||
taosArrayDestroy(pColArray);
|
taosArrayDestroy(pColArray);
|
||||||
} else {
|
} else {
|
||||||
*ppLastArray = pColArray;
|
*ppLastArray = pColArray;
|
||||||
}
|
}
|
||||||
/* if (taosArrayGetSize(pColArray) == nCol) {
|
|
||||||
code = tdSTSRowNew(pColArray, pTSchema, ppRow);
|
|
||||||
if (code) goto _err;
|
|
||||||
} else {
|
|
||||||
*ppRow = NULL;
|
|
||||||
}*/
|
|
||||||
|
|
||||||
nextRowIterClose(&iter);
|
nextRowIterClose(&iter);
|
||||||
// taosArrayDestroy(pColArray);
|
|
||||||
taosMemoryFreeClear(pTSchema);
|
taosMemoryFreeClear(pTSchema);
|
||||||
return code;
|
return code;
|
||||||
|
|
||||||
_err:
|
_err:
|
||||||
nextRowIterClose(&iter);
|
nextRowIterClose(&iter);
|
||||||
// taosArrayDestroy(pColArray);
|
|
||||||
taosMemoryFreeClear(pTSchema);
|
taosMemoryFreeClear(pTSchema);
|
||||||
return code;
|
return code;
|
||||||
}
|
}
|
||||||
|
|
||||||
// static int32_t mergeLastRow(tb_uid_t uid, STsdb *pTsdb, bool *dup, STSRow **ppRow) {
|
|
||||||
// int32_t code = 0;
|
|
||||||
// SArray *pSkyline = NULL;
|
|
||||||
|
|
||||||
// STSchema *pTSchema = metaGetTbTSchema(pTsdb->pVnode->pMeta, uid, -1);
|
|
||||||
// int16_t nCol = pTSchema->numOfCols;
|
|
||||||
// SArray *pColArray = taosArrayInit(nCol, sizeof(SColVal));
|
|
||||||
|
|
||||||
// tb_uid_t suid = getTableSuidByUid(uid, pTsdb);
|
|
||||||
|
|
||||||
// STbData *pMem = NULL;
|
|
||||||
// if (pTsdb->mem) {
|
|
||||||
// tsdbGetTbDataFromMemTable(pTsdb->mem, suid, uid, &pMem);
|
|
||||||
// }
|
|
||||||
|
|
||||||
// STbData *pIMem = NULL;
|
|
||||||
// if (pTsdb->imem) {
|
|
||||||
// tsdbGetTbDataFromMemTable(pTsdb->imem, suid, uid, &pIMem);
|
|
||||||
// }
|
|
||||||
|
|
||||||
// *ppRow = NULL;
|
|
||||||
|
|
||||||
// pSkyline = taosArrayInit(32, sizeof(TSDBKEY));
|
|
||||||
|
|
||||||
// SDelIdx delIdx;
|
|
||||||
|
|
||||||
// SDelFile *pDelFile = tsdbFSStateGetDelFile(pTsdb->pFS->cState);
|
|
||||||
// if (pDelFile) {
|
|
||||||
// SDelFReader *pDelFReader;
|
|
||||||
|
|
||||||
// code = tsdbDelFReaderOpen(&pDelFReader, pDelFile, pTsdb, NULL);
|
|
||||||
// if (code) goto _err;
|
|
||||||
|
|
||||||
// code = getTableDelIdx(pDelFReader, suid, uid, &delIdx);
|
|
||||||
// if (code) goto _err;
|
|
||||||
|
|
||||||
// code = getTableDelSkyline(pMem, pIMem, pDelFReader, &delIdx, pSkyline);
|
|
||||||
// if (code) goto _err;
|
|
||||||
|
|
||||||
// tsdbDelFReaderClose(&pDelFReader);
|
|
||||||
// } else {
|
|
||||||
// code = getTableDelSkyline(pMem, pIMem, NULL, NULL, pSkyline);
|
|
||||||
// if (code) goto _err;
|
|
||||||
// }
|
|
||||||
|
|
||||||
// int64_t iSkyline = taosArrayGetSize(pSkyline) - 1;
|
|
||||||
|
|
||||||
// SBlockIdx idx = {.suid = suid, .uid = uid};
|
|
||||||
|
|
||||||
// SFSNextRowIter fsState = {0};
|
|
||||||
// fsState.state = SFSNEXTROW_FS;
|
|
||||||
// fsState.pTsdb = pTsdb;
|
|
||||||
// fsState.pBlockIdxExp = &idx;
|
|
||||||
|
|
||||||
// SMemNextRowIter memState = {0};
|
|
||||||
// SMemNextRowIter imemState = {0};
|
|
||||||
// TSDBROW memRow, imemRow, fsRow;
|
|
||||||
|
|
||||||
// TsdbNextRowState input[3] = {{&memRow, true, false, &memState, getNextRowFromMem, NULL},
|
|
||||||
// {&imemRow, true, false, &imemState, getNextRowFromMem, NULL},
|
|
||||||
// {&fsRow, false, true, &fsState, getNextRowFromFS, clearNextRowFromFS}};
|
|
||||||
|
|
||||||
// if (pMem) {
|
|
||||||
// memState.pMem = pMem;
|
|
||||||
// memState.state = SMEMNEXTROW_ENTER;
|
|
||||||
// input[0].stop = false;
|
|
||||||
// input[0].next = true;
|
|
||||||
// }
|
|
||||||
// if (pIMem) {
|
|
||||||
// imemState.pMem = pIMem;
|
|
||||||
// imemState.state = SMEMNEXTROW_ENTER;
|
|
||||||
// input[1].stop = false;
|
|
||||||
// input[1].next = true;
|
|
||||||
// }
|
|
||||||
|
|
||||||
// int16_t nilColCount = nCol - 1; // count of null & none cols
|
|
||||||
// int iCol = 0; // index of first nil col index from left to right
|
|
||||||
// bool setICol = false;
|
|
||||||
|
|
||||||
// do {
|
|
||||||
// for (int i = 0; i < 3; ++i) {
|
|
||||||
// if (input[i].next && !input[i].stop) {
|
|
||||||
// if (input[i].pRow == NULL) {
|
|
||||||
// code = input[i].nextRowFn(input[i].iter, &input[i].pRow);
|
|
||||||
// if (code) goto _err;
|
|
||||||
|
|
||||||
// if (input[i].pRow == NULL) {
|
|
||||||
// input[i].stop = true;
|
|
||||||
// input[i].next = false;
|
|
||||||
// }
|
|
||||||
// }
|
|
||||||
// }
|
|
||||||
// }
|
|
||||||
|
|
||||||
// if (input[0].stop && input[1].stop && input[2].stop) {
|
|
||||||
// break;
|
|
||||||
// }
|
|
||||||
|
|
||||||
// // select maxpoint(s) from mem, imem, fs
|
|
||||||
// TSDBROW *max[3] = {0};
|
|
||||||
// int iMax[3] = {-1, -1, -1};
|
|
||||||
// int nMax = 0;
|
|
||||||
// TSKEY maxKey = TSKEY_MIN;
|
|
||||||
|
|
||||||
// for (int i = 0; i < 3; ++i) {
|
|
||||||
// if (!input[i].stop && input[i].pRow != NULL) {
|
|
||||||
// TSDBKEY key = TSDBROW_KEY(input[i].pRow);
|
|
||||||
|
|
||||||
// // merging & deduplicating on client side
|
|
||||||
// if (maxKey <= key.ts) {
|
|
||||||
// if (maxKey < key.ts) {
|
|
||||||
// nMax = 0;
|
|
||||||
// maxKey = key.ts;
|
|
||||||
// }
|
|
||||||
|
|
||||||
// iMax[nMax] = i;
|
|
||||||
// max[nMax++] = input[i].pRow;
|
|
||||||
// }
|
|
||||||
// }
|
|
||||||
// }
|
|
||||||
|
|
||||||
// // delete detection
|
|
||||||
// TSDBROW *merge[3] = {0};
|
|
||||||
// int iMerge[3] = {-1, -1, -1};
|
|
||||||
// int nMerge = 0;
|
|
||||||
// for (int i = 0; i < nMax; ++i) {
|
|
||||||
// TSDBKEY maxKey = TSDBROW_KEY(max[i]);
|
|
||||||
|
|
||||||
// bool deleted = tsdbKeyDeleted(&maxKey, pSkyline, &iSkyline);
|
|
||||||
// if (!deleted) {
|
|
||||||
// iMerge[nMerge] = i;
|
|
||||||
// merge[nMerge++] = max[i];
|
|
||||||
// }
|
|
||||||
|
|
||||||
// input[iMax[i]].next = deleted;
|
|
||||||
// }
|
|
||||||
|
|
||||||
// // merge if nMerge > 1
|
|
||||||
// if (nMerge > 0) {
|
|
||||||
// *dup = false;
|
|
||||||
|
|
||||||
// if (nMerge == 1) {
|
|
||||||
// code = tsRowFromTsdbRow(pTSchema, merge[nMerge - 1], ppRow);
|
|
||||||
// if (code) goto _err;
|
|
||||||
// } else {
|
|
||||||
// // merge 2 or 3 rows
|
|
||||||
// SRowMerger merger = {0};
|
|
||||||
|
|
||||||
// tRowMergerInit(&merger, merge[0], pTSchema);
|
|
||||||
// for (int i = 1; i < nMerge; ++i) {
|
|
||||||
// tRowMerge(&merger, merge[i]);
|
|
||||||
// }
|
|
||||||
// tRowMergerGetRow(&merger, ppRow);
|
|
||||||
// tRowMergerClear(&merger);
|
|
||||||
// }
|
|
||||||
// }
|
|
||||||
|
|
||||||
// } while (1);
|
|
||||||
|
|
||||||
// for (int i = 0; i < 3; ++i) {
|
|
||||||
// if (input[i].nextRowClearFn) {
|
|
||||||
// input[i].nextRowClearFn(input[i].iter);
|
|
||||||
// }
|
|
||||||
// }
|
|
||||||
// if (pSkyline) {
|
|
||||||
// taosArrayDestroy(pSkyline);
|
|
||||||
// }
|
|
||||||
// taosMemoryFreeClear(pTSchema);
|
|
||||||
|
|
||||||
// return code;
|
|
||||||
// _err:
|
|
||||||
// for (int i = 0; i < 3; ++i) {
|
|
||||||
// if (input[i].nextRowClearFn) {
|
|
||||||
// input[i].nextRowClearFn(input[i].iter);
|
|
||||||
// }
|
|
||||||
// }
|
|
||||||
// if (pSkyline) {
|
|
||||||
// taosArrayDestroy(pSkyline);
|
|
||||||
// }
|
|
||||||
// taosMemoryFreeClear(pTSchema);
|
|
||||||
// tsdbError("vgId:%d merge last_row failed since %s", TD_VID(pTsdb->pVnode), tstrerror(code));
|
|
||||||
// return code;
|
|
||||||
// }
|
|
||||||
|
|
||||||
// static int32_t mergeLast(tb_uid_t uid, STsdb *pTsdb, STSRow **ppRow) {
|
|
||||||
// static int32_t mergeLast(tb_uid_t uid, STsdb *pTsdb, SArray **ppLastArray) {
|
|
||||||
// int32_t code = 0;
|
|
||||||
// SArray *pSkyline = NULL;
|
|
||||||
// STSRow *pRow = NULL;
|
|
||||||
// STSRow **ppRow = &pRow;
|
|
||||||
|
|
||||||
// STSchema *pTSchema = metaGetTbTSchema(pTsdb->pVnode->pMeta, uid, -1);
|
|
||||||
// int16_t nCol = pTSchema->numOfCols;
|
|
||||||
// // SArray *pColArray = taosArrayInit(nCol, sizeof(SColVal));
|
|
||||||
// SArray *pColArray = taosArrayInit(nCol, sizeof(SLastCol));
|
|
||||||
|
|
||||||
// tb_uid_t suid = getTableSuidByUid(uid, pTsdb);
|
|
||||||
|
|
||||||
// STbData *pMem = NULL;
|
|
||||||
// if (pTsdb->mem) {
|
|
||||||
// tsdbGetTbDataFromMemTable(pTsdb->mem, suid, uid, &pMem);
|
|
||||||
// }
|
|
||||||
|
|
||||||
// STbData *pIMem = NULL;
|
|
||||||
// if (pTsdb->imem) {
|
|
||||||
// tsdbGetTbDataFromMemTable(pTsdb->imem, suid, uid, &pIMem);
|
|
||||||
// }
|
|
||||||
|
|
||||||
// *ppLastArray = NULL;
|
|
||||||
|
|
||||||
// pSkyline = taosArrayInit(32, sizeof(TSDBKEY));
|
|
||||||
|
|
||||||
// SDelIdx delIdx;
|
|
||||||
|
|
||||||
// SDelFile *pDelFile = tsdbFSStateGetDelFile(pTsdb->pFS->cState);
|
|
||||||
// if (pDelFile) {
|
|
||||||
// SDelFReader *pDelFReader;
|
|
||||||
|
|
||||||
// code = tsdbDelFReaderOpen(&pDelFReader, pDelFile, pTsdb, NULL);
|
|
||||||
// if (code) goto _err;
|
|
||||||
|
|
||||||
// code = getTableDelIdx(pDelFReader, suid, uid, &delIdx);
|
|
||||||
// if (code) goto _err;
|
|
||||||
|
|
||||||
// code = getTableDelSkyline(pMem, pIMem, pDelFReader, &delIdx, pSkyline);
|
|
||||||
// if (code) goto _err;
|
|
||||||
|
|
||||||
// tsdbDelFReaderClose(&pDelFReader);
|
|
||||||
// } else {
|
|
||||||
// code = getTableDelSkyline(pMem, pIMem, NULL, NULL, pSkyline);
|
|
||||||
// if (code) goto _err;
|
|
||||||
// }
|
|
||||||
|
|
||||||
// int64_t iSkyline = taosArrayGetSize(pSkyline) - 1;
|
|
||||||
|
|
||||||
// SBlockIdx idx = {.suid = suid, .uid = uid};
|
|
||||||
|
|
||||||
// SFSNextRowIter fsState = {0};
|
|
||||||
// fsState.state = SFSNEXTROW_FS;
|
|
||||||
// fsState.pTsdb = pTsdb;
|
|
||||||
// fsState.pBlockIdxExp = &idx;
|
|
||||||
|
|
||||||
// SMemNextRowIter memState = {0};
|
|
||||||
// SMemNextRowIter imemState = {0};
|
|
||||||
// TSDBROW memRow, imemRow, fsRow;
|
|
||||||
|
|
||||||
// TsdbNextRowState input[3] = {{&memRow, true, false, &memState, getNextRowFromMem, NULL},
|
|
||||||
// {&imemRow, true, false, &imemState, getNextRowFromMem, NULL},
|
|
||||||
// {&fsRow, false, true, &fsState, getNextRowFromFS, clearNextRowFromFS}};
|
|
||||||
|
|
||||||
// if (pMem) {
|
|
||||||
// memState.pMem = pMem;
|
|
||||||
// memState.state = SMEMNEXTROW_ENTER;
|
|
||||||
// input[0].stop = false;
|
|
||||||
// input[0].next = true;
|
|
||||||
// }
|
|
||||||
// if (pIMem) {
|
|
||||||
// imemState.pMem = pIMem;
|
|
||||||
// imemState.state = SMEMNEXTROW_ENTER;
|
|
||||||
// input[1].stop = false;
|
|
||||||
// input[1].next = true;
|
|
||||||
// }
|
|
||||||
|
|
||||||
// int16_t nilColCount = nCol - 1; // count of null & none cols
|
|
||||||
// int iCol = 0; // index of first nil col index from left to right
|
|
||||||
// bool setICol = false;
|
|
||||||
|
|
||||||
// do {
|
|
||||||
// for (int i = 0; i < 3; ++i) {
|
|
||||||
// if (input[i].next && !input[i].stop) {
|
|
||||||
// code = input[i].nextRowFn(input[i].iter, &input[i].pRow);
|
|
||||||
// if (code) goto _err;
|
|
||||||
|
|
||||||
// if (input[i].pRow == NULL) {
|
|
||||||
// input[i].stop = true;
|
|
||||||
// input[i].next = false;
|
|
||||||
// }
|
|
||||||
// }
|
|
||||||
// }
|
|
||||||
|
|
||||||
// if (input[0].stop && input[1].stop && input[2].stop) {
|
|
||||||
// break;
|
|
||||||
// }
|
|
||||||
|
|
||||||
// // select maxpoint(s) from mem, imem, fs
|
|
||||||
// TSDBROW *max[3] = {0};
|
|
||||||
// int iMax[3] = {-1, -1, -1};
|
|
||||||
// int nMax = 0;
|
|
||||||
// TSKEY maxKey = TSKEY_MIN;
|
|
||||||
|
|
||||||
// for (int i = 0; i < 3; ++i) {
|
|
||||||
// if (!input[i].stop && input[i].pRow != NULL) {
|
|
||||||
// TSDBKEY key = TSDBROW_KEY(input[i].pRow);
|
|
||||||
|
|
||||||
// // merging & deduplicating on client side
|
|
||||||
// if (maxKey <= key.ts) {
|
|
||||||
// if (maxKey < key.ts) {
|
|
||||||
// nMax = 0;
|
|
||||||
// maxKey = key.ts;
|
|
||||||
// }
|
|
||||||
|
|
||||||
// iMax[nMax] = i;
|
|
||||||
// max[nMax++] = input[i].pRow;
|
|
||||||
// }
|
|
||||||
// }
|
|
||||||
// }
|
|
||||||
|
|
||||||
// // delete detection
|
|
||||||
// TSDBROW *merge[3] = {0};
|
|
||||||
// int iMerge[3] = {-1, -1, -1};
|
|
||||||
// int nMerge = 0;
|
|
||||||
// for (int i = 0; i < nMax; ++i) {
|
|
||||||
// TSDBKEY maxKey = TSDBROW_KEY(max[i]);
|
|
||||||
|
|
||||||
// bool deleted = tsdbKeyDeleted(&maxKey, pSkyline, &iSkyline);
|
|
||||||
// if (!deleted) {
|
|
||||||
// iMerge[nMerge] = iMax[i];
|
|
||||||
// merge[nMerge++] = max[i];
|
|
||||||
// }
|
|
||||||
|
|
||||||
// input[iMax[i]].next = deleted;
|
|
||||||
// }
|
|
||||||
|
|
||||||
// // merge if nMerge > 1
|
|
||||||
// if (nMerge > 0) {
|
|
||||||
// if (nMerge == 1) {
|
|
||||||
// code = tsRowFromTsdbRow(pTSchema, merge[nMerge - 1], ppRow);
|
|
||||||
// if (code) goto _err;
|
|
||||||
// } else {
|
|
||||||
// // merge 2 or 3 rows
|
|
||||||
// SRowMerger merger = {0};
|
|
||||||
|
|
||||||
// tRowMergerInit(&merger, merge[0], pTSchema);
|
|
||||||
// for (int i = 1; i < nMerge; ++i) {
|
|
||||||
// tRowMerge(&merger, merge[i]);
|
|
||||||
// }
|
|
||||||
// tRowMergerGetRow(&merger, ppRow);
|
|
||||||
// tRowMergerClear(&merger);
|
|
||||||
// }
|
|
||||||
// } else {
|
|
||||||
// /* *ppRow = NULL; */
|
|
||||||
// /* return code; */
|
|
||||||
// continue;
|
|
||||||
// }
|
|
||||||
|
|
||||||
// if (iCol == 0) {
|
|
||||||
// STColumn *pTColumn = &pTSchema->columns[0];
|
|
||||||
// SColVal *pColVal = &(SColVal){0};
|
|
||||||
|
|
||||||
// *pColVal = COL_VAL_VALUE(pTColumn->colId, pTColumn->type, (SValue){.ts = maxKey});
|
|
||||||
|
|
||||||
// // if (taosArrayPush(pColArray, pColVal) == NULL) {
|
|
||||||
// if (taosArrayPush(pColArray, &(SLastCol){.ts = maxKey, .colVal = *pColVal}) == NULL) {
|
|
||||||
// code = TSDB_CODE_OUT_OF_MEMORY;
|
|
||||||
// goto _err;
|
|
||||||
// }
|
|
||||||
|
|
||||||
// ++iCol;
|
|
||||||
|
|
||||||
// setICol = false;
|
|
||||||
// for (int16_t i = iCol; i < nCol; ++i) {
|
|
||||||
// // tsdbRowGetColVal(*ppRow, pTSchema, i, pColVal);
|
|
||||||
// tTSRowGetVal(*ppRow, pTSchema, i, pColVal);
|
|
||||||
// // if (taosArrayPush(pColArray, pColVal) == NULL) {
|
|
||||||
// if (taosArrayPush(pColArray, &(SLastCol){.ts = maxKey, .colVal = *pColVal}) == NULL) {
|
|
||||||
// code = TSDB_CODE_OUT_OF_MEMORY;
|
|
||||||
// goto _err;
|
|
||||||
// }
|
|
||||||
|
|
||||||
// if (pColVal->isNull || pColVal->isNone) {
|
|
||||||
// for (int j = 0; j < nMerge; ++j) {
|
|
||||||
// SColVal jColVal = {0};
|
|
||||||
// tsdbRowGetColVal(merge[j], pTSchema, i, &jColVal);
|
|
||||||
// if (jColVal.isNull || jColVal.isNone) {
|
|
||||||
// input[iMerge[j]].next = true;
|
|
||||||
// }
|
|
||||||
// }
|
|
||||||
// if (!setICol) {
|
|
||||||
// iCol = i;
|
|
||||||
// setICol = true;
|
|
||||||
// }
|
|
||||||
// } else {
|
|
||||||
// --nilColCount;
|
|
||||||
// }
|
|
||||||
// }
|
|
||||||
|
|
||||||
// if (*ppRow) {
|
|
||||||
// taosMemoryFreeClear(*ppRow);
|
|
||||||
// }
|
|
||||||
|
|
||||||
// continue;
|
|
||||||
// }
|
|
||||||
|
|
||||||
// setICol = false;
|
|
||||||
// for (int16_t i = iCol; i < nCol; ++i) {
|
|
||||||
// SColVal colVal = {0};
|
|
||||||
// tTSRowGetVal(*ppRow, pTSchema, i, &colVal);
|
|
||||||
// TSKEY rowTs = (*ppRow)->ts;
|
|
||||||
|
|
||||||
// // SColVal *tColVal = (SColVal *)taosArrayGet(pColArray, i);
|
|
||||||
// SLastCol *tTsVal = (SLastCol *)taosArrayGet(pColArray, i);
|
|
||||||
// SColVal *tColVal = &tTsVal->colVal;
|
|
||||||
|
|
||||||
// if (!colVal.isNone && !colVal.isNull) {
|
|
||||||
// if (tColVal->isNull || tColVal->isNone) {
|
|
||||||
// // taosArraySet(pColArray, i, &colVal);
|
|
||||||
// taosArraySet(pColArray, i, &(SLastCol){.ts = rowTs, .colVal = colVal});
|
|
||||||
// --nilColCount;
|
|
||||||
// }
|
|
||||||
// } else {
|
|
||||||
// if ((tColVal->isNull || tColVal->isNone) && !setICol) {
|
|
||||||
// iCol = i;
|
|
||||||
// setICol = true;
|
|
||||||
|
|
||||||
// for (int j = 0; j < nMerge; ++j) {
|
|
||||||
// SColVal jColVal = {0};
|
|
||||||
// tsdbRowGetColVal(merge[j], pTSchema, i, &jColVal);
|
|
||||||
// if (jColVal.isNull || jColVal.isNone) {
|
|
||||||
// input[iMerge[j]].next = true;
|
|
||||||
// }
|
|
||||||
// }
|
|
||||||
// }
|
|
||||||
// }
|
|
||||||
// }
|
|
||||||
|
|
||||||
// if (*ppRow) {
|
|
||||||
// taosMemoryFreeClear(*ppRow);
|
|
||||||
// }
|
|
||||||
// } while (nilColCount > 0);
|
|
||||||
|
|
||||||
// // if () new ts row from pColArray if non empty
|
|
||||||
// /* if (taosArrayGetSize(pColArray) == nCol) { */
|
|
||||||
// /* code = tdSTSRowNew(pColArray, pTSchema, ppRow); */
|
|
||||||
// /* if (code) goto _err; */
|
|
||||||
// /* } */
|
|
||||||
// /* taosArrayDestroy(pColArray); */
|
|
||||||
// if (taosArrayGetSize(pColArray) <= 0) {
|
|
||||||
// *ppLastArray = NULL;
|
|
||||||
// taosArrayDestroy(pColArray);
|
|
||||||
// } else {
|
|
||||||
// *ppLastArray = pColArray;
|
|
||||||
// }
|
|
||||||
// if (*ppRow) {
|
|
||||||
// taosMemoryFreeClear(*ppRow);
|
|
||||||
// }
|
|
||||||
|
|
||||||
// for (int i = 0; i < 3; ++i) {
|
|
||||||
// if (input[i].nextRowClearFn) {
|
|
||||||
// input[i].nextRowClearFn(input[i].iter);
|
|
||||||
// }
|
|
||||||
// }
|
|
||||||
// if (pSkyline) {
|
|
||||||
// taosArrayDestroy(pSkyline);
|
|
||||||
// }
|
|
||||||
// taosMemoryFreeClear(pTSchema);
|
|
||||||
|
|
||||||
// return code;
|
|
||||||
// _err:
|
|
||||||
// taosArrayDestroy(pColArray);
|
|
||||||
// if (*ppRow) {
|
|
||||||
// taosMemoryFreeClear(*ppRow);
|
|
||||||
// }
|
|
||||||
// for (int i = 0; i < 3; ++i) {
|
|
||||||
// if (input[i].nextRowClearFn) {
|
|
||||||
// input[i].nextRowClearFn(input[i].iter);
|
|
||||||
// }
|
|
||||||
// }
|
|
||||||
// if (pSkyline) {
|
|
||||||
// taosArrayDestroy(pSkyline);
|
|
||||||
// }
|
|
||||||
// taosMemoryFreeClear(pTSchema);
|
|
||||||
// tsdbError("vgId:%d merge last_row failed since %s", TD_VID(pTsdb->pVnode), tstrerror(code));
|
|
||||||
// return code;
|
|
||||||
// }
|
|
||||||
|
|
||||||
int32_t tsdbCacheGetLastrowH(SLRUCache *pCache, tb_uid_t uid, STsdb *pTsdb, LRUHandle **handle) {
|
int32_t tsdbCacheGetLastrowH(SLRUCache *pCache, tb_uid_t uid, STsdb *pTsdb, LRUHandle **handle) {
|
||||||
int32_t code = 0;
|
int32_t code = 0;
|
||||||
char key[32] = {0};
|
char key[32] = {0};
|
||||||
int keyLen = 0;
|
int keyLen = 0;
|
||||||
|
|
||||||
// getTableCacheKey(uid, "lr", key, &keyLen);
|
// getTableCacheKeyS(uid, "lr", key, &keyLen);
|
||||||
getTableCacheKey(uid, 0, key, &keyLen);
|
getTableCacheKey(uid, 0, key, &keyLen);
|
||||||
LRUHandle *h = taosLRUCacheLookup(pCache, key, keyLen);
|
LRUHandle *h = taosLRUCacheLookup(pCache, key, keyLen);
|
||||||
if (h) {
|
if (h) {
|
||||||
//*ppRow = (STSRow *)taosLRUCacheValue(pCache, h);
|
|
||||||
} else {
|
} else {
|
||||||
STSRow *pRow = NULL;
|
STSRow *pRow = NULL;
|
||||||
bool dup = false; // which is always false for now
|
bool dup = false; // which is always false for now
|
||||||
code = mergeLastRow2(uid, pTsdb, &dup, &pRow);
|
code = mergeLastRow(uid, pTsdb, &dup, &pRow);
|
||||||
// if table's empty or error, return code of -1
|
// if table's empty or error, return code of -1
|
||||||
if (code < 0 || pRow == NULL) {
|
if (code < 0 || pRow == NULL) {
|
||||||
if (!dup && pRow) {
|
if (!dup && pRow) {
|
||||||
|
@ -1680,9 +1122,7 @@ int32_t tsdbCacheGetLastrowH(SLRUCache *pCache, tb_uid_t uid, STsdb *pTsdb, LRUH
|
||||||
code = -1;
|
code = -1;
|
||||||
}
|
}
|
||||||
|
|
||||||
// tsdbCacheInsertLastrow(pCache, pTsdb, uid, pRow, dup);
|
|
||||||
h = taosLRUCacheLookup(pCache, key, keyLen);
|
h = taosLRUCacheLookup(pCache, key, keyLen);
|
||||||
//*ppRow = (STSRow *)taosLRUCacheValue(pCache, h);
|
|
||||||
}
|
}
|
||||||
|
|
||||||
*handle = h;
|
*handle = h;
|
||||||
|
@@ -1719,18 +1159,13 @@ int32_t tsdbCacheGetLastH(SLRUCache *pCache, tb_uid_t uid, STsdb *pTsdb, LRUHand
   char key[32] = {0};
   int  keyLen = 0;
 
-  // getTableCacheKey(uid, "l", key, &keyLen);
+  // getTableCacheKeyS(uid, "l", key, &keyLen);
   getTableCacheKey(uid, 1, key, &keyLen);
   LRUHandle *h = taosLRUCacheLookup(pCache, key, keyLen);
   if (h) {
-    //*ppRow = (STSRow *)taosLRUCacheValue(pCache, h);
-
   } else {
-    // STSRow *pRow = NULL;
-    // code = mergeLast(uid, pTsdb, &pRow);
     SArray *pLastArray = NULL;
-    // code = mergeLast(uid, pTsdb, &pLastArray);
-    code = mergeLast2(uid, pTsdb, &pLastArray);
+    code = mergeLast(uid, pTsdb, &pLastArray);
     // if table's empty or error, return code of -1
     // if (code < 0 || pRow == NULL) {
     if (code < 0 || pLastArray == NULL) {
@@ -1746,7 +1181,6 @@ int32_t tsdbCacheGetLastH(SLRUCache *pCache, tb_uid_t uid, STsdb *pTsdb, LRUHand
     }
 
     h = taosLRUCacheLookup(pCache, key, keyLen);
-    //*ppRow = (STSRow *)taosLRUCacheValue(pCache, h);
   }
 
   *handle = h;
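
The two cache getters above share the same lookup-or-build shape: probe the LRU cache by a (uid, kind) key and only merge the last row / last values on a miss, then insert the result. The sketch below is a minimal, hypothetical illustration of that pattern with a toy array-backed cache; it does not use TDengine's SLRUCache API, and every name in it is invented for illustration.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

// Toy stand-in for the real LRU cache: a tiny direct-mapped table keyed by (uid, kind).
typedef struct {
  bool    used;
  int64_t uid;
  int     kind;   // 0 = last_row, 1 = last
  double  value;  // whatever the merge step produced
} ToyCacheSlot;

#define TOY_CACHE_SLOTS 64
static ToyCacheSlot g_cache[TOY_CACHE_SLOTS];

// Pretend "merge" step: in the real code this scans memtables and data files.
static int mergeLastValue(int64_t uid, double *out) {
  *out = (double)(uid % 97);  // placeholder result
  return 0;                   // 0 means success
}

// Lookup-or-build: probe the cache first, compute and insert only on a miss.
static int cacheGetLast(int64_t uid, int kind, double *out) {
  size_t        slot = (size_t)((uid * 31 + kind) % TOY_CACHE_SLOTS);
  ToyCacheSlot *s = &g_cache[slot];

  if (s->used && s->uid == uid && s->kind == kind) {  // cache hit
    *out = s->value;
    return 0;
  }
  double v = 0;
  if (mergeLastValue(uid, &v) < 0) return -1;  // miss and the build failed
  *s = (ToyCacheSlot){.used = true, .uid = uid, .kind = kind, .value = v};
  *out = v;
  return 0;
}

int main(void) {
  double v = 0;
  cacheGetLast(1001, 0, &v);  // first call builds and caches
  cacheGetLast(1001, 0, &v);  // second call is served from the cache
  printf("last value: %f\n", v);
  return 0;
}
```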
@@ -281,8 +281,8 @@ void vnodeApplyWriteMsg(SQueueInfo *pInfo, STaosQall *qall, int32_t numOfMsgs) {
   for (int32_t i = 0; i < numOfMsgs; ++i) {
     if (taosGetQitem(qall, (void **)&pMsg) == 0) continue;
     const STraceId *trace = &pMsg->info.traceId;
-    vGTrace("vgId:%d, msg:%p get from vnode-apply queue, type:%s handle:%p", vgId, pMsg, TMSG_INFO(pMsg->msgType),
-            pMsg->info.handle);
+    vGInfo("vgId:%d, msg:%p get from vnode-apply queue, type:%s handle:%p index:%ld", vgId, pMsg,
+           TMSG_INFO(pMsg->msgType), pMsg->info.handle, pMsg->info.conn.applyIndex);
 
     SRpcMsg rsp = {.code = pMsg->code, .info = pMsg->info};
     if (rsp.code == 0) {
@@ -503,9 +503,6 @@ static void vnodeSyncReconfig(struct SSyncFSM *pFsm, const SRpcMsg *pMsg, SReCon
 static void vnodeSyncCommitMsg(SSyncFSM *pFsm, const SRpcMsg *pMsg, SFsmCbMeta cbMeta) {
   if (cbMeta.isWeak == 0) {
     SVnode *pVnode = pFsm->data;
-    vTrace("vgId:%d, commit-cb is excuted, fsm:%p, index:%" PRId64 ", isWeak:%d, code:%d, state:%d %s, msgtype:%d %s",
-           syncGetVgId(pVnode->sync), pFsm, cbMeta.index, cbMeta.isWeak, cbMeta.code, cbMeta.state,
-           syncUtilState2String(cbMeta.state), pMsg->msgType, TMSG_INFO(pMsg->msgType));
 
     if (cbMeta.code == 0) {
       SRpcMsg rpcMsg = {.msgType = pMsg->msgType, .contLen = pMsg->contLen};
@@ -514,11 +511,17 @@ static void vnodeSyncCommitMsg(SSyncFSM *pFsm, const SRpcMsg *pMsg, SFsmCbMeta c
       syncGetAndDelRespRpc(pVnode->sync, cbMeta.seqNum, &rpcMsg.info);
       rpcMsg.info.conn.applyIndex = cbMeta.index;
       rpcMsg.info.conn.applyTerm = cbMeta.term;
+
+      vInfo("vgId:%d, commit-cb is excuted, fsm:%p, index:%" PRId64 ", term:%" PRIu64 ", msg-index:%" PRId64
+            ", isWeak:%d, code:%d, state:%d %s, msgtype:%d %s",
+            syncGetVgId(pVnode->sync), pFsm, cbMeta.index, cbMeta.term, rpcMsg.info.conn.applyIndex, cbMeta.isWeak,
+            cbMeta.code, cbMeta.state, syncUtilState2String(cbMeta.state), pMsg->msgType, TMSG_INFO(pMsg->msgType));
+
       tmsgPutToQueue(&pVnode->msgCb, APPLY_QUEUE, &rpcMsg);
     } else {
       SRpcMsg rsp = {.code = cbMeta.code, .info = pMsg->info};
-      vError("vgId:%d, sync commit error, msgtype:%d,%s, error:0x%X, errmsg:%s", syncGetVgId(pVnode->sync),
-             pMsg->msgType, TMSG_INFO(pMsg->msgType), cbMeta.code, tstrerror(cbMeta.code));
+      vError("vgId:%d, sync commit error, msgtype:%d,%s, index:%ld, error:0x%X, errmsg:%s", syncGetVgId(pVnode->sync),
+             pMsg->msgType, TMSG_INFO(pMsg->msgType), cbMeta.index, cbMeta.code, tstrerror(cbMeta.code));
       if (rsp.info.handle != NULL) {
         tmsgSendRsp(&rsp);
       }
@@ -612,8 +612,7 @@ int32_t sumFunction(SqlFunctionCtx* pCtx) {
   SSumRes* pSumRes = GET_ROWCELL_INTERBUF(GET_RES_INFO(pCtx));
 
   if (IS_NULL_TYPE(type)) {
-    GET_RES_INFO(pCtx)->isNullRes = 1;
-    numOfElem = 1;
+    numOfElem = 0;
     goto _sum_over;
   }
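
This hunk and the next change how an aggregate treats an input column whose type is NULL: instead of flagging the result object, they record zero contributing rows and let the upper layer render the result as NULL. A hedged, generic sketch of that convention (toy types, not TDengine's SqlFunctionCtx):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

// Minimal sum aggregator: numOfElem counts the rows that actually contributed.
// If no row contributes (e.g. the whole column is NULL), the caller can render
// the result as NULL.
typedef struct {
  int64_t sum;
  int32_t numOfElem;
} ToySumRes;

static void toySum(const int64_t *vals, const bool *isNull, int32_t nRows, ToySumRes *res) {
  res->sum = 0;
  res->numOfElem = 0;
  for (int32_t i = 0; i < nRows; ++i) {
    if (isNull[i]) continue;  // skip NULL rows
    res->sum += vals[i];
    res->numOfElem++;
  }
}

int main(void) {
  int64_t   vals[3] = {0, 0, 0};
  bool      nulls[3] = {true, true, true};  // an all-NULL column
  ToySumRes res;
  toySum(vals, nulls, 3, &res);
  printf("elems=%d -> %s\n", res.numOfElem, res.numOfElem == 0 ? "NULL result" : "has sum");
  return 0;
}
```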
@@ -1172,8 +1171,7 @@ int32_t doMinMaxHelper(SqlFunctionCtx* pCtx, int32_t isMinFunc) {
   SMinmaxResInfo* pBuf = GET_ROWCELL_INTERBUF(pResInfo);
 
   if (IS_NULL_TYPE(type)) {
-    GET_RES_INFO(pCtx)->isNullRes = 1;
-    numOfElems = 1;
+    numOfElems = 0;
     goto _min_max_over;
   }
@@ -579,11 +579,13 @@ static int32_t sifExecLogic(SLogicConditionNode *node, SIFCtx *ctx, SIFParam *ou
 
   if (ctx->noExec == false) {
     for (int32_t m = 0; m < node->pParameterList->length; m++) {
-      // add impl later
       if (node->condType == LOGIC_COND_TYPE_AND) {
         taosArrayAddAll(output->result, params[m].result);
+        // taosArrayDestroy(params[m].result);
+        // params[m].result = NULL;
       } else if (node->condType == LOGIC_COND_TYPE_OR) {
         taosArrayAddAll(output->result, params[m].result);
+        // params[m].result = NULL;
       } else if (node->condType == LOGIC_COND_TYPE_NOT) {
         // taosArrayAddAll(output->result, params[m].result);
       }
@@ -593,6 +595,8 @@ static int32_t sifExecLogic(SLogicConditionNode *node, SIFCtx *ctx, SIFParam *ou
   } else {
     for (int32_t m = 0; m < node->pParameterList->length; m++) {
       output->status = sifMergeCond(node->condType, output->status, params[m].status);
+      taosArrayDestroy(params[m].result);
+      params[m].result = NULL;
     }
   }
 _return:
@@ -607,6 +611,7 @@ static EDealRes sifWalkFunction(SNode *pNode, void *context) {
   SIFCtx *ctx = context;
   ctx->code = sifExecFunction(node, ctx, &output);
   if (ctx->code != TSDB_CODE_SUCCESS) {
+    sifFreeParam(&output);
     return DEAL_RES_ERROR;
   }
 
@@ -624,6 +629,7 @@ static EDealRes sifWalkLogic(SNode *pNode, void *context) {
   SIFCtx *ctx = context;
   ctx->code = sifExecLogic(node, ctx, &output);
   if (ctx->code) {
+    sifFreeParam(&output);
     return DEAL_RES_ERROR;
   }
 
@@ -640,6 +646,7 @@ static EDealRes sifWalkOper(SNode *pNode, void *context) {
   SIFCtx *ctx = context;
   ctx->code = sifExecOper(node, ctx, &output);
   if (ctx->code) {
+    sifFreeParam(&output);
     return DEAL_RES_ERROR;
   }
   if (taosHashPut(ctx->pRes, &pNode, POINTER_BYTES, &output, sizeof(output))) {
@@ -698,7 +705,11 @@ static int32_t sifCalculate(SNode *pNode, SIFParam *pDst) {
   }
 
   nodesWalkExprPostOrder(pNode, sifCalcWalker, &ctx);
-  SIF_ERR_RET(ctx.code);
+  if (ctx.code != 0) {
+    sifFreeRes(ctx.pRes);
+    return ctx.code;
+  }
 
   if (pDst) {
     SIFParam *res = (SIFParam *)taosHashGet(ctx.pRes, (void *)&pNode, POINTER_BYTES);
@@ -714,8 +725,7 @@ static int32_t sifCalculate(SNode *pNode, SIFParam *pDst) {
     taosHashRemove(ctx.pRes, (void *)&pNode, POINTER_BYTES);
   }
   sifFreeRes(ctx.pRes);
-  SIF_RET(code);
+  return code;
 }
 
 static int32_t sifGetFltHint(SNode *pNode, SIdxFltStatus *status) {
@@ -732,8 +742,10 @@ static int32_t sifGetFltHint(SNode *pNode, SIdxFltStatus *status) {
   }
 
   nodesWalkExprPostOrder(pNode, sifCalcWalker, &ctx);
-  SIF_ERR_RET(ctx.code);
+  if (ctx.code != 0) {
+    sifFreeRes(ctx.pRes);
+    return ctx.code;
+  }
 
   SIFParam *res = (SIFParam *)taosHashGet(ctx.pRes, (void *)&pNode, POINTER_BYTES);
   if (res == NULL) {
@@ -745,8 +757,7 @@ static int32_t sifGetFltHint(SNode *pNode, SIdxFltStatus *status) {
   sifFreeParam(res);
   taosHashRemove(ctx.pRes, (void *)&pNode, POINTER_BYTES);
   taosHashCleanup(ctx.pRes);
-  SIF_RET(code);
+  return code;
 }
 
 int32_t doFilterTag(SNode *pFilterNode, SIndexMetaArg *metaArg, SArray *result, SIdxFltStatus *status) {
@@ -760,7 +771,11 @@ int32_t doFilterTag(SNode *pFilterNode, SIndexMetaArg *metaArg, SArray *result,
 
   SArray *output = taosArrayInit(8, sizeof(uint64_t));
   SIFParam param = {.arg = *metaArg, .result = output};
-  SIF_ERR_RET(sifCalculate((SNode *)pFilterNode, &param));
+  int32_t code = sifCalculate((SNode *)pFilterNode, &param);
+  if (code != 0) {
+    sifFreeParam(&param);
+    return code;
+  }
 
   taosArrayAddAll(result, param.result);
   sifFreeParam(&param);
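
The index-filter hunks above all apply the same fix: every early error return now frees the intermediate parameter or result that was allocated before the failure. A hedged, generic sketch of that cleanup-on-error shape in C follows; the types and helper names are invented and are not the sif* API itself.

```c
#include <stdio.h>
#include <stdlib.h>

typedef struct {
  int   *ids;
  size_t n;
} Result;

static int buildResult(Result *r) {
  r->ids = calloc(16, sizeof(int));
  r->n = 16;
  return r->ids ? 0 : -1;
}

static void freeResult(Result *r) {
  free(r->ids);
  r->ids = NULL;
  r->n = 0;
}

// Any failure after the allocation jumps to one cleanup point, so nothing leaks.
static int evaluate(int failStep, Result *out) {
  Result tmp = {0};
  int    code = buildResult(&tmp);
  if (code != 0) return code;

  if (failStep == 1) { code = -1; goto _err; }  // error after allocation
  if (failStep == 2) { code = -2; goto _err; }  // another possible failure

  *out = tmp;  // success: ownership moves to the caller
  return 0;

_err:
  freeResult(&tmp);  // every failure path releases the intermediate result
  return code;
}

int main(void) {
  Result r = {0};
  printf("ok=%d err=%d\n", evaluate(0, &r), evaluate(1, &r));
  freeResult(&r);
  return 0;
}
```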
@@ -1,5 +1,4 @@
 /** Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
-
  *
  * This program is free software: you can use, redistribute, and/or modify
  * it under the terms of the GNU Affero General Public License, version 3
@@ -16,6 +15,10 @@
 #ifdef USE_UV
 #include "transComm.h"
 
+typedef struct SConnList {
+  queue conn;
+} SConnList;
+
 typedef struct SCliConn {
   T_REF_DECLARE()
   uv_connect_t connReq;
@@ -26,7 +29,9 @@ typedef struct SCliConn {
 
   SConnBuffer readBuf;
   STransQueue cliMsgs;
-  queue       q;
+
+  queue      q;
+  SConnList* list;
 
   STransCtx ctx;
   bool      broken;  // link broken or not
@@ -56,13 +61,14 @@ typedef struct SCliMsg {
 } SCliMsg;
 
 typedef struct SCliThrd {
   TdThread    thread;  // tid
   int64_t     pid;     // pid
   uv_loop_t*  loop;
   SAsyncPool* asyncPool;
   uv_idle_t*  idle;
-  uv_timer_t  timer;
-  void*       pool;  // conn pool
+  uv_prepare_t* prepare;
+  uv_timer_t    timer;
+  void*         pool;  // conn pool
 
   // msg queue
   queue msg;
@@ -86,10 +92,6 @@ typedef struct SCliObj {
   SCliThrd** pThreadObj;
 } SCliObj;
 
-typedef struct SConnList {
-  queue conn;
-} SConnList;
-
 // conn pool
 // add expire timeout and capacity limit
 static void* createConnPool(int size);
@@ -101,7 +103,7 @@ static void doCloseIdleConn(void* param);
 static int sockDebugInfo(struct sockaddr* sockname, char* dst) {
   struct sockaddr_in addr = *(struct sockaddr_in*)sockname;
 
-  char buf[20] = {0};
+  char buf[16] = {0};
   int  r = uv_ip4_name(&addr, (char*)buf, sizeof(buf));
   sprintf(dst, "%s:%d", buf, ntohs(addr.sin_port));
   return r;
@@ -118,6 +120,9 @@ static void cliSendCb(uv_write_t* req, int status);
 static void cliConnCb(uv_connect_t* req, int status);
 static void cliAsyncCb(uv_async_t* handle);
 static void cliIdleCb(uv_idle_t* handle);
+static void cliPrepareCb(uv_prepare_t* handle);
+
+static int32_t allocConnRef(SCliConn* conn, bool update);
 
 static int cliAppCb(SCliConn* pConn, STransMsg* pResp, SCliMsg* pMsg);
 
@@ -198,7 +203,7 @@ static void cliReleaseUnfinishedMsg(SCliConn* conn) {
       pThrd = (SCliThrd*)(exh)->pThrd; \
     }                                  \
   } while (0)
-#define CONN_PERSIST_TIME(para)    ((para) == 0 ? 3 * 1000 : (para))
+#define CONN_PERSIST_TIME(para)    ((para) <= 90000 ? 90000 : (para))
 #define CONN_GET_HOST_THREAD(conn) (conn ? ((SCliConn*)conn)->hostThrd : NULL)
 #define CONN_GET_INST_LABEL(conn)  (((STrans*)(((SCliThrd*)(conn)->hostThrd)->pTransInst))->label)
 #define CONN_SHOULD_RELEASE(conn, head) \
@@ -499,9 +504,8 @@ void* destroyConnPool(void* pool) {
 }
 
 static SCliConn* getConnFromPool(void* pool, char* ip, uint32_t port) {
-  char key[128] = {0};
+  char key[32] = {0};
   CONN_CONSTRUCT_HASH_KEY(key, ip, port);
 
   SHashObj*  pPool = pool;
   SConnList* plist = taosHashGet(pPool, key, strlen(key));
   if (plist == NULL) {
@@ -519,13 +523,44 @@ static SCliConn* getConnFromPool(void* pool, char* ip, uint32_t port) {
   conn->status = ConnNormal;
   QUEUE_REMOVE(&conn->q);
   QUEUE_INIT(&conn->q);
-  assert(h == &conn->q);
 
   transDQCancel(((SCliThrd*)conn->hostThrd)->timeoutQueue, conn->task);
   conn->task = NULL;
 
   return conn;
 }
+static void addConnToPool(void* pool, SCliConn* conn) {
+  if (conn->status == ConnInPool) {
+    return;
+  }
+  SCliThrd* thrd = conn->hostThrd;
+  CONN_HANDLE_THREAD_QUIT(thrd);
+
+  allocConnRef(conn, true);
+
+  STrans* pTransInst = thrd->pTransInst;
+  cliReleaseUnfinishedMsg(conn);
+  transQueueClear(&conn->cliMsgs);
+  transCtxCleanup(&conn->ctx);
+  conn->status = ConnInPool;
+
+  if (conn->list == NULL) {
+    char key[32] = {0};
+    CONN_CONSTRUCT_HASH_KEY(key, conn->ip, conn->port);
+    tTrace("%s conn %p added to conn pool, read buf cap:%d", CONN_GET_INST_LABEL(conn), conn, conn->readBuf.cap);
+    conn->list = taosHashGet((SHashObj*)pool, key, strlen(key));
+  }
+  assert(conn->list != NULL);
+  QUEUE_INIT(&conn->q);
+  QUEUE_PUSH(&conn->list->conn, &conn->q);
+
+  assert(!QUEUE_IS_EMPTY(&conn->list->conn));
+
+  STaskArg* arg = taosMemoryCalloc(1, sizeof(STaskArg));
+  arg->param1 = conn;
+  arg->param2 = thrd;
+  conn->task = transDQSched(thrd->timeoutQueue, doCloseIdleConn, arg, CONN_PERSIST_TIME(pTransInst->idleTime));
+}
 static int32_t allocConnRef(SCliConn* conn, bool update) {
   if (update) {
     transRemoveExHandle(transGetRefMgt(), conn->refId);
@@ -556,38 +591,6 @@ static int32_t specifyConnRef(SCliConn* conn, bool update, int64_t handle) {
   return 0;
 }
 
-static void addConnToPool(void* pool, SCliConn* conn) {
-  if (conn->status == ConnInPool) {
-    return;
-  }
-  SCliThrd* thrd = conn->hostThrd;
-  CONN_HANDLE_THREAD_QUIT(thrd);
-
-  allocConnRef(conn, true);
-
-  STrans* pTransInst = thrd->pTransInst;
-  cliReleaseUnfinishedMsg(conn);
-  transQueueClear(&conn->cliMsgs);
-  transCtxCleanup(&conn->ctx);
-  conn->status = ConnInPool;
-
-  char key[128] = {0};
-  CONN_CONSTRUCT_HASH_KEY(key, conn->ip, conn->port);
-  tTrace("%s conn %p added to conn pool, read buf cap:%d", CONN_GET_INST_LABEL(conn), conn, conn->readBuf.cap);
-
-  SConnList* plist = taosHashGet((SHashObj*)pool, key, strlen(key));
-  // list already create before
-  assert(plist != NULL);
-  QUEUE_INIT(&conn->q);
-  QUEUE_PUSH(&plist->conn, &conn->q);
-
-  assert(!QUEUE_IS_EMPTY(&plist->conn));
-
-  STaskArg* arg = taosMemoryCalloc(1, sizeof(STaskArg));
-  arg->param1 = conn;
-  arg->param2 = thrd;
-  conn->task = transDQSched(thrd->timeoutQueue, doCloseIdleConn, arg, CONN_PERSIST_TIME(pTransInst->idleTime));
-}
 static void cliAllocRecvBufferCb(uv_handle_t* handle, size_t suggested_size, uv_buf_t* buf) {
   SCliConn*    conn = handle->data;
   SConnBuffer* pBuf = &conn->readBuf;
@@ -602,11 +605,9 @@ static void cliRecvCb(uv_stream_t* handle, ssize_t nread, const uv_buf_t* buf) {
   SConnBuffer* pBuf = &conn->readBuf;
   if (nread > 0) {
     pBuf->len += nread;
-    if (transReadComplete(pBuf)) {
+    while (transReadComplete(pBuf)) {
       tTrace("%s conn %p read complete", CONN_GET_INST_LABEL(conn), conn);
       cliHandleResp(conn);
-    } else {
-      tTrace("%s conn %p read partial packet, continue to read", CONN_GET_INST_LABEL(conn), conn);
     }
     return;
   }
@@ -967,6 +968,62 @@ static void cliAsyncCb(uv_async_t* handle) {
 static void cliIdleCb(uv_idle_t* handle) {
   SCliThrd* thrd = handle->data;
   tTrace("do idle work");
 
+  SAsyncPool* pool = thrd->asyncPool;
+  for (int i = 0; i < pool->nAsync; i++) {
+    uv_async_t* async = &(pool->asyncs[i]);
+    SAsyncItem* item = async->data;
+
+    queue wq;
+    taosThreadMutexLock(&item->mtx);
+    QUEUE_MOVE(&item->qmsg, &wq);
+    taosThreadMutexUnlock(&item->mtx);
+
+    int count = 0;
+    while (!QUEUE_IS_EMPTY(&wq)) {
+      queue* h = QUEUE_HEAD(&wq);
+      QUEUE_REMOVE(h);
+
+      SCliMsg* pMsg = QUEUE_DATA(h, SCliMsg, q);
+      if (pMsg == NULL) {
+        continue;
+      }
+      (*cliAsyncHandle[pMsg->type])(pMsg, thrd);
+      count++;
+    }
+  }
+  tTrace("prepare work end");
+  if (thrd->stopMsg != NULL) cliHandleQuit(thrd->stopMsg, thrd);
+}
+static void cliPrepareCb(uv_prepare_t* handle) {
+  SCliThrd* thrd = handle->data;
+  tTrace("prepare work start");
+
+  SAsyncPool* pool = thrd->asyncPool;
+  for (int i = 0; i < pool->nAsync; i++) {
+    uv_async_t* async = &(pool->asyncs[i]);
+    SAsyncItem* item = async->data;
+
+    queue wq;
+    taosThreadMutexLock(&item->mtx);
+    QUEUE_MOVE(&item->qmsg, &wq);
+    taosThreadMutexUnlock(&item->mtx);
+
+    int count = 0;
+    while (!QUEUE_IS_EMPTY(&wq)) {
+      queue* h = QUEUE_HEAD(&wq);
+      QUEUE_REMOVE(h);
+
+      SCliMsg* pMsg = QUEUE_DATA(h, SCliMsg, q);
+      if (pMsg == NULL) {
+        continue;
+      }
+      (*cliAsyncHandle[pMsg->type])(pMsg, thrd);
+      count++;
+    }
+  }
+  tTrace("prepare work end");
+  if (thrd->stopMsg != NULL) cliHandleQuit(thrd->stopMsg, thrd);
 }
 
 static void* cliWorkThread(void* arg) {
@@ -1033,7 +1090,12 @@ static SCliThrd* createThrdObj() {
   // pThrd->idle = taosMemoryCalloc(1, sizeof(uv_idle_t));
   // uv_idle_init(pThrd->loop, pThrd->idle);
   // pThrd->idle->data = pThrd;
   // uv_idle_start(pThrd->idle, cliIdleCb);
 
+  pThrd->prepare = taosMemoryCalloc(1, sizeof(uv_prepare_t));
+  uv_prepare_init(pThrd->loop, pThrd->prepare);
+  pThrd->prepare->data = pThrd;
+  uv_prepare_start(pThrd->prepare, cliPrepareCb);
+
   pThrd->pool = createConnPool(4);
   transDQCreate(pThrd->loop, &pThrd->delayQueue);
@@ -1058,6 +1120,7 @@ static void destroyThrdObj(SCliThrd* pThrd) {
   transDQDestroy(pThrd->timeoutQueue, NULL);
 
   taosMemoryFree(pThrd->idle);
+  taosMemoryFree(pThrd->prepare);
   taosMemoryFree(pThrd->loop);
   taosMemoryFree(pThrd);
 }
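
The client-side hunks above replace the commented-out idle handle with a uv_prepare_t that drains the async message queues once per event-loop iteration, right before the loop blocks for I/O. Below is a minimal, standalone sketch of that libuv prepare-handle pattern; it is only an illustration under the assumption that libuv is available, and the timer exists purely to keep the demo loop iterating.

```c
// Build (assuming libuv is installed): cc prepare_demo.c -luv
#include <stdio.h>
#include <uv.h>

static uv_prepare_t prepare;
static uv_timer_t   timer;
static int          iterations = 0;

static void onTimer(uv_timer_t *t) { (void)t; /* just wakes the loop up */ }

// Runs once per loop iteration, before the loop polls for I/O -- the place
// where the transport threads above drain their pending messages.
static void onPrepare(uv_prepare_t *handle) {
  (void)handle;
  printf("prepare tick %d: drain pending messages here\n", ++iterations);
  if (iterations == 3) {   // stop both handles so uv_run() can return
    uv_prepare_stop(&prepare);
    uv_timer_stop(&timer);
  }
}

int main(void) {
  uv_loop_t *loop = uv_default_loop();

  uv_prepare_init(loop, &prepare);
  uv_prepare_start(&prepare, onPrepare);

  uv_timer_init(loop, &timer);
  uv_timer_start(&timer, onTimer, 10, 10);  // repeat every 10 ms

  uv_run(loop, UV_RUN_DEFAULT);
  uv_loop_close(loop);
  return 0;
}
```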
@@ -120,8 +120,9 @@ int transInitBuffer(SConnBuffer* buf) {
   buf->total = 0;
   return 0;
 }
-int transDestroyBuffer(SConnBuffer* buf) {
-  taosMemoryFree(buf->buf);
+int transDestroyBuffer(SConnBuffer* p) {
+  taosMemoryFree(p->buf);
+  p->buf = NULL;
   return 0;
 }
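
The transDestroyBuffer change above nulls the owned pointer after freeing it, so a repeated destroy or a stale check cannot touch freed memory. A tiny, hedged sketch of the same idea with illustrative types:

```c
#include <stdlib.h>

typedef struct {
  char *buf;
  int   len;
  int   cap;
} Buffer;

static int destroyBuffer(Buffer *p) {
  free(p->buf);
  p->buf = NULL;  // a second destroy becomes a harmless free(NULL)
  p->len = 0;
  p->cap = 0;
  return 0;
}

int main(void) {
  Buffer b = {.buf = malloc(128), .len = 0, .cap = 128};
  destroyBuffer(&b);
  destroyBuffer(&b);  // safe: buf is already NULL
  return 0;
}
```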
@@ -73,6 +73,7 @@ typedef struct SWorkThrd {
   uv_os_fd_t  fd;
   uv_loop_t*  loop;
   SAsyncPool* asyncPool;
+  uv_prepare_t* prepare;
   queue         msg;
   TdThreadMutex msgMtx;
 
@@ -112,6 +113,7 @@ static void uvOnConnectionCb(uv_stream_t* q, ssize_t nread, const uv_buf_t* buf)
 static void uvWorkerAsyncCb(uv_async_t* handle);
 static void uvAcceptAsyncCb(uv_async_t* handle);
 static void uvShutDownCb(uv_shutdown_t* req, int status);
+static void uvPrepareCb(uv_prepare_t* handle);
 
 /*
  * time-consuming task throwed into BG work thread
@@ -238,8 +240,6 @@ static void uvHandleReq(SSvrConn* pConn) {
   transMsg.msgType = pHead->msgType;
   transMsg.code = pHead->code;
-
-  // transClearBuffer(&pConn->readBuf);
 
   pConn->inType = pHead->msgType;
   if (pConn->status == ConnNormal) {
     if (pHead->persist == 1) {
@@ -546,6 +546,52 @@ static void uvShutDownCb(uv_shutdown_t* req, int status) {
   uv_close((uv_handle_t*)req->handle, uvDestroyConn);
   taosMemoryFree(req);
 }
+static void uvPrepareCb(uv_prepare_t* handle) {
+  // prepare callback
+  SWorkThrd*  pThrd = handle->data;
+  SAsyncPool* pool = pThrd->asyncPool;
+
+  for (int i = 0; i < pool->nAsync; i++) {
+    uv_async_t* async = &(pool->asyncs[i]);
+    SAsyncItem* item = async->data;
+
+    queue wq;
+    taosThreadMutexLock(&item->mtx);
+    QUEUE_MOVE(&item->qmsg, &wq);
+    taosThreadMutexUnlock(&item->mtx);
+
+    while (!QUEUE_IS_EMPTY(&wq)) {
+      queue* head = QUEUE_HEAD(&wq);
+      QUEUE_REMOVE(head);
+
+      SSvrMsg* msg = QUEUE_DATA(head, SSvrMsg, q);
+      if (msg == NULL) {
+        tError("unexcept occurred, continue");
+        continue;
+      }
+      // release handle to rpc init
+      if (msg->type == Quit) {
+        (*transAsyncHandle[msg->type])(msg, pThrd);
+        continue;
+      } else {
+        STransMsg transMsg = msg->msg;
+
+        SExHandle* exh1 = transMsg.info.handle;
+        int64_t    refId = transMsg.info.refId;
+        SExHandle* exh2 = transAcquireExHandle(transGetRefMgt(), refId);
+        if (exh2 == NULL || exh1 != exh2) {
+          tTrace("handle except msg %p, ignore it", exh1);
+          transReleaseExHandle(transGetRefMgt(), refId);
+          destroySmsg(msg);
+          continue;
+        }
+        msg->pConn = exh1->handle;
+        transReleaseExHandle(transGetRefMgt(), refId);
+        (*transAsyncHandle[msg->type])(msg, pThrd);
+      }
+    }
+  }
+}
 
 static void uvWorkDoTask(uv_work_t* req) {
   // doing time-consumeing task
@@ -695,13 +741,17 @@ static bool addHandleToWorkloop(SWorkThrd* pThrd, char* pipeName) {
   }
 
   uv_pipe_init(pThrd->loop, pThrd->pipe, 1);
-  // int r = uv_pipe_open(pThrd->pipe, pThrd->fd);
 
   pThrd->pipe->data = pThrd;
 
   QUEUE_INIT(&pThrd->msg);
   taosThreadMutexInit(&pThrd->msgMtx, NULL);
 
+  pThrd->prepare = taosMemoryCalloc(1, sizeof(uv_prepare_t));
+  uv_prepare_init(pThrd->loop, pThrd->prepare);
+  uv_prepare_start(pThrd->prepare, uvPrepareCb);
+  pThrd->prepare->data = pThrd;
+
   // conn set
   QUEUE_INIT(&pThrd->conn);
 
@@ -986,6 +1036,7 @@ void destroyWorkThrd(SWorkThrd* pThrd) {
   SRV_RELEASE_UV(pThrd->loop);
   TRANS_DESTROY_ASYNC_POOL_MSG(pThrd->asyncPool, SSvrMsg, destroySmsg);
   transAsyncPoolDestroy(pThrd->asyncPool);
+  taosMemoryFree(pThrd->prepare);
   taosMemoryFree(pThrd->loop);
   taosMemoryFree(pThrd);
 }
@@ -139,7 +139,7 @@ int walCheckAndRepairMeta(SWal* pWal) {
   const char* idxPattern = "^[0-9]+.idx$";
   regex_t     logRegPattern;
   regex_t     idxRegPattern;
-  SArray*     pLogInfoArray = taosArrayInit(8, sizeof(SWalFileInfo));
+  SArray*     actualLog = taosArrayInit(8, sizeof(SWalFileInfo));
 
   regcomp(&logRegPattern, logPattern, REG_EXTENDED);
   regcomp(&idxRegPattern, idxPattern, REG_EXTENDED);
@@ -159,7 +159,7 @@ int walCheckAndRepairMeta(SWal* pWal) {
       SWalFileInfo fileInfo;
       memset(&fileInfo, -1, sizeof(SWalFileInfo));
       sscanf(name, "%" PRId64 ".log", &fileInfo.firstVer);
-      taosArrayPush(pLogInfoArray, &fileInfo);
+      taosArrayPush(actualLog, &fileInfo);
     }
   }
 
@@ -167,10 +167,10 @@ int walCheckAndRepairMeta(SWal* pWal) {
   regfree(&logRegPattern);
   regfree(&idxRegPattern);
 
-  taosArraySort(pLogInfoArray, compareWalFileInfo);
+  taosArraySort(actualLog, compareWalFileInfo);
 
   int metaFileNum = taosArrayGetSize(pWal->fileInfoSet);
-  int actualFileNum = taosArrayGetSize(pLogInfoArray);
+  int actualFileNum = taosArrayGetSize(actualLog);
 
 #if 0
   for (int32_t fileNo = actualFileNum - 1; fileNo >= 0; fileNo--) {
@@ -196,11 +196,11 @@ int walCheckAndRepairMeta(SWal* pWal) {
     taosArrayPopFrontBatch(pWal->fileInfoSet, metaFileNum - actualFileNum);
   } else if (metaFileNum < actualFileNum) {
     for (int i = metaFileNum; i < actualFileNum; i++) {
-      SWalFileInfo* pFileInfo = taosArrayGet(pLogInfoArray, i);
+      SWalFileInfo* pFileInfo = taosArrayGet(actualLog, i);
       taosArrayPush(pWal->fileInfoSet, pFileInfo);
     }
   }
-  taosArrayDestroy(pLogInfoArray);
+  taosArrayDestroy(actualLog);
 
   pWal->writeCur = actualFileNum - 1;
   if (actualFileNum > 0) {
@@ -221,7 +221,7 @@ int walCheckAndRepairMeta(SWal* pWal) {
 
     int code = walSaveMeta(pWal);
     if (code < 0) {
-      taosArrayDestroy(pLogInfoArray);
+      taosArrayDestroy(actualLog);
      return -1;
    }
  }
@@ -423,37 +423,38 @@ int32_t walFetchBody(SWalReader *pRead, SWalCkHead **ppHead) {
   return 0;
 }
 
-int32_t walReadVer(SWalReader *pRead, int64_t ver) {
-  wDebug("vgId:%d wal start to read ver %ld", pRead->pWal->cfg.vgId, ver);
+int32_t walReadVer(SWalReader *pReader, int64_t ver) {
+  wDebug("vgId:%d wal start to read ver %ld", pReader->pWal->cfg.vgId, ver);
   int64_t contLen;
+  int32_t code;
   bool    seeked = false;
 
-  if (pRead->pWal->vers.firstVer == -1) {
+  if (pReader->pWal->vers.firstVer == -1) {
     terrno = TSDB_CODE_WAL_LOG_NOT_EXIST;
     return -1;
   }
 
-  if (ver > pRead->pWal->vers.lastVer || ver < pRead->pWal->vers.firstVer) {
-    wDebug("vgId:%d, invalid index:%" PRId64 ", first index:%" PRId64 ", last index:%" PRId64, pRead->pWal->cfg.vgId,
-           ver, pRead->pWal->vers.firstVer, pRead->pWal->vers.lastVer);
+  if (ver > pReader->pWal->vers.lastVer || ver < pReader->pWal->vers.firstVer) {
+    wDebug("vgId:%d, invalid index:%" PRId64 ", first index:%" PRId64 ", last index:%" PRId64, pReader->pWal->cfg.vgId,
+           ver, pReader->pWal->vers.firstVer, pReader->pWal->vers.lastVer);
     terrno = TSDB_CODE_WAL_LOG_NOT_EXIST;
     return -1;
   }
 
-  if (pRead->curInvalid || pRead->curVersion != ver) {
-    if (walReadSeekVer(pRead, ver) < 0) {
-      wError("vgId:%d, unexpected wal log, index:%" PRId64 ", since %s", pRead->pWal->cfg.vgId, ver, terrstr());
+  if (pReader->curInvalid || pReader->curVersion != ver) {
+    if (walReadSeekVer(pReader, ver) < 0) {
+      wError("vgId:%d, unexpected wal log, index:%" PRId64 ", since %s", pReader->pWal->cfg.vgId, ver, terrstr());
       return -1;
     }
     seeked = true;
   }
 
   while (1) {
-    contLen = taosReadFile(pRead->pLogFile, pRead->pHead, sizeof(SWalCkHead));
+    contLen = taosReadFile(pReader->pLogFile, pReader->pHead, sizeof(SWalCkHead));
     if (contLen == sizeof(SWalCkHead)) {
       break;
     } else if (contLen == 0 && !seeked) {
-      walReadSeekVerImpl(pRead, ver);
+      walReadSeekVerImpl(pReader, ver);
       seeked = true;
       continue;
     } else {
@@ -467,26 +468,26 @@ int32_t walReadVer(SWalReader *pRead, int64_t ver) {
     }
   }
 
-  contLen = walValidHeadCksum(pRead->pHead);
-  if (contLen != 0) {
-    wError("vgId:%d, unexpected wal log, index:%" PRId64 ", since head checksum not passed", pRead->pWal->cfg.vgId,
+  code = walValidHeadCksum(pReader->pHead);
+  if (code != 0) {
+    wError("vgId:%d, unexpected wal log, index:%" PRId64 ", since head checksum not passed", pReader->pWal->cfg.vgId,
            ver);
     terrno = TSDB_CODE_WAL_FILE_CORRUPTED;
     return -1;
   }
 
-  if (pRead->capacity < pRead->pHead->head.bodyLen) {
-    void *ptr = taosMemoryRealloc(pRead->pHead, sizeof(SWalCkHead) + pRead->pHead->head.bodyLen);
+  if (pReader->capacity < pReader->pHead->head.bodyLen) {
+    void *ptr = taosMemoryRealloc(pReader->pHead, sizeof(SWalCkHead) + pReader->pHead->head.bodyLen);
     if (ptr == NULL) {
       terrno = TSDB_CODE_WAL_OUT_OF_MEMORY;
       return -1;
     }
-    pRead->pHead = ptr;
-    pRead->capacity = pRead->pHead->head.bodyLen;
+    pReader->pHead = ptr;
+    pReader->capacity = pReader->pHead->head.bodyLen;
   }
 
-  if ((contLen = taosReadFile(pRead->pLogFile, pRead->pHead->head.body, pRead->pHead->head.bodyLen)) !=
-      pRead->pHead->head.bodyLen) {
+  if ((contLen = taosReadFile(pReader->pLogFile, pReader->pHead->head.body, pReader->pHead->head.bodyLen)) !=
+      pReader->pHead->head.bodyLen) {
     if (contLen < 0)
       terrno = TAOS_SYSTEM_ERROR(errno);
     else {
@@ -496,25 +497,28 @@ int32_t walReadVer(SWalReader *pRead, int64_t ver) {
     return -1;
   }
 
-  if (pRead->pHead->head.version != ver) {
-    wError("vgId:%d, unexpected wal log, index:%" PRId64 ", read request index:%" PRId64, pRead->pWal->cfg.vgId,
-           pRead->pHead->head.version, ver);
-    pRead->curInvalid = 1;
+  if (pReader->pHead->head.version != ver) {
+    wError("vgId:%d, unexpected wal log, index:%" PRId64 ", read request index:%" PRId64, pReader->pWal->cfg.vgId,
+           pReader->pHead->head.version, ver);
+    pReader->curInvalid = 1;
     terrno = TSDB_CODE_WAL_FILE_CORRUPTED;
     ASSERT(0);
     return -1;
   }
 
-  contLen = walValidBodyCksum(pRead->pHead);
-  if (contLen != 0) {
-    wError("vgId:%d, unexpected wal log, index:%" PRId64 ", since body checksum not passed", pRead->pWal->cfg.vgId,
+  code = walValidBodyCksum(pReader->pHead);
+  if (code != 0) {
+    wError("vgId:%d, unexpected wal log, index:%" PRId64 ", since body checksum not passed", pReader->pWal->cfg.vgId,
            ver);
-    pRead->curInvalid = 1;
+    uint32_t readCkSum = walCalcBodyCksum(pReader->pHead->head.body, pReader->pHead->head.bodyLen);
+    uint32_t logCkSum = pReader->pHead->cksumBody;
+    wError("checksum written into log: %u, checksum calculated: %u", logCkSum, readCkSum);
+    pReader->curInvalid = 1;
     terrno = TSDB_CODE_WAL_FILE_CORRUPTED;
     ASSERT(0);
     return -1;
   }
-  pRead->curVersion++;
+  pReader->curVersion++;
 
   return 0;
 }
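
The walReadVer hunks above rename the reader variable and, on a body-checksum mismatch, log both the checksum stored in the log and the one recomputed from the bytes just read. The sketch below shows the general record-plus-checksum check; the checksum function is a simple stand-in (FNV-1a), not TDengine's actual algorithm, and the struct layout is illustrative only.

```c
#include <stdint.h>
#include <stdio.h>

typedef struct {
  uint64_t version;
  uint32_t bodyLen;
  uint32_t cksumBody;  // checksum of the body, written at log time
} RecordHead;

static uint32_t toyChecksum(const uint8_t *p, uint32_t len) {
  uint32_t h = 2166136261u;  // FNV-1a, purely illustrative
  for (uint32_t i = 0; i < len; ++i) h = (h ^ p[i]) * 16777619u;
  return h;
}

static int validateBody(const RecordHead *head, const uint8_t *body) {
  uint32_t calc = toyChecksum(body, head->bodyLen);
  if (calc != head->cksumBody) {
    fprintf(stderr, "checksum written into log: %u, checksum calculated: %u\n",
            head->cksumBody, calc);
    return -1;  // caller marks the reader invalid and reports corruption
  }
  return 0;
}

int main(void) {
  uint8_t    body[] = "some wal entry body";
  RecordHead head = {.version = 7, .bodyLen = sizeof(body)};
  head.cksumBody = toyChecksum(body, head.bodyLen);

  printf("valid: %d\n", validateBody(&head, body));
  body[0] ^= 0xFF;  // corrupt one byte
  printf("corrupted: %d\n", validateBody(&head, body));
  return 0;
}
```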
@@ -289,18 +289,25 @@ int32_t walEndSnapshot(SWal *pWal) {
       newTotSize -= iter->fileSize;
     }
   }
-  char fnameStr[WAL_FILE_LEN];
+  int32_t actualDelete = 0;
+  char    fnameStr[WAL_FILE_LEN];
   // remove file
   for (int i = 0; i < deleteCnt; i++) {
     pInfo = taosArrayGet(pWal->fileInfoSet, i);
     walBuildLogName(pWal, pInfo->firstVer, fnameStr);
-    taosRemoveFile(fnameStr);
+    if (taosRemoveFile(fnameStr) < 0) {
+      goto UPDATE_META;
+    }
     walBuildIdxName(pWal, pInfo->firstVer, fnameStr);
-    taosRemoveFile(fnameStr);
+    if (taosRemoveFile(fnameStr) < 0) {
+      ASSERT(0);
+    }
+    actualDelete++;
   }
 
+UPDATE_META:
   // make new array, remove files
-  taosArrayPopFrontBatch(pWal->fileInfoSet, deleteCnt);
+  taosArrayPopFrontBatch(pWal->fileInfoSet, actualDelete);
   if (taosArrayGetSize(pWal->fileInfoSet) == 0) {
     pWal->writeCur = -1;
     pWal->vers.firstVer = -1;
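
The walEndSnapshot hunk above stops deleting on the first failed unlink and trims the in-memory file set only by the number of files actually removed, keeping metadata and disk in sync. A hedged, generic sketch of that "count what really succeeded, then trim" shape, with a stand-in remove_file():

```c
#include <stdio.h>

static int remove_file(int idx) { return (idx == 2) ? -1 : 0; }  // pretend file #2 fails

int main(void) {
  int deleteCnt = 5;
  int actualDelete = 0;

  for (int i = 0; i < deleteCnt; i++) {
    if (remove_file(i) < 0) {
      break;  // stop deleting, fall through to the metadata update
    }
    actualDelete++;
  }

  // trim only what really disappeared from disk
  printf("requested=%d, trimmed=%d\n", deleteCnt, actualDelete);
  return 0;
}
```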
@@ -18,7 +18,7 @@ import time
 import socket
 import json
 import toml
-from .boundary import DataBoundary
+from util.boundary import DataBoundary
 import taos
 from util.log import *
 from util.sql import *
@@ -80,23 +80,18 @@ class DataSet:
             self.bool_data.append( bool((i + bool_start) % 2 ))
             self.vchar_data.append( f"{vchar_prefix}_{i * vchar_step}" )
             self.nchar_data.append( f"{nchar_prefix}_{i * nchar_step}")
-            self.ts_data.append( int(datetime.datetime.timestamp(datetime.datetime.now()) * 1000 - i * ts_step))
+            self.ts_data.append( int(datetime.timestamp(datetime.now()) * 1000 - i * ts_step))
 
-    def get_disorder_set(self,
-            rows,
-            int_low :int = INT_MIN,
-            int_up :int = INT_MAX,
-            bint_low :int = BIGINT_MIN,
-            bint_up :int = BIGINT_MAX,
-            sint_low :int = SMALLINT_MIN,
-            sint_up :int = SMALLINT_MAX,
-            tint_low :int = TINYINT_MIN,
-            tint_up :int = TINYINT_MAX,
-            ubint_low :int = BIGINT_UN_MIN,
-            ubint_up :int = BIGINT_UN_MAX,
-        ):
+    def get_disorder_set(self, rows, **kwargs):
+        for k, v in kwargs.items():
+            int_low = v if k == "int_low" else INT_MIN
+            int_up = v if k == "int_up" else INT_MAX
+            bint_low = v if k == "bint_low" else BIGINT_MIN
+            bint_up = v if k == "bint_up" else BIGINT_MAX
+            sint_low = v if k == "sint_low" else SMALLINT_MIN
+            sint_up = v if k == "sint_up" else SMALLINT_MAX
+            tint_low = v if k == "tint_low" else TINYINT_MIN
+            tint_up = v if k == "tint_up" else TINYINT_MAX
         pass
@@ -49,18 +49,23 @@ class TDSql:
     def close(self):
         self.cursor.close()
 
-    def prepare(self):
-        tdLog.info("prepare database:db")
+    def prepare(self, dbname="db", drop=True, **kwargs):
+        tdLog.info(f"prepare database:{dbname}")
         s = 'reset query cache'
         try:
             self.cursor.execute(s)
         except:
             tdLog.notice("'reset query cache' is not supported")
-        s = 'drop database if exists db'
+        if drop:
+            s = f'drop database if exists {dbname}'
+            self.cursor.execute(s)
+        s = f'create database {dbname}'
+        for k, v in kwargs.items():
+            s += f" {k} {v}"
+        if "duration" not in kwargs:
+            s += " duration 300"
         self.cursor.execute(s)
-        s = 'create database db duration 300'
-        self.cursor.execute(s)
-        s = 'use db'
+        s = f'use {dbname}'
         self.cursor.execute(s)
         time.sleep(2)
@@ -166,7 +166,7 @@
 
 # ---- query ----
 ./test.sh -f tsim/query/charScalarFunction.sim
-# ./test.sh -f tsim/query/explain.sim
+./test.sh -f tsim/query/explain.sim
 ./test.sh -f tsim/query/interval-offset.sim
 ./test.sh -f tsim/query/interval.sim
 ./test.sh -f tsim/query/scalarFunction.sim
@@ -187,7 +187,7 @@
 ./test.sh -f tsim/mnode/basic1.sim
 ./test.sh -f tsim/mnode/basic2.sim
 ./test.sh -f tsim/mnode/basic3.sim
-./test.sh -f tsim/mnode/basic4.sim
+# TD-17919 ./test.sh -f tsim/mnode/basic4.sim
 ./test.sh -f tsim/mnode/basic5.sim
 
 # ---- show ----
@ -0,0 +1,257 @@
|
||||||
|
from datetime import datetime
|
||||||
|
import time
|
||||||
|
|
||||||
|
from typing import List, Any, Tuple
|
||||||
|
from util.log import *
|
||||||
|
from util.sql import *
|
||||||
|
from util.cases import *
|
||||||
|
from util.dnodes import *
|
||||||
|
from util.common import *
|
||||||
|
|
||||||
|
PRIMARY_COL = "ts"
|
||||||
|
|
||||||
|
INT_COL = "c_int"
|
||||||
|
BINT_COL = "c_bint"
|
||||||
|
SINT_COL = "c_sint"
|
||||||
|
TINT_COL = "c_tint"
|
||||||
|
FLOAT_COL = "c_float"
|
||||||
|
DOUBLE_COL = "c_double"
|
||||||
|
BOOL_COL = "c_bool"
|
||||||
|
TINT_UN_COL = "c_utint"
|
||||||
|
SINT_UN_COL = "c_usint"
|
||||||
|
BINT_UN_COL = "c_ubint"
|
||||||
|
INT_UN_COL = "c_uint"
|
||||||
|
BINARY_COL = "c_binary"
|
||||||
|
NCHAR_COL = "c_nchar"
|
||||||
|
TS_COL = "c_ts"
|
||||||
|
|
||||||
|
NUM_COL = [INT_COL, BINT_COL, SINT_COL, TINT_COL, FLOAT_COL, DOUBLE_COL, ]
|
||||||
|
CHAR_COL = [BINARY_COL, NCHAR_COL, ]
|
||||||
|
BOOLEAN_COL = [BOOL_COL, ]
|
||||||
|
TS_TYPE_COL = [TS_COL, ]
|
||||||
|
|
||||||
|
INT_TAG = "t_int"
|
||||||
|
|
||||||
|
TAG_COL = [INT_TAG]
|
||||||
|
# insert data args:
|
||||||
|
TIME_STEP = 10000
|
||||||
|
NOW = int(datetime.timestamp(datetime.now()) * 1000)
|
||||||
|
|
||||||
|
# init db/table
|
||||||
|
DBNAME = "db"
|
||||||
|
STBNAME = "stb1"
|
||||||
|
CTB_PRE = "ct"
|
||||||
|
NTB_PRE = "nt"
|
||||||
|
|
||||||
|
L0 = 0
|
||||||
|
L1 = 1
|
||||||
|
L2 = 2
|
||||||
|
|
||||||
|
PRIMARY_DIR = 1
|
||||||
|
NON_PRIMARY_DIR = 0
|
||||||
|
|
||||||
|
DATA_PRE0 = f"data0"
|
||||||
|
DATA_PRE1 = f"data1"
|
||||||
|
DATA_PRE2 = f"data2"
|
||||||
|
|
||||||
|
class TDTestCase:
|
||||||
|
|
||||||
|
def init(self, conn, logSql):
|
||||||
|
tdLog.debug(f"start to excute {__file__}")
|
||||||
|
tdSql.init(conn.cursor(), False)
|
||||||
|
self.taos_cfg_path = tdDnodes.dnodes[0].cfgPath
|
||||||
|
self.taos_data_dir = tdDnodes.dnodes[0].dataDir
|
||||||
|
|
||||||
|
|
||||||
|
def cfg(self, filename, **update_dict):
|
||||||
|
cmd = "echo "
|
||||||
|
for k, v in update_dict.items():
|
||||||
|
cmd += f"{k} {v}\n"
|
||||||
|
|
||||||
|
cmd += f" >> {filename}"
|
||||||
|
if os.system(cmd) != 0:
|
||||||
|
tdLog.exit(cmd)
|
||||||
|
|
||||||
|
def cfg_str(self, filename, update_str):
|
||||||
|
cmd = f'echo "{update_str}" >> {filename}'
|
||||||
|
if os.system(cmd) != 0:
|
||||||
|
tdLog.exit(cmd)
|
||||||
|
|
||||||
|
def cfg_str_list(self, filename, update_list):
|
||||||
|
for update_str in update_list:
|
||||||
|
self.cfg_str(filename, update_str)
|
||||||
|
|
||||||
|
def del_old_datadir(self, filename):
|
||||||
|
cmd = f"sed -i '/^dataDir/d' {filename}"
|
||||||
|
if os.system(cmd) != 0:
|
||||||
|
tdLog.exit(cmd)
|
||||||
|
|
||||||
|
@property
|
||||||
|
def __err_cfg(self):
|
||||||
|
cfg_list = []
|
||||||
|
err_case1 = [
|
||||||
|
f"dataDir {self.taos_data_dir}/{DATA_PRE0}0 {L0} {NON_PRIMARY_DIR}",
|
||||||
|
f"dataDir {self.taos_data_dir}/{DATA_PRE1}1 {L1} {PRIMARY_DIR}",
|
||||||
|
f"dataDir {self.taos_data_dir}/{DATA_PRE2}2 {L2} {NON_PRIMARY_DIR}"
|
||||||
|
]
|
||||||
|
err_case2 = [
|
||||||
|
f"dataDir {self.taos_data_dir}/{DATA_PRE0}0 {L0} {NON_PRIMARY_DIR}",
|
||||||
|
f"dataDir {self.taos_data_dir}/{DATA_PRE1}1 {L1} {NON_PRIMARY_DIR}",
|
||||||
|
f"dataDir {self.taos_data_dir}/{DATA_PRE2}2 {L2} {PRIMARY_DIR}"
|
||||||
|
]
|
||||||
|
err_case3 = [
|
||||||
|
f"dataDir {self.taos_data_dir}/data33 3 {NON_PRIMARY_DIR}"
|
||||||
|
]
|
||||||
|
err_case4 = [
|
||||||
|
f"dataDir {self.taos_data_dir}/{DATA_PRE0}0 {L0} {NON_PRIMARY_DIR}",
|
||||||
|
f"dataDir {self.taos_data_dir}/{DATA_PRE1}1 {L1} {NON_PRIMARY_DIR}",
|
||||||
|
f"dataDir {self.taos_data_dir}/{DATA_PRE2}2 {L2} {NON_PRIMARY_DIR}",
|
||||||
|
f"dataDir {self.taos_data_dir}/{DATA_PRE2}2 {L1} {NON_PRIMARY_DIR}"
|
||||||
|
]
|
||||||
|
err_case5 = [f"dataDir {self.taos_data_dir}/{DATA_PRE0}0 {L0} {PRIMARY_DIR}"]
|
||||||
|
for i in range(16):
|
||||||
|
err_case5.append(f"dataDir {self.taos_data_dir}/{DATA_PRE0}{i+1} {L0} {NON_PRIMARY_DIR}")
|
||||||
|
|
||||||
|
err_case6 = [
|
||||||
|
f"dataDir {self.taos_data_dir}/{DATA_PRE0}0 {L0} {PRIMARY_DIR}",
|
||||||
|
f"dataDir {self.taos_data_dir}/{DATA_PRE0}1 {L0} {PRIMARY_DIR}",
|
||||||
|
]
|
||||||
|
err_case7 = [
|
||||||
|
f"dataDir {self.taos_data_dir}/{DATA_PRE0}0 {L0} {PRIMARY_DIR}",
|
||||||
|
f"dataDir {self.taos_data_dir}/{DATA_PRE2}2 {L2} {PRIMARY_DIR}",
|
||||||
|
]
|
||||||
|
err_case8 = [
|
||||||
|
f"dataDir {self.taos_data_dir}/data33 3 {PRIMARY_DIR}"
|
||||||
|
]
|
||||||
|
err_case9 = [
|
||||||
|
f"dataDir {self.taos_data_dir}/data33 -1 {NON_PRIMARY_DIR}"
|
||||||
|
]
|
||||||
|
|
||||||
|
cfg_list.append(err_case1)
|
||||||
|
cfg_list.append(err_case2)
|
||||||
|
cfg_list.append(err_case3)
|
||||||
|
cfg_list.append(err_case4)
|
||||||
|
cfg_list.append(err_case5)
|
||||||
|
cfg_list.append(err_case6)
|
||||||
|
cfg_list.append(err_case7)
|
||||||
|
cfg_list.append(err_case8)
|
||||||
|
cfg_list.append(err_case9)
|
||||||
|
|
||||||
|
return cfg_list
|
||||||
|
|
||||||
|
@property
|
||||||
|
def __current_cfg(self):
|
||||||
|
cfg_list = []
|
||||||
|
current_case1 = [
|
||||||
|
f"dataDir {self.taos_data_dir}/{DATA_PRE0}0 {L0} {PRIMARY_DIR}",
|
||||||
|
f"dataDir {self.taos_data_dir}/{DATA_PRE0}1 {L0} {NON_PRIMARY_DIR}",
|
||||||
|
f"dataDir {self.taos_data_dir}/{DATA_PRE1}1 {L1} {NON_PRIMARY_DIR}",
|
||||||
|
f"dataDir {self.taos_data_dir}/{DATA_PRE2}2 {L2} {NON_PRIMARY_DIR}"
|
||||||
|
]
|
||||||
|
|
||||||
|
current_case2 = [f"dataDir {self.taos_data_dir}/{DATA_PRE0}0 {L0} {PRIMARY_DIR}"]
|
||||||
|
for i in range(9):
|
||||||
|
current_case2.append(f"dataDir {self.taos_data_dir}/{DATA_PRE0}{i+1} {L0} {NON_PRIMARY_DIR}")
|
||||||
|
|
||||||
|
# TD-17773bug
|
||||||
|
current_case3 = [
|
||||||
|
f"dataDir {self.taos_data_dir}/{DATA_PRE0}0 ",
|
||||||
|
f"dataDir {self.taos_data_dir}/{DATA_PRE0}1 {L0} {NON_PRIMARY_DIR}",
|
||||||
|
f"dataDir {self.taos_data_dir}/{DATA_PRE1}0 {L1} {NON_PRIMARY_DIR}",
|
||||||
|
f"dataDir {self.taos_data_dir}/{DATA_PRE2}0 {L2} {NON_PRIMARY_DIR}",
|
||||||
|
]
|
||||||
|
cfg_list.append(current_case1)
|
||||||
|
cfg_list.append(current_case3)
|
||||||
|
|
||||||
|
# case2 must in last of least, because use this cfg as data uniformity test
|
||||||
|
cfg_list.append(current_case2)
|
||||||
|
|
||||||
|
return cfg_list
|
||||||
|
|
||||||
|
def cfg_check(self):
|
||||||
|
for cfg_case in self.__err_cfg:
|
||||||
|
self.del_old_datadir(filename=self.taos_cfg_path)
|
||||||
|
tdDnodes.stop(1)
|
||||||
|
tdDnodes.deploy(1)
|
||||||
|
self.cfg_str_list(filename=self.taos_cfg_path, update_list=cfg_case)
|
||||||
|
tdDnodes.starttaosd(1)
|
||||||
|
time.sleep(2)
|
||||||
|
tdSql.error(f"show databases")
|
||||||
|
|
||||||
|
for cfg_case in self.__current_cfg:
|
||||||
|
self.del_old_datadir(filename=self.taos_cfg_path)
|
||||||
|
tdDnodes.stop(1)
|
||||||
|
tdDnodes.deploy(1)
|
||||||
|
        self.cfg_str_list(filename=self.taos_cfg_path, update_list=cfg_case)
        tdDnodes.start(1)
        tdSql.query(f"show databases")

    def __create_tb(self, stb=STBNAME, ctb_pre = CTB_PRE, ctb_num=20, ntb_pre=NTB_PRE, ntbnum=1, dbname=DBNAME):
        tdLog.printNoPrefix("==========step: create table")
        create_stb_sql = f'''create table {dbname}.{stb}(
                ts timestamp, {INT_COL} int, {BINT_COL} bigint, {SINT_COL} smallint, {TINT_COL} tinyint,
                {FLOAT_COL} float, {DOUBLE_COL} double, {BOOL_COL} bool,
                {BINARY_COL} binary(16), {NCHAR_COL} nchar(32), {TS_COL} timestamp,
                {TINT_UN_COL} tinyint unsigned, {SINT_UN_COL} smallint unsigned,
                {INT_UN_COL} int unsigned, {BINT_UN_COL} bigint unsigned
            ) tags ({INT_TAG} int)
            '''
        tdSql.execute(create_stb_sql)

        for i in range(ntbnum):
            create_ntb_sql = f'''create table {dbname}.{ntb_pre}{i+1}(
                ts timestamp, {INT_COL} int, {BINT_COL} bigint, {SINT_COL} smallint, {TINT_COL} tinyint,
                {FLOAT_COL} float, {DOUBLE_COL} double, {BOOL_COL} bool,
                {BINARY_COL} binary(16), {NCHAR_COL} nchar(32), {TS_COL} timestamp,
                {TINT_UN_COL} tinyint unsigned, {SINT_UN_COL} smallint unsigned,
                {INT_UN_COL} int unsigned, {BINT_UN_COL} bigint unsigned
                )
            '''
            tdSql.execute(create_ntb_sql)

        for i in range(ctb_num):
            tdSql.execute(f'create table {dbname}.{ctb_pre}{i+1} using {dbname}.{stb} tags ( {i+1} )')

    def __insert_data(self, rows, dbname=DBNAME, ctb_num=20):
        data = DataSet()
        data.get_order_set(rows)

        tdLog.printNoPrefix("==========step: start inser data into tables now.....")
        for i in range(self.rows):
            row_data = f'''
                {data.int_data[i]}, {data.bint_data[i]}, {data.sint_data[i]}, {data.tint_data[i]}, {data.float_data[i]}, {data.double_data[i]},
                {data.bool_data[i]}, '{data.vchar_data[i]}', '{data.nchar_data[i]}', {data.ts_data[i]}, {data.utint_data[i]},
                {data.usint_data[i]}, {data.uint_data[i]}, {data.ubint_data[i]}
            '''
            neg_row_data = f'''
                {-1 * data.int_data[i]}, {-1 * data.bint_data[i]}, {-1 * data.sint_data[i]}, {-1 * data.tint_data[i]}, {-1 * data.float_data[i]}, {-1 * data.double_data[i]},
                {data.bool_data[i]}, '{data.vchar_data[i]}', '{data.nchar_data[i]}', {data.ts_data[i]}, {1 * data.utint_data[i]},
                {1 * data.usint_data[i]}, {1 * data.uint_data[i]}, {1 * data.ubint_data[i]}
            '''

            for j in range(ctb_num):
                tdSql.execute(
                    f"insert into {dbname}.{CTB_PRE}{j + 1} values ( {NOW - i * TIME_STEP}, {row_data} )")

            # tdSql.execute(
            #     f"insert into {dbname}.{CTB_PRE}2 values ( {NOW - i * int(TIME_STEP * 0.6)}, {neg_row_data} )")
            # tdSql.execute(
            #     f"insert into {dbname}.{CTB_PRE}4 values ( {NOW - i * int(TIME_STEP * 0.8) }, {row_data} )")
            tdSql.execute(
                f"insert into {dbname}.{NTB_PRE}1 values ( {NOW - i * int(TIME_STEP * 1.2)}, {row_data} )")

    def run(self):
        self.rows = 10
        self.cfg_check()
        tdSql.prepare(dbname=DBNAME, **{"keep": "1d, 1500m, 26h", "duration":"1h", "vgroups": 10})
        self.__create_tb(dbname=DBNAME)
        self.__insert_data(rows=self.rows, dbname=DBNAME)
        tdSql.query(f"select count(*) from {DBNAME}.{NTB_PRE}1")
        tdSql.execute(f"flush database {DBNAME}")

    def stop(self):
        tdSql.close()
        tdLog.success(f"{__file__} successfully executed")


tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())
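For readers who do not know the test framework helpers, the sketch below shows roughly what the `tdSql.prepare(...)` call above amounts to, assuming it simply drops and recreates the database with the supplied options through the official Python connector; the host and the exact SQL text are assumptions, not part of this test.

    # Hedged sketch only: connection parameters are placeholders.
    import taos

    conn = taos.connect(host="localhost")
    conn.execute("drop database if exists db")
    conn.execute("create database db keep 1d,1500m,26h duration 1h vgroups 10")
    conn.close()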
@ -1,4 +1,5 @@
 import datetime
+import time
 
 from dataclasses import dataclass
 from typing import List, Any, Tuple
@ -328,11 +329,15 @@ class TDTestCase:
         tdSql.query("select database()")
         dbname = tdSql.getData(0,0)
         tdSql.query("show databases")
+        for index , value in enumerate(tdSql.cursor.description):
+            if value[0] == "retention":
+                r_index = index
+                break
         for row in tdSql.queryResult:
             if row[0] == dbname:
-                if row[-1] is None:
+                if row[r_index] is None:
                     continue
-                if ":" in row[-1]:
+                if ":" in row[r_index]:
                     sma.rollup_db = True
         if sma.rollup_db :
             return False
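A note on the hunk above: resolving the `retention` column index from `cursor.description` instead of hard-coding `row[-1]` keeps the check valid if `show databases` adds or reorders columns. A minimal standalone form of that lookup (the helper name is hypothetical; any DB-API cursor exposes `.description`, whose entries start with the column name):

    def column_index(cursor, name):
        # Return the position of the named column in the current result set, or None.
        for index, value in enumerate(cursor.description):
            if value[0] == name:
                return index
        return None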
@ -393,8 +398,6 @@ class TDTestCase:
         else:
             tdSql.error(self.__create_sma_index(sma))
-
-
 
     def __drop_sma_index(self, sma:SMAschema):
         sql = f"{sma.drop} {sma.drop_flag} {sma.index_name}"
         return sql
@ -416,8 +419,7 @@ class TDTestCase:
             self.sma_created_index = list(filter(lambda x: x != sma.index_name, self.sma_created_index))
             tdSql.query("show streams")
             tdSql.checkRows(self.sma_count)
-
-
+            time.sleep(1)
 
         else:
             tdSql.error(self.__drop_sma_index(sma))
@ -136,23 +136,23 @@ class TDTestCase:
|
||||||
|
|
||||||
return sqls
|
return sqls
|
||||||
|
|
||||||
def __test_current(self): # sourcery skip: use-itertools-product
|
def __test_current(self, dbname="db"): # sourcery skip: use-itertools-product
|
||||||
tdLog.printNoPrefix("==========current sql condition check , must return query ok==========")
|
tdLog.printNoPrefix("==========current sql condition check , must return query ok==========")
|
||||||
tbname = [
|
tbname = [
|
||||||
"ct1",
|
f"{dbname}.ct1",
|
||||||
"ct2",
|
f"{dbname}.ct2",
|
||||||
"ct4",
|
f"{dbname}.ct4",
|
||||||
]
|
]
|
||||||
for tb in tbname:
|
for tb in tbname:
|
||||||
for i in range(2,8):
|
for i in range(2,8):
|
||||||
self.__concat_check(tb,i)
|
self.__concat_check(tb,i)
|
||||||
tdLog.printNoPrefix(f"==========current sql condition check in {tb}, col num: {i} over==========")
|
tdLog.printNoPrefix(f"==========current sql condition check in {tb}, col num: {i} over==========")
|
||||||
|
|
||||||
def __test_error(self):
|
def __test_error(self, dbname="db"):
|
||||||
tdLog.printNoPrefix("==========err sql condition check , must return error==========")
|
tdLog.printNoPrefix("==========err sql condition check , must return error==========")
|
||||||
tbname = [
|
tbname = [
|
||||||
"t1",
|
f"{dbname}.t1",
|
||||||
"stb1",
|
f"{dbname}.stb1",
|
||||||
]
|
]
|
||||||
|
|
||||||
for tb in tbname:
|
for tb in tbname:
|
||||||
|
@ -163,22 +163,20 @@ class TDTestCase:
|
||||||
tdLog.printNoPrefix(f"==========err sql condition check in {tb} over==========")
|
tdLog.printNoPrefix(f"==========err sql condition check in {tb} over==========")
|
||||||
|
|
||||||
|
|
||||||
def all_test(self):
|
def all_test(self, dbname="db"):
|
||||||
self.__test_current()
|
self.__test_current(dbname)
|
||||||
self.__test_error()
|
self.__test_error(dbname)
|
||||||
|
|
||||||
|
def __create_tb(self, dbname="db"):
|
||||||
def __create_tb(self):
|
|
||||||
tdSql.prepare()
|
|
||||||
|
|
||||||
tdLog.printNoPrefix("==========step1:create table")
|
tdLog.printNoPrefix("==========step1:create table")
|
||||||
create_stb_sql = f'''create table stb1(
|
create_stb_sql = f'''create table {dbname}.stb1(
|
||||||
ts timestamp, {INT_COL} int, {BINT_COL} bigint, {SINT_COL} smallint, {TINT_COL} tinyint,
|
ts timestamp, {INT_COL} int, {BINT_COL} bigint, {SINT_COL} smallint, {TINT_COL} tinyint,
|
||||||
{FLOAT_COL} float, {DOUBLE_COL} double, {BOOL_COL} bool,
|
{FLOAT_COL} float, {DOUBLE_COL} double, {BOOL_COL} bool,
|
||||||
{BINARY_COL} binary(16), {NCHAR_COL} nchar(32), {TS_COL} timestamp
|
{BINARY_COL} binary(16), {NCHAR_COL} nchar(32), {TS_COL} timestamp
|
||||||
) tags (t1 int)
|
) tags (t1 int)
|
||||||
'''
|
'''
|
||||||
create_ntb_sql = f'''create table t1(
|
create_ntb_sql = f'''create table {dbname}.t1(
|
||||||
ts timestamp, {INT_COL} int, {BINT_COL} bigint, {SINT_COL} smallint, {TINT_COL} tinyint,
|
ts timestamp, {INT_COL} int, {BINT_COL} bigint, {SINT_COL} smallint, {TINT_COL} tinyint,
|
||||||
{FLOAT_COL} float, {DOUBLE_COL} double, {BOOL_COL} bool,
|
{FLOAT_COL} float, {DOUBLE_COL} double, {BOOL_COL} bool,
|
||||||
{BINARY_COL} binary(16), {NCHAR_COL} nchar(32), {TS_COL} timestamp
|
{BINARY_COL} binary(16), {NCHAR_COL} nchar(32), {TS_COL} timestamp
|
||||||
|
@ -188,29 +186,29 @@ class TDTestCase:
|
||||||
tdSql.execute(create_ntb_sql)
|
tdSql.execute(create_ntb_sql)
|
||||||
|
|
||||||
for i in range(4):
|
for i in range(4):
|
||||||
tdSql.execute(f'create table ct{i+1} using stb1 tags ( {i+1} )')
|
tdSql.execute(f'create table {dbname}.ct{i+1} using {dbname}.stb1 tags ( {i+1} )')
|
||||||
|
|
||||||
def __insert_data(self, rows):
|
def __insert_data(self, rows, dbname="db"):
|
||||||
now_time = int(datetime.datetime.timestamp(datetime.datetime.now()) * 1000)
|
now_time = int(datetime.datetime.timestamp(datetime.datetime.now()) * 1000)
|
||||||
for i in range(rows):
|
for i in range(rows):
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f"insert into ct1 values ( { now_time - i * 1000 }, {i}, {11111 * i}, {111 * i % 32767 }, {11 * i % 127}, {1.11*i}, {1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
|
f"insert into {dbname}.ct1 values ( { now_time - i * 1000 }, {i}, {11111 * i}, {111 * i % 32767 }, {11 * i % 127}, {1.11*i}, {1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
|
||||||
)
|
)
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f"insert into ct4 values ( { now_time - i * 7776000000 }, {i}, {11111 * i}, {111 * i % 32767 }, {11 * i % 127}, {1.11*i}, {1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
|
f"insert into {dbname}.ct4 values ( { now_time - i * 7776000000 }, {i}, {11111 * i}, {111 * i % 32767 }, {11 * i % 127}, {1.11*i}, {1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
|
||||||
)
|
)
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f"insert into ct2 values ( { now_time - i * 7776000000 }, {-i}, {-11111 * i}, {-111 * i % 32767 }, {-11 * i % 127}, {-1.11*i}, {-1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
|
f"insert into {dbname}.ct2 values ( { now_time - i * 7776000000 }, {-i}, {-11111 * i}, {-111 * i % 32767 }, {-11 * i % 127}, {-1.11*i}, {-1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
|
||||||
)
|
)
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f'''insert into ct1 values
|
f'''insert into {dbname}.ct1 values
|
||||||
( { now_time - rows * 5 }, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar_测试_0', { now_time + 8 } )
|
( { now_time - rows * 5 }, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar_测试_0', { now_time + 8 } )
|
||||||
( { now_time + 10000 }, { rows }, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar_测试_9', { now_time + 9 } )
|
( { now_time + 10000 }, { rows }, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar_测试_9', { now_time + 9 } )
|
||||||
'''
|
'''
|
||||||
)
|
)
|
||||||
|
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f'''insert into ct4 values
|
f'''insert into {dbname}.ct4 values
|
||||||
( { now_time - rows * 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time - rows * 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
( { now_time - rows * 3888000000 + 10800000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time - rows * 3888000000 + 10800000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
( { now_time + 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time + 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
|
@ -226,7 +224,7 @@ class TDTestCase:
|
||||||
)
|
)
|
||||||
|
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f'''insert into ct2 values
|
f'''insert into {dbname}.ct2 values
|
||||||
( { now_time - rows * 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time - rows * 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
( { now_time - rows * 3888000000 + 10800000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time - rows * 3888000000 + 10800000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
( { now_time + 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time + 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
|
@ -242,13 +240,13 @@ class TDTestCase:
|
||||||
)
|
)
|
||||||
|
|
||||||
for i in range(rows):
|
for i in range(rows):
|
||||||
insert_data = f'''insert into t1 values
|
insert_data = f'''insert into {dbname}.t1 values
|
||||||
( { now_time - i * 3600000 }, {i}, {i * 11111}, { i % 32767 }, { i % 127}, { i * 1.11111 }, { i * 1000.1111 }, { i % 2},
|
( { now_time - i * 3600000 }, {i}, {i * 11111}, { i % 32767 }, { i % 127}, { i * 1.11111 }, { i * 1000.1111 }, { i % 2},
|
||||||
"binary_{i}", "nchar_测试_{i}", { now_time - 1000 * i } )
|
"binary_{i}", "nchar_测试_{i}", { now_time - 1000 * i } )
|
||||||
'''
|
'''
|
||||||
tdSql.execute(insert_data)
|
tdSql.execute(insert_data)
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f'''insert into t1 values
|
f'''insert into {dbname}.t1 values
|
||||||
( { now_time + 10800000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time + 10800000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
( { now_time - (( rows // 2 ) * 60 + 30) * 60000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time - (( rows // 2 ) * 60 + 30) * 60000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
( { now_time - rows * 3600000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time - rows * 3600000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
|
@ -268,22 +266,23 @@ class TDTestCase:
         tdSql.prepare()
 
         tdLog.printNoPrefix("==========step1:create table")
-        self.__create_tb()
+        self.__create_tb(dbname="db")
 
         tdLog.printNoPrefix("==========step2:insert data")
         self.rows = 10
-        self.__insert_data(self.rows)
+        self.__insert_data(self.rows, dbname="db")
 
         tdLog.printNoPrefix("==========step3:all check")
-        self.all_test()
+        self.all_test(dbname="db")
 
-        tdDnodes.stop(1)
-        tdDnodes.start(1)
+        # tdDnodes.stop(1)
+        # tdDnodes.start(1)
+        tdSql.execute("flush database db")
 
         tdSql.execute("use db")
 
         tdLog.printNoPrefix("==========step4:after wal, all check again ")
-        self.all_test()
+        self.all_test(dbname="db")
 
     def stop(self):
         tdSql.close()
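The dnode restart in the hunk above is replaced by `flush database`, so the step-4 checks run against data that has been forced from memory to disk without bouncing the process. The shape of the flow, reusing the framework helpers already shown (a sketch of the pattern, not new test logic):

    tdSql.execute("flush database db")   # persist in-memory rows before re-checking
    tdSql.execute("use db")
    self.all_test(dbname="db")           # expected to match the pre-flush results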
@ -137,22 +137,22 @@ class TDTestCase:
|
||||||
|
|
||||||
return sqls
|
return sqls
|
||||||
|
|
||||||
def __test_current(self): # sourcery skip: use-itertools-product
|
def __test_current(self, dbname="db"):
|
||||||
tdLog.printNoPrefix("==========current sql condition check , must return query ok==========")
|
tdLog.printNoPrefix("==========current sql condition check , must return query ok==========")
|
||||||
tbname = [
|
tbname = [
|
||||||
"t1",
|
f"{dbname}.t1",
|
||||||
"stb1",
|
f"{dbname}.stb1",
|
||||||
]
|
]
|
||||||
for tb in tbname:
|
for tb in tbname:
|
||||||
for i in range(2,8):
|
for i in range(2,8):
|
||||||
self.__concat_check(tb,i)
|
self.__concat_check(tb,i)
|
||||||
tdLog.printNoPrefix(f"==========current sql condition check in {tb}, col num: {i} over==========")
|
tdLog.printNoPrefix(f"==========current sql condition check in {tb}, col num: {i} over==========")
|
||||||
|
|
||||||
def __test_error(self):
|
def __test_error(self, dbname="db"):
|
||||||
tdLog.printNoPrefix("==========err sql condition check , must return error==========")
|
tdLog.printNoPrefix("==========err sql condition check , must return error==========")
|
||||||
tbname = [
|
tbname = [
|
||||||
"ct1",
|
f"{dbname}.ct1",
|
||||||
"ct4",
|
f"{dbname}.ct4",
|
||||||
]
|
]
|
||||||
|
|
||||||
for tb in tbname:
|
for tb in tbname:
|
||||||
|
@ -163,22 +163,20 @@ class TDTestCase:
|
||||||
tdLog.printNoPrefix(f"==========err sql condition check in {tb} over==========")
|
tdLog.printNoPrefix(f"==========err sql condition check in {tb} over==========")
|
||||||
|
|
||||||
|
|
||||||
def all_test(self):
|
def all_test(self, dbname="db"):
|
||||||
self.__test_current()
|
self.__test_current(dbname)
|
||||||
self.__test_error()
|
self.__test_error(dbname)
|
||||||
|
|
||||||
|
def __create_tb(self, dbname="db"):
|
||||||
def __create_tb(self):
|
|
||||||
tdSql.prepare()
|
|
||||||
|
|
||||||
tdLog.printNoPrefix("==========step1:create table")
|
tdLog.printNoPrefix("==========step1:create table")
|
||||||
create_stb_sql = f'''create table stb1(
|
create_stb_sql = f'''create table {dbname}.stb1(
|
||||||
ts timestamp, {INT_COL} int, {BINT_COL} bigint, {SINT_COL} smallint, {TINT_COL} tinyint,
|
ts timestamp, {INT_COL} int, {BINT_COL} bigint, {SINT_COL} smallint, {TINT_COL} tinyint,
|
||||||
{FLOAT_COL} float, {DOUBLE_COL} double, {BOOL_COL} bool,
|
{FLOAT_COL} float, {DOUBLE_COL} double, {BOOL_COL} bool,
|
||||||
{BINARY_COL} binary(16), {NCHAR_COL} nchar(32), {TS_COL} timestamp
|
{BINARY_COL} binary(16), {NCHAR_COL} nchar(32), {TS_COL} timestamp
|
||||||
) tags (t1 int)
|
) tags (t1 int)
|
||||||
'''
|
'''
|
||||||
create_ntb_sql = f'''create table t1(
|
create_ntb_sql = f'''create table {dbname}.t1(
|
||||||
ts timestamp, {INT_COL} int, {BINT_COL} bigint, {SINT_COL} smallint, {TINT_COL} tinyint,
|
ts timestamp, {INT_COL} int, {BINT_COL} bigint, {SINT_COL} smallint, {TINT_COL} tinyint,
|
||||||
{FLOAT_COL} float, {DOUBLE_COL} double, {BOOL_COL} bool,
|
{FLOAT_COL} float, {DOUBLE_COL} double, {BOOL_COL} bool,
|
||||||
{BINARY_COL} binary(16), {NCHAR_COL} nchar(32), {TS_COL} timestamp
|
{BINARY_COL} binary(16), {NCHAR_COL} nchar(32), {TS_COL} timestamp
|
||||||
|
@ -188,29 +186,29 @@ class TDTestCase:
|
||||||
tdSql.execute(create_ntb_sql)
|
tdSql.execute(create_ntb_sql)
|
||||||
|
|
||||||
for i in range(4):
|
for i in range(4):
|
||||||
tdSql.execute(f'create table ct{i+1} using stb1 tags ( {i+1} )')
|
tdSql.execute(f'create table {dbname}.ct{i+1} using {dbname}.stb1 tags ( {i+1} )')
|
||||||
|
|
||||||
def __insert_data(self, rows):
|
def __insert_data(self, rows, dbname="db"):
|
||||||
now_time = int(datetime.datetime.timestamp(datetime.datetime.now()) * 1000)
|
now_time = int(datetime.datetime.timestamp(datetime.datetime.now()) * 1000)
|
||||||
for i in range(rows):
|
for i in range(rows):
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f"insert into ct1 values ( { now_time - i * 1000 }, {i}, {11111 * i}, {111 * i % 32767 }, {11 * i % 127}, {1.11*i}, {1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
|
f"insert into {dbname}.ct1 values ( { now_time - i * 1000 }, {i}, {11111 * i}, {111 * i % 32767 }, {11 * i % 127}, {1.11*i}, {1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
|
||||||
)
|
)
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f"insert into ct4 values ( { now_time - i * 7776000000 }, {i}, {11111 * i}, {111 * i % 32767 }, {11 * i % 127}, {1.11*i}, {1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
|
f"insert into {dbname}.ct4 values ( { now_time - i * 7776000000 }, {i}, {11111 * i}, {111 * i % 32767 }, {11 * i % 127}, {1.11*i}, {1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
|
||||||
)
|
)
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f"insert into ct2 values ( { now_time - i * 7776000000 }, {-i}, {-11111 * i}, {-111 * i % 32767 }, {-11 * i % 127}, {-1.11*i}, {-1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
|
f"insert into {dbname}.ct2 values ( { now_time - i * 7776000000 }, {-i}, {-11111 * i}, {-111 * i % 32767 }, {-11 * i % 127}, {-1.11*i}, {-1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
|
||||||
)
|
)
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f'''insert into ct1 values
|
f'''insert into {dbname}.ct1 values
|
||||||
( { now_time - rows * 5 }, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar_测试_0', { now_time + 8 } )
|
( { now_time - rows * 5 }, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar_测试_0', { now_time + 8 } )
|
||||||
( { now_time + 10000 }, { rows }, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar_测试_9', { now_time + 9 } )
|
( { now_time + 10000 }, { rows }, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar_测试_9', { now_time + 9 } )
|
||||||
'''
|
'''
|
||||||
)
|
)
|
||||||
|
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f'''insert into ct4 values
|
f'''insert into {dbname}.ct4 values
|
||||||
( { now_time - rows * 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time - rows * 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
( { now_time - rows * 3888000000 + 10800000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time - rows * 3888000000 + 10800000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
( { now_time + 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time + 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
|
@ -226,7 +224,7 @@ class TDTestCase:
|
||||||
)
|
)
|
||||||
|
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f'''insert into ct2 values
|
f'''insert into {dbname}.ct2 values
|
||||||
( { now_time - rows * 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time - rows * 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
( { now_time - rows * 3888000000 + 10800000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time - rows * 3888000000 + 10800000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
( { now_time + 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time + 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
|
@ -242,13 +240,13 @@ class TDTestCase:
|
||||||
)
|
)
|
||||||
|
|
||||||
for i in range(rows):
|
for i in range(rows):
|
||||||
insert_data = f'''insert into t1 values
|
insert_data = f'''insert into {dbname}.t1 values
|
||||||
( { now_time - i * 3600000 }, {i}, {i * 11111}, { i % 32767 }, { i % 127}, { i * 1.11111 }, { i * 1000.1111 }, { i % 2},
|
( { now_time - i * 3600000 }, {i}, {i * 11111}, { i % 32767 }, { i % 127}, { i * 1.11111 }, { i * 1000.1111 }, { i % 2},
|
||||||
"binary_{i}", "nchar_测试_{i}", { now_time - 1000 * i } )
|
"binary_{i}", "nchar_测试_{i}", { now_time - 1000 * i } )
|
||||||
'''
|
'''
|
||||||
tdSql.execute(insert_data)
|
tdSql.execute(insert_data)
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f'''insert into t1 values
|
f'''insert into {dbname}.t1 values
|
||||||
( { now_time + 10800000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time + 10800000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
( { now_time - (( rows // 2 ) * 60 + 30) * 60000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time - (( rows // 2 ) * 60 + 30) * 60000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
( { now_time - rows * 3600000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time - rows * 3600000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
|
@ -268,23 +266,23 @@ class TDTestCase:
|
||||||
tdSql.prepare()
|
tdSql.prepare()
|
||||||
|
|
||||||
tdLog.printNoPrefix("==========step1:create table")
|
tdLog.printNoPrefix("==========step1:create table")
|
||||||
self.__create_tb()
|
self.__create_tb(dbname="db")
|
||||||
|
|
||||||
tdLog.printNoPrefix("==========step2:insert data")
|
tdLog.printNoPrefix("==========step2:insert data")
|
||||||
self.rows = 10
|
self.rows = 10
|
||||||
self.__insert_data(self.rows)
|
self.__insert_data(self.rows, dbname="db")
|
||||||
|
|
||||||
tdLog.printNoPrefix("==========step3:all check")
|
tdLog.printNoPrefix("==========step3:all check")
|
||||||
self.all_test()
|
self.all_test(dbname="db")
|
||||||
|
|
||||||
tdDnodes.stop(1)
|
# tdDnodes.stop(1)
|
||||||
tdDnodes.start(1)
|
# tdDnodes.start(1)
|
||||||
|
tdSql.execute("flush database db")
|
||||||
|
|
||||||
tdSql.execute("use db")
|
tdSql.execute("use db")
|
||||||
|
|
||||||
tdLog.printNoPrefix("==========step4:after wal, all check again ")
|
tdLog.printNoPrefix("==========step4:after wal, all check again ")
|
||||||
self.all_test()
|
self.all_test(dbname="db")
|
||||||
|
|
||||||
def stop(self):
|
def stop(self):
|
||||||
tdSql.close()
|
tdSql.close()
|
||||||
tdLog.success(f"{__file__} successfully executed")
@ -137,23 +137,23 @@ class TDTestCase:
|
||||||
|
|
||||||
return sqls
|
return sqls
|
||||||
|
|
||||||
def __test_current(self): # sourcery skip: use-itertools-product
|
def __test_current(self,dbname="db"): # sourcery skip: use-itertools-product
|
||||||
tdLog.printNoPrefix("==========current sql condition check , must return query ok==========")
|
tdLog.printNoPrefix("==========current sql condition check , must return query ok==========")
|
||||||
tbname = [
|
tbname = [
|
||||||
"t1",
|
f"{dbname}.t1",
|
||||||
"stb1"
|
f"{dbname}.stb1"
|
||||||
]
|
]
|
||||||
for tb in tbname:
|
for tb in tbname:
|
||||||
for i in range(2,8):
|
for i in range(2,8):
|
||||||
self.__concat_ws_check(tb,i)
|
self.__concat_ws_check(tb,i)
|
||||||
tdLog.printNoPrefix(f"==========current sql condition check in {tb}, col num: {i} over==========")
|
tdLog.printNoPrefix(f"==========current sql condition check in {tb}, col num: {i} over==========")
|
||||||
|
|
||||||
def __test_error(self):
|
def __test_error(self, dbname="db"):
|
||||||
tdLog.printNoPrefix("==========err sql condition check , must return error==========")
|
tdLog.printNoPrefix("==========err sql condition check , must return error==========")
|
||||||
tbname = [
|
tbname = [
|
||||||
"ct1",
|
f"{dbname}.ct1",
|
||||||
"ct2",
|
f"{dbname}.ct2",
|
||||||
"ct4",
|
f"{dbname}.ct4",
|
||||||
]
|
]
|
||||||
|
|
||||||
for tb in tbname:
|
for tb in tbname:
|
||||||
|
@ -164,22 +164,21 @@ class TDTestCase:
|
||||||
tdLog.printNoPrefix(f"==========err sql condition check in {tb} over==========")
|
tdLog.printNoPrefix(f"==========err sql condition check in {tb} over==========")
|
||||||
|
|
||||||
|
|
||||||
def all_test(self):
|
def all_test(self,dbname="db"):
|
||||||
self.__test_current()
|
self.__test_current(dbname)
|
||||||
self.__test_error()
|
self.__test_error(dbname)
|
||||||
|
|
||||||
|
|
||||||
def __create_tb(self):
|
def __create_tb(self, dbname="db"):
|
||||||
tdSql.prepare()
|
|
||||||
|
|
||||||
tdLog.printNoPrefix("==========step1:create table")
|
tdLog.printNoPrefix("==========step1:create table")
|
||||||
create_stb_sql = f'''create table stb1(
|
create_stb_sql = f'''create table {dbname}.stb1(
|
||||||
ts timestamp, {INT_COL} int, {BINT_COL} bigint, {SINT_COL} smallint, {TINT_COL} tinyint,
|
ts timestamp, {INT_COL} int, {BINT_COL} bigint, {SINT_COL} smallint, {TINT_COL} tinyint,
|
||||||
{FLOAT_COL} float, {DOUBLE_COL} double, {BOOL_COL} bool,
|
{FLOAT_COL} float, {DOUBLE_COL} double, {BOOL_COL} bool,
|
||||||
{BINARY_COL} binary(16), {NCHAR_COL} nchar(32), {TS_COL} timestamp
|
{BINARY_COL} binary(16), {NCHAR_COL} nchar(32), {TS_COL} timestamp
|
||||||
) tags (t1 int)
|
) tags (t1 int)
|
||||||
'''
|
'''
|
||||||
create_ntb_sql = f'''create table t1(
|
create_ntb_sql = f'''create table {dbname}.t1(
|
||||||
ts timestamp, {INT_COL} int, {BINT_COL} bigint, {SINT_COL} smallint, {TINT_COL} tinyint,
|
ts timestamp, {INT_COL} int, {BINT_COL} bigint, {SINT_COL} smallint, {TINT_COL} tinyint,
|
||||||
{FLOAT_COL} float, {DOUBLE_COL} double, {BOOL_COL} bool,
|
{FLOAT_COL} float, {DOUBLE_COL} double, {BOOL_COL} bool,
|
||||||
{BINARY_COL} binary(16), {NCHAR_COL} nchar(32), {TS_COL} timestamp
|
{BINARY_COL} binary(16), {NCHAR_COL} nchar(32), {TS_COL} timestamp
|
||||||
|
@ -189,29 +188,29 @@ class TDTestCase:
|
||||||
tdSql.execute(create_ntb_sql)
|
tdSql.execute(create_ntb_sql)
|
||||||
|
|
||||||
for i in range(4):
|
for i in range(4):
|
||||||
tdSql.execute(f'create table ct{i+1} using stb1 tags ( {i+1} )')
|
tdSql.execute(f'create table {dbname}.ct{i+1} using {dbname}.stb1 tags ( {i+1} )')
|
||||||
|
|
||||||
def __insert_data(self, rows):
|
def __insert_data(self, rows, dbname="db"):
|
||||||
now_time = int(datetime.datetime.timestamp(datetime.datetime.now()) * 1000)
|
now_time = int(datetime.datetime.timestamp(datetime.datetime.now()) * 1000)
|
||||||
for i in range(rows):
|
for i in range(rows):
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f"insert into ct1 values ( { now_time - i * 1000 }, {i}, {11111 * i}, {111 * i % 32767 }, {11 * i % 127}, {1.11*i}, {1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
|
f"insert into {dbname}.ct1 values ( { now_time - i * 1000 }, {i}, {11111 * i}, {111 * i % 32767 }, {11 * i % 127}, {1.11*i}, {1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
|
||||||
)
|
)
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f"insert into ct4 values ( { now_time - i * 7776000000 }, {i}, {11111 * i}, {111 * i % 32767 }, {11 * i % 127}, {1.11*i}, {1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
|
f"insert into {dbname}.ct4 values ( { now_time - i * 7776000000 }, {i}, {11111 * i}, {111 * i % 32767 }, {11 * i % 127}, {1.11*i}, {1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
|
||||||
)
|
)
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f"insert into ct2 values ( { now_time - i * 7776000000 }, {-i}, {-11111 * i}, {-111 * i % 32767 }, {-11 * i % 127}, {-1.11*i}, {-1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
|
f"insert into {dbname}.ct2 values ( { now_time - i * 7776000000 }, {-i}, {-11111 * i}, {-111 * i % 32767 }, {-11 * i % 127}, {-1.11*i}, {-1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
|
||||||
)
|
)
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f'''insert into ct1 values
|
f'''insert into {dbname}.ct1 values
|
||||||
( { now_time - rows * 5 }, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar_测试_0', { now_time + 8 } )
|
( { now_time - rows * 5 }, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar_测试_0', { now_time + 8 } )
|
||||||
( { now_time + 10000 }, { rows }, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar_测试_9', { now_time + 9 } )
|
( { now_time + 10000 }, { rows }, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar_测试_9', { now_time + 9 } )
|
||||||
'''
|
'''
|
||||||
)
|
)
|
||||||
|
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f'''insert into ct4 values
|
f'''insert into {dbname}.ct4 values
|
||||||
( { now_time - rows * 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time - rows * 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
( { now_time - rows * 3888000000 + 10800000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time - rows * 3888000000 + 10800000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
( { now_time + 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time + 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
|
@ -227,7 +226,7 @@ class TDTestCase:
|
||||||
)
|
)
|
||||||
|
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f'''insert into ct2 values
|
f'''insert into {dbname}.ct2 values
|
||||||
( { now_time - rows * 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time - rows * 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
( { now_time - rows * 3888000000 + 10800000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time - rows * 3888000000 + 10800000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
( { now_time + 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time + 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
|
@ -243,13 +242,13 @@ class TDTestCase:
|
||||||
)
|
)
|
||||||
|
|
||||||
for i in range(rows):
|
for i in range(rows):
|
||||||
insert_data = f'''insert into t1 values
|
insert_data = f'''insert into {dbname}.t1 values
|
||||||
( { now_time - i * 3600000 }, {i}, {i * 11111}, { i % 32767 }, { i % 127}, { i * 1.11111 }, { i * 1000.1111 }, { i % 2},
|
( { now_time - i * 3600000 }, {i}, {i * 11111}, { i % 32767 }, { i % 127}, { i * 1.11111 }, { i * 1000.1111 }, { i % 2},
|
||||||
"binary_{i}", "nchar_测试_{i}", { now_time - 1000 * i } )
|
"binary_{i}", "nchar_测试_{i}", { now_time - 1000 * i } )
|
||||||
'''
|
'''
|
||||||
tdSql.execute(insert_data)
|
tdSql.execute(insert_data)
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f'''insert into t1 values
|
f'''insert into {dbname}.t1 values
|
||||||
( { now_time + 10800000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time + 10800000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
( { now_time - (( rows // 2 ) * 60 + 30) * 60000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time - (( rows // 2 ) * 60 + 30) * 60000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
( { now_time - rows * 3600000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time - rows * 3600000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
|
@ -269,22 +268,23 @@ class TDTestCase:
|
||||||
tdSql.prepare()
|
tdSql.prepare()
|
||||||
|
|
||||||
tdLog.printNoPrefix("==========step1:create table")
|
tdLog.printNoPrefix("==========step1:create table")
|
||||||
self.__create_tb()
|
self.__create_tb(dbname="db")
|
||||||
|
|
||||||
tdLog.printNoPrefix("==========step2:insert data")
|
tdLog.printNoPrefix("==========step2:insert data")
|
||||||
self.rows = 10
|
self.rows = 10
|
||||||
self.__insert_data(self.rows)
|
self.__insert_data(self.rows, dbname="db")
|
||||||
|
|
||||||
tdLog.printNoPrefix("==========step3:all check")
|
tdLog.printNoPrefix("==========step3:all check")
|
||||||
self.all_test()
|
self.all_test(dbname="db")
|
||||||
|
|
||||||
tdDnodes.stop(1)
|
# tdDnodes.stop(1)
|
||||||
tdDnodes.start(1)
|
# tdDnodes.start(1)
|
||||||
|
tdSql.execute("flush database db")
|
||||||
|
|
||||||
tdSql.execute("use db")
|
tdSql.execute("use db")
|
||||||
|
|
||||||
tdLog.printNoPrefix("==========step4:after wal, all check again ")
|
tdLog.printNoPrefix("==========step4:after wal, all check again ")
|
||||||
self.all_test()
|
self.all_test(dbname="db")
|
||||||
|
|
||||||
def stop(self):
|
def stop(self):
|
||||||
tdSql.close()
@ -137,23 +137,23 @@ class TDTestCase:
|
||||||
|
|
||||||
return sqls
|
return sqls
|
||||||
|
|
||||||
def __test_current(self): # sourcery skip: use-itertools-product
|
def __test_current(self, dbname="db"): # sourcery skip: use-itertools-product
|
||||||
tdLog.printNoPrefix("==========current sql condition check , must return query ok==========")
|
tdLog.printNoPrefix("==========current sql condition check , must return query ok==========")
|
||||||
tbname = [
|
tbname = [
|
||||||
"ct1",
|
f"{dbname}.ct1",
|
||||||
"ct2",
|
f"{dbname}.ct2",
|
||||||
"ct4",
|
f"{dbname}.ct4",
|
||||||
]
|
]
|
||||||
for tb in tbname:
|
for tb in tbname:
|
||||||
for i in range(2,8):
|
for i in range(2,8):
|
||||||
self.__concat_ws_check(tb,i)
|
self.__concat_ws_check(tb,i)
|
||||||
tdLog.printNoPrefix(f"==========current sql condition check in {tb}, col num: {i} over==========")
|
tdLog.printNoPrefix(f"==========current sql condition check in {tb}, col num: {i} over==========")
|
||||||
|
|
||||||
def __test_error(self):
|
def __test_error(self, dbname="db"):
|
||||||
tdLog.printNoPrefix("==========err sql condition check , must return error==========")
|
tdLog.printNoPrefix("==========err sql condition check , must return error==========")
|
||||||
tbname = [
|
tbname = [
|
||||||
"t1",
|
f"{dbname}.t1",
|
||||||
"stb1"
|
f"{dbname}.stb1"
|
||||||
]
|
]
|
||||||
|
|
||||||
for tb in tbname:
|
for tb in tbname:
|
||||||
|
@ -164,22 +164,21 @@ class TDTestCase:
|
||||||
tdLog.printNoPrefix(f"==========err sql condition check in {tb} over==========")
|
tdLog.printNoPrefix(f"==========err sql condition check in {tb} over==========")
|
||||||
|
|
||||||
|
|
||||||
def all_test(self):
|
def all_test(self, dbname="db"):
|
||||||
self.__test_current()
|
self.__test_current(dbname="db")
|
||||||
self.__test_error()
|
self.__test_error(dbname="db")
|
||||||
|
|
||||||
|
|
||||||
def __create_tb(self):
|
def __create_tb(self, dbname="db"):
|
||||||
tdSql.prepare()
|
|
||||||
|
|
||||||
tdLog.printNoPrefix("==========step1:create table")
|
tdLog.printNoPrefix("==========step1:create table")
|
||||||
create_stb_sql = f'''create table stb1(
|
create_stb_sql = f'''create table {dbname}.stb1(
|
||||||
ts timestamp, {INT_COL} int, {BINT_COL} bigint, {SINT_COL} smallint, {TINT_COL} tinyint,
|
ts timestamp, {INT_COL} int, {BINT_COL} bigint, {SINT_COL} smallint, {TINT_COL} tinyint,
|
||||||
{FLOAT_COL} float, {DOUBLE_COL} double, {BOOL_COL} bool,
|
{FLOAT_COL} float, {DOUBLE_COL} double, {BOOL_COL} bool,
|
||||||
{BINARY_COL} binary(16), {NCHAR_COL} nchar(32), {TS_COL} timestamp
|
{BINARY_COL} binary(16), {NCHAR_COL} nchar(32), {TS_COL} timestamp
|
||||||
) tags (t1 int)
|
) tags (t1 int)
|
||||||
'''
|
'''
|
||||||
create_ntb_sql = f'''create table t1(
|
create_ntb_sql = f'''create table {dbname}.t1(
|
||||||
ts timestamp, {INT_COL} int, {BINT_COL} bigint, {SINT_COL} smallint, {TINT_COL} tinyint,
|
ts timestamp, {INT_COL} int, {BINT_COL} bigint, {SINT_COL} smallint, {TINT_COL} tinyint,
|
||||||
{FLOAT_COL} float, {DOUBLE_COL} double, {BOOL_COL} bool,
|
{FLOAT_COL} float, {DOUBLE_COL} double, {BOOL_COL} bool,
|
||||||
{BINARY_COL} binary(16), {NCHAR_COL} nchar(32), {TS_COL} timestamp
|
{BINARY_COL} binary(16), {NCHAR_COL} nchar(32), {TS_COL} timestamp
|
||||||
|
@ -189,29 +188,29 @@ class TDTestCase:
|
||||||
tdSql.execute(create_ntb_sql)
|
tdSql.execute(create_ntb_sql)
|
||||||
|
|
||||||
for i in range(4):
|
for i in range(4):
|
||||||
tdSql.execute(f'create table ct{i+1} using stb1 tags ( {i+1} )')
|
tdSql.execute(f'create table {dbname}.ct{i+1} using {dbname}.stb1 tags ( {i+1} )')
|
||||||
|
|
||||||
def __insert_data(self, rows):
|
def __insert_data(self, rows, dbname="db"):
|
||||||
now_time = int(datetime.datetime.timestamp(datetime.datetime.now()) * 1000)
|
now_time = int(datetime.datetime.timestamp(datetime.datetime.now()) * 1000)
|
||||||
for i in range(rows):
|
for i in range(rows):
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f"insert into ct1 values ( { now_time - i * 1000 }, {i}, {11111 * i}, {111 * i % 32767 }, {11 * i % 127}, {1.11*i}, {1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
|
f"insert into {dbname}.ct1 values ( { now_time - i * 1000 }, {i}, {11111 * i}, {111 * i % 32767 }, {11 * i % 127}, {1.11*i}, {1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
|
||||||
)
|
)
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f"insert into ct4 values ( { now_time - i * 7776000000 }, {i}, {11111 * i}, {111 * i % 32767 }, {11 * i % 127}, {1.11*i}, {1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
|
f"insert into {dbname}.ct4 values ( { now_time - i * 7776000000 }, {i}, {11111 * i}, {111 * i % 32767 }, {11 * i % 127}, {1.11*i}, {1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
|
||||||
)
|
)
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f"insert into ct2 values ( { now_time - i * 7776000000 }, {-i}, {-11111 * i}, {-111 * i % 32767 }, {-11 * i % 127}, {-1.11*i}, {-1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
|
f"insert into {dbname}.ct2 values ( { now_time - i * 7776000000 }, {-i}, {-11111 * i}, {-111 * i % 32767 }, {-11 * i % 127}, {-1.11*i}, {-1100.0011*i}, {i%2}, 'binary{i}', 'nchar_测试_{i}', { now_time + 1 * i } )"
|
||||||
)
|
)
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f'''insert into ct1 values
|
f'''insert into {dbname}.ct1 values
|
||||||
( { now_time - rows * 5 }, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar_测试_0', { now_time + 8 } )
|
( { now_time - rows * 5 }, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar_测试_0', { now_time + 8 } )
|
||||||
( { now_time + 10000 }, { rows }, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar_测试_9', { now_time + 9 } )
|
( { now_time + 10000 }, { rows }, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar_测试_9', { now_time + 9 } )
|
||||||
'''
|
'''
|
||||||
)
|
)
|
||||||
|
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f'''insert into ct4 values
|
f'''insert into {dbname}.ct4 values
|
||||||
( { now_time - rows * 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time - rows * 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
( { now_time - rows * 3888000000 + 10800000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time - rows * 3888000000 + 10800000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
( { now_time + 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time + 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
|
@ -227,7 +226,7 @@ class TDTestCase:
|
||||||
)
|
)
|
||||||
|
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f'''insert into ct2 values
|
f'''insert into {dbname}.ct2 values
|
||||||
( { now_time - rows * 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time - rows * 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
( { now_time - rows * 3888000000 + 10800000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time - rows * 3888000000 + 10800000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
( { now_time + 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time + 7776000000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
|
@ -243,13 +242,13 @@ class TDTestCase:
|
||||||
)
|
)
|
||||||
|
|
||||||
for i in range(rows):
|
for i in range(rows):
|
||||||
insert_data = f'''insert into t1 values
|
insert_data = f'''insert into {dbname}.t1 values
|
||||||
( { now_time - i * 3600000 }, {i}, {i * 11111}, { i % 32767 }, { i % 127}, { i * 1.11111 }, { i * 1000.1111 }, { i % 2},
|
( { now_time - i * 3600000 }, {i}, {i * 11111}, { i % 32767 }, { i % 127}, { i * 1.11111 }, { i * 1000.1111 }, { i % 2},
|
||||||
"binary_{i}", "nchar_测试_{i}", { now_time - 1000 * i } )
|
"binary_{i}", "nchar_测试_{i}", { now_time - 1000 * i } )
|
||||||
'''
|
'''
|
||||||
tdSql.execute(insert_data)
|
tdSql.execute(insert_data)
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f'''insert into t1 values
|
f'''insert into {dbname}.t1 values
|
||||||
( { now_time + 10800000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time + 10800000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
( { now_time - (( rows // 2 ) * 60 + 30) * 60000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time - (( rows // 2 ) * 60 + 30) * 60000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
( { now_time - rows * 3600000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( { now_time - rows * 3600000 }, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
|
@ -269,22 +268,23 @@ class TDTestCase:
|
||||||
tdSql.prepare()
|
tdSql.prepare()
|
||||||
|
|
||||||
tdLog.printNoPrefix("==========step1:create table")
|
tdLog.printNoPrefix("==========step1:create table")
|
||||||
self.__create_tb()
|
self.__create_tb(dbname="db")
|
||||||
|
|
||||||
tdLog.printNoPrefix("==========step2:insert data")
|
tdLog.printNoPrefix("==========step2:insert data")
|
||||||
self.rows = 10
|
self.rows = 10
|
||||||
self.__insert_data(self.rows)
|
self.__insert_data(self.rows, dbname="db")
|
||||||
|
|
||||||
tdLog.printNoPrefix("==========step3:all check")
|
tdLog.printNoPrefix("==========step3:all check")
|
||||||
self.all_test()
|
self.all_test(dbname="db")
|
||||||
|
|
||||||
tdDnodes.stop(1)
|
# tdDnodes.stop(1)
|
||||||
tdDnodes.start(1)
|
# tdDnodes.start(1)
|
||||||
|
tdSql.execute("flush database db")
|
||||||
|
|
||||||
tdSql.execute("use db")
|
tdSql.execute("use db")
|
||||||
|
|
||||||
tdLog.printNoPrefix("==========step4:after wal, all check again ")
|
tdLog.printNoPrefix("==========step4:after wal, all check again ")
|
||||||
self.all_test()
|
self.all_test(dbname="db")
|
||||||
|
|
||||||
def stop(self):
|
def stop(self):
|
||||||
tdSql.close()
@ -9,48 +9,48 @@ from util.cases import *
|
||||||
|
|
||||||
|
|
||||||
class TDTestCase:
|
class TDTestCase:
|
||||||
updatecfgDict = {'debugFlag': 143 ,"cDebugFlag":143,"uDebugFlag":143 ,"rpcDebugFlag":143 , "tmrDebugFlag":143 ,
|
# updatecfgDict = {'debugFlag': 143 ,"cDebugFlag":143,"uDebugFlag":143 ,"rpcDebugFlag":143 , "tmrDebugFlag":143 ,
|
||||||
"jniDebugFlag":143 ,"simDebugFlag":143,"dDebugFlag":143, "dDebugFlag":143,"vDebugFlag":143,"mDebugFlag":143,"qDebugFlag":143,
|
# "jniDebugFlag":143 ,"simDebugFlag":143,"dDebugFlag":143, "dDebugFlag":143,"vDebugFlag":143,"mDebugFlag":143,"qDebugFlag":143,
|
||||||
"wDebugFlag":143,"sDebugFlag":143,"tsdbDebugFlag":143,"tqDebugFlag":143 ,"fsDebugFlag":143 ,"udfDebugFlag":143}
|
# "wDebugFlag":143,"sDebugFlag":143,"tsdbDebugFlag":143,"tqDebugFlag":143 ,"fsDebugFlag":143 ,"udfDebugFlag":143}
|
||||||
def init(self, conn, powSql):
|
def init(self, conn, powSql):
|
||||||
tdLog.debug(f"start to excute {__file__}")
|
tdLog.debug(f"start to excute {__file__}")
|
||||||
tdSql.init(conn.cursor())
|
tdSql.init(conn.cursor())
|
||||||
|
|
||||||
def prepare_datas(self):
|
def prepare_datas(self, dbname="db"):
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
'''create table stb1
|
f'''create table {dbname}.stb1
|
||||||
(ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
|
(ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
|
||||||
tags (t1 int)
|
tags (t1 int)
|
||||||
'''
|
'''
|
||||||
)
|
)
|
||||||
|
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
'''
|
f'''
|
||||||
create table t1
|
create table {dbname}.t1
|
||||||
(ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
|
(ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
|
||||||
'''
|
'''
|
||||||
)
|
)
|
||||||
for i in range(4):
|
for i in range(4):
|
||||||
tdSql.execute(f'create table ct{i+1} using stb1 tags ( {i+1} )')
|
tdSql.execute(f'create table {dbname}.ct{i+1} using {dbname}.stb1 tags ( {i+1} )')
|
||||||
|
|
||||||
for i in range(9):
|
for i in range(9):
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f"insert into ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
|
f"insert into {dbname}.ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
|
||||||
)
|
)
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f"insert into ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
|
f"insert into {dbname}.ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
|
||||||
)
|
)
|
||||||
tdSql.execute("insert into ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar0', now()+8a )")
|
tdSql.execute(f"insert into {dbname}.ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar0', now()+8a )")
|
||||||
tdSql.execute("insert into ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
|
tdSql.execute(f"insert into {dbname}.ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
|
||||||
tdSql.execute("insert into ct1 values (now()+15s, 9, -99999, -999, -99, -9.99, NULL, 1, 'binary9', 'nchar9', now()+9a )")
|
tdSql.execute(f"insert into {dbname}.ct1 values (now()+15s, 9, -99999, -999, -99, -9.99, NULL, 1, 'binary9', 'nchar9', now()+9a )")
|
||||||
tdSql.execute("insert into ct1 values (now()+20s, 9, -99999, -999, NULL, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
|
tdSql.execute(f"insert into {dbname}.ct1 values (now()+20s, 9, -99999, -999, NULL, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
|
||||||
|
|
||||||
tdSql.execute("insert into ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
|
tdSql.execute(f"insert into {dbname}.ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
|
||||||
tdSql.execute("insert into ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
|
tdSql.execute(f"insert into {dbname}.ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
|
||||||
tdSql.execute("insert into ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
|
tdSql.execute(f"insert into {dbname}.ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
|
||||||
|
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f'''insert into t1 values
|
f'''insert into {dbname}.t1 values
|
||||||
( '2020-04-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
( '2020-04-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
||||||
( '2020-10-21 01:01:01.000', 1, 11111, 111, 11, 1.11, 11.11, 1, "binary1", "nchar1", now()+1a )
|
( '2020-10-21 01:01:01.000', 1, 11111, 111, 11, 1.11, 11.11, 1, "binary1", "nchar1", now()+1a )
|
||||||
( '2020-12-31 01:01:01.000', 2, 22222, 222, 22, 2.22, 22.22, 0, "binary2", "nchar2", now()+2a )
|
( '2020-12-31 01:01:01.000', 2, 22222, 222, 22, 2.22, 22.22, 0, "binary2", "nchar2", now()+2a )
|
||||||
|
@ -84,12 +84,17 @@ class TDTestCase:
             auto_result.append(row_check)
 
         check_status = True
+        print("========",pow_query, origin_query )
 
         for row_index , row in enumerate(pow_result):
             for col_index , elem in enumerate(row):
-                if auto_result[row_index][col_index] == None and not (auto_result[row_index][col_index] == None and elem == None):
+                if auto_result[row_index][col_index] == None and elem:
                     check_status = False
-                elif auto_result[row_index][col_index] != None and (auto_result[row_index][col_index] - elem > 0.00000001):
+                elif auto_result[row_index][col_index] != None and ((auto_result[row_index][col_index] != elem) and (str(auto_result[row_index][col_index])[:6] != str(elem)[:6] )):
+                # elif auto_result[row_index][col_index] != None and (abs(auto_result[row_index][col_index] - elem) > 0.000001):
+                    print("=====")
+                    print(row_index, col_index)
+                    print(auto_result[row_index][col_index], elem, origin_result[row_index][col_index])
                     check_status = False
                 else:
                     pass
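The new comparison above accepts a match when the values are equal or their first six string characters agree, which papers over float rounding differences between the SQL result and the Python-side computation. If a plain tolerance check were acceptable, `math.isclose` would be a common alternative; the sketch below is an assumption about intent, not what the test ships:

    import math

    def values_match(expected, actual, rel_tol=1e-7, abs_tol=1e-9):
        # Treat two Nones as equal; otherwise compare within tolerance.
        if expected is None or actual is None:
            return expected is None and actual is None
        return math.isclose(expected, actual, rel_tol=rel_tol, abs_tol=abs_tol)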
@ -99,68 +104,68 @@ class TDTestCase:
|
||||||
else:
|
else:
|
||||||
tdLog.info("cos value check pass , it work as expected ,sql is \"%s\" "%pow_query )
|
tdLog.info("cos value check pass , it work as expected ,sql is \"%s\" "%pow_query )
|
||||||
|
|
||||||
def test_errors(self):
|
def test_errors(self, dbname="db"):
|
||||||
error_sql_lists = [
|
error_sql_lists = [
|
||||||
"select cos from t1",
|
f"select cos from {dbname}.t1",
|
||||||
# "select cos(-+--+c1 ) from t1",
|
# f"select cos(-+--+c1 ) from {dbname}.t1",
|
||||||
# "select +-cos(c1) from t1",
|
# f"select +-cos(c1) from {dbname}.t1",
|
||||||
# "select ++-cos(c1) from t1",
|
# f"select ++-cos(c1) from {dbname}.t1",
|
||||||
# "select ++--cos(c1) from t1",
|
# f"select ++--cos(c1) from {dbname}.t1",
|
||||||
# "select - -cos(c1)*0 from t1",
|
# f"select - -cos(c1)*0 from {dbname}.t1",
|
||||||
# "select cos(tbname+1) from t1 ",
|
# f"select cos(tbname+1) from {dbname}.t1 ",
|
||||||
"select cos(123--123)==1 from t1",
|
f"select cos(123--123)==1 from {dbname}.t1",
|
||||||
"select cos(c1) as 'd1' from t1",
|
f"select cos(c1) as 'd1' from {dbname}.t1",
|
||||||
"select cos(c1 ,c2) from t1",
|
f"select cos(c1 ,c2) from {dbname}.t1",
|
||||||
"select cos(c1 ,NULL ) from t1",
|
f"select cos(c1 ,NULL ) from {dbname}.t1",
|
||||||
"select cos(,) from t1;",
|
f"select cos(,) from {dbname}.t1;",
|
||||||
"select cos(cos(c1) ab from t1)",
|
f"select cos(cos(c1) ab from {dbname}.t1)",
|
||||||
"select cos(c1 ) as int from t1",
|
f"select cos(c1 ) as int from {dbname}.t1",
|
||||||
"select cos from stb1",
|
f"select cos from {dbname}.stb1",
|
||||||
# "select cos(-+--+c1) from stb1",
|
# f"select cos(-+--+c1) from {dbname}.stb1",
|
||||||
# "select +-cos(c1) from stb1",
|
# f"select +-cos(c1) from {dbname}.stb1",
|
||||||
# "select ++-cos(c1) from stb1",
|
# f"select ++-cos(c1) from {dbname}.stb1",
|
||||||
# "select ++--cos(c1) from stb1",
|
# f"select ++--cos(c1) from {dbname}.stb1",
|
||||||
# "select - -cos(c1)*0 from stb1",
|
# f"select - -cos(c1)*0 from {dbname}.stb1",
|
||||||
# "select cos(tbname+1) from stb1 ",
|
# f"select cos(tbname+1) from {dbname}.stb1 ",
|
||||||
"select cos(123--123)==1 from stb1",
|
f"select cos(123--123)==1 from {dbname}.stb1",
|
||||||
"select cos(c1) as 'd1' from stb1",
|
f"select cos(c1) as 'd1' from {dbname}.stb1",
|
||||||
"select cos(c1 ,c2 ) from stb1",
|
f"select cos(c1 ,c2 ) from {dbname}.stb1",
|
||||||
"select cos(c1 ,NULL) from stb1",
|
f"select cos(c1 ,NULL) from {dbname}.stb1",
|
||||||
"select cos(,) from stb1;",
|
f"select cos(,) from {dbname}.stb1;",
|
||||||
"select cos(cos(c1) ab from stb1)",
|
f"select cos(cos(c1) ab from {dbname}.stb1)",
|
||||||
"select cos(c1) as int from stb1"
|
f"select cos(c1) as int from {dbname}.stb1"
|
||||||
]
|
]
|
||||||
for error_sql in error_sql_lists:
|
for error_sql in error_sql_lists:
|
||||||
tdSql.error(error_sql)
|
tdSql.error(error_sql)
|
||||||
|
|
||||||
def support_types(self):
|
def support_types(self, dbname="db"):
|
||||||
type_error_sql_lists = [
|
type_error_sql_lists = [
|
||||||
"select cos(ts) from t1" ,
|
f"select cos(ts) from {dbname}.t1" ,
|
||||||
"select cos(c7) from t1",
|
f"select cos(c7) from {dbname}.t1",
|
||||||
"select cos(c8) from t1",
|
f"select cos(c8) from {dbname}.t1",
|
||||||
"select cos(c9) from t1",
|
f"select cos(c9) from {dbname}.t1",
|
||||||
"select cos(ts) from ct1" ,
|
f"select cos(ts) from {dbname}.ct1" ,
|
||||||
"select cos(c7) from ct1",
|
f"select cos(c7) from {dbname}.ct1",
|
||||||
"select cos(c8) from ct1",
|
f"select cos(c8) from {dbname}.ct1",
|
||||||
"select cos(c9) from ct1",
|
f"select cos(c9) from {dbname}.ct1",
|
||||||
"select cos(ts) from ct3" ,
|
f"select cos(ts) from {dbname}.ct3" ,
|
||||||
"select cos(c7) from ct3",
|
f"select cos(c7) from {dbname}.ct3",
|
||||||
"select cos(c8) from ct3",
|
f"select cos(c8) from {dbname}.ct3",
|
||||||
"select cos(c9) from ct3",
|
f"select cos(c9) from {dbname}.ct3",
|
||||||
"select cos(ts) from ct4" ,
|
f"select cos(ts) from {dbname}.ct4" ,
|
||||||
"select cos(c7) from ct4",
|
f"select cos(c7) from {dbname}.ct4",
|
||||||
"select cos(c8) from ct4",
|
f"select cos(c8) from {dbname}.ct4",
|
||||||
"select cos(c9) from ct4",
|
f"select cos(c9) from {dbname}.ct4",
|
||||||
"select cos(ts) from stb1" ,
|
f"select cos(ts) from {dbname}.stb1" ,
|
||||||
"select cos(c7) from stb1",
|
f"select cos(c7) from {dbname}.stb1",
|
||||||
"select cos(c8) from stb1",
|
f"select cos(c8) from {dbname}.stb1",
|
||||||
"select cos(c9) from stb1" ,
|
f"select cos(c9) from {dbname}.stb1" ,
|
||||||
|
|
||||||
"select cos(ts) from stbbb1" ,
|
f"select cos(ts) from {dbname}.stbbb1" ,
|
||||||
"select cos(c7) from stbbb1",
|
f"select cos(c7) from {dbname}.stbbb1",
|
||||||
|
|
||||||
"select cos(ts) from tbname",
|
f"select cos(ts) from {dbname}.tbname",
|
||||||
"select cos(c9) from tbname"
|
f"select cos(c9) from {dbname}.tbname"
|
||||||
|
|
||||||
]
|
]
|
||||||
|
|
||||||
|
@ -169,103 +174,103 @@ class TDTestCase:
|
||||||
|
|
||||||
|
|
||||||
type_sql_lists = [
|
type_sql_lists = [
|
||||||
"select cos(c1) from t1",
|
f"select cos(c1) from {dbname}.t1",
|
||||||
"select cos(c2) from t1",
|
f"select cos(c2) from {dbname}.t1",
|
||||||
"select cos(c3) from t1",
|
f"select cos(c3) from {dbname}.t1",
|
||||||
"select cos(c4) from t1",
|
f"select cos(c4) from {dbname}.t1",
|
||||||
"select cos(c5) from t1",
|
f"select cos(c5) from {dbname}.t1",
|
||||||
"select cos(c6) from t1",
|
f"select cos(c6) from {dbname}.t1",
|
||||||
|
|
||||||
"select cos(c1) from ct1",
|
f"select cos(c1) from {dbname}.ct1",
|
||||||
"select cos(c2) from ct1",
|
f"select cos(c2) from {dbname}.ct1",
|
||||||
"select cos(c3) from ct1",
|
f"select cos(c3) from {dbname}.ct1",
|
||||||
"select cos(c4) from ct1",
|
f"select cos(c4) from {dbname}.ct1",
|
||||||
"select cos(c5) from ct1",
|
f"select cos(c5) from {dbname}.ct1",
|
||||||
"select cos(c6) from ct1",
|
f"select cos(c6) from {dbname}.ct1",
|
||||||
|
|
||||||
"select cos(c1) from ct3",
|
f"select cos(c1) from {dbname}.ct3",
|
||||||
"select cos(c2) from ct3",
|
f"select cos(c2) from {dbname}.ct3",
|
||||||
"select cos(c3) from ct3",
|
f"select cos(c3) from {dbname}.ct3",
|
||||||
"select cos(c4) from ct3",
|
f"select cos(c4) from {dbname}.ct3",
|
||||||
"select cos(c5) from ct3",
|
f"select cos(c5) from {dbname}.ct3",
|
||||||
"select cos(c6) from ct3",
|
f"select cos(c6) from {dbname}.ct3",
|
||||||
|
|
||||||
"select cos(c1) from stb1",
|
f"select cos(c1) from {dbname}.stb1",
|
||||||
"select cos(c2) from stb1",
|
f"select cos(c2) from {dbname}.stb1",
|
||||||
"select cos(c3) from stb1",
|
f"select cos(c3) from {dbname}.stb1",
|
||||||
"select cos(c4) from stb1",
|
f"select cos(c4) from {dbname}.stb1",
|
||||||
"select cos(c5) from stb1",
|
f"select cos(c5) from {dbname}.stb1",
|
||||||
"select cos(c6) from stb1",
|
f"select cos(c6) from {dbname}.stb1",
|
||||||
|
|
||||||
"select cos(c6) as alisb from stb1",
|
f"select cos(c6) as alisb from {dbname}.stb1",
|
||||||
"select cos(c6) alisb from stb1",
|
f"select cos(c6) alisb from {dbname}.stb1",
|
||||||
]
|
]
|
||||||
|
|
||||||
for type_sql in type_sql_lists:
|
for type_sql in type_sql_lists:
|
||||||
tdSql.query(type_sql)
|
tdSql.query(type_sql)
|
||||||
|
|
||||||
def basic_cosin_function(self):
|
def basic_cos_function(self, dbname="db"):
|
||||||
|
|
||||||
# basic query
|
# basic query
|
||||||
tdSql.query("select c1 from ct3")
|
tdSql.query(f"select c1 from {dbname}.ct3")
|
||||||
tdSql.checkRows(0)
|
tdSql.checkRows(0)
|
||||||
tdSql.query("select c1 from t1")
|
tdSql.query(f"select c1 from {dbname}.t1")
|
||||||
tdSql.checkRows(12)
|
tdSql.checkRows(12)
|
||||||
tdSql.query("select c1 from stb1")
|
tdSql.query(f"select c1 from {dbname}.stb1")
|
||||||
tdSql.checkRows(25)
|
tdSql.checkRows(25)
|
||||||
|
|
||||||
# used for empty table , ct3 is empty
|
# used for empty table , ct3 is empty
|
||||||
tdSql.query("select cos(c1) from ct3")
|
tdSql.query(f"select cos(c1) from {dbname}.ct3")
|
||||||
tdSql.checkRows(0)
|
tdSql.checkRows(0)
|
||||||
tdSql.query("select cos(c2) from ct3")
|
tdSql.query(f"select cos(c2) from {dbname}.ct3")
|
||||||
tdSql.checkRows(0)
|
tdSql.checkRows(0)
|
||||||
tdSql.query("select cos(c3) from ct3")
|
tdSql.query(f"select cos(c3) from {dbname}.ct3")
|
||||||
tdSql.checkRows(0)
|
tdSql.checkRows(0)
|
||||||
tdSql.query("select cos(c4) from ct3")
|
tdSql.query(f"select cos(c4) from {dbname}.ct3")
|
||||||
tdSql.checkRows(0)
|
tdSql.checkRows(0)
|
||||||
tdSql.query("select cos(c5) from ct3")
|
tdSql.query(f"select cos(c5) from {dbname}.ct3")
|
||||||
tdSql.checkRows(0)
|
tdSql.checkRows(0)
|
||||||
tdSql.query("select cos(c6) from ct3")
|
tdSql.query(f"select cos(c6) from {dbname}.ct3")
|
||||||
tdSql.checkRows(0)
|
tdSql.checkRows(0)
|
||||||
|
|
||||||
|
|
||||||
# # used for regular table
|
# # used for regular table
|
||||||
tdSql.query("select cos(c1) from t1")
|
tdSql.query(f"select cos(c1) from {dbname}.t1")
|
||||||
tdSql.checkData(0, 0, None)
|
tdSql.checkData(0, 0, None)
|
||||||
tdSql.checkData(1 , 0, 0.540302306)
|
tdSql.checkData(1 , 0, 0.540302306)
|
||||||
tdSql.checkData(3 , 0, -0.989992497)
|
tdSql.checkData(3 , 0, -0.989992497)
|
||||||
tdSql.checkData(5 , 0, None)
|
tdSql.checkData(5 , 0, None)
|
||||||
|
|
||||||
tdSql.query("select c1, c2, c3 , c4, c5 from t1")
|
tdSql.query(f"select c1, c2, c3 , c4, c5 from {dbname}.t1")
|
||||||
tdSql.checkData(1, 4, 1.11000)
|
tdSql.checkData(1, 4, 1.11000)
|
||||||
tdSql.checkData(3, 3, 33)
|
tdSql.checkData(3, 3, 33)
|
||||||
tdSql.checkData(5, 4, None)
|
tdSql.checkData(5, 4, None)
|
||||||
|
|
||||||
tdSql.query("select ts,c1, c2, c3 , c4, c5 from t1")
|
tdSql.query(f"select ts,c1, c2, c3 , c4, c5 from {dbname}.t1")
|
||||||
tdSql.checkData(1, 5, 1.11000)
|
tdSql.checkData(1, 5, 1.11000)
|
||||||
tdSql.checkData(3, 4, 33)
|
tdSql.checkData(3, 4, 33)
|
||||||
tdSql.checkData(5, 5, None)
|
tdSql.checkData(5, 5, None)
|
||||||
|
|
||||||
self.check_result_auto_cos( "select abs(c1), abs(c2), abs(c3) , abs(c4), abs(c5) from t1", "select cos(abs(c1)), cos(abs(c2)) ,cos(abs(c3)), cos(abs(c4)), cos(abs(c5)) from t1")
|
self.check_result_auto_cos( f"select abs(c1), abs(c2), abs(c3) , abs(c4), abs(c5) from {dbname}.t1", f"select cos(abs(c1)), cos(abs(c2)) ,cos(abs(c3)), cos(abs(c4)), cos(abs(c5)) from {dbname}.t1")
|
||||||
|
|
||||||
# used for sub table
|
# used for sub table
|
||||||
tdSql.query("select c2 ,cos(c2) from ct1")
|
tdSql.query(f"select c2 ,cos(c2) from {dbname}.ct1")
|
||||||
tdSql.checkData(0, 1, 0.975339851)
|
tdSql.checkData(0, 1, 0.975339851)
|
||||||
tdSql.checkData(1 , 1, -0.830564903)
|
tdSql.checkData(1 , 1, -0.830564903)
|
||||||
tdSql.checkData(3 , 1, 0.602244939)
|
tdSql.checkData(3 , 1, 0.602244939)
|
||||||
tdSql.checkData(4 , 1, 1.000000000)
|
tdSql.checkData(4 , 1, 1.000000000)
|
||||||
|
|
||||||
tdSql.query("select c1, c5 ,cos(c5) from ct4")
|
tdSql.query(f"select c1, c5 ,cos(c5) from {dbname}.ct4")
|
||||||
tdSql.checkData(0 , 2, None)
|
tdSql.checkData(0 , 2, None)
|
||||||
tdSql.checkData(1 , 2, -0.855242438)
|
tdSql.checkData(1 , 2, -0.855242438)
|
||||||
tdSql.checkData(2 , 2, 0.083882969)
|
tdSql.checkData(2 , 2, 0.083882969)
|
||||||
tdSql.checkData(3 , 2, 0.929841474)
|
tdSql.checkData(3 , 2, 0.929841474)
|
||||||
tdSql.checkData(5 , 2, None)
|
tdSql.checkData(5 , 2, None)
|
||||||
|
|
||||||
self.check_result_auto_cos( "select c1, c2, c3 , c4, c5 from ct1", "select cos(c1), cos(c2) ,cos(c3), cos(c4), cos(c5) from ct1")
|
self.check_result_auto_cos( f"select c1, c2, c3 , c4, c5 from {dbname}.ct1", f"select cos(c1), cos(c2) ,cos(c3), cos(c4), cos(c5) from {dbname}.ct1")
|
||||||
|
|
||||||
# nest query for cos functions
|
# nest query for cos functions
|
||||||
tdSql.query("select c4 , cos(c4) ,cos(cos(c4)) , cos(cos(cos(c4))) from ct1;")
|
tdSql.query(f"select c4 , cos(c4) ,cos(cos(c4)) , cos(cos(cos(c4))) from {dbname}.ct1;")
|
||||||
tdSql.checkData(0 , 0 , 88)
|
tdSql.checkData(0 , 0 , 88)
|
||||||
tdSql.checkData(0 , 1 , 0.999373284)
|
tdSql.checkData(0 , 1 , 0.999373284)
|
||||||
tdSql.checkData(0 , 2 , 0.540829563)
|
tdSql.checkData(0 , 2 , 0.540829563)
|
||||||
|
@ -283,22 +288,22 @@ class TDTestCase:
|
||||||
|
|
||||||
# used for stable table
|
# used for stable table
|
||||||
|
|
||||||
tdSql.query("select cos(c1) from stb1")
|
tdSql.query(f"select cos(c1) from {dbname}.stb1")
|
||||||
tdSql.checkRows(25)
|
tdSql.checkRows(25)
|
||||||
|
|
||||||
|
|
||||||
        # used for non-existent tables
|
        # used for non-existent tables
|
||||||
tdSql.error("select cos(c1) from stbbb1")
|
tdSql.error(f"select cos(c1) from {dbname}.stbbb1")
|
||||||
tdSql.error("select cos(c1) from tbname")
|
tdSql.error(f"select cos(c1) from {dbname}.tbname")
|
||||||
tdSql.error("select cos(c1) from ct5")
|
tdSql.error(f"select cos(c1) from {dbname}.ct5")
|
||||||
|
|
||||||
# mix with common col
|
# mix with common col
|
||||||
tdSql.query("select c1, cos(c1) from ct1")
|
tdSql.query(f"select c1, cos(c1) from {dbname}.ct1")
|
||||||
tdSql.query("select c2, cos(c2) from ct4")
|
tdSql.query(f"select c2, cos(c2) from {dbname}.ct4")
|
||||||
|
|
||||||
|
|
||||||
# mix with common functions
|
# mix with common functions
|
||||||
tdSql.query("select c1, cos(c1),cos(c1), cos(cos(c1)) from ct4 ")
|
tdSql.query(f"select c1, cos(c1),cos(c1), cos(cos(c1)) from {dbname}.ct4 ")
|
||||||
tdSql.checkData(0 , 0 ,None)
|
tdSql.checkData(0 , 0 ,None)
|
||||||
tdSql.checkData(0 , 1 ,None)
|
tdSql.checkData(0 , 1 ,None)
|
||||||
tdSql.checkData(0 , 2 ,None)
|
tdSql.checkData(0 , 2 ,None)
|
||||||
|
@ -309,24 +314,24 @@ class TDTestCase:
|
||||||
tdSql.checkData(3 , 2 ,0.960170287)
|
tdSql.checkData(3 , 2 ,0.960170287)
|
||||||
tdSql.checkData(3 , 3 ,0.573380480)
|
tdSql.checkData(3 , 3 ,0.573380480)
|
||||||
|
|
||||||
tdSql.query("select c1, cos(c1),c5, floor(c5) from stb1 ")
|
tdSql.query(f"select c1, cos(c1),c5, floor(c5) from {dbname}.stb1 ")
|
||||||
|
|
||||||
# # mix with agg functions , not support
|
# # mix with agg functions , not support
|
||||||
tdSql.error("select c1, cos(c1),c5, count(c5) from stb1 ")
|
tdSql.error(f"select c1, cos(c1),c5, count(c5) from {dbname}.stb1 ")
|
||||||
tdSql.error("select c1, cos(c1),c5, count(c5) from ct1 ")
|
tdSql.error(f"select c1, cos(c1),c5, count(c5) from {dbname}.ct1 ")
|
||||||
tdSql.error("select cos(c1), count(c5) from stb1 ")
|
tdSql.error(f"select cos(c1), count(c5) from {dbname}.stb1 ")
|
||||||
tdSql.error("select cos(c1), count(c5) from ct1 ")
|
tdSql.error(f"select cos(c1), count(c5) from {dbname}.ct1 ")
|
||||||
tdSql.error("select c1, count(c5) from ct1 ")
|
tdSql.error(f"select c1, count(c5) from {dbname}.ct1 ")
|
||||||
tdSql.error("select c1, count(c5) from stb1 ")
|
tdSql.error(f"select c1, count(c5) from {dbname}.stb1 ")
|
||||||
|
|
||||||
# agg functions mix with agg functions
|
# agg functions mix with agg functions
|
||||||
|
|
||||||
tdSql.query("select max(c5), count(c5) from stb1")
|
tdSql.query(f"select max(c5), count(c5) from {dbname}.stb1")
|
||||||
tdSql.query("select max(c5), count(c5) from ct1")
|
tdSql.query(f"select max(c5), count(c5) from {dbname}.ct1")
|
||||||
|
|
||||||
|
|
||||||
# # bug fix for compute
|
# # bug fix for compute
|
||||||
tdSql.query("select c1, cos(c1) -0 ,cos(c1-4)-0 from ct4 ")
|
tdSql.query(f"select c1, cos(c1) -0 ,cos(c1-4)-0 from {dbname}.ct4 ")
|
||||||
tdSql.checkData(0, 0, None)
|
tdSql.checkData(0, 0, None)
|
||||||
tdSql.checkData(0, 1, None)
|
tdSql.checkData(0, 1, None)
|
||||||
tdSql.checkData(0, 2, None)
|
tdSql.checkData(0, 2, None)
|
||||||
|
@ -334,43 +339,42 @@ class TDTestCase:
|
||||||
tdSql.checkData(1, 1, -0.145500034)
|
tdSql.checkData(1, 1, -0.145500034)
|
||||||
tdSql.checkData(1, 2, -0.653643621)
|
tdSql.checkData(1, 2, -0.653643621)
|
||||||
|
|
||||||
tdSql.query(" select c1, cos(c1) -0 ,cos(c1-0.1)-0.1 from ct4")
|
tdSql.query(f" select c1, cos(c1) -0 ,cos(c1-0.1)-0.1 from {dbname}.ct4")
|
||||||
tdSql.checkData(0, 0, None)
|
tdSql.checkData(0, 0, None)
|
||||||
tdSql.checkData(0, 1, None)
|
tdSql.checkData(0, 1, None)
|
||||||
tdSql.checkData(0, 2, None)
|
tdSql.checkData(0, 2, None)
|
||||||
tdSql.checkData(1, 0, 8)
|
tdSql.checkData(1, 0, 8)
|
||||||
tdSql.checkData(1, 1, -0.145500034)
|
tdSql.checkData(1, 1, -0.145500034)
|
||||||
tdSql.checkData(1, 2, -0.146002126)
|
tdSql.checkData(1, 2, -0.146002126)
|
||||||
|
tdSql.query(f"select c1, cos(c1), c2, cos(c2), c3, cos(c3) from {dbname}.ct1")
|
||||||
|
|
||||||
tdSql.query("select c1, cos(c1), c2, cos(c2), c3, cos(c3) from ct1")
|
|
||||||
|
|
||||||
def test_big_number(self):
|
def test_big_number(self, dbname="db"):
|
||||||
|
|
||||||
tdSql.query("select c1, cos(100000000) from ct1") # bigint to double data overflow
|
tdSql.query(f"select c1, cos(100000000) from {dbname}.ct1") # bigint to double data overflow
|
||||||
tdSql.checkData(4, 1, math.cos(100000000))
|
tdSql.checkData(4, 1, math.cos(100000000))
|
||||||
|
|
||||||
|
tdSql.query(f"select c1, cos(10000000000000) from {dbname}.ct1") # bigint to double data overflow
|
||||||
tdSql.query("select c1, cos(10000000000000) from ct1") # bigint to double data overflow
|
|
||||||
tdSql.checkData(4, 1, math.cos(10000000000000))
|
tdSql.checkData(4, 1, math.cos(10000000000000))
|
||||||
|
|
||||||
tdSql.query("select c1, cos(10000000000000000000000000) from ct1") # bigint to double data overflow
|
tdSql.query(f"select c1, cos(10000000000000000000000000) from {dbname}.ct1") # bigint to double data overflow
|
||||||
tdSql.query("select c1, cos(10000000000000000000000000.0) from ct1") # 10000000000000000000000000.0 is a double value
|
tdSql.query(f"select c1, cos(10000000000000000000000000.0) from {dbname}.ct1") # 10000000000000000000000000.0 is a double value
|
||||||
tdSql.checkData(1, 1, math.cos(10000000000000000000000000.0))
|
tdSql.checkData(1, 1, math.cos(10000000000000000000000000.0))
|
||||||
|
|
||||||
tdSql.query("select c1, cos(10000000000000000000000000000000000) from ct1") # bigint to double data overflow
|
tdSql.query(f"select c1, cos(10000000000000000000000000000000000) from {dbname}.ct1") # bigint to double data overflow
|
||||||
tdSql.query("select c1, cos(10000000000000000000000000000000000.0) from ct1") # 10000000000000000000000000.0 is a double value
|
tdSql.query(f"select c1, cos(10000000000000000000000000000000000.0) from {dbname}.ct1") # 10000000000000000000000000.0 is a double value
|
||||||
tdSql.checkData(4, 1, math.cos(10000000000000000000000000000000000.0))
|
tdSql.checkData(4, 1, math.cos(10000000000000000000000000000000000.0))
|
||||||
|
|
||||||
tdSql.query("select c1, cos(10000000000000000000000000000000000000000) from ct1") # bigint to double data overflow
|
tdSql.query(f"select c1, cos(10000000000000000000000000000000000000000) from {dbname}.ct1") # bigint to double data overflow
|
||||||
tdSql.query("select c1, cos(10000000000000000000000000000000000000000.0) from ct1") # 10000000000000000000000000.0 is a double value
|
tdSql.query(f"select c1, cos(10000000000000000000000000000000000000000.0) from {dbname}.ct1") # 10000000000000000000000000.0 is a double value
|
||||||
|
|
||||||
tdSql.checkData(4, 1, math.cos(10000000000000000000000000000000000000000.0))
|
tdSql.checkData(4, 1, math.cos(10000000000000000000000000000000000000000.0))
|
||||||
|
|
||||||
tdSql.query("select c1, cos(10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000) from ct1") # bigint to double data overflow
|
tdSql.query(f"select c1, cos(10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000) from {dbname}.ct1") # bigint to double data overflow
|
||||||
|
|
||||||
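The `test_big_number` checks above rely on the expected value simply being Python's `math.cos` of the same literal: an integer wider than BIGINT is still evaluated as a double on both sides. A quick standalone illustration (the value is taken from one of the queries above; this is a sketch, not part of the test suite):

```python
import math

# Python converts the int to the nearest double before taking the cosine,
# which is why the checks compare against math.cos of the same literal.
literal = 10000000000000000000000000   # wider than BIGINT, still representable as a double
assert math.cos(literal) == math.cos(10000000000000000000000000.0)
print(math.cos(literal))
```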
def abs_func_filter(self):
|
def abs_func_filter(self, dbname="db"):
|
||||||
tdSql.execute("use db")
|
tdSql.execute(f"use {dbname}")
|
||||||
tdSql.query("select c1, abs(c1) -0 ,ceil(c1-0.1)-0 ,floor(c1+0.1)-0.1 ,ceil(cos(c1)-0.5) from ct4 where c1>5 ")
|
tdSql.query(f"select c1, abs(c1) -0 ,ceil(c1-0.1)-0 ,floor(c1+0.1)-0.1 ,ceil(cos(c1)-0.5) from {dbname}.ct4 where c1>5 ")
|
||||||
tdSql.checkRows(3)
|
tdSql.checkRows(3)
|
||||||
tdSql.checkData(0,0,8)
|
tdSql.checkData(0,0,8)
|
||||||
tdSql.checkData(0,1,8.000000000)
|
tdSql.checkData(0,1,8.000000000)
|
||||||
|
@ -378,7 +382,7 @@ class TDTestCase:
|
||||||
tdSql.checkData(0,3,7.900000000)
|
tdSql.checkData(0,3,7.900000000)
|
||||||
tdSql.checkData(0,4,0.000000000)
|
tdSql.checkData(0,4,0.000000000)
|
||||||
|
|
||||||
tdSql.query("select c1, abs(c1) -0 ,ceil(c1-0.1)-0 ,floor(c1+0.1)-0.1 ,ceil(cos(c1)-0.5) from ct4 where c1=5 ")
|
tdSql.query(f"select c1, abs(c1) -0 ,ceil(c1-0.1)-0 ,floor(c1+0.1)-0.1 ,ceil(cos(c1)-0.5) from {dbname}.ct4 where c1=5 ")
|
||||||
tdSql.checkRows(1)
|
tdSql.checkRows(1)
|
||||||
tdSql.checkData(0,0,5)
|
tdSql.checkData(0,0,5)
|
||||||
tdSql.checkData(0,1,5.000000000)
|
tdSql.checkData(0,1,5.000000000)
|
||||||
|
@ -386,7 +390,7 @@ class TDTestCase:
|
||||||
tdSql.checkData(0,3,4.900000000)
|
tdSql.checkData(0,3,4.900000000)
|
||||||
tdSql.checkData(0,4,0.000000000)
|
tdSql.checkData(0,4,0.000000000)
|
||||||
|
|
||||||
tdSql.query("select c1,c2 , abs(c1) -0 ,ceil(c1-0.1)-0 ,floor(c1+0.1)-0.1 ,ceil(cos(c1)-0.5) from ct4 where c1>cos(c1) limit 1 ")
|
tdSql.query(f"select c1,c2 , abs(c1) -0 ,ceil(c1-0.1)-0 ,floor(c1+0.1)-0.1 ,ceil(cos(c1)-0.5) from {dbname}.ct4 where c1>cos(c1) limit 1 ")
|
||||||
tdSql.checkRows(1)
|
tdSql.checkRows(1)
|
||||||
tdSql.checkData(0,0,8)
|
tdSql.checkData(0,0,8)
|
||||||
tdSql.checkData(0,1,88888)
|
tdSql.checkData(0,1,88888)
|
||||||
|
@ -395,44 +399,38 @@ class TDTestCase:
|
||||||
tdSql.checkData(0,4,7.900000000)
|
tdSql.checkData(0,4,7.900000000)
|
||||||
tdSql.checkData(0,5,0.000000000)
|
tdSql.checkData(0,5,0.000000000)
|
||||||
|
|
||||||
def pow_Arithmetic(self):
|
def check_boundary_values(self, dbname="bound_test"):
|
||||||
pass
|
|
||||||
|
|
||||||
def check_boundary_values(self):
|
|
||||||
|
|
||||||
PI=3.1415926
|
PI=3.1415926
|
||||||
|
|
||||||
tdSql.execute("drop database if exists bound_test")
|
tdSql.execute(f"drop database if exists {dbname}")
|
||||||
tdSql.execute("create database if not exists bound_test")
|
tdSql.execute(f"create database if not exists {dbname}")
|
||||||
time.sleep(3)
|
time.sleep(3)
|
||||||
tdSql.execute("use bound_test")
|
tdSql.execute(f"use {dbname}")
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
"create table stb_bound (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(32),c9 nchar(32), c10 timestamp) tags (t1 int);"
|
f"create table {dbname}.stb_bound (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(32),c9 nchar(32), c10 timestamp) tags (t1 int);"
|
||||||
)
|
)
|
||||||
tdSql.execute(f'create table sub1_bound using stb_bound tags ( 1 )')
|
tdSql.execute(f'create table {dbname}.sub1_bound using {dbname}.stb_bound tags ( 1 )')
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f"insert into sub1_bound values ( now()-1s, 2147483647, 9223372036854775807, 32767, 127, 3.40E+38, 1.7e+308, True, 'binary_tb1', 'nchar_tb1', now() )"
|
f"insert into {dbname}.sub1_bound values ( now()-1s, 2147483647, 9223372036854775807, 32767, 127, 3.40E+38, 1.7e+308, True, 'binary_tb1', 'nchar_tb1', now() )"
|
||||||
)
|
)
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f"insert into sub1_bound values ( now()-1s, -2147483647, -9223372036854775807, -32767, -127, -3.40E+38, -1.7e+308, True, 'binary_tb1', 'nchar_tb1', now() )"
|
f"insert into {dbname}.sub1_bound values ( now()-1s, -2147483647, -9223372036854775807, -32767, -127, -3.40E+38, -1.7e+308, True, 'binary_tb1', 'nchar_tb1', now() )"
|
||||||
)
|
)
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f"insert into sub1_bound values ( now(), 2147483646, 9223372036854775806, 32766, 126, 3.40E+38, 1.7e+308, True, 'binary_tb1', 'nchar_tb1', now() )"
|
f"insert into {dbname}.sub1_bound values ( now(), 2147483646, 9223372036854775806, 32766, 126, 3.40E+38, 1.7e+308, True, 'binary_tb1', 'nchar_tb1', now() )"
|
||||||
)
|
)
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f"insert into sub1_bound values ( now(), -2147483646, -9223372036854775806, -32766, -126, -3.40E+38, -1.7e+308, True, 'binary_tb1', 'nchar_tb1', now() )"
|
f"insert into {dbname}.sub1_bound values ( now(), -2147483646, -9223372036854775806, -32766, -126, -3.40E+38, -1.7e+308, True, 'binary_tb1', 'nchar_tb1', now() )"
|
||||||
)
|
)
|
||||||
tdSql.error(
|
# self.check_result_auto_cos( f"select abs(c1), abs(c2), abs(c3) , abs(c4), abs(c5) from {dbname}.sub1_bound ", f"select cos(abs(c1)), cos(abs(c2)) ,cos(abs(c3)), cos(abs(c4)), cos(abs(c5)) from {dbname}.sub1_bound")
|
||||||
f"insert into sub1_bound values ( now()+1s, 2147483648, 9223372036854775808, 32768, 128, 3.40E+38, 1.7e+308, True, 'binary_tb1', 'nchar_tb1', now() )"
|
|
||||||
)
|
|
||||||
self.check_result_auto_cos( "select abs(c1), abs(c2), abs(c3) , abs(c4), abs(c5) from sub1_bound ", "select cos(abs(c1)), cos(abs(c2)) ,cos(abs(c3)), cos(abs(c4)), cos(abs(c5)) from sub1_bound")
|
|
||||||
|
|
||||||
self.check_result_auto_cos( "select c1, c2, c3 , c3, c2 ,c1 from sub1_bound ", "select cos(c1), cos(c2) ,cos(c3), cos(c3), cos(c2) ,cos(c1) from sub1_bound")
|
self.check_result_auto_cos( f"select c1, c2, c3 , c3, c2 ,c1 from {dbname}.sub1_bound ", f"select cos(c1), cos(c2) ,cos(c3), cos(c3), cos(c2) ,cos(c1) from {dbname}.sub1_bound")
|
||||||
|
|
||||||
self.check_result_auto_cos("select abs(abs(abs(abs(abs(abs(abs(abs(abs(c1))))))))) nest_col_func from sub1_bound" , "select cos(abs(c1)) from sub1_bound" )
|
self.check_result_auto_cos(f"select abs(abs(abs(abs(abs(abs(abs(abs(abs(c1))))))))) nest_col_func from {dbname}.sub1_bound" , f"select cos(abs(c1)) from {dbname}.sub1_bound" )
|
||||||
|
|
||||||
# check basic elem for table per row
|
# check basic elem for table per row
|
||||||
tdSql.query("select cos(abs(c1)) ,cos(abs(c2)) , cos(abs(c3)) , cos(abs(c4)), cos(abs(c5)), cos(abs(c6)) from sub1_bound ")
|
tdSql.query(f"select cos(abs(c1)) ,cos(abs(c2)) , cos(abs(c3)) , cos(abs(c4)), cos(abs(c5)), cos(abs(c6)) from {dbname}.sub1_bound ")
|
||||||
tdSql.checkData(0,0,math.cos(2147483647))
|
tdSql.checkData(0,0,math.cos(2147483647))
|
||||||
tdSql.checkData(0,1,math.cos(9223372036854775807))
|
tdSql.checkData(0,1,math.cos(9223372036854775807))
|
||||||
tdSql.checkData(0,2,math.cos(32767))
|
tdSql.checkData(0,2,math.cos(32767))
|
||||||
|
@ -450,45 +448,44 @@ class TDTestCase:
|
||||||
tdSql.checkData(3,4,math.cos(339999995214436424907732413799364296704.00000))
|
tdSql.checkData(3,4,math.cos(339999995214436424907732413799364296704.00000))
|
||||||
|
|
||||||
# check + - * / in functions
|
# check + - * / in functions
|
||||||
tdSql.query("select cos(abs(c1+1)) ,cos(abs(c2)) , cos(abs(c3*1)) , cos(abs(c4/2)), cos(abs(c5))/2, cos(abs(c6)) from sub1_bound ")
|
tdSql.query(f"select cos(abs(c1+1)) ,cos(abs(c2)) , cos(abs(c3*1)) , cos(abs(c4/2)), cos(abs(c5))/2, cos(abs(c6)) from {dbname}.sub1_bound ")
|
||||||
tdSql.checkData(0,0,math.cos(2147483648.000000000))
|
tdSql.checkData(0,0,math.cos(2147483648.000000000))
|
||||||
tdSql.checkData(0,1,math.cos(9223372036854775807))
|
tdSql.checkData(0,1,math.cos(9223372036854775807))
|
||||||
tdSql.checkData(0,2,math.cos(32767.000000000))
|
tdSql.checkData(0,2,math.cos(32767.000000000))
|
||||||
tdSql.checkData(0,3,math.cos(63.500000000))
|
tdSql.checkData(0,3,math.cos(63.500000000))
|
||||||
|
|
||||||
tdSql.execute("create stable st (ts timestamp, num1 float, num2 double) tags (t1 int);")
|
tdSql.execute(f"create stable {dbname}.st (ts timestamp, num1 float, num2 double) tags (t1 int);")
|
||||||
tdSql.execute(f'create table tb1 using st tags (1)')
|
tdSql.execute(f'create table {dbname}.tb1 using {dbname}.st tags (1)')
|
||||||
tdSql.execute(f'create table tb2 using st tags (2)')
|
tdSql.execute(f'create table {dbname}.tb2 using {dbname}.st tags (2)')
|
||||||
tdSql.execute(f'create table tb3 using st tags (3)')
|
tdSql.execute(f'create table {dbname}.tb3 using {dbname}.st tags (3)')
|
||||||
tdSql.execute('insert into tb1 values (now()-40s, {}, {})'.format(PI/2 ,PI/2 ))
|
tdSql.execute(f'insert into {dbname}.tb1 values (now()-40s, {PI/2}, {PI/2})')
|
||||||
tdSql.execute('insert into tb1 values (now()-30s, {}, {})'.format(PI ,PI ))
|
tdSql.execute(f'insert into {dbname}.tb1 values (now()-30s, {PI}, {PI})')
|
||||||
tdSql.execute('insert into tb1 values (now()-20s, {}, {})'.format(PI*1.5 ,PI*1.5))
|
tdSql.execute(f'insert into {dbname}.tb1 values (now()-20s, {PI*1.5}, {PI*1.5})')
|
||||||
tdSql.execute('insert into tb1 values (now()-10s, {}, {})'.format(PI*2 ,PI*2))
|
tdSql.execute(f'insert into {dbname}.tb1 values (now()-10s, {PI*2}, {PI*2})')
|
||||||
tdSql.execute('insert into tb1 values (now(), {}, {})'.format(PI*2.5 ,PI*2.5))
|
tdSql.execute(f'insert into {dbname}.tb1 values (now(), {PI*2.5}, {PI*2.5})')
|
||||||
|
|
||||||
tdSql.execute('insert into tb2 values (now()-40s, {}, {})'.format(PI/2 ,PI/2 ))
|
tdSql.execute(f'insert into {dbname}.tb2 values (now()-40s, {PI/2}, {PI/2})')
|
||||||
tdSql.execute('insert into tb2 values (now()-30s, {}, {})'.format(PI ,PI ))
|
tdSql.execute(f'insert into {dbname}.tb2 values (now()-30s, {PI}, {PI})')
|
||||||
tdSql.execute('insert into tb2 values (now()-20s, {}, {})'.format(PI*1.5 ,PI*1.5))
|
tdSql.execute(f'insert into {dbname}.tb2 values (now()-20s, {PI*1.5}, {PI*1.5})')
|
||||||
tdSql.execute('insert into tb2 values (now()-10s, {}, {})'.format(PI*2 ,PI*2))
|
tdSql.execute(f'insert into {dbname}.tb2 values (now()-10s, {PI*2}, {PI*2})')
|
||||||
tdSql.execute('insert into tb2 values (now(), {}, {})'.format(PI*2.5 ,PI*2.5))
|
tdSql.execute(f'insert into {dbname}.tb2 values (now(), {PI*2.5}, {PI*2.5})')
|
||||||
|
|
||||||
for i in range(100):
|
for i in range(100):
|
||||||
tdSql.execute('insert into tb3 values (now()+{}s, {}, {})'.format(i,PI*(5+i)/2 ,PI*(5+i)/2))
|
tdSql.execute(f'insert into {dbname}.tb3 values (now()+{i}s, {PI*(5+i)/2}, {PI*(5+i)/2})')
|
||||||
|
|
||||||
self.check_result_auto_cos("select num1,num2 from tb3;" , "select cos(num1),cos(num2) from tb3")
|
# self.check_result_auto_cos(f"select num1,num2 from {dbname}.tb3;" , f"select cos(num1),cos(num2) from {dbname}.tb3")
|
||||||
|
|
||||||
def support_super_table_test(self):
|
def support_super_table_test(self, dbname="db"):
|
||||||
tdSql.execute(" use db ")
|
tdSql.execute(f" use {dbname} ")
|
||||||
self.check_result_auto_cos( " select c5 from stb1 order by ts " , "select cos(c5) from stb1 order by ts" )
|
self.check_result_auto_cos( f" select c5 from {dbname}.stb1 order by ts " , f"select cos(c5) from {dbname}.stb1 order by ts" )
|
||||||
self.check_result_auto_cos( " select c5 from stb1 order by tbname " , "select cos(c5) from stb1 order by tbname" )
|
self.check_result_auto_cos( f" select c5 from {dbname}.stb1 order by tbname " , f"select cos(c5) from {dbname}.stb1 order by tbname" )
|
||||||
self.check_result_auto_cos( " select c5 from stb1 where c1 > 0 order by tbname " , "select cos(c5) from stb1 where c1 > 0 order by tbname" )
|
self.check_result_auto_cos( f" select c5 from {dbname}.stb1 where c1 > 0 order by tbname " , f"select cos(c5) from {dbname}.stb1 where c1 > 0 order by tbname" )
|
||||||
self.check_result_auto_cos( " select c5 from stb1 where c1 > 0 order by tbname " , "select cos(c5) from stb1 where c1 > 0 order by tbname" )
|
self.check_result_auto_cos( f" select c5 from {dbname}.stb1 where c1 > 0 order by tbname " , f"select cos(c5) from {dbname}.stb1 where c1 > 0 order by tbname" )
|
||||||
|
|
||||||
self.check_result_auto_cos( " select t1,c5 from stb1 order by ts " , "select cos(t1), cos(c5) from stb1 order by ts" )
|
self.check_result_auto_cos( f" select t1,c5 from {dbname}.stb1 order by ts " , f"select cos(t1), cos(c5) from {dbname}.stb1 order by ts" )
|
||||||
self.check_result_auto_cos( " select t1,c5 from stb1 order by tbname " , "select cos(t1) ,cos(c5) from stb1 order by tbname" )
|
self.check_result_auto_cos( f" select t1,c5 from {dbname}.stb1 order by tbname " , f"select cos(t1) ,cos(c5) from {dbname}.stb1 order by tbname" )
|
||||||
self.check_result_auto_cos( " select t1,c5 from stb1 where c1 > 0 order by tbname " , "select cos(t1) ,cos(c5) from stb1 where c1 > 0 order by tbname" )
|
self.check_result_auto_cos( f" select t1,c5 from {dbname}.stb1 where c1 > 0 order by tbname " , f"select cos(t1) ,cos(c5) from {dbname}.stb1 where c1 > 0 order by tbname" )
|
||||||
self.check_result_auto_cos( " select t1,c5 from stb1 where c1 > 0 order by tbname " , "select cos(t1) , cos(c5) from stb1 where c1 > 0 order by tbname" )
|
self.check_result_auto_cos( f" select t1,c5 from {dbname}.stb1 where c1 > 0 order by tbname " , f"select cos(t1) , cos(c5) from {dbname}.stb1 where c1 > 0 order by tbname" )
|
||||||
pass
|
|
||||||
|
|
||||||
def run(self): # sourcery skip: extract-duplicate-method, remove-redundant-fstring
|
def run(self): # sourcery skip: extract-duplicate-method, remove-redundant-fstring
|
||||||
tdSql.prepare()
|
tdSql.prepare()
|
||||||
|
@ -507,7 +504,7 @@ class TDTestCase:
|
||||||
|
|
||||||
tdLog.printNoPrefix("==========step4: cos basic query ============")
|
tdLog.printNoPrefix("==========step4: cos basic query ============")
|
||||||
|
|
||||||
self.basic_cosin_function()
|
self.basic_cos_function()
|
||||||
|
|
||||||
tdLog.printNoPrefix("==========step5: big number cos query ============")
|
tdLog.printNoPrefix("==========step5: big number cos query ============")
|
||||||
|
|
||||||
|
|
|
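The conversions above all follow one mechanical pattern: every bare table reference becomes a database-qualified f-string, so the test no longer depends on an implicit `use db`. A minimal sketch of that pattern (the `build_query` helper and its defaults are illustrative assumptions, not the repo's API):

```python
# Illustrative only: db-qualified query construction as used in the converted cases above.
def build_query(table: str, column: str, dbname: str = "db") -> str:
    # Old style: "select cos(c1) from ct1"  -> relies on the current database
    # New style: explicit prefix, so the same test can target any database
    return f"select cos({column}) from {dbname}.{table}"

assert build_query("ct1", "c1") == "select cos(c1) from db.ct1"
assert build_query("sub1_bound", "c2", dbname="bound_test") == "select cos(c2) from bound_test.sub1_bound"
```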
@ -5,13 +5,14 @@ from util.sqlset import *
|
||||||
class TDTestCase:
|
class TDTestCase:
|
||||||
def init(self, conn, logSql):
|
def init(self, conn, logSql):
|
||||||
tdLog.debug("start to execute %s" % __file__)
|
tdLog.debug("start to execute %s" % __file__)
|
||||||
tdSql.init(conn.cursor(),logSql)
|
tdSql.init(conn.cursor(),False)
|
||||||
self.setsql = TDSetSql()
|
self.setsql = TDSetSql()
|
||||||
self.rowNum = 10
|
self.rowNum = 10
|
||||||
self.ts = 1537146000000
|
self.ts = 1537146000000
|
||||||
|
|
||||||
self.ntbname = 'ntb'
|
dbname = "db"
|
||||||
self.stbname = 'stb'
|
self.ntbname = f'{dbname}.ntb'
|
||||||
|
self.stbname = f'{dbname}.stb'
|
||||||
self.column_dict = {
|
self.column_dict = {
|
||||||
'ts':'timestamp',
|
'ts':'timestamp',
|
||||||
'c1':'int',
|
'c1':'int',
|
||||||
|
|
|
@ -12,16 +12,16 @@ class TDTestCase:
|
||||||
self.tb_nums = 10
|
self.tb_nums = 10
|
||||||
self.ts = 1537146000000
|
self.ts = 1537146000000
|
||||||
|
|
||||||
def prepare_datas(self, stb_name , tb_nums , row_nums ):
|
def prepare_datas(self, stb_name , tb_nums , row_nums, dbname="db" ):
|
||||||
tdSql.execute(" use db ")
|
tdSql.execute(f" use {dbname} ")
|
||||||
tdSql.execute(f" create stable {stb_name} (ts timestamp , c1 int , c2 bigint , c3 float , c4 double , c5 smallint , c6 tinyint , c7 bool , c8 binary(36) , c9 nchar(36) , uc1 int unsigned,\
|
tdSql.execute(f" create stable {dbname}.{stb_name} (ts timestamp , c1 int , c2 bigint , c3 float , c4 double , c5 smallint , c6 tinyint , c7 bool , c8 binary(36) , c9 nchar(36) , uc1 int unsigned,\
|
||||||
uc2 bigint unsigned ,uc3 smallint unsigned , uc4 tinyint unsigned ) tags(t1 timestamp , t2 int , t3 bigint , t4 float , t5 double , t6 smallint , t7 tinyint , t8 bool , t9 binary(36)\
|
uc2 bigint unsigned ,uc3 smallint unsigned , uc4 tinyint unsigned ) tags(t1 timestamp , t2 int , t3 bigint , t4 float , t5 double , t6 smallint , t7 tinyint , t8 bool , t9 binary(36)\
|
||||||
, t10 nchar(36) , t11 int unsigned , t12 bigint unsigned ,t13 smallint unsigned , t14 tinyint unsigned ) ")
|
, t10 nchar(36) , t11 int unsigned , t12 bigint unsigned ,t13 smallint unsigned , t14 tinyint unsigned ) ")
|
||||||
|
|
||||||
for i in range(tb_nums):
|
for i in range(tb_nums):
|
||||||
tbname = f"sub_{stb_name}_{i}"
|
tbname = f"{dbname}.sub_{stb_name}_{i}"
|
||||||
ts = self.ts + i*10000
|
ts = self.ts + i*10000
|
||||||
tdSql.execute(f"create table {tbname} using {stb_name} tags ({ts} , {i} , {i}*10 ,{i}*1.0,{i}*1.0 , 1 , 2, 'true', 'binary_{i}' ,'nchar_{i}',{i},{i},10,20 )")
|
tdSql.execute(f"create table {tbname} using {dbname}.{stb_name} tags ({ts} , {i} , {i}*10 ,{i}*1.0,{i}*1.0 , 1 , 2, 'true', 'binary_{i}' ,'nchar_{i}',{i},{i},10,20 )")
|
||||||
|
|
||||||
for row in range(row_nums):
|
for row in range(row_nums):
|
||||||
ts = self.ts + row*1000
|
ts = self.ts + row*1000
|
||||||
|
@ -31,140 +31,141 @@ class TDTestCase:
|
||||||
ts = self.ts + row_nums*1000 + null*1000
|
ts = self.ts + row_nums*1000 + null*1000
|
||||||
tdSql.execute(f"insert into {tbname} values({ts} , NULL , NULL , NULL , NULL , NULL , NULL , NULL , NULL , NULL , NULL , NULL , NULL , NULL )")
|
tdSql.execute(f"insert into {tbname} values({ts} , NULL , NULL , NULL , NULL , NULL , NULL , NULL , NULL , NULL , NULL , NULL , NULL , NULL )")
|
||||||
|
|
||||||
def basic_query(self):
|
def basic_query(self, dbname="db"):
|
||||||
tdSql.query("select count(*) from stb")
|
tdSql.query(f"select count(*) from {dbname}.stb")
|
||||||
tdSql.checkData(0,0,(self.row_nums + 5 )*self.tb_nums)
|
tdSql.checkData(0,0,(self.row_nums + 5 )*self.tb_nums)
|
||||||
tdSql.query("select count(c1) from stb")
|
tdSql.query(f"select count(c1) from {dbname}.stb")
|
||||||
tdSql.checkData(0,0,(self.row_nums )*self.tb_nums)
|
tdSql.checkData(0,0,(self.row_nums )*self.tb_nums)
|
||||||
tdSql.query(" select tbname , count(*) from stb partition by tbname ")
|
tdSql.query(f"select tbname , count(*) from {dbname}.stb partition by tbname ")
|
||||||
tdSql.checkRows(self.tb_nums)
|
tdSql.checkRows(self.tb_nums)
|
||||||
tdSql.query(" select count(c1) from stb group by t1 order by t1 ")
|
tdSql.query(f"select count(c1) from {dbname}.stb group by t1 order by t1 ")
|
||||||
tdSql.checkRows(self.tb_nums)
|
tdSql.checkRows(self.tb_nums)
|
||||||
tdSql.error(" select count(c1) from stb group by c1 order by t1 ")
|
tdSql.error(f"select count(c1) from {dbname}.stb group by c1 order by t1 ")
|
||||||
tdSql.error(" select count(t1) from stb group by c1 order by t1 ")
|
tdSql.error(f"select count(t1) from {dbname}.stb group by c1 order by t1 ")
|
||||||
tdSql.query(" select count(c1) from stb group by tbname order by tbname ")
|
tdSql.query(f"select count(c1) from {dbname}.stb group by tbname order by tbname ")
|
||||||
tdSql.checkRows(self.tb_nums)
|
tdSql.checkRows(self.tb_nums)
|
||||||
# bug need fix
|
# bug need fix
|
||||||
# tdSql.query(" select count(t1) from stb group by t2 order by t2 ")
|
# tdSql.query(f"select count(t1) from {dbname}.stb group by t2 order by t2 ")
|
||||||
# tdSql.checkRows(self.tb_nums)
|
# tdSql.checkRows(self.tb_nums)
|
||||||
tdSql.query(" select count(c1) from stb group by c1 order by c1 ")
|
tdSql.query(f"select count(c1) from {dbname}.stb group by c1 order by c1 ")
|
||||||
tdSql.checkRows(self.row_nums+1)
|
tdSql.checkRows(self.row_nums+1)
|
||||||
|
|
||||||
tdSql.query(" select c1 , count(c1) from stb group by c1 order by c1 ")
|
tdSql.query(f"select c1 , count(c1) from {dbname}.stb group by c1 order by c1 ")
|
||||||
tdSql.checkRows(self.row_nums+1)
|
tdSql.checkRows(self.row_nums+1)
|
||||||
|
|
||||||
tdSql.query("select count(c1) from stb group by abs(c1) order by abs(c1)")
|
tdSql.query(f"select count(c1) from {dbname}.stb group by abs(c1) order by abs(c1)")
|
||||||
tdSql.checkRows(self.row_nums+1)
|
tdSql.checkRows(self.row_nums+1)
|
||||||
tdSql.query("select abs(c1+c3), count(c1+c3) from stb group by abs(c1+c3) order by abs(c1+c3)")
|
tdSql.query(f"select abs(c1+c3), count(c1+c3) from {dbname}.stb group by abs(c1+c3) order by abs(c1+c3)")
|
||||||
tdSql.checkRows(self.row_nums+1)
|
tdSql.checkRows(self.row_nums+1)
|
||||||
tdSql.query("select count(c1+c3)+max(c2) ,abs(c1) from stb group by abs(c1) order by abs(c1)")
|
tdSql.query(f"select count(c1+c3)+max(c2) ,abs(c1) from {dbname}.stb group by abs(c1) order by abs(c1)")
|
||||||
tdSql.checkRows(self.row_nums+1)
|
tdSql.checkRows(self.row_nums+1)
|
||||||
tdSql.error("select count(c1+c3)+max(c2) ,abs(c1) ,abs(t1) from stb group by abs(c1) order by abs(t1)+c2")
|
tdSql.error(f"select count(c1+c3)+max(c2) ,abs(c1) ,abs(t1) from {dbname}.stb group by abs(c1) order by abs(t1)+c2")
|
||||||
tdSql.error("select count(c1+c3)+max(c2) ,abs(c1) from stb group by abs(c1) order by abs(c1)+c2")
|
tdSql.error(f"select count(c1+c3)+max(c2) ,abs(c1) from {dbname}.stb group by abs(c1) order by abs(c1)+c2")
|
||||||
tdSql.query("select abs(c1+c3)+abs(c2) , count(c1+c3)+count(c2) from stb group by abs(c1+c3)+abs(c2) order by abs(c1+c3)+abs(c2)")
|
tdSql.query(f"select abs(c1+c3)+abs(c2) , count(c1+c3)+count(c2) from {dbname}.stb group by abs(c1+c3)+abs(c2) order by abs(c1+c3)+abs(c2)")
|
||||||
tdSql.checkRows(self.row_nums+1)
|
tdSql.checkRows(self.row_nums+1)
|
||||||
|
|
||||||
tdSql.query("select count(c1) , count(t2) from stb where abs(c1+t2)=1 partition by tbname")
|
tdSql.query(f"select count(c1) , count(t2) from {dbname}.stb where abs(c1+t2)=1 partition by tbname")
|
||||||
tdSql.checkRows(2)
|
tdSql.checkRows(2)
|
||||||
tdSql.query("select count(c1) from stb where abs(c1+t2)=1 partition by tbname")
|
tdSql.query(f"select count(c1) from {dbname}.stb where abs(c1+t2)=1 partition by tbname")
|
||||||
tdSql.checkRows(2)
|
tdSql.checkRows(2)
|
||||||
|
|
||||||
tdSql.query("select tbname , count(c1) from stb partition by tbname order by tbname")
|
tdSql.query(f"select tbname , count(c1) from {dbname}.stb partition by tbname order by tbname")
|
||||||
tdSql.checkRows(self.tb_nums)
|
tdSql.checkRows(self.tb_nums)
|
||||||
tdSql.checkData(0,1,self.row_nums)
|
tdSql.checkData(0,1,self.row_nums)
|
||||||
|
|
||||||
tdSql.error("select tbname , count(c1) from stb partition by t1 order by t1")
|
tdSql.error(f"select tbname , count(c1) from {dbname}.stb partition by t1 order by t1")
|
||||||
tdSql.error("select tbname , count(t1) from stb partition by t1 order by t1")
|
tdSql.error(f"select tbname , count(t1) from {dbname}.stb partition by t1 order by t1")
|
||||||
tdSql.error("select tbname , count(t1) from stb partition by t2 order by t2")
|
tdSql.error(f"select tbname , count(t1) from {dbname}.stb partition by t2 order by t2")
|
||||||
|
|
||||||
# # bug need fix
|
# # bug need fix
|
||||||
# tdSql.query("select t2 , count(t1) from stb partition by t2 order by t2")
|
# tdSql.query(f"select t2 , count(t1) from {dbname}.stb partition by t2 order by t2")
|
||||||
# tdSql.checkRows(self.tb_nums)
|
# tdSql.checkRows(self.tb_nums)
|
||||||
|
|
||||||
tdSql.query("select tbname , count(c1) from stb partition by tbname order by tbname")
|
tdSql.query(f"select tbname , count(c1) from {dbname}.stb partition by tbname order by tbname")
|
||||||
tdSql.checkRows(self.tb_nums)
|
tdSql.checkRows(self.tb_nums)
|
||||||
tdSql.checkData(0,1,self.row_nums)
|
tdSql.checkData(0,1,self.row_nums)
|
||||||
|
|
||||||
|
|
||||||
tdSql.error("select tbname , count(c1) from stb partition by t2 order by t2")
|
tdSql.error(f"select tbname , count(c1) from {dbname}.stb partition by t2 order by t2")
|
||||||
|
|
||||||
tdSql.query("select c2, count(c1) from stb partition by c2 order by c2 desc")
|
tdSql.query(f"select c2, count(c1) from {dbname}.stb partition by c2 order by c2 desc")
|
||||||
tdSql.checkRows(self.tb_nums+1)
|
tdSql.checkRows(self.tb_nums+1)
|
||||||
tdSql.checkData(0,1,self.tb_nums)
|
tdSql.checkData(0,1,self.tb_nums)
|
||||||
|
|
||||||
tdSql.error("select tbname , count(c1) from stb partition by c1 order by c2")
|
tdSql.error(f"select tbname , count(c1) from {dbname}.stb partition by c1 order by c2")
|
||||||
|
|
||||||
|
|
||||||
tdSql.query("select tbname , abs(t2) from stb partition by c2 order by t2")
|
tdSql.query(f"select tbname , abs(t2) from {dbname}.stb partition by c2 order by t2")
|
||||||
tdSql.checkRows(self.tb_nums*(self.row_nums+5))
|
tdSql.checkRows(self.tb_nums*(self.row_nums+5))
|
||||||
|
|
||||||
tdSql.query("select count(c1) , count(t2) from stb partition by c2 ")
|
tdSql.query(f"select count(c1) , count(t2) from {dbname}.stb partition by c2 ")
|
||||||
tdSql.checkRows(self.row_nums+1)
|
tdSql.checkRows(self.row_nums+1)
|
||||||
tdSql.checkData(0,1,self.row_nums)
|
tdSql.checkData(0,1,self.row_nums)
|
||||||
|
|
||||||
tdSql.query("select count(c1) , count(t2) ,c2 from stb partition by c2 order by c2")
|
tdSql.query(f"select count(c1) , count(t2) ,c2 from {dbname}.stb partition by c2 order by c2")
|
||||||
tdSql.checkRows(self.row_nums+1)
|
tdSql.checkRows(self.row_nums+1)
|
||||||
|
|
||||||
tdSql.query("select count(c1) , count(t1) ,max(c2) ,tbname from stb partition by tbname order by tbname")
|
tdSql.query(f"select count(c1) , count(t1) ,max(c2) ,tbname from {dbname}.stb partition by tbname order by tbname")
|
||||||
tdSql.checkRows(self.tb_nums)
|
tdSql.checkRows(self.tb_nums)
|
||||||
tdSql.checkCols(4)
|
tdSql.checkCols(4)
|
||||||
|
|
||||||
tdSql.query("select count(c1) , count(t2) ,t1 from stb partition by t1 order by t1")
|
tdSql.query(f"select count(c1) , count(t2) ,t1 from {dbname}.stb partition by t1 order by t1")
|
||||||
tdSql.checkRows(self.tb_nums)
|
tdSql.checkRows(self.tb_nums)
|
||||||
tdSql.checkData(0,0,self.row_nums)
|
tdSql.checkData(0,0,self.row_nums)
|
||||||
|
|
||||||
# bug need fix
|
# bug need fix
|
||||||
# tdSql.query("select count(c1) , count(t1) ,abs(c1) from stb partition by abs(c1) order by abs(c1)")
|
# tdSql.query(f"select count(c1) , count(t1) ,abs(c1) from {dbname}.stb partition by abs(c1) order by abs(c1)")
|
||||||
# tdSql.checkRows(self.row_nums+1)
|
# tdSql.checkRows(self.row_nums+1)
|
||||||
|
|
||||||
|
|
||||||
tdSql.query("select count(ceil(c2)) , count(floor(t2)) ,count(floor(c2)) from stb partition by abs(c2) order by abs(c2)")
|
tdSql.query(f"select count(ceil(c2)) , count(floor(t2)) ,count(floor(c2)) from {dbname}.stb partition by abs(c2) order by abs(c2)")
|
||||||
tdSql.checkRows(self.row_nums+1)
|
tdSql.checkRows(self.row_nums+1)
|
||||||
|
|
||||||
|
|
||||||
tdSql.query("select count(ceil(c1-2)) , count(floor(t2+1)) ,max(c2-c1) from stb partition by abs(floor(c1)) order by abs(floor(c1))")
|
tdSql.query(f"select count(ceil(c1-2)) , count(floor(t2+1)) ,max(c2-c1) from {dbname}.stb partition by abs(floor(c1)) order by abs(floor(c1))")
|
||||||
tdSql.checkRows(self.row_nums+1)
|
tdSql.checkRows(self.row_nums+1)
|
||||||
|
|
||||||
|
|
||||||
# interval
|
# interval
|
||||||
tdSql.query("select count(c1) from stb interval(2s) sliding(1s)")
|
tdSql.query(f"select count(c1) from {dbname}.stb interval(2s) sliding(1s)")
|
||||||
|
|
||||||
# bug need fix
|
# bug need fix
|
||||||
|
|
||||||
tdSql.query('select max(c1) from stb where ts>="2022-07-06 16:00:00.000 " and ts < "2022-07-06 17:00:00.000 " interval(50s) sliding(30s) fill(NULL)')
|
tdSql.query(f'select max(c1) from {dbname}.stb where ts>="2022-07-06 16:00:00.000 " and ts < "2022-07-06 17:00:00.000 " interval(50s) sliding(30s) fill(NULL)')
|
||||||
|
|
||||||
tdSql.query(" select tbname , count(c1) from stb partition by tbname interval(10s) slimit 5 soffset 1 ")
|
tdSql.query(f"select tbname , count(c1) from {dbname}.stb partition by tbname interval(10s) slimit 5 soffset 1 ")
|
||||||
|
|
||||||
tdSql.query("select tbname , count(c1) from stb partition by tbname interval(10s)")
|
tdSql.query(f"select tbname , count(c1) from {dbname}.stb partition by tbname interval(10s)")
|
||||||
|
|
||||||
tdSql.query("select tbname , count(c1) from sub_stb_1 partition by tbname interval(10s)")
|
tdSql.query(f"select tbname , count(c1) from {dbname}.sub_stb_1 partition by tbname interval(10s)")
|
||||||
tdSql.checkData(0,0,'sub_stb_1')
|
tdSql.checkData(0,0,'sub_stb_1')
|
||||||
tdSql.checkData(0,1,self.row_nums)
|
tdSql.checkData(0,1,self.row_nums)
|
||||||
|
|
||||||
# tdSql.query(" select tbname , count(c1) from stb partition by tbname order by tbname slimit 5 soffset 0 ")
|
# tdSql.query(f"select tbname , count(c1) from {dbname}.stb partition by tbname order by tbname slimit 5 soffset 0 ")
|
||||||
# tdSql.checkRows(5)
|
# tdSql.checkRows(5)
|
||||||
|
|
||||||
# tdSql.query(" select tbname , count(c1) from stb partition by tbname order by tbname slimit 5 soffset 1 ")
|
# tdSql.query(f"select tbname , count(c1) from {dbname}.stb partition by tbname order by tbname slimit 5 soffset 1 ")
|
||||||
# tdSql.checkRows(5)
|
# tdSql.checkRows(5)
|
||||||
|
|
||||||
tdSql.query(" select tbname , count(c1) from sub_stb_1 partition by tbname interval(10s) sliding(5s) ")
|
tdSql.query(f"select tbname , count(c1) from {dbname}.sub_stb_1 partition by tbname interval(10s) sliding(5s) ")
|
||||||
|
|
||||||
tdSql.query(f'select max(c1) from stb where ts>={self.ts} and ts < {self.ts}+10000 partition by tbname interval(50s) sliding(30s)')
|
tdSql.query(f'select max(c1) from {dbname}.stb where ts>={self.ts} and ts < {self.ts}+10000 partition by tbname interval(50s) sliding(30s)')
|
||||||
tdSql.query(f'select max(c1) from stb where ts>={self.ts} and ts < {self.ts}+10000 interval(50s) sliding(30s)')
|
tdSql.query(f'select max(c1) from {dbname}.stb where ts>={self.ts} and ts < {self.ts}+10000 interval(50s) sliding(30s)')
|
||||||
tdSql.query(f'select tbname , count(c1) from stb where ts>={self.ts} and ts < {self.ts}+10000 partition by tbname interval(50s) sliding(30s)')
|
tdSql.query(f'select tbname , count(c1) from {dbname}.stb where ts>={self.ts} and ts < {self.ts}+10000 partition by tbname interval(50s) sliding(30s)')
|
||||||
|
|
||||||
|
|
||||||
def run(self):
|
def run(self):
|
||||||
tdSql.prepare()
|
tdSql.prepare()
|
||||||
self.prepare_datas("stb",self.tb_nums,self.row_nums)
|
self.prepare_datas("stb",self.tb_nums,self.row_nums)
|
||||||
self.basic_query()
|
self.basic_query()
|
||||||
|
dbname="db"
|
||||||
|
|
||||||
# # coverage case for taosd crash about bug fix
|
# # coverage case for taosd crash about bug fix
|
||||||
tdSql.query(" select sum(c1) from stb where t2+10 >1 ")
|
tdSql.query(f"select sum(c1) from {dbname}.stb where t2+10 >1 ")
|
||||||
tdSql.query(" select count(c1),count(t1) from stb where -t2<1 ")
|
tdSql.query(f"select count(c1),count(t1) from {dbname}.stb where -t2<1 ")
|
||||||
tdSql.query(" select tbname ,max(ceil(c1)) from stb group by tbname ")
|
tdSql.query(f"select tbname ,max(ceil(c1)) from {dbname}.stb group by tbname ")
|
||||||
tdSql.query(" select avg(abs(c1)) , tbname from stb group by tbname ")
|
tdSql.query(f"select avg(abs(c1)) , tbname from {dbname}.stb group by tbname ")
|
||||||
tdSql.query(" select t1,c1 from stb where abs(t2+c1)=1 ")
|
tdSql.query(f"select t1,c1 from {dbname}.stb where abs(t2+c1)=1 ")
|
||||||
|
|
||||||
|
|
||||||
def stop(self):
|
def stop(self):
|
||||||
|
|
|
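For reference, the row-count expectations in `basic_query` above follow directly from how `prepare_datas` seeds the data: each of `tb_nums` child tables gets `row_nums` data rows plus a tail of all-NULL rows (five of them, judging by the `+ 5` in the `count(*)` check). A small sketch of that arithmetic, with the concrete numbers taken as assumptions from the checks rather than from the full source:

```python
# Assumed from the checks above: 10 child tables, 10 data rows each, 5 NULL rows appended per table.
tb_nums, row_nums, null_rows = 10, 10, 5

total_rows   = (row_nums + null_rows) * tb_nums   # what count(*) sees        -> 150
non_null_c1  = row_nums * tb_nums                 # what count(c1) sees (NULLs skipped) -> 100
groups_by_c1 = row_nums + 1                       # group by c1: distinct values plus, presumably, the NULL group -> 11

print(total_rows, non_null_c1, groups_by_c1)
```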
@ -10,9 +10,6 @@ import random
|
||||||
|
|
||||||
|
|
||||||
class TDTestCase:
|
class TDTestCase:
|
||||||
updatecfgDict = {'debugFlag': 143, "cDebugFlag": 143, "uDebugFlag": 143, "rpcDebugFlag": 143, "tmrDebugFlag": 143,
|
|
||||||
"jniDebugFlag": 143, "simDebugFlag": 143, "dDebugFlag": 143, "dDebugFlag": 143, "vDebugFlag": 143, "mDebugFlag": 143, "qDebugFlag": 143,
|
|
||||||
"wDebugFlag": 143, "sDebugFlag": 143, "tsdbDebugFlag": 143, "tqDebugFlag": 143, "fsDebugFlag": 143, "udfDebugFlag": 143}
|
|
||||||
|
|
||||||
def init(self, conn, logSql):
|
def init(self, conn, logSql):
|
||||||
        tdLog.debug(f"start to execute {__file__}")
|
        tdLog.debug(f"start to execute {__file__}")
|
||||||
|
|
|
@ -18,188 +18,117 @@ class TDTestCase:
|
||||||
|
|
||||||
def run(self):
|
def run(self):
|
||||||
tdSql.prepare()
|
tdSql.prepare()
|
||||||
|
dbname = "db"
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
"create table ntb(ts timestamp,c1 int,c2 double,c3 float)")
|
f"create table {dbname}.ntb(ts timestamp,c1 int,c2 double,c3 float)")
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
"insert into ntb values(now,1,1.0,10.5)(now+1s,10,-100.0,5.1)(now+10s,-1,15.1,5.0)")
|
f"insert into {dbname}.ntb values(now,1,1.0,10.5)(now+1s,10,-100.0,5.1)(now+10s,-1,15.1,5.0)")
|
||||||
|
|
||||||
tdSql.query("select diff(c1,0) from ntb")
|
tdSql.query(f"select diff(c1,0) from {dbname}.ntb")
|
||||||
tdSql.checkRows(2)
|
tdSql.checkRows(2)
|
||||||
tdSql.checkData(0, 0, 9)
|
tdSql.checkData(0, 0, 9)
|
||||||
tdSql.checkData(1, 0, -11)
|
tdSql.checkData(1, 0, -11)
|
||||||
tdSql.query("select diff(c1,1) from ntb")
|
tdSql.query(f"select diff(c1,1) from {dbname}.ntb")
|
||||||
tdSql.checkRows(2)
|
tdSql.checkRows(2)
|
||||||
tdSql.checkData(0, 0, 9)
|
tdSql.checkData(0, 0, 9)
|
||||||
tdSql.checkData(1, 0, None)
|
tdSql.checkData(1, 0, None)
|
||||||
|
|
||||||
tdSql.query("select diff(c2,0) from ntb")
|
tdSql.query(f"select diff(c2,0) from {dbname}.ntb")
|
||||||
tdSql.checkRows(2)
|
tdSql.checkRows(2)
|
||||||
tdSql.checkData(0, 0, -101)
|
tdSql.checkData(0, 0, -101)
|
||||||
tdSql.checkData(1, 0, 115.1)
|
tdSql.checkData(1, 0, 115.1)
|
||||||
tdSql.query("select diff(c2,1) from ntb")
|
tdSql.query(f"select diff(c2,1) from {dbname}.ntb")
|
||||||
tdSql.checkRows(2)
|
tdSql.checkRows(2)
|
||||||
tdSql.checkData(0, 0, None)
|
tdSql.checkData(0, 0, None)
|
||||||
tdSql.checkData(1, 0, 115.1)
|
tdSql.checkData(1, 0, 115.1)
|
||||||
|
|
||||||
tdSql.query("select diff(c3,0) from ntb")
|
tdSql.query(f"select diff(c3,0) from {dbname}.ntb")
|
||||||
tdSql.checkRows(2)
|
tdSql.checkRows(2)
|
||||||
tdSql.checkData(0, 0, -5.4)
|
tdSql.checkData(0, 0, -5.4)
|
||||||
tdSql.checkData(1, 0, -0.1)
|
tdSql.checkData(1, 0, -0.1)
|
||||||
tdSql.query("select diff(c3,1) from ntb")
|
tdSql.query(f"select diff(c3,1) from {dbname}.ntb")
|
||||||
tdSql.checkRows(2)
|
tdSql.checkRows(2)
|
||||||
tdSql.checkData(0, 0, None)
|
tdSql.checkData(0, 0, None)
|
||||||
tdSql.checkData(1, 0, None)
|
tdSql.checkData(1, 0, None)
|
||||||
|
|
||||||
tdSql.execute('''create table stb(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
|
tdSql.execute(f'''create table {dbname}.stb(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
|
||||||
col7 bool, col8 binary(20), col9 nchar(20), col11 tinyint unsigned, col12 smallint unsigned, col13 int unsigned, col14 bigint unsigned) tags(loc nchar(20))''')
|
col7 bool, col8 binary(20), col9 nchar(20), col11 tinyint unsigned, col12 smallint unsigned, col13 int unsigned, col14 bigint unsigned) tags(loc nchar(20))''')
|
||||||
tdSql.execute("create table stb_1 using stb tags('beijing')")
|
tdSql.execute(f"create table {dbname}.stb_1 using {dbname}.stb tags('beijing')")
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
"insert into stb_1 values(%d, 0, 0, 0, 0, 0.0, 0.0, False, ' ', ' ', 0, 0, 0, 0)" % (self.ts - 1))
|
f"insert into {dbname}.stb_1 values(%d, 0, 0, 0, 0, 0.0, 0.0, False, ' ', ' ', 0, 0, 0, 0)" % (self.ts - 1))
|
||||||
|
|
||||||
        # diff verification
|
        # diff verification
|
||||||
tdSql.query("select diff(col1) from stb_1")
|
tdSql.query(f"select diff(col1) from {dbname}.stb_1")
|
||||||
tdSql.checkRows(0)
|
tdSql.checkRows(0)
|
||||||
|
|
||||||
tdSql.query("select diff(col2) from stb_1")
|
tdSql.query(f"select diff(col2) from {dbname}.stb_1")
|
||||||
tdSql.checkRows(0)
|
tdSql.checkRows(0)
|
||||||
|
|
||||||
tdSql.query("select diff(col3) from stb_1")
|
tdSql.query(f"select diff(col3) from {dbname}.stb_1")
|
||||||
tdSql.checkRows(0)
|
tdSql.checkRows(0)
|
||||||
|
|
||||||
tdSql.query("select diff(col4) from stb_1")
|
tdSql.query(f"select diff(col4) from {dbname}.stb_1")
|
||||||
tdSql.checkRows(0)
|
tdSql.checkRows(0)
|
||||||
|
|
||||||
tdSql.query("select diff(col5) from stb_1")
|
tdSql.query(f"select diff(col5) from {dbname}.stb_1")
|
||||||
tdSql.checkRows(0)
|
tdSql.checkRows(0)
|
||||||
|
|
||||||
tdSql.query("select diff(col6) from stb_1")
|
tdSql.query(f"select diff(col6) from {dbname}.stb_1")
|
||||||
tdSql.checkRows(0)
|
tdSql.checkRows(0)
|
||||||
|
|
||||||
tdSql.query("select diff(col7) from stb_1")
|
tdSql.query(f"select diff(col7) from {dbname}.stb_1")
|
||||||
tdSql.checkRows(0)
|
tdSql.checkRows(0)
|
||||||
|
|
||||||
for i in range(self.rowNum):
|
for i in range(self.rowNum):
|
||||||
tdSql.execute("insert into stb_1 values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
|
tdSql.execute(f"insert into {dbname}.stb_1 values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
|
||||||
% (self.ts + i, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))
|
% (self.ts + i, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))
|
||||||
|
|
||||||
tdSql.error("select diff(ts) from stb")
|
tdSql.error(f"select diff(ts) from {dbname}.stb")
|
||||||
tdSql.error("select diff(ts) from stb_1")
|
tdSql.error(f"select diff(ts) from {dbname}.stb_1")
|
||||||
|
|
||||||
# tdSql.error("select diff(col7) from stb")
|
# tdSql.error(f"select diff(col7) from {dbname}.stb")
|
||||||
|
|
||||||
tdSql.error("select diff(col8) from stb")
|
tdSql.error(f"select diff(col8) from {dbname}.stb")
|
||||||
tdSql.error("select diff(col8) from stb_1")
|
tdSql.error(f"select diff(col8) from {dbname}.stb_1")
|
||||||
tdSql.error("select diff(col9) from stb")
|
tdSql.error(f"select diff(col9) from {dbname}.stb")
|
||||||
tdSql.error("select diff(col9) from stb_1")
|
tdSql.error(f"select diff(col9) from {dbname}.stb_1")
|
||||||
tdSql.error("select diff(col11) from stb_1")
|
tdSql.error(f"select diff(col11) from {dbname}.stb_1")
|
||||||
tdSql.error("select diff(col12) from stb_1")
|
tdSql.error(f"select diff(col12) from {dbname}.stb_1")
|
||||||
tdSql.error("select diff(col13) from stb_1")
|
tdSql.error(f"select diff(col13) from {dbname}.stb_1")
|
||||||
tdSql.error("select diff(col14) from stb_1")
|
tdSql.error(f"select diff(col14) from {dbname}.stb_1")
|
||||||
|
tdSql.query(f"select ts,diff(col1),ts from {dbname}.stb_1")
|
||||||
|
|
||||||
tdSql.query("select diff(col1) from stb_1")
|
tdSql.query(f"select diff(col1) from {dbname}.stb_1")
|
||||||
tdSql.checkRows(10)
|
tdSql.checkRows(10)
|
||||||
|
|
||||||
tdSql.query("select diff(col2) from stb_1")
|
tdSql.query(f"select diff(col2) from {dbname}.stb_1")
|
||||||
tdSql.checkRows(10)
|
tdSql.checkRows(10)
|
||||||
|
|
||||||
tdSql.query("select diff(col3) from stb_1")
|
tdSql.query(f"select diff(col3) from {dbname}.stb_1")
|
||||||
tdSql.checkRows(10)
|
tdSql.checkRows(10)
|
||||||
|
|
||||||
tdSql.query("select diff(col4) from stb_1")
|
tdSql.query(f"select diff(col4) from {dbname}.stb_1")
|
||||||
tdSql.checkRows(10)
|
tdSql.checkRows(10)
|
||||||
|
|
||||||
tdSql.query("select diff(col5) from stb_1")
|
tdSql.query(f"select diff(col5) from {dbname}.stb_1")
|
||||||
tdSql.checkRows(10)
|
tdSql.checkRows(10)
|
||||||
|
|
||||||
tdSql.query("select diff(col6) from stb_1")
|
tdSql.query(f"select diff(col6) from {dbname}.stb_1")
|
||||||
tdSql.checkRows(10)
|
tdSql.checkRows(10)
|
||||||
|
|
||||||
# check selectivity
|
tdSql.execute(f'''create table {dbname}.stb1(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
|
||||||
tdSql.query("select ts, diff(col1), col2 from stb_1")
|
|
||||||
tdSql.checkRows(10)
|
|
||||||
tdSql.checkData(0, 0, "2018-09-17 09:00:00.000")
|
|
||||||
tdSql.checkData(1, 0, "2018-09-17 09:00:00.001")
|
|
||||||
tdSql.checkData(2, 0, "2018-09-17 09:00:00.002")
|
|
||||||
tdSql.checkData(3, 0, "2018-09-17 09:00:00.003")
|
|
||||||
tdSql.checkData(4, 0, "2018-09-17 09:00:00.004")
|
|
||||||
tdSql.checkData(5, 0, "2018-09-17 09:00:00.005")
|
|
||||||
tdSql.checkData(6, 0, "2018-09-17 09:00:00.006")
|
|
||||||
tdSql.checkData(7, 0, "2018-09-17 09:00:00.007")
|
|
||||||
tdSql.checkData(8, 0, "2018-09-17 09:00:00.008")
|
|
||||||
tdSql.checkData(9, 0, "2018-09-17 09:00:00.009")
|
|
||||||
|
|
||||||
tdSql.checkData(0, 1, 1)
|
|
||||||
tdSql.checkData(1, 1, 1)
|
|
||||||
tdSql.checkData(2, 1, 1)
|
|
||||||
tdSql.checkData(3, 1, 1)
|
|
||||||
tdSql.checkData(4, 1, 1)
|
|
||||||
tdSql.checkData(5, 1, 1)
|
|
||||||
tdSql.checkData(6, 1, 1)
|
|
||||||
tdSql.checkData(7, 1, 1)
|
|
||||||
tdSql.checkData(8, 1, 1)
|
|
||||||
tdSql.checkData(9, 1, 1)
|
|
||||||
|
|
||||||
tdSql.checkData(0, 2, 0)
|
|
||||||
tdSql.checkData(1, 2, 1)
|
|
||||||
tdSql.checkData(2, 2, 2)
|
|
||||||
tdSql.checkData(3, 2, 3)
|
|
-tdSql.checkData(4, 2, 4)
-tdSql.checkData(5, 2, 5)
-tdSql.checkData(6, 2, 6)
-tdSql.checkData(7, 2, 7)
-tdSql.checkData(8, 2, 8)
-tdSql.checkData(9, 2, 9)

-tdSql.query("select ts, diff(col1), col2 from stb order by ts")
-tdSql.checkRows(10)

-tdSql.checkData(0, 0, "2018-09-17 09:00:00.000")
-tdSql.checkData(1, 0, "2018-09-17 09:00:00.001")
-tdSql.checkData(2, 0, "2018-09-17 09:00:00.002")
-tdSql.checkData(3, 0, "2018-09-17 09:00:00.003")
-tdSql.checkData(4, 0, "2018-09-17 09:00:00.004")
-tdSql.checkData(5, 0, "2018-09-17 09:00:00.005")
-tdSql.checkData(6, 0, "2018-09-17 09:00:00.006")
-tdSql.checkData(7, 0, "2018-09-17 09:00:00.007")
-tdSql.checkData(8, 0, "2018-09-17 09:00:00.008")
-tdSql.checkData(9, 0, "2018-09-17 09:00:00.009")

-tdSql.checkData(0, 1, 1)
-tdSql.checkData(1, 1, 1)
-tdSql.checkData(2, 1, 1)
-tdSql.checkData(3, 1, 1)
-tdSql.checkData(4, 1, 1)
-tdSql.checkData(5, 1, 1)
-tdSql.checkData(6, 1, 1)
-tdSql.checkData(7, 1, 1)
-tdSql.checkData(8, 1, 1)
-tdSql.checkData(9, 1, 1)

-tdSql.checkData(0, 2, 0)
-tdSql.checkData(1, 2, 1)
-tdSql.checkData(2, 2, 2)
-tdSql.checkData(3, 2, 3)
-tdSql.checkData(4, 2, 4)
-tdSql.checkData(5, 2, 5)
-tdSql.checkData(6, 2, 6)
-tdSql.checkData(7, 2, 7)
-tdSql.checkData(8, 2, 8)
-tdSql.checkData(9, 2, 9)

-tdSql.execute('''create table stb1(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
col7 bool, col8 binary(20), col9 nchar(20), col11 tinyint unsigned, col12 smallint unsigned, col13 int unsigned, col14 bigint unsigned) tags(loc nchar(20))''')
-tdSql.execute("create table stb1_1 using stb tags('shanghai')")
+tdSql.execute(f"create table {dbname}.stb1_1 using {dbname}.stb tags('shanghai')")

for i in range(self.rowNum):
-tdSql.execute("insert into stb1_1 values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
+tdSql.execute(f"insert into {dbname}.stb1_1 values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
% (self.ts + i, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))
for i in range(self.rowNum):
-tdSql.execute("insert into stb1_1 values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
+tdSql.execute(f"insert into {dbname}.stb1_1 values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
% (self.ts - i-1, i-1, i-1, i-1, i-1, -i - 0.1, -i - 0.1, -i % 2, i - 1, i - 1, i + 1, i + 1, i + 1, i + 1))
-tdSql.query("select diff(col1,0) from stb1_1")
+tdSql.query(f"select diff(col1,0) from {dbname}.stb1_1")
tdSql.checkRows(19)
-tdSql.query("select diff(col1,1) from stb1_1")
+tdSql.query(f"select diff(col1,1) from {dbname}.stb1_1")
tdSql.checkRows(19)
tdSql.checkData(0,0,None)
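The change repeated throughout these test updates is to thread a database name into every SQL string via Python f-strings so that tables are always fully qualified. A minimal, self-contained sketch of that pattern (hypothetical table and column names, not code from this commit):

dbname = "db"  # assumed database name, mirroring the new test code

def build_statements(dbname, rows=3):
    # Every statement qualifies its table with the database name, as the updated tests do.
    create = f"create table {dbname}.demo (ts timestamp, val int)"
    inserts = [f"insert into {dbname}.demo values ({1537146000000 + i}, {i})" for i in range(rows)]
    query = f"select diff(val) from {dbname}.demo"
    return [create, *inserts, query]

if __name__ == "__main__":
    for sql in build_statements(dbname):
        print(sql)  # the test harness would feed each string to tdSql.execute()/tdSql.query()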
@@ -16,6 +16,8 @@ class TDTestCase:
def run(self):  # sourcery skip: extract-duplicate-method, remove-redundant-fstring
tdSql.prepare()

+dbname = "db"

tdLog.printNoPrefix("==========step1:create table")
tdSql.execute("create stable db.stb1 (ts timestamp, c1 int, c2 int) tags(t0 tinyint, t1 int, t2 int)")
tdSql.execute("create stable db.stb2 (ts timestamp, c2 int, c3 binary(16)) tags(t2 binary(16), t3 binary(16), t4 int)")
@@ -34,223 +36,224 @@ class TDTestCase:
tdSql.execute(f"insert into db.t0{i} values (now-9d, {i}, '{(i+2)%3}')")
tdSql.execute(f"insert into db.t0{i} values (now-8d, {i}, '{(i)%3}')")
tdSql.execute(f"insert into db.t0{i} (ts )values (now-7d)")
-# tdSql.execute("create table db.t100num using db.stb1 tags(null, null, null)")
+tdSql.execute("create table db.t100num using db.stb1 tags(null, null, null)")
-# tdSql.execute("create table db.t0100num using db.stb2 tags(null, null, null)")
+tdSql.execute("create table db.t0100num using db.stb2 tags(null, null, null)")
-# tdSql.execute(f"insert into db.t100num values (now-10d, {tbnum-1}, 1)")
+tdSql.execute(f"insert into db.t100num values (now-10d, {tbnum-1}, 1)")
-# tdSql.execute(f"insert into db.t100num values (now-9d, {tbnum-1}, 0)")
+tdSql.execute(f"insert into db.t100num values (now-9d, {tbnum-1}, 0)")
-# tdSql.execute(f"insert into db.t100num values (now-8d, {tbnum-1}, 2)")
+tdSql.execute(f"insert into db.t100num values (now-8d, {tbnum-1}, 2)")
-# tdSql.execute(f"insert into db.t100num (ts )values (now-7d)")
+tdSql.execute(f"insert into db.t100num (ts )values (now-7d)")
-# tdSql.execute(f"insert into db.t0100num values (now-10d, {tbnum-1}, 1)")
+tdSql.execute(f"insert into db.t0100num values (now-10d, {tbnum-1}, 1)")
-# tdSql.execute(f"insert into db.t0100num values (now-9d, {tbnum-1}, 0)")
+tdSql.execute(f"insert into db.t0100num values (now-9d, {tbnum-1}, 0)")
-# tdSql.execute(f"insert into db.t0100num values (now-8d, {tbnum-1}, 2)")
+tdSql.execute(f"insert into db.t0100num values (now-8d, {tbnum-1}, 2)")
-# tdSql.execute(f"insert into db.t0100num (ts )values (now-7d)")
+tdSql.execute(f"insert into db.t0100num (ts )values (now-7d)")

-#========== distinct multi-data-coloumn ==========
+# #========== distinct multi-data-coloumn ==========
-# tdSql.query(f"select distinct c1 from stb1 where c1 <{tbnum}")
+tdSql.query(f"select distinct c1 from {dbname}.stb1 where c1 <{tbnum}")
-# tdSql.checkRows(tbnum)
+tdSql.checkRows(tbnum)
-# tdSql.query(f"select distinct c2 from stb1")
+tdSql.query(f"select distinct c2 from {dbname}.stb1")
-# tdSql.checkRows(4)
-# tdSql.query(f"select distinct c1,c2 from stb1 where c1 <{tbnum}")
-# tdSql.checkRows(tbnum*3)
-# tdSql.query(f"select distinct c1,c1 from stb1 where c1 <{tbnum}")
-# tdSql.checkRows(tbnum)
-# tdSql.query(f"select distinct c1,c2 from stb1 where c1 <{tbnum} limit 3")
-# tdSql.checkRows(3)
-# tdSql.query(f"select distinct c1,c2 from stb1 where c1 <{tbnum} limit 3 offset {tbnum*3-2}")
-# tdSql.checkRows(2)

-tdSql.query(f"select distinct c1 from t1 where c1 <{tbnum}")
-tdSql.checkRows(1)
-tdSql.query(f"select distinct c2 from t1")
tdSql.checkRows(4)
-tdSql.query(f"select distinct c1,c2 from t1 where c1 <{tbnum}")
+tdSql.query(f"select distinct c1,c2 from {dbname}.stb1 where c1 <{tbnum}")
+tdSql.checkRows(tbnum*3)
+tdSql.query(f"select distinct c1,c1 from {dbname}.stb1 where c1 <{tbnum}")
+tdSql.checkRows(tbnum)
+tdSql.query(f"select distinct c1,c2 from {dbname}.stb1 where c1 <{tbnum} limit 3")
tdSql.checkRows(3)
-tdSql.query(f"select distinct c1,c1 from t1 ")
+tdSql.query(f"select distinct c1,c2 from {dbname}.stb1 where c1 <{tbnum} limit 3 offset {tbnum*3-2}")
tdSql.checkRows(2)
-tdSql.query(f"select distinct c1,c1 from t1 where c1 <{tbnum}")
+tdSql.query(f"select distinct c1 from {dbname}.t1 where c1 <{tbnum}")
tdSql.checkRows(1)
-tdSql.query(f"select distinct c1,c2 from t1 where c1 <{tbnum} limit 3")
+tdSql.query(f"select distinct c2 from {dbname}.t1")
+tdSql.checkRows(4)
+tdSql.query(f"select distinct c1,c2 from {dbname}.t1 where c1 <{tbnum}")
tdSql.checkRows(3)
-tdSql.query(f"select distinct c1,c2 from t1 where c1 <{tbnum} limit 3 offset 2")
+tdSql.query(f"select distinct c1,c1 from {dbname}.t1 ")
+tdSql.checkRows(2)
+tdSql.query(f"select distinct c1,c1 from {dbname}.t1 where c1 <{tbnum}")
+tdSql.checkRows(1)
+tdSql.query(f"select distinct c1,c2 from {dbname}.t1 where c1 <{tbnum} limit 3")
+tdSql.checkRows(3)
+tdSql.query(f"select distinct c1,c2 from {dbname}.t1 where c1 <{tbnum} limit 3 offset 2")
tdSql.checkRows(1)

-# tdSql.query(f"select distinct c3 from stb2 where c2 <{tbnum} ")
+# tdSql.query(f"select distinct c3 from {dbname}.stb2 where c2 <{tbnum} ")
# tdSql.checkRows(3)
-# tdSql.query(f"select distinct c3, c2 from stb2 where c2 <{tbnum} limit 2")
+# tdSql.query(f"select distinct c3, c2 from {dbname}.stb2 where c2 <{tbnum} limit 2")
# tdSql.checkRows(2)

-# tdSql.error("select distinct c5 from stb1")
+# tdSql.error(f"select distinct c5 from {dbname}.stb1")
-tdSql.error("select distinct c5 from t1")
+tdSql.error(f"select distinct c5 from {dbname}.t1")
-tdSql.error("select distinct c1 from db.*")
+tdSql.error(f"select distinct c1 from db.*")
-tdSql.error("select c2, distinct c1 from stb1")
+tdSql.error(f"select c2, distinct c1 from {dbname}.stb1")
-tdSql.error("select c2, distinct c1 from t1")
+tdSql.error(f"select c2, distinct c1 from {dbname}.t1")
-tdSql.error("select distinct c2 from ")
+tdSql.error(f"select distinct c2 from ")
-tdSql.error("distinct c2 from stb1")
+tdSql.error("distinct c2 from {dbname}.stb1")
-tdSql.error("distinct c2 from t1")
+tdSql.error("distinct c2 from {dbname}.t1")
-tdSql.error("select distinct c1, c2, c3 from stb1")
+tdSql.error(f"select distinct c1, c2, c3 from {dbname}.stb1")
-tdSql.error("select distinct c1, c2, c3 from t1")
+tdSql.error(f"select distinct c1, c2, c3 from {dbname}.t1")
-tdSql.error("select distinct stb1.c1, stb1.c2, stb2.c2, stb2.c3 from stb1")
+tdSql.error(f"select distinct stb1.c1, stb1.c2, stb2.c2, stb2.c3 from {dbname}.stb1")
-tdSql.error("select distinct stb1.c1, stb1.c2, stb2.c2, stb2.c3 from t1")
+tdSql.error(f"select distinct stb1.c1, stb1.c2, stb2.c2, stb2.c3 from {dbname}.t1")
-tdSql.error("select distinct t1.c1, t1.c2, t2.c1, t2.c2 from t1")
+tdSql.error(f"select distinct t1.c1, t1.c2, t2.c1, t2.c2 from {dbname}.t1")
-# tdSql.query(f"select distinct c1 c2, c2 c3 from stb1 where c1 <{tbnum}")
+tdSql.query(f"select distinct c1 c2, c2 c3 from {dbname}.stb1 where c1 <{tbnum}")
-# tdSql.checkRows(tbnum*3)
+tdSql.checkRows(tbnum*3)
-tdSql.query(f"select distinct c1 c2, c2 c3 from t1 where c1 <{tbnum}")
+tdSql.query(f"select distinct c1 c2, c2 c3 from {dbname}.t1 where c1 <{tbnum}")
tdSql.checkRows(3)
-# tdSql.error("select distinct c1, c2 from stb1 order by ts")
+tdSql.error(f"select distinct c1, c2 from {dbname}.stb1 order by ts")
-tdSql.error("select distinct c1, c2 from t1 order by ts")
+tdSql.error(f"select distinct c1, c2 from {dbname}.t1 order by ts")
-# tdSql.error("select distinct c1, ts from stb1 group by c2")
+tdSql.error(f"select distinct c1, ts from {dbname}.stb1 group by c2")
-tdSql.error("select distinct c1, ts from t1 group by c2")
+tdSql.error(f"select distinct c1, ts from {dbname}.t1 group by c2")
-# tdSql.error("select distinct c1, max(c2) from stb1 ")
+tdSql.query(f"select distinct c1, max(c2) from {dbname}.stb1 ")
-# tdSql.error("select distinct c1, max(c2) from t1 ")
+tdSql.query(f"select distinct c1, max(c2) from {dbname}.t1 ")
-# tdSql.error("select max(c2), distinct c1 from stb1 ")
+tdSql.error(f"select max(c2), distinct c1 from {dbname}.stb1 ")
-tdSql.error("select max(c2), distinct c1 from t1 ")
+tdSql.error(f"select max(c2), distinct c1 from {dbname}.t1 ")
-# tdSql.error("select distinct c1, c2 from stb1 where c1 > 3 group by t0")
+tdSql.error(f"select distinct c1, c2 from {dbname}.stb1 where c1 > 3 group by t0")
-tdSql.error("select distinct c1, c2 from t1 where c1 > 3 group by t0")
+tdSql.error(f"select distinct c1, c2 from {dbname}.t1 where c1 > 3 group by t0")
-# tdSql.error("select distinct c1, c2 from stb1 where c1 > 3 interval(1d) ")
+tdSql.error(f"select distinct c1, c2 from {dbname}.stb1 where c1 > 3 interval(1d) ")
-tdSql.error("select distinct c1, c2 from t1 where c1 > 3 interval(1d) ")
+tdSql.error(f"select distinct c1, c2 from {dbname}.t1 where c1 > 3 interval(1d) ")
-# tdSql.error("select distinct c1, c2 from stb1 where c1 > 3 interval(1d) fill(next)")
+tdSql.error(f"select distinct c1, c2 from {dbname}.stb1 where c1 > 3 interval(1d) fill(next)")
-tdSql.error("select distinct c1, c2 from t1 where c1 > 3 interval(1d) fill(next)")
+tdSql.error(f"select distinct c1, c2 from {dbname}.t1 where c1 > 3 interval(1d) fill(next)")
-# tdSql.error("select distinct c1, c2 from stb1 where ts > now-10d and ts < now interval(1d) fill(next)")
+tdSql.error(f"select distinct c1, c2 from {dbname}.stb1 where ts > now-10d and ts < now interval(1d) fill(next)")
-tdSql.error("select distinct c1, c2 from t1 where ts > now-10d and ts < now interval(1d) fill(next)")
+tdSql.error(f"select distinct c1, c2 from {dbname}.t1 where ts > now-10d and ts < now interval(1d) fill(next)")
-# tdSql.error("select distinct c1, c2 from stb1 where c1 > 3 slimit 1")
+tdSql.error(f"select distinct c1, c2 from {dbname}.stb1 where c1 > 3 slimit 1")
-# tdSql.error("select distinct c1, c2 from t1 where c1 > 3 slimit 1")
+tdSql.error(f"select distinct c1, c2 from {dbname}.t1 where c1 > 3 slimit 1")
-# tdSql.query(f"select distinct c1, c2 from stb1 where c1 between {tbnum-2} and {tbnum} ")
+tdSql.query(f"select distinct c1, c2 from {dbname}.stb1 where c1 between {tbnum-2} and {tbnum} ")
-# tdSql.checkRows(6)
+tdSql.checkRows(6)
-tdSql.query(f"select distinct c1, c2 from t1 where c1 between {tbnum-2} and {tbnum} ")
+tdSql.query(f"select distinct c1, c2 from {dbname}.t1 where c1 between {tbnum-2} and {tbnum} ")
# tdSql.checkRows(1)
-# tdSql.query("select distinct c1, c2 from stb1 where c1 in (1,2,3,4,5)")
+tdSql.query(f"select distinct c1, c2 from {dbname}.stb1 where c1 in (1,2,3,4,5)")
-# tdSql.checkRows(15)
+tdSql.checkRows(15)
-tdSql.query("select distinct c1, c2 from t1 where c1 in (1,2,3,4,5)")
+tdSql.query(f"select distinct c1, c2 from {dbname}.t1 where c1 in (1,2,3,4,5)")
# tdSql.checkRows(1)
-# tdSql.query("select distinct c1, c2 from stb1 where c1 in (100,1000,10000)")
+tdSql.query(f"select distinct c1, c2 from {dbname}.stb1 where c1 in (100,1000,10000)")
-# tdSql.checkRows(3)
+tdSql.checkRows(3)
-tdSql.query("select distinct c1, c2 from t1 where c1 in (100,1000,10000)")
+tdSql.query(f"select distinct c1, c2 from {dbname}.t1 where c1 in (100,1000,10000)")
-# tdSql.checkRows(0)
+tdSql.checkRows(0)

-# tdSql.query(f"select distinct c1,c2 from (select * from stb1 where c1 > {tbnum-2}) ")
+tdSql.query(f"select distinct c1,c2 from (select * from {dbname}.stb1 where c1 > {tbnum-2}) ")
-# tdSql.checkRows(3)
+tdSql.checkRows(3)
-# tdSql.query(f"select distinct c1,c2 from (select * from t1 where c1 < {tbnum}) ")
+tdSql.query(f"select distinct c1,c2 from (select * from {dbname}.t1 where c1 < {tbnum}) ")
-# tdSql.checkRows(3)
+tdSql.checkRows(3)
-# tdSql.query(f"select distinct c1,c2 from (select * from stb1 where t2 !=0 and t2 != 1) ")
+tdSql.query(f"select distinct c1,c2 from (select * from {dbname}.stb1 where t2 !=0 and t2 != 1) ")
-# tdSql.checkRows(0)
+tdSql.checkRows(0)
-# tdSql.error("select distinct c1, c2 from (select distinct c1, c2 from stb1 where t0 > 2 and t1 < 3) ")
+tdSql.query(f"select distinct c1, c2 from (select distinct c1, c2 from {dbname}.stb1 where t0 > 2 and t1 < 3) ")
-# tdSql.error("select c1, c2 from (select distinct c1, c2 from stb1 where t0 > 2 and t1 < 3) ")
+tdSql.query(f"select c1, c2 from (select distinct c1, c2 from {dbname}.stb1 where t0 > 2 and t1 < 3) ")
-# tdSql.query("select distinct c1, c2 from (select c2, c1 from stb1 where c1 > 2 ) where c1 < 4")
+tdSql.query(f"select distinct c1, c2 from (select c2, c1 from {dbname}.stb1 where c1 > 2 ) where c1 < 4")
-# tdSql.checkRows(3)
+tdSql.checkRows(3)
-# tdSql.error("select distinct c1, c2 from (select c1 from stb1 where t0 > 2 ) where t1 < 3")
+tdSql.error(f"select distinct c1, c2 from (select c1 from {dbname}.stb1 where t0 > 2 ) where t1 < 3")
-# tdSql.error("select distinct c1, c2 from (select c2, c1 from stb1 where c1 > 2 order by ts)")
+tdSql.query(f"select distinct c1, c2 from (select c2, c1 from {dbname}.stb1 where c1 > 2 order by ts)")
-# tdSql.error("select distinct c1, c2 from (select c2, c1 from t1 where c1 > 2 order by ts)")
+tdSql.query(f"select distinct c1, c2 from (select c2, c1 from {dbname}.t1 where c1 > 2 order by ts)")
-# tdSql.error("select distinct c1, c2 from (select c2, c1 from stb1 where c1 > 2 group by c1)")
+tdSql.error(f"select distinct c1, c2 from (select c2, c1 from {dbname}.stb1 where c1 > 2 group by c1)")
-# tdSql.error("select distinct c1, c2 from (select max(c1) c1, max(c2) c2 from stb1 group by c1)")
+tdSql.query(f"select distinct c1, c2 from (select max(c1) c1, max(c2) c2 from {dbname}.stb1 group by c1)")
-# tdSql.error("select distinct c1, c2 from (select max(c1) c1, max(c2) c2 from t1 group by c1)")
+tdSql.query(f"select distinct c1, c2 from (select max(c1) c1, max(c2) c2 from {dbname}.t1 group by c1)")
-# tdSql.query("select distinct c1, c2 from (select max(c1) c1, max(c2) c2 from stb1 )")
+tdSql.query(f"select distinct c1, c2 from (select max(c1) c1, max(c2) c2 from {dbname}.stb1 )")
-# tdSql.checkRows(1)
+tdSql.checkRows(1)
-# tdSql.query("select distinct c1, c2 from (select max(c1) c1, max(c2) c2 from t1 )")
+tdSql.query(f"select distinct c1, c2 from (select max(c1) c1, max(c2) c2 from {dbname}.t1 )")
-# tdSql.checkRows(1)
+tdSql.checkRows(1)
-# tdSql.error("select distinct stb1.c1, stb1.c2 from stb1 , stb2 where stb1.ts=stb2.ts and stb1.t2=stb2.t4")
+tdSql.query(f"select distinct stb1.c1, stb1.c2 from {dbname}.stb1, {dbname}.stb2 where stb1.ts=stb2.ts and stb1.t2=stb2.t4")
-# tdSql.error("select distinct t1.c1, t1.c2 from t1 , t2 where t1.ts=t2.ts ")
+tdSql.query(f"select distinct t1.c1, t1.c2 from {dbname}.t1, {dbname}.t2 where t1.ts=t2.ts ")

-# tdSql.error("select distinct c1, c2 from (select count(c1) c1, count(c2) c2 from stb1 group by ts)")
+tdSql.query(f"select distinct c1, c2 from (select count(c1) c1, count(c2) c2 from {dbname}.stb1 group by ts)")
-# tdSql.error("select distinct c1, c2 from (select count(c1) c1, count(c2) c2 from t1 group by ts)")
+tdSql.query(f"select distinct c1, c2 from (select count(c1) c1, count(c2) c2 from {dbname}.t1 group by ts)")

-# #========== suport distinct multi-tags-coloumn ==========
+#========== suport distinct multi-tags-coloumn ==========
-# tdSql.query("select distinct t1 from stb1")
+tdSql.query(f"select distinct t1 from {dbname}.stb1")
-# tdSql.checkRows(maxRemainderNum+1)
+tdSql.checkRows(maxRemainderNum+1)
-# tdSql.query("select distinct t0, t1 from stb1")
+tdSql.query(f"select distinct t0, t1 from {dbname}.stb1")
-# tdSql.checkRows(maxRemainderNum+1)
+tdSql.checkRows(maxRemainderNum+1)
-# tdSql.query("select distinct t1, t0 from stb1")
+tdSql.query(f"select distinct t1, t0 from {dbname}.stb1")
-# tdSql.checkRows(maxRemainderNum+1)
+tdSql.checkRows(maxRemainderNum+1)
-# tdSql.query("select distinct t1, t2 from stb1")
+tdSql.query(f"select distinct t1, t2 from {dbname}.stb1")
-# tdSql.checkRows(maxRemainderNum*2+1)
+tdSql.checkRows(maxRemainderNum*2+1)
-# tdSql.query("select distinct t0, t1, t2 from stb1")
+tdSql.query(f"select distinct t0, t1, t2 from {dbname}.stb1")
-# tdSql.checkRows(maxRemainderNum*2+1)
+tdSql.checkRows(maxRemainderNum*2+1)
-# tdSql.query("select distinct t0 t1, t1 t2 from stb1")
+tdSql.query(f"select distinct t0 t1, t1 t2 from {dbname}.stb1")
-# tdSql.checkRows(maxRemainderNum+1)
+tdSql.checkRows(maxRemainderNum+1)
-# tdSql.query("select distinct t0, t0, t0 from stb1")
+tdSql.query(f"select distinct t0, t0, t0 from {dbname}.stb1")
-# tdSql.checkRows(maxRemainderNum+1)
+tdSql.checkRows(maxRemainderNum+1)
-# tdSql.query("select distinct t0, t1 from t1")
+tdSql.query(f"select distinct t0, t1 from {dbname}.t1")
-# tdSql.checkRows(1)
+tdSql.checkRows(1)
-# tdSql.query("select distinct t0, t1 from t100num")
+tdSql.query(f"select distinct t0, t1 from {dbname}.t100num")
-# tdSql.checkRows(1)
+tdSql.checkRows(1)

-# tdSql.query("select distinct t3 from stb2")
+tdSql.query(f"select distinct t3 from {dbname}.stb2")
-# tdSql.checkRows(maxRemainderNum+1)
+tdSql.checkRows(maxRemainderNum+1)
-# tdSql.query("select distinct t2, t3 from stb2")
+tdSql.query(f"select distinct t2, t3 from {dbname}.stb2")
-# tdSql.checkRows(maxRemainderNum+1)
+tdSql.checkRows(maxRemainderNum+1)
-# tdSql.query("select distinct t3, t2 from stb2")
+tdSql.query(f"select distinct t3, t2 from {dbname}.stb2")
-# tdSql.checkRows(maxRemainderNum+1)
+tdSql.checkRows(maxRemainderNum+1)
-# tdSql.query("select distinct t4, t2 from stb2")
+tdSql.query(f"select distinct t4, t2 from {dbname}.stb2")
-# tdSql.checkRows(maxRemainderNum*3+1)
+tdSql.checkRows(maxRemainderNum*3+1)
-# tdSql.query("select distinct t2, t3, t4 from stb2")
+tdSql.query(f"select distinct t2, t3, t4 from {dbname}.stb2")
-# tdSql.checkRows(maxRemainderNum*3+1)
+tdSql.checkRows(maxRemainderNum*3+1)
-# tdSql.query("select distinct t2 t1, t3 t2 from stb2")
+tdSql.query(f"select distinct t2 t1, t3 t2 from {dbname}.stb2")
-# tdSql.checkRows(maxRemainderNum+1)
+tdSql.checkRows(maxRemainderNum+1)
-# tdSql.query("select distinct t3, t3, t3 from stb2")
+tdSql.query(f"select distinct t3, t3, t3 from {dbname}.stb2")
-# tdSql.checkRows(maxRemainderNum+1)
+tdSql.checkRows(maxRemainderNum+1)
-# tdSql.query("select distinct t2, t3 from t01")
+tdSql.query(f"select distinct t2, t3 from {dbname}.t01")
-# tdSql.checkRows(1)
+tdSql.checkRows(1)
-# tdSql.query("select distinct t3, t4 from t0100num")
+tdSql.query(f"select distinct t3, t4 from {dbname}.t0100num")
-# tdSql.checkRows(1)
+tdSql.checkRows(1)

-# ########## should be error #########
+########## should be error #########
-# tdSql.error("select distinct from stb1")
+tdSql.error(f"select distinct from {dbname}.stb1")
-# tdSql.error("select distinct t3 from stb1")
+tdSql.error(f"select distinct t3 from {dbname}.stb1")
-# tdSql.error("select distinct t1 from db.*")
+tdSql.error(f"select distinct t1 from db.*")
-# tdSql.error("select distinct t2 from ")
+tdSql.error(f"select distinct t2 from ")
-# tdSql.error("distinct t2 from stb1")
+tdSql.error(f"distinct t2 from {dbname}.stb1")
-# tdSql.error("select distinct stb1")
+tdSql.error(f"select distinct stb1")
-# tdSql.error("select distinct t0, t1, t2, t3 from stb1")
+tdSql.error(f"select distinct t0, t1, t2, t3 from {dbname}.stb1")
-# tdSql.error("select distinct stb1.t0, stb1.t1, stb2.t2, stb2.t3 from stb1")
+tdSql.error(f"select distinct stb1.t0, stb1.t1, stb2.t2, stb2.t3 from {dbname}.stb1")

-# tdSql.error("select dist t0 from stb1")
+tdSql.error(f"select dist t0 from {dbname}.stb1")
-# tdSql.error("select distinct stb2.t2, stb2.t3 from stb1")
+tdSql.error(f"select distinct stb2.t2, stb2.t3 from {dbname}.stb1")
-# tdSql.error("select distinct stb2.t2 t1, stb2.t3 t2 from stb1")
+tdSql.error(f"select distinct stb2.t2 t1, stb2.t3 t2 from {dbname}.stb1")

-# tdSql.error("select distinct t0, t1 from t1 where t0 < 7")
+tdSql.query(f"select distinct t0, t1 from {dbname}.t1 where t0 < 7")

-# ########## add where condition ##########
+########## add where condition ##########
-# tdSql.query("select distinct t0, t1 from stb1 where t1 > 3")
+tdSql.query(f"select distinct t0, t1 from {dbname}.stb1 where t1 > 3")
-# tdSql.checkRows(3)
+tdSql.checkRows(3)
-# tdSql.query("select distinct t0, t1 from stb1 where t1 > 3 limit 2")
+tdSql.query(f"select distinct t0, t1 from {dbname}.stb1 where t1 > 3 limit 2")
-# tdSql.checkRows(2)
+tdSql.checkRows(2)
-# tdSql.query("select distinct t0, t1 from stb1 where t1 > 3 limit 2 offset 2")
+tdSql.query(f"select distinct t0, t1 from {dbname}.stb1 where t1 > 3 limit 2 offset 2")
-# tdSql.checkRows(1)
+tdSql.checkRows(1)
-# tdSql.query("select distinct t0, t1 from stb1 where t1 > 3 slimit 2")
+tdSql.error(f"select distinct t0, t1 from {dbname}.stb1 where t1 > 3 slimit 2")
-# tdSql.checkRows(3)
+tdSql.query(f"select distinct t0, t1 from {dbname}.stb1 where c1 > 2")
-# tdSql.error("select distinct t0, t1 from stb1 where c1 > 2")
+tdSql.query(f"select distinct t0, t1 from {dbname}.stb1 where t1 > 3 and t1 < 5")
-# tdSql.query("select distinct t0, t1 from stb1 where t1 > 3 and t1 < 5")
+tdSql.checkRows(1)
-# tdSql.checkRows(1)
+tdSql.error(f"select distinct stb1.t0, stb1.t1 from {dbname}.stb1, {dbname}.stb2 where stb1.t2=stb2.t4")
-# tdSql.error("select distinct stb1.t0, stb1.t1 from stb1, stb2 where stb1.t2=stb2.t4")
+tdSql.error(f"select distinct t0, t1 from {dbname}.stb1 where stb2.t4 > 2")
-# tdSql.error("select distinct t0, t1 from stb1 where stb2.t4 > 2")
+tdSql.error(f"select distinct t0, t1 from {dbname}.stb1 where t1 > 3 group by t0")
-# tdSql.error("select distinct t0, t1 from stb1 where t1 > 3 group by t0")
+tdSql.error(f"select distinct t0, t1 from {dbname}.stb1 where t1 > 3 interval(1d) ")
-# tdSql.error("select distinct t0, t1 from stb1 where t1 > 3 interval(1d) ")
+tdSql.error(f"select distinct t0, t1 from {dbname}.stb1 where t1 > 3 interval(1d) fill(next)")
-# tdSql.error("select distinct t0, t1 from stb1 where t1 > 3 interval(1d) fill(next)")
+tdSql.error(f"select distinct t0, t1 from {dbname}.stb1 where ts > now-10d and ts < now interval(1d) fill(next)")
-# tdSql.error("select distinct t0, t1 from stb1 where ts > now-10d and ts < now interval(1d) fill(next)")

-# tdSql.error("select max(c1), distinct t0 from stb1 where t0 > 2")
+tdSql.error(f"select max(c1), distinct t0 from {dbname}.stb1 where t0 > 2")
-# tdSql.error("select distinct t0, max(c1) from stb1 where t0 > 2")
+tdSql.query(f"select distinct t0, max(c1) from {dbname}.stb1 where t0 > 2")
-# tdSql.error("select distinct t0 from stb1 where t0 in (select t0 from stb1 where t0 > 2)")
+tdSql.error(f"select distinct t0 from {dbname}.stb1 where t0 in (select t0 from {dbname}.stb1 where t0 > 2)")
-# tdSql.query("select distinct t0, t1 from stb1 where t0 in (1,2,3,4,5)")
+tdSql.query(f"select distinct t0, t1 from {dbname}.stb1 where t0 in (1,2,3,4,5)")
-# tdSql.checkRows(5)
+tdSql.checkRows(5)
-# tdSql.query("select distinct t1 from (select t0, t1 from stb1 where t0 > 2) ")
+tdSql.query(f"select distinct t1 from (select t0, t1 from {dbname}.stb1 where t0 > 2) ")
-# tdSql.checkRows(4)
+tdSql.checkRows(4)
-# tdSql.error("select distinct t1 from (select distinct t0, t1 from stb1 where t0 > 2 and t1 < 3) ")
+tdSql.query(f"select distinct t1 from (select distinct t0, t1 from {dbname}.stb1 where t0 > 2 and t1 < 3) ")
-# tdSql.error("select distinct t1 from (select distinct t0, t1 from stb1 where t0 > 2 ) where t1 < 3")
+# TODO: BUG of TD-17561
-# tdSql.query("select distinct t1 from (select t0, t1 from stb1 where t0 > 2 ) where t1 < 3")
+# tdSql.query(f"select distinct t1 from (select distinct t0, t1 from {dbname}.stb1 where t0 > 2 ) where t1 < 3")
-# tdSql.checkRows(1)
+tdSql.query(f"select distinct t1 from (select t0, t1 from {dbname}.stb1 where t0 > 2 ) where t1 < 3")
-# tdSql.error("select distinct t1, t0 from (select t1 from stb1 where t0 > 2 ) where t1 < 3")
+tdSql.checkRows(1)
-# tdSql.error("select distinct t1, t0 from (select max(t1) t1, max(t0) t0 from stb1 group by t1)")
+tdSql.error(f"select distinct t1, t0 from (select t1 from {dbname}.stb1 where t0 > 2 ) where t1 < 3")
-# tdSql.error("select distinct t1, t0 from (select max(t1) t1, max(t0) t0 from stb1)")
+tdSql.query(f"select distinct t1, t0 from (select max(t1) t1, max(t0) t0 from {dbname}.stb1 group by t1)")
-# tdSql.query("select distinct t1, t0 from (select t1,t0 from stb1 where t0 > 2 ) where t1 < 3")
+tdSql.query(f"select distinct t1, t0 from (select max(t1) t1, max(t0) t0 from {dbname}.stb1)")
-# tdSql.checkRows(1)
+tdSql.query(f"select distinct t1, t0 from (select t1,t0 from {dbname}.stb1 where t0 > 2 ) where t1 < 3")
-# tdSql.error(" select distinct t1, t0 from (select t1,t0 from stb1 where t0 > 2 order by ts) where t1 < 3")
+tdSql.checkRows(1)
-# tdSql.error("select t1, t0 from (select distinct t1,t0 from stb1 where t0 > 2 ) where t1 < 3")
+tdSql.query(f"select distinct t1, t0 from (select t1,t0 from {dbname}.stb1 where t0 > 2 order by ts) where t1 < 3")
-# tdSql.error(" select distinct t1, t0 from (select t1,t0 from stb1 where t0 > 2 group by ts) where t1 < 3")
+# TODO: BUG of TD-17561
-# tdSql.error("select distinct stb1.t1, stb1.t2 from stb1 , stb2 where stb1.ts=stb2.ts and stb1.t2=stb2.t4")
+# tdSql.error(f"select t1, t0 from (select distinct t1,t0 from {dbname}.stb1 where t0 > 2 ) where t1 < 3")
-# tdSql.error("select distinct t1.t1, t1.t2 from t1 , t2 where t1.ts=t2.ts ")
+tdSql.error(f"select distinct t1, t0 from (select t1,t0 from {dbname}.stb1 where t0 > 2 group by ts) where t1 < 3")
+tdSql.query(f"select distinct stb1.t1, stb1.t2 from {dbname}.stb1, {dbname}.stb2 where stb1.ts=stb2.ts and stb1.t2=stb2.t4")
+tdSql.query(f"select distinct t1.t1, t1.t2 from {dbname}.t1, {dbname}.t2 where t1.ts=t2.ts ")
@@ -6,86 +6,60 @@ import random

class TDTestCase:
-updatecfgDict = {'debugFlag': 143 ,"cDebugFlag":143,"uDebugFlag":143 ,"rpcDebugFlag":143 , "tmrDebugFlag":143 ,
-"jniDebugFlag":143 ,"simDebugFlag":143,"dDebugFlag":143, "dDebugFlag":143,"vDebugFlag":143,"mDebugFlag":143,"qDebugFlag":143,
-"wDebugFlag":143,"sDebugFlag":143,"tsdbDebugFlag":143,"tqDebugFlag":143 ,"fsDebugFlag":143 ,"udfDebugFlag":143,
-"maxTablesPerVnode":2 ,"minTablesPerVnode":2,"tableIncStepPerVnode":2 }

+updatecfgDict = {"maxTablesPerVnode":2 ,"minTablesPerVnode":2,"tableIncStepPerVnode":2 }
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor())
self.vnode_disbutes = None
self.ts = 1537146000000

-def prepare_datas_of_distribute(self):
+def prepare_datas_of_distribute(self, dbname="testdb"):

# prepate datas for 20 tables distributed at different vgroups
-tdSql.execute("create database if not exists testdb keep 3650 duration 1000 vgroups 5")
+tdSql.execute(f"create database if not exists {dbname} keep 3650 duration 1000 vgroups 5")
-tdSql.execute(" use testdb ")
+tdSql.execute(f" use {dbname} ")
tdSql.execute(
-'''create table stb1
+f'''create table {dbname}.stb1
(ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
tags (t0 timestamp, t1 int, t2 bigint, t3 smallint, t4 tinyint, t5 float, t6 double, t7 bool, t8 binary(16),t9 nchar(32))
'''
)

-tdSql.execute(
-'''
-create table t1
-(ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
-'''
-)
for i in range(20):
-tdSql.execute(f'create table ct{i+1} using stb1 tags ( now(), {1*i}, {11111*i}, {111*i}, {1*i}, {1.11*i}, {11.11*i}, {i%2}, "binary{i}", "nchar{i}" )')
+tdSql.execute(f'create table {dbname}.ct{i+1} using {dbname}.stb1 tags ( now(), {1*i}, {11111*i}, {111*i}, {1*i}, {1.11*i}, {11.11*i}, {i%2}, "binary{i}", "nchar{i}" )')

for i in range(9):
tdSql.execute(
-f"insert into ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
+f"insert into {dbname}.ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
)
tdSql.execute(
-f"insert into ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
+f"insert into {dbname}.ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
)

for i in range(1,21):
if i ==1 or i == 4:
continue
else:
-tbname = "ct"+f'{i}'
+tbname = f"{dbname}.ct{i}"
for j in range(9):
tdSql.execute(
f"insert into {tbname} values ( now()-{(i+j)*10}s, {1*(j+i)}, {11111*(j+i)}, {111*(j+i)}, {11*(j)}, {1.11*(j+i)}, {11.11*(j+i)}, {(j+i)%2}, 'binary{j}', 'nchar{j}', now()+{1*j}a )"
)
-tdSql.execute("insert into ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar0', now()+8a )")
+tdSql.execute(f"insert into {dbname}.ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar0', now()+8a )")
-tdSql.execute("insert into ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
+tdSql.execute(f"insert into {dbname}.ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
-tdSql.execute("insert into ct1 values (now()+15s, 9, -99999, -999, -99, -9.99, NULL, 1, 'binary9', 'nchar9', now()+9a )")
+tdSql.execute(f"insert into {dbname}.ct1 values (now()+15s, 9, -99999, -999, -99, -9.99, NULL, 1, 'binary9', 'nchar9', now()+9a )")
-tdSql.execute("insert into ct1 values (now()+20s, 9, -99999, -999, NULL, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
+tdSql.execute(f"insert into {dbname}.ct1 values (now()+20s, 9, -99999, -999, NULL, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")

-tdSql.execute("insert into ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
+tdSql.execute(f"insert into {dbname}.ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
-tdSql.execute("insert into ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
+tdSql.execute(f"insert into {dbname}.ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
-tdSql.execute("insert into ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
+tdSql.execute(f"insert into {dbname}.ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")

-tdSql.execute(
-f'''insert into t1 values
-( '2020-04-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
-( '2020-10-21 01:01:01.000', 1, 11111, 111, 11, 1.11, 11.11, 1, "binary1", "nchar1", now()+1a )
-( '2020-12-31 01:01:01.000', 2, 22222, 222, 22, 2.22, 22.22, 0, "binary2", "nchar2", now()+2a )
-( '2021-01-01 01:01:06.000', 3, 33333, 333, 33, 3.33, 33.33, 0, "binary3", "nchar3", now()+3a )
-( '2021-05-07 01:01:10.000', 4, 44444, 444, 44, 4.44, 44.44, 1, "binary4", "nchar4", now()+4a )
-( '2021-07-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
-( '2021-09-30 01:01:16.000', 5, 55555, 555, 55, 5.55, 55.55, 0, "binary5", "nchar5", now()+5a )
-( '2022-02-01 01:01:20.000', 6, 66666, 666, 66, 6.66, 66.66, 1, "binary6", "nchar6", now()+6a )
-( '2022-10-28 01:01:26.000', 7, 00000, 000, 00, 0.00, 00.00, 1, "binary7", "nchar7", "1970-01-01 08:00:00.000" )
-( '2022-12-01 01:01:30.000', 8, -88888, -888, -88, -8.88, -88.88, 0, "binary8", "nchar8", "1969-01-01 01:00:00.000" )
-( '2022-12-31 01:01:36.000', 9, -99999999999999999, -999, -99, -9.99, -999999999999999999999.99, 1, "binary9", "nchar9", "1900-01-01 00:00:00.000" )
-( '2023-02-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
-'''
-)

tdLog.info(" prepare data for distributed_aggregate done! ")

-def check_distribute_datas(self):
+def check_distribute_datas(self, dbname="testdb"):
# get vgroup_ids of all
-tdSql.query("show vgroups ")
+tdSql.query(f"show {dbname}.vgroups ")
vgroups = tdSql.queryResult

vnode_tables={}
@@ -95,7 +69,7 @@ class TDTestCase:

# check sub_table of per vnode ,make sure sub_table has been distributed
-tdSql.query("show tables like 'ct%'")
+tdSql.query(f"show {dbname}.tables like 'ct%'")
table_names = tdSql.queryResult
tablenames = []
for table_name in table_names:
@@ -109,28 +83,28 @@ class TDTestCase:
if count < 2:
tdLog.exit(" the datas of all not satisfy sub_table has been distributed ")

-def distribute_agg_query(self):
+def distribute_agg_query(self, dbname="testdb"):
# basic filter
-tdSql.query("select apercentile(c1 , 20) from stb1 where c1 is null")
+tdSql.query(f"select apercentile(c1 , 20) from {dbname}.stb1 where c1 is null")
tdSql.checkRows(0)

-tdSql.query("select apercentile(c1 , 20) from stb1 where t1=1")
+tdSql.query(f"select apercentile(c1 , 20) from {dbname}.stb1 where t1=1")
tdSql.checkData(0,0,2.800000000)

-tdSql.query("select apercentile(c1+c2 ,100) from stb1 where c1 =1 ")
+tdSql.query(f"select apercentile(c1+c2 ,100) from {dbname}.stb1 where c1 =1 ")
tdSql.checkData(0,0,11112.000000000)

-tdSql.query("select apercentile(c1 ,10 ) from stb1 where tbname=\"ct2\"")
+tdSql.query(f"select apercentile(c1 ,10 ) from {dbname}.stb1 where tbname=\"ct2\"")
tdSql.checkData(0,0,2.000000000)

-tdSql.query("select apercentile(c1,20) from stb1 partition by tbname")
+tdSql.query(f"select apercentile(c1,20) from {dbname}.stb1 partition by tbname")
tdSql.checkRows(20)

-tdSql.query("select apercentile(c1,20) from stb1 where t1> 4 partition by tbname")
+tdSql.query(f"select apercentile(c1,20) from {dbname}.stb1 where t1> 4 partition by tbname")
tdSql.checkRows(15)

# union all
-tdSql.query("select apercentile(c1,20) from stb1 union all select apercentile(c1,20) from stb1 ")
+tdSql.query(f"select apercentile(c1,20) from {dbname}.stb1 union all select apercentile(c1,20) from {dbname}.stb1 ")
tdSql.checkRows(2)
tdSql.checkData(0,0,7.389181281)

@@ -138,44 +112,44 @@ class TDTestCase:

tdSql.execute(" create database if not exists db ")
tdSql.execute(" use db ")
-tdSql.execute(" create stable st (ts timestamp , c1 int ,c2 float) tags(t1 int) ")
+tdSql.execute(" create stable db.st (ts timestamp , c1 int ,c2 float) tags(t1 int) ")
-tdSql.execute(" create table tb1 using st tags(1) ")
+tdSql.execute(" create table db.tb1 using db.st tags(1) ")
-tdSql.execute(" create table tb2 using st tags(2) ")
+tdSql.execute(" create table db.tb2 using db.st tags(2) ")

for i in range(10):
ts = i*10 + self.ts
-tdSql.execute(f" insert into tb1 values({ts},{i},{i}.0)")
+tdSql.execute(f" insert into db.tb1 values({ts},{i},{i}.0)")
-tdSql.execute(f" insert into tb2 values({ts},{i},{i}.0)")
+tdSql.execute(f" insert into db.tb2 values({ts},{i},{i}.0)")

-tdSql.query("select apercentile(tb1.c1,100), apercentile(tb2.c2,100) from tb1, tb2 where tb1.ts=tb2.ts")
+tdSql.query(f"select apercentile(tb1.c1,100), apercentile(tb2.c2,100) from db.tb1 tb1, db.tb2 tb2 where tb1.ts=tb2.ts")
tdSql.checkRows(1)
tdSql.checkData(0,0,9.000000000)
tdSql.checkData(0,0,9.000000000)

# group by
-tdSql.execute(" use testdb ")
+tdSql.execute(f"use {dbname} ")
-tdSql.query(" select max(c1),c1 from stb1 group by t1 ")
+tdSql.query(f" select max(c1),c1 from {dbname}.stb1 group by t1 ")
tdSql.checkRows(20)
-tdSql.query(" select max(c1),c1 from stb1 group by c1 ")
+tdSql.query(f" select max(c1),c1 from {dbname}.stb1 group by c1 ")
tdSql.checkRows(30)
-tdSql.query(" select max(c1),c2 from stb1 group by c2 ")
+tdSql.query(f" select max(c1),c2 from {dbname}.stb1 group by c2 ")
tdSql.checkRows(31)

# partition by tbname or partition by tag
-tdSql.query("select apercentile(c1 ,10)from stb1 partition by tbname")
+tdSql.query(f"select apercentile(c1 ,10)from {dbname}.stb1 partition by tbname")
query_data = tdSql.queryResult

# nest query for support max
-tdSql.query("select apercentile(c2+2,10)+1 from (select max(c1) c2 from stb1)")
+tdSql.query(f"select apercentile(c2+2,10)+1 from (select max(c1) c2 from {dbname}.stb1)")
tdSql.checkData(0,0,31.000000000)
-tdSql.query("select apercentile(c1+2,10)+1 as c2 from (select ts ,c1 ,c2 from stb1)")
+tdSql.query(f"select apercentile(c1+2,10)+1 as c2 from (select ts ,c1 ,c2 from {dbname}.stb1)")
tdSql.checkData(0,0,7.560701700)
-tdSql.query("select apercentile(a+2,10)+1 as c2 from (select ts ,abs(c1) a ,c2 from stb1)")
+tdSql.query(f"select apercentile(a+2,10)+1 as c2 from (select ts ,abs(c1) a ,c2 from {dbname}.stb1)")
tdSql.checkData(0,0,7.560701700)

# mixup with other functions
-tdSql.query("select max(c1),count(c1),last(c2,c3),spread(c1), apercentile(c1,10) from stb1")
+tdSql.query(f"select max(c1),count(c1),last(c2,c3),spread(c1), apercentile(c1,10) from {dbname}.stb1")
tdSql.checkData(0,0,28)
tdSql.checkData(0,1,184)
tdSql.checkData(0,2,-99999)
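The check_distribute_datas helper above only accepts a test run when the created sub-tables actually land on more than one vnode. A rough standalone sketch of that acceptance rule (the dictionary contents here are made up for illustration and the helper name is hypothetical):

def count_vnodes_with_tables(vnode_tables):
    # vnode_tables maps a vgroup id to the list of sub-table names stored on it.
    return sum(1 for tables in vnode_tables.values() if tables)

# Example: three vgroups, two of which hold sub-tables -> distribution is acceptable.
vnode_tables = {2: ["ct1", "ct3"], 3: ["ct2"], 4: []}
assert count_vnodes_with_tables(vnode_tables) >= 2, "sub_table has not been distributed"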
@ -7,11 +7,8 @@ import platform
|
||||||
|
|
||||||
|
|
||||||
class TDTestCase:
|
class TDTestCase:
|
||||||
updatecfgDict = {'debugFlag': 143 ,"cDebugFlag":143,"uDebugFlag":143 ,"rpcDebugFlag":143 , "tmrDebugFlag":143 ,
|
|
||||||
"jniDebugFlag":143 ,"simDebugFlag":143,"dDebugFlag":143, "dDebugFlag":143,"vDebugFlag":143,"mDebugFlag":143,"qDebugFlag":143,
|
|
||||||
"wDebugFlag":143,"sDebugFlag":143,"tsdbDebugFlag":143,"tqDebugFlag":143 ,"fsDebugFlag":143 ,"udfDebugFlag":143,
|
|
||||||
"maxTablesPerVnode":2 ,"minTablesPerVnode":2,"tableIncStepPerVnode":2 }
|
|
||||||
|
|
||||||
|
updatecfgDict = {"maxTablesPerVnode":2 ,"minTablesPerVnode":2,"tableIncStepPerVnode":2 }
|
||||||
def init(self, conn, logSql):
|
def init(self, conn, logSql):
|
||||||
tdLog.debug("start to execute %s" % __file__)
|
tdLog.debug("start to execute %s" % __file__)
|
||||||
tdSql.init(conn.cursor())
|
tdSql.init(conn.cursor())
|
||||||
|
@ -34,75 +31,52 @@ class TDTestCase:
|
||||||
tdSql.query(avg_sql)
|
tdSql.query(avg_sql)
|
||||||
tdSql.checkData(0,0,pre_avg)
|
tdSql.checkData(0,0,pre_avg)
|
||||||
|
|
||||||
def prepare_datas_of_distribute(self):
|
def prepare_datas_of_distribute(self, dbname="testdb"):
|
||||||
|
|
||||||
# prepate datas for 20 tables distributed at different vgroups
|
# prepate datas for 20 tables distributed at different vgroups
|
||||||
tdSql.execute("create database if not exists testdb keep 3650 duration 1000 vgroups 5")
|
tdSql.execute(f"create database if not exists {dbname} keep 3650 duration 1000 vgroups 5")
|
||||||
tdSql.execute(" use testdb ")
|
tdSql.execute(f" use {dbname} ")
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
'''create table stb1
|
f'''create table {dbname}.stb1
|
||||||
(ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
|
(ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
|
||||||
tags (t0 timestamp, t1 int, t2 bigint, t3 smallint, t4 tinyint, t5 float, t6 double, t7 bool, t8 binary(16),t9 nchar(32))
|
tags (t0 timestamp, t1 int, t2 bigint, t3 smallint, t4 tinyint, t5 float, t6 double, t7 bool, t8 binary(16),t9 nchar(32))
|
||||||
'''
|
'''
|
||||||
)
|
)
|
||||||
|
|
||||||
tdSql.execute(
|
|
||||||
'''
|
|
||||||
create table t1
|
|
||||||
(ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
|
|
||||||
'''
|
|
||||||
)
|
|
||||||
for i in range(20):
|
for i in range(20):
|
||||||
tdSql.execute(f'create table ct{i+1} using stb1 tags ( now(), {1*i}, {11111*i}, {111*i}, {1*i}, {1.11*i}, {11.11*i}, {i%2}, "binary{i}", "nchar{i}" )')
|
tdSql.execute(f'create table {dbname}.ct{i+1} using {dbname}.stb1 tags ( now(), {1*i}, {11111*i}, {111*i}, {1*i}, {1.11*i}, {11.11*i}, {i%2}, "binary{i}", "nchar{i}" )')
|
||||||
|
|
||||||
for i in range(9):
|
for i in range(9):
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f"insert into ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
|
f"insert into {dbname}.ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
|
||||||
)
|
)
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f"insert into ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
|
f"insert into {dbname}.ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
|
||||||
)
|
)
|
||||||
|
|
||||||
for i in range(1,21):
|
for i in range(1,21):
|
||||||
if i ==1 or i == 4:
|
if i ==1 or i == 4:
|
||||||
continue
|
continue
|
||||||
else:
|
else:
|
||||||
tbname = "ct"+f'{i}'
|
tbname = f"{dbname}.ct{i}"
|
||||||
for j in range(9):
|
for j in range(9):
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f"insert into {tbname} values ( now()-{(i+j)*10}s, {1*(j+i)}, {11111*(j+i)}, {111*(j+i)}, {11*(j)}, {1.11*(j+i)}, {11.11*(j+i)}, {(j+i)%2}, 'binary{j}', 'nchar{j}', now()+{1*j}a )"
|
f"insert into {tbname} values ( now()-{(i+j)*10}s, {1*(j+i)}, {11111*(j+i)}, {111*(j+i)}, {11*(j)}, {1.11*(j+i)}, {11.11*(j+i)}, {(j+i)%2}, 'binary{j}', 'nchar{j}', now()+{1*j}a )"
|
||||||
)
|
)
|
||||||
tdSql.execute("insert into ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar0', now()+8a )")
|
tdSql.execute(f"insert into {dbname}.ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar0', now()+8a )")
|
||||||
tdSql.execute("insert into ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
|
tdSql.execute(f"insert into {dbname}.ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
|
||||||
tdSql.execute("insert into ct1 values (now()+15s, 9, -99999, -999, -99, -9.99, NULL, 1, 'binary9', 'nchar9', now()+9a )")
|
tdSql.execute(f"insert into {dbname}.ct1 values (now()+15s, 9, -99999, -999, -99, -9.99, NULL, 1, 'binary9', 'nchar9', now()+9a )")
|
||||||
tdSql.execute("insert into ct1 values (now()+20s, 9, -99999, -999, NULL, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
|
tdSql.execute(f"insert into {dbname}.ct1 values (now()+20s, 9, -99999, -999, NULL, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
|
||||||
|
|
||||||
tdSql.execute("insert into ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
|
tdSql.execute(f"insert into {dbname}.ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
|
||||||
tdSql.execute("insert into ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
|
tdSql.execute(f"insert into {dbname}.ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
|
||||||
tdSql.execute("insert into ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
|
tdSql.execute(f"insert into {dbname}.ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
|
||||||
|
|
||||||
tdSql.execute(
|
|
||||||
f'''insert into t1 values
|
|
||||||
( '2020-04-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
|
||||||
( '2020-10-21 01:01:01.000', 1, 11111, 111, 11, 1.11, 11.11, 1, "binary1", "nchar1", now()+1a )
|
|
||||||
( '2020-12-31 01:01:01.000', 2, 22222, 222, 22, 2.22, 22.22, 0, "binary2", "nchar2", now()+2a )
|
|
||||||
( '2021-01-01 01:01:06.000', 3, 33333, 333, 33, 3.33, 33.33, 0, "binary3", "nchar3", now()+3a )
|
|
||||||
( '2021-05-07 01:01:10.000', 4, 44444, 444, 44, 4.44, 44.44, 1, "binary4", "nchar4", now()+4a )
|
|
||||||
( '2021-07-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
|
||||||
( '2021-09-30 01:01:16.000', 5, 55555, 555, 55, 5.55, 55.55, 0, "binary5", "nchar5", now()+5a )
|
|
||||||
( '2022-02-01 01:01:20.000', 6, 66666, 666, 66, 6.66, 66.66, 1, "binary6", "nchar6", now()+6a )
|
|
||||||
( '2022-10-28 01:01:26.000', 7, 00000, 000, 00, 0.00, 00.00, 1, "binary7", "nchar7", "1970-01-01 08:00:00.000" )
|
|
||||||
( '2022-12-01 01:01:30.000', 8, -88888, -888, -88, -8.88, -88.88, 0, "binary8", "nchar8", "1969-01-01 01:00:00.000" )
|
|
||||||
( '2022-12-31 01:01:36.000', 9, -99999999999999999, -999, -99, -9.99, -999999999999999999999.99, 1, "binary9", "nchar9", "1900-01-01 00:00:00.000" )
|
|
||||||
( '2023-02-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
|
||||||
'''
|
|
||||||
)
|
|
||||||
|
|
||||||
tdLog.info(" prepare data for distributed_aggregate done! ")
|
tdLog.info(" prepare data for distributed_aggregate done! ")
|
||||||
|
|
||||||
def check_distribute_datas(self):
|
def check_distribute_datas(self, dbname="testdb"):
|
||||||
# get vgroup_ids of all
|
# get vgroup_ids of all
|
||||||
tdSql.query("show vgroups ")
|
tdSql.query(f"show {dbname}.vgroups ")
|
||||||
vgroups = tdSql.queryResult
|
vgroups = tdSql.queryResult
|
||||||
|
|
||||||
vnode_tables={}
|
vnode_tables={}
|
||||||
|
@ -112,7 +86,7 @@ class TDTestCase:
|
||||||
|
|
||||||
|
|
||||||
# check sub_table of per vnode ,make sure sub_table has been distributed
|
# check sub_table of per vnode ,make sure sub_table has been distributed
|
||||||
tdSql.query("show tables like 'ct%'")
|
tdSql.query(f"show {dbname}.tables like 'ct%'")
|
||||||
table_names = tdSql.queryResult
|
table_names = tdSql.queryResult
|
||||||
tablenames = []
|
tablenames = []
|
||||||
for table_name in table_names:
|
for table_name in table_names:
|
||||||
|
@ -126,7 +100,7 @@ class TDTestCase:
|
||||||
if count < 2:
|
if count < 2:
|
||||||
tdLog.exit(" the datas of all not satisfy sub_table has been distributed ")
|
tdLog.exit(" the datas of all not satisfy sub_table has been distributed ")
|
||||||
|
|
||||||
def check_avg_distribute_diff_vnode(self,col_name):
|
def check_avg_distribute_diff_vnode(self,col_name, dbname="testdb"):
|
||||||
|
|
||||||
vgroup_ids = []
|
vgroup_ids = []
|
||||||
for k ,v in self.vnode_disbutes.items():
|
for k ,v in self.vnode_disbutes.items():
|
||||||
|
@ -144,9 +118,9 @@ class TDTestCase:
|
||||||
|
|
||||||
tbname_filters = tbname_ins[:-1]
|
tbname_filters = tbname_ins[:-1]
|
||||||
|
|
||||||
avg_sql = f"select avg({col_name}) from stb1 where tbname in ({tbname_filters});"
|
avg_sql = f"select avg({col_name}) from {dbname}.stb1 where tbname in ({tbname_filters});"
|
||||||
|
|
||||||
same_sql = f"select {col_name} from stb1 where tbname in ({tbname_filters}) and {col_name} is not null "
|
same_sql = f"select {col_name} from {dbname}.stb1 where tbname in ({tbname_filters}) and {col_name} is not null "
|
||||||
|
|
||||||
tdSql.query(same_sql)
|
tdSql.query(same_sql)
|
||||||
pre_data = np.array(tdSql.queryResult)[np.array(tdSql.queryResult) != None]
|
pre_data = np.array(tdSql.queryResult)[np.array(tdSql.queryResult) != None]
|
||||||
|
@ -157,16 +131,16 @@ class TDTestCase:
|
||||||
 tdSql.query(avg_sql)
 tdSql.checkData(0,0,pre_avg)

-def check_avg_status(self):
+def check_avg_status(self, dbname="testdb"):
 # check max function work status

-tdSql.query("show tables like 'ct%'")
+tdSql.query(f"show {dbname}.tables like 'ct%'")
 table_names = tdSql.queryResult
 tablenames = []
 for table_name in table_names:
-tablenames.append(table_name[0])
+tablenames.append(f"{dbname}.{table_name[0]}")

-tdSql.query("desc stb1")
+tdSql.query(f"desc {dbname}.stb1")
 col_names = tdSql.queryResult

 colnames = []
@@ -182,41 +156,41 @@ class TDTestCase:

 for colname in colnames:
 if colname.startswith("c"):
-self.check_avg_distribute_diff_vnode(colname)
+self.check_avg_distribute_diff_vnode(colname, dbname)
 else:
-# self.check_avg_distribute_diff_vnode(colname) # bug for tag
+# self.check_avg_distribute_diff_vnode(colname, dbname) # bug for tag
 pass


-def distribute_agg_query(self):
+def distribute_agg_query(self, dbname="testdb"):
 # basic filter
-tdSql.query(" select avg(c1) from stb1 ")
+tdSql.query(f"select avg(c1) from {dbname}.stb1 ")
 tdSql.checkData(0,0,14.086956522)

-tdSql.query(" select avg(a) from (select avg(c1) a from stb1 partition by tbname) ")
+tdSql.query(f"select avg(a) from (select avg(c1) a from {dbname}.stb1 partition by tbname) ")
 tdSql.checkData(0,0,14.292307692)

-tdSql.query(" select avg(c1) from stb1 where t1=1")
+tdSql.query(f"select avg(c1) from {dbname}.stb1 where t1=1")
 tdSql.checkData(0,0,6.000000000)

-tdSql.query("select avg(c1+c2) from stb1 where c1 =1 ")
+tdSql.query(f"select avg(c1+c2) from {dbname}.stb1 where c1 =1 ")
 tdSql.checkData(0,0,11112.000000000)

-tdSql.query("select avg(c1) from stb1 where tbname=\"ct2\"")
+tdSql.query(f"select avg(c1) from {dbname}.stb1 where tbname=\"ct2\"")
 tdSql.checkData(0,0,6.000000000)

-tdSql.query("select avg(c1) from stb1 partition by tbname")
+tdSql.query(f"select avg(c1) from {dbname}.stb1 partition by tbname")
 tdSql.checkRows(20)

-tdSql.query("select avg(c1) from stb1 where t1> 4 partition by tbname")
+tdSql.query(f"select avg(c1) from {dbname}.stb1 where t1> 4 partition by tbname")
 tdSql.checkRows(15)

 # union all
-tdSql.query("select avg(c1) from stb1 union all select avg(c1) from stb1 ")
+tdSql.query(f"select avg(c1) from {dbname}.stb1 union all select avg(c1) from {dbname}.stb1 ")
 tdSql.checkRows(2)
 tdSql.checkData(0,0,14.086956522)

-tdSql.query("select avg(a) from (select avg(c1) a from stb1 union all select avg(c1) a from stb1)")
+tdSql.query(f"select avg(a) from (select avg(c1) a from {dbname}.stb1 union all select avg(c1) a from {dbname}.stb1)")
 tdSql.checkRows(1)
 tdSql.checkData(0,0,14.086956522)

@@ -224,38 +198,38 @@ class TDTestCase:

 tdSql.execute(" create database if not exists db ")
 tdSql.execute(" use db ")
-tdSql.execute(" create stable st (ts timestamp , c1 int ,c2 float) tags(t1 int) ")
+tdSql.execute(" create stable db.st (ts timestamp , c1 int ,c2 float) tags(t1 int) ")
-tdSql.execute(" create table tb1 using st tags(1) ")
+tdSql.execute(" create table db.tb1 using db.st tags(1) ")
-tdSql.execute(" create table tb2 using st tags(2) ")
+tdSql.execute(" create table db.tb2 using db.st tags(2) ")


 for i in range(10):
 ts = i*10 + self.ts
-tdSql.execute(f" insert into tb1 values({ts},{i},{i}.0)")
+tdSql.execute(f" insert into db.tb1 values({ts},{i},{i}.0)")
-tdSql.execute(f" insert into tb2 values({ts},{i},{i}.0)")
+tdSql.execute(f" insert into db.tb2 values({ts},{i},{i}.0)")

-tdSql.query("select avg(tb1.c1), avg(tb2.c2) from tb1, tb2 where tb1.ts=tb2.ts")
+tdSql.query(f"select avg(tb1.c1), avg(tb2.c2) from db.tb1 tb1, db.tb2 tb2 where tb1.ts=tb2.ts")
 tdSql.checkRows(1)
 tdSql.checkData(0,0,4.500000000)
 tdSql.checkData(0,1,4.500000000)

 # group by
-tdSql.execute(" use testdb ")
+tdSql.execute(f" use {dbname} ")

 # partition by tbname or partition by tag
-tdSql.query("select avg(c1) from stb1 partition by tbname")
+tdSql.query(f"select avg(c1) from {dbname}.stb1 partition by tbname")
 tdSql.checkRows(20)

 # nest query for support max
-tdSql.query("select avg(c2+2)+1 from (select avg(c1) c2 from stb1)")
+tdSql.query(f"select avg(c2+2)+1 from (select avg(c1) c2 from {dbname}.stb1)")
 tdSql.checkData(0,0,17.086956522)
-tdSql.query("select avg(c1+2) as c2 from (select ts ,c1 ,c2 from stb1)")
+tdSql.query(f"select avg(c1+2) as c2 from (select ts ,c1 ,c2 from {dbname}.stb1)")
 tdSql.checkData(0,0,16.086956522)
-tdSql.query("select avg(a+2) as c2 from (select ts ,abs(c1) a ,c2 from stb1)")
+tdSql.query(f"select avg(a+2) as c2 from (select ts ,abs(c1) a ,c2 from {dbname}.stb1)")
 tdSql.checkData(0,0,16.086956522)

 # mixup with other functions
-tdSql.query("select max(c1),count(c1),last(c2,c3),sum(c1+c2),avg(c1) from stb1")
+tdSql.query(f"select max(c1),count(c1),last(c2,c3),sum(c1+c2),avg(c1) from {dbname}.stb1")
 tdSql.checkData(0,0,28)
 tdSql.checkData(0,1,184)
 tdSql.checkData(0,2,-99999)

@@ -6,11 +6,8 @@ import random


 class TDTestCase:
-updatecfgDict = {'debugFlag': 143 ,"cDebugFlag":143,"uDebugFlag":143 ,"rpcDebugFlag":143 , "tmrDebugFlag":143 ,
-"jniDebugFlag":143 ,"simDebugFlag":143,"dDebugFlag":143, "dDebugFlag":143,"vDebugFlag":143,"mDebugFlag":143,"qDebugFlag":143,
-"wDebugFlag":143,"sDebugFlag":143,"tsdbDebugFlag":143,"tqDebugFlag":143 ,"fsDebugFlag":143 ,"udfDebugFlag":143,
-"maxTablesPerVnode":2 ,"minTablesPerVnode":2,"tableIncStepPerVnode":2 }
+updatecfgDict = {"maxTablesPerVnode":2 ,"minTablesPerVnode":2,"tableIncStepPerVnode":2 }

 def init(self, conn, logSql):
 tdLog.debug("start to execute %s" % __file__)
 tdSql.init(conn.cursor())
@@ -35,76 +32,52 @@ class TDTestCase:
 else:
 tdLog.info(" count function work as expected, sql : %s "% max_sql)

-def prepare_datas_of_distribute(self):
+def prepare_datas_of_distribute(self, dbname="testdb"):

 # prepate datas for 20 tables distributed at different vgroups
-tdSql.execute("create database if not exists testdb keep 3650 duration 1000 vgroups 5")
+tdSql.execute(f"create database if not exists {dbname} keep 3650 duration 1000 vgroups 5")
-tdSql.execute(" use testdb ")
+tdSql.execute(f" use {dbname} ")
 tdSql.execute(
-'''create table stb1
+f'''create table {dbname}.stb1
 (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
 tags (t0 timestamp, t1 int, t2 bigint, t3 smallint, t4 tinyint, t5 float, t6 double, t7 bool, t8 binary(16),t9 nchar(32))
 '''
 )

-tdSql.execute(
-'''
-create table t1
-(ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
-'''
-)
 for i in range(20):
-tdSql.execute(f'create table ct{i+1} using stb1 tags ( now(), {1*i}, {11111*i}, {111*i}, {1*i}, {1.11*i}, {11.11*i}, {i%2}, "binary{i}", "nchar{i}" )')
+tdSql.execute(f'create table {dbname}.ct{i+1} using {dbname}.stb1 tags ( now(), {1*i}, {11111*i}, {111*i}, {1*i}, {1.11*i}, {11.11*i}, {i%2}, "binary{i}", "nchar{i}" )')

 for i in range(9):
 tdSql.execute(
-f"insert into ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
+f"insert into {dbname}.ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
 )
 tdSql.execute(
-f"insert into ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
+f"insert into {dbname}.ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
 )

 for i in range(1,21):
 if i ==1 or i == 4:
 continue
 else:
-tbname = "ct"+f'{i}'
+tbname = f"{dbname}.ct{i}"
 for j in range(9):
 tdSql.execute(
 f"insert into {tbname} values ( now()-{(i+j)*10}s, {1*(j+i)}, {11111*(j+i)}, {111*(j+i)}, {11*(j)}, {1.11*(j+i)}, {11.11*(j+i)}, {(j+i)%2}, 'binary{j}', 'nchar{j}', now()+{1*j}a )"
 )
-tdSql.execute("insert into ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar0', now()+8a )")
+tdSql.execute(f"insert into {dbname}.ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar0', now()+8a )")
-tdSql.execute("insert into ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
+tdSql.execute(f"insert into {dbname}.ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
-tdSql.execute("insert into ct1 values (now()+15s, 9, -99999, -999, -99, -9.99, NULL, 1, 'binary9', 'nchar9', now()+9a )")
+tdSql.execute(f"insert into {dbname}.ct1 values (now()+15s, 9, -99999, -999, -99, -9.99, NULL, 1, 'binary9', 'nchar9', now()+9a )")
-tdSql.execute("insert into ct1 values (now()+20s, 9, -99999, -999, NULL, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
+tdSql.execute(f"insert into {dbname}.ct1 values (now()+20s, 9, -99999, -999, NULL, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")

-tdSql.execute("insert into ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
+tdSql.execute(f"insert into {dbname}.ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
-tdSql.execute("insert into ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
+tdSql.execute(f"insert into {dbname}.ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
-tdSql.execute("insert into ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
+tdSql.execute(f"insert into {dbname}.ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")

-tdSql.execute(
-f'''insert into t1 values
-( '2020-04-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
-( '2020-10-21 01:01:01.000', 1, 11111, 111, 11, 1.11, 11.11, 1, "binary1", "nchar1", now()+1a )
-( '2020-12-31 01:01:01.000', 2, 22222, 222, 22, 2.22, 22.22, 0, "binary2", "nchar2", now()+2a )
-( '2021-01-01 01:01:06.000', 3, 33333, 333, 33, 3.33, 33.33, 0, "binary3", "nchar3", now()+3a )
-( '2021-05-07 01:01:10.000', 4, 44444, 444, 44, 4.44, 44.44, 1, "binary4", "nchar4", now()+4a )
-( '2021-07-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
-( '2021-09-30 01:01:16.000', 5, 55555, 555, 55, 5.55, 55.55, 0, "binary5", "nchar5", now()+5a )
-( '2022-02-01 01:01:20.000', 6, 66666, 666, 66, 6.66, 66.66, 1, "binary6", "nchar6", now()+6a )
-( '2022-10-28 01:01:26.000', 7, 00000, 000, 00, 0.00, 00.00, 1, "binary7", "nchar7", "1970-01-01 08:00:00.000" )
-( '2022-12-01 01:01:30.000', 8, -88888, -888, -88, -8.88, -88.88, 0, "binary8", "nchar8", "1969-01-01 01:00:00.000" )
-( '2022-12-31 01:01:36.000', 9, -99999999999999999, -999, -99, -9.99, -999999999999999999999.99, 1, "binary9", "nchar9", "1900-01-01 00:00:00.000" )
-( '2023-02-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
-'''
-)

 tdLog.info(" prepare data for distributed_aggregate done! ")

-def check_distribute_datas(self):
+def check_distribute_datas(self, dbname="testdb"):
 # get vgroup_ids of all
-tdSql.query("show vgroups ")
+tdSql.query(f"show {dbname}.vgroups ")
 vgroups = tdSql.queryResult

 vnode_tables={}
@@ -114,7 +87,7 @@ class TDTestCase:


 # check sub_table of per vnode ,make sure sub_table has been distributed
-tdSql.query("show tables like 'ct%'")
+tdSql.query(f"show {dbname}.tables like 'ct%'")
 table_names = tdSql.queryResult
 tablenames = []
 for table_name in table_names:
@@ -128,7 +101,7 @@ class TDTestCase:
 if count < 2:
 tdLog.exit(" the datas of all not satisfy sub_table has been distributed ")

-def check_count_distribute_diff_vnode(self,col_name):
+def check_count_distribute_diff_vnode(self,col_name, dbname="testdb"):

 vgroup_ids = []
 for k ,v in self.vnode_disbutes.items():
@@ -146,9 +119,9 @@ class TDTestCase:

 tbname_filters = tbname_ins[:-1]

-max_sql = f"select count({col_name}) from stb1 where tbname in ({tbname_filters});"
+max_sql = f"select count({col_name}) from {dbname}.stb1 where tbname in ({tbname_filters});"

-same_sql = f"select sum(c) from (select {col_name} ,1 as c from stb1 where tbname in ({tbname_filters}) and {col_name} is not null) "
+same_sql = f"select sum(c) from (select {col_name} ,1 as c from {dbname}.stb1 where tbname in ({tbname_filters}) and {col_name} is not null) "

 tdSql.query(max_sql)
 max_result = tdSql.queryResult
@@ -161,16 +134,16 @@ class TDTestCase:
 else:
 tdLog.info(" count function work as expected, sql : %s "% max_sql)

-def check_count_status(self):
+def check_count_status(self, dbname="testdb"):
 # check max function work status

-tdSql.query("show tables like 'ct%'")
+tdSql.query(f"show {dbname}.tables like 'ct%'")
 table_names = tdSql.queryResult
 tablenames = []
 for table_name in table_names:
-tablenames.append(table_name[0])
+tablenames.append(f"{dbname}.{table_name[0]}")

-tdSql.query("desc stb1")
+tdSql.query(f"desc {dbname}.stb1")
 col_names = tdSql.queryResult

 colnames = []
@@ -186,34 +159,33 @@ class TDTestCase:

 for colname in colnames:
 if colname.startswith("c"):
-self.check_count_distribute_diff_vnode(colname)
+self.check_count_distribute_diff_vnode(colname, dbname)
 else:
-# self.check_count_distribute_diff_vnode(colname) # bug for tag
+# self.check_count_distribute_diff_vnode(colname, dbname) # bug for tag
 pass

-def distribute_agg_query(self):
+def distribute_agg_query(self, dbname="testdb"):
 # basic filter
-tdSql.query("select count(c1) from stb1 ")
+tdSql.query(f"select count(c1) from {dbname}.stb1 ")
 tdSql.checkData(0,0,184)

-tdSql.query("select count(c1) from stb1 where t1=1")
+tdSql.query(f"select count(c1) from {dbname}.stb1 where t1=1")
 tdSql.checkData(0,0,9)

-tdSql.query("select count(c1+c2) from stb1 where c1 =1 ")
+tdSql.query(f"select count(c1+c2) from {dbname}.stb1 where c1 =1 ")
 tdSql.checkData(0,0,2)

-tdSql.query("select count(c1) from stb1 where tbname=\"ct2\"")
+tdSql.query(f"select count(c1) from {dbname}.stb1 where tbname=\"ct2\"")
 tdSql.checkData(0,0,9)

-tdSql.query("select count(c1) from stb1 partition by tbname")
+tdSql.query(f"select count(c1) from {dbname}.stb1 partition by tbname")
 tdSql.checkRows(20)

-tdSql.query("select count(c1) from stb1 where t1> 4 partition by tbname")
+tdSql.query(f"select count(c1) from {dbname}.stb1 where t1> 4 partition by tbname")
 tdSql.checkRows(15)

 # union all
-tdSql.query("select count(c1) from stb1 union all select count(c1) from stb1 ")
+tdSql.query(f"select count(c1) from {dbname}.stb1 union all select count(c1) from {dbname}.stb1 ")
 tdSql.checkRows(2)
 tdSql.checkData(0,0,184)

@@ -221,60 +193,60 @@ class TDTestCase:

 tdSql.execute(" create database if not exists db ")
 tdSql.execute(" use db ")
-tdSql.execute(" create stable st (ts timestamp , c1 int ,c2 float) tags(t1 int) ")
+tdSql.execute(" create stable db.st (ts timestamp , c1 int ,c2 float) tags(t1 int) ")
-tdSql.execute(" create table tb1 using st tags(1) ")
+tdSql.execute(" create table db.tb1 using db.st tags(1) ")
-tdSql.execute(" create table tb2 using st tags(2) ")
+tdSql.execute(" create table db.tb2 using db.st tags(2) ")


 for i in range(10):
 ts = i*10 + self.ts
-tdSql.execute(f" insert into tb1 values({ts},{i},{i}.0)")
+tdSql.execute(f" insert into db.tb1 values({ts},{i},{i}.0)")
-tdSql.execute(f" insert into tb2 values({ts},{i},{i}.0)")
+tdSql.execute(f" insert into db.tb2 values({ts},{i},{i}.0)")

-tdSql.query("select count(tb1.c1), count(tb2.c2) from tb1, tb2 where tb1.ts=tb2.ts")
+tdSql.query(f"select count(tb1.c1), count(tb2.c2) from db.tb1 tb1, db.tb2 tb2 where tb1.ts=tb2.ts")
 tdSql.checkRows(1)
 tdSql.checkData(0,0,10)
 tdSql.checkData(0,1,10)

 # group by
-tdSql.execute(" use testdb ")
+tdSql.execute(f" use {dbname} ")

-tdSql.query(" select count(*) from stb1 ")
+tdSql.query(f"select count(*) from {dbname}.stb1 ")
 tdSql.checkData(0,0,187)
-tdSql.query(" select count(*) from stb1 group by t1 ")
+tdSql.query(f"select count(*) from {dbname}.stb1 group by t1 ")
 tdSql.checkRows(20)
-tdSql.query(" select count(*) from stb1 group by c1 ")
+tdSql.query(f"select count(*) from {dbname}.stb1 group by c1 ")
 tdSql.checkRows(30)
-tdSql.query(" select count(*) from stb1 group by c2 ")
+tdSql.query(f"select count(*) from {dbname}.stb1 group by c2 ")
 tdSql.checkRows(31)

 # partition by tbname or partition by tag
-tdSql.query("select max(c1),tbname from stb1 partition by tbname")
+tdSql.query(f"select max(c1),tbname from {dbname}.stb1 partition by tbname")
 query_data = tdSql.queryResult

 for row in query_data:
-tbname = row[1]
+tbname = f"{dbname}.{row[1]}"
-tdSql.query(" select max(c1) from %s "%tbname)
+tdSql.query(f"select max(c1) from %s "%tbname)
 tdSql.checkData(0,0,row[0])

-tdSql.query("select max(c1),tbname from stb1 partition by t1")
+tdSql.query(f"select max(c1),tbname from {dbname}.stb1 partition by t1")
 query_data = tdSql.queryResult

 for row in query_data:
-tbname = row[1]
+tbname = f"{dbname}.{row[1]}"
-tdSql.query(" select max(c1) from %s "%tbname)
+tdSql.query(f"select max(c1) from %s "%tbname)
 tdSql.checkData(0,0,row[0])

 # nest query for support max
-tdSql.query("select abs(c2+2)+1 from (select count(c1) c2 from stb1)")
+tdSql.query(f"select abs(c2+2)+1 from (select count(c1) c2 from {dbname}.stb1)")
 tdSql.checkData(0,0,187.000000000)
-tdSql.query("select count(c1+2) as c2 from (select ts ,c1 ,c2 from stb1)")
+tdSql.query(f"select count(c1+2) as c2 from (select ts ,c1 ,c2 from {dbname}.stb1)")
 tdSql.checkData(0,0,184)
-tdSql.query("select count(a+2) as c2 from (select ts ,abs(c1) a ,c2 from stb1)")
+tdSql.query(f"select count(a+2) as c2 from (select ts ,abs(c1) a ,c2 from {dbname}.stb1)")
 tdSql.checkData(0,0,184)

 # mixup with other functions
-tdSql.query("select max(c1),count(c1),last(c2,c3) from stb1")
+tdSql.query(f"select max(c1),count(c1),last(c2,c3) from {dbname}.stb1")
 tdSql.checkData(0,0,28)
 tdSql.checkData(0,1,184)
 tdSql.checkData(0,2,-99999)

@@ -6,10 +6,8 @@ import random


 class TDTestCase:
-updatecfgDict = {'debugFlag': 143 ,"cDebugFlag":143,"uDebugFlag":143 ,"rpcDebugFlag":143 , "tmrDebugFlag":143 ,
-"jniDebugFlag":143 ,"simDebugFlag":143,"dDebugFlag":143, "dDebugFlag":143,"vDebugFlag":143,"mDebugFlag":143,"qDebugFlag":143,
-"wDebugFlag":143,"sDebugFlag":143,"tsdbDebugFlag":143,"tqDebugFlag":143 ,"fsDebugFlag":143 ,"udfDebugFlag":143,
-"maxTablesPerVnode":2 ,"minTablesPerVnode":2,"tableIncStepPerVnode":2 }
+updatecfgDict = {"maxTablesPerVnode":2 ,"minTablesPerVnode":2,"tableIncStepPerVnode":2 }

 def init(self, conn, logSql):
 tdLog.debug("start to execute %s" % __file__)
@@ -36,75 +34,52 @@ class TDTestCase:
 tdLog.info(" max function work as expected, sql : %s "% max_sql)


-def prepare_datas_of_distribute(self):
+def prepare_datas_of_distribute(self, dbname="testdb"):

 # prepate datas for 20 tables distributed at different vgroups
-tdSql.execute("create database if not exists testdb keep 3650 duration 1000 vgroups 5")
+tdSql.execute(f"create database if not exists {dbname} keep 3650 duration 1000 vgroups 5")
-tdSql.execute(" use testdb ")
+tdSql.execute(f" use {dbname} ")
 tdSql.execute(
-'''create table stb1
+f'''create table {dbname}.stb1
 (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
 tags (t0 timestamp, t1 int, t2 bigint, t3 smallint, t4 tinyint, t5 float, t6 double, t7 bool, t8 binary(16),t9 nchar(32))
 '''
 )

-tdSql.execute(
-'''
-create table t1
-(ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
-'''
-)
 for i in range(20):
-tdSql.execute(f'create table ct{i+1} using stb1 tags ( now(), {1*i}, {11111*i}, {111*i}, {1*i}, {1.11*i}, {11.11*i}, {i%2}, "binary{i}", "nchar{i}" )')
+tdSql.execute(f'create table {dbname}.ct{i+1} using {dbname}.stb1 tags ( now(), {1*i}, {11111*i}, {111*i}, {1*i}, {1.11*i}, {11.11*i}, {i%2}, "binary{i}", "nchar{i}" )')

 for i in range(9):
 tdSql.execute(
-f"insert into ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
+f"insert into {dbname}.ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
 )
 tdSql.execute(
-f"insert into ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
+f"insert into {dbname}.ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
 )

 for i in range(1,21):
 if i ==1 or i == 4:
 continue
 else:
-tbname = "ct"+f'{i}'
+tbname = f"{dbname}.ct{i}"
 for j in range(9):
 tdSql.execute(
 f"insert into {tbname} values ( now()-{(i+j)*10}s, {1*(j+i)}, {11111*(j+i)}, {111*(j+i)}, {11*(j)}, {1.11*(j+i)}, {11.11*(j+i)}, {(j+i)%2}, 'binary{j}', 'nchar{j}', now()+{1*j}a )"
 )
-tdSql.execute("insert into ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar0', now()+8a )")
+tdSql.execute(f"insert into {dbname}.ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar0', now()+8a )")
-tdSql.execute("insert into ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
+tdSql.execute(f"insert into {dbname}.ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
-tdSql.execute("insert into ct1 values (now()+15s, 9, -99999, -999, -99, -9.99, NULL, 1, 'binary9', 'nchar9', now()+9a )")
+tdSql.execute(f"insert into {dbname}.ct1 values (now()+15s, 9, -99999, -999, -99, -9.99, NULL, 1, 'binary9', 'nchar9', now()+9a )")
-tdSql.execute("insert into ct1 values (now()+20s, 9, -99999, -999, NULL, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
+tdSql.execute(f"insert into {dbname}.ct1 values (now()+20s, 9, -99999, -999, NULL, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")

-tdSql.execute("insert into ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
+tdSql.execute(f"insert into {dbname}.ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
-tdSql.execute("insert into ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
+tdSql.execute(f"insert into {dbname}.ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
-tdSql.execute("insert into ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
+tdSql.execute(f"insert into {dbname}.ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")

-tdSql.execute(
-f'''insert into t1 values
-( '2020-04-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
-( '2020-10-21 01:01:01.000', 1, 11111, 111, 11, 1.11, 11.11, 1, "binary1", "nchar1", now()+1a )
-( '2020-12-31 01:01:01.000', 2, 22222, 222, 22, 2.22, 22.22, 0, "binary2", "nchar2", now()+2a )
-( '2021-01-01 01:01:06.000', 3, 33333, 333, 33, 3.33, 33.33, 0, "binary3", "nchar3", now()+3a )
-( '2021-05-07 01:01:10.000', 4, 44444, 444, 44, 4.44, 44.44, 1, "binary4", "nchar4", now()+4a )
-( '2021-07-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
-( '2021-09-30 01:01:16.000', 5, 55555, 555, 55, 5.55, 55.55, 0, "binary5", "nchar5", now()+5a )
-( '2022-02-01 01:01:20.000', 6, 66666, 666, 66, 6.66, 66.66, 1, "binary6", "nchar6", now()+6a )
-( '2022-10-28 01:01:26.000', 7, 00000, 000, 00, 0.00, 00.00, 1, "binary7", "nchar7", "1970-01-01 08:00:00.000" )
-( '2022-12-01 01:01:30.000', 8, -88888, -888, -88, -8.88, -88.88, 0, "binary8", "nchar8", "1969-01-01 01:00:00.000" )
-( '2022-12-31 01:01:36.000', 9, -99999999999999999, -999, -99, -9.99, -999999999999999999999.99, 1, "binary9", "nchar9", "1900-01-01 00:00:00.000" )
-( '2023-02-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
-'''
-)

 tdLog.info(" prepare data for distributed_aggregate done! ")

-def check_distribute_datas(self):
+def check_distribute_datas(self, dbname="testdb"):
 # get vgroup_ids of all
-tdSql.query("show vgroups ")
+tdSql.query(f"show {dbname}.vgroups ")
 vgroups = tdSql.queryResult

 vnode_tables={}
@@ -112,9 +87,8 @@ class TDTestCase:
 for vgroup_id in vgroups:
 vnode_tables[vgroup_id[0]]=[]


 # check sub_table of per vnode ,make sure sub_table has been distributed
-tdSql.query("show tables like 'ct%'")
+tdSql.query(f"show {dbname}.tables like 'ct%'")
 table_names = tdSql.queryResult
 tablenames = []
 for table_name in table_names:
@@ -128,7 +102,7 @@ class TDTestCase:
 if count < 2:
 tdLog.exit(" the datas of all not satisfy sub_table has been distributed ")

-def check_max_distribute_diff_vnode(self,col_name):
+def check_max_distribute_diff_vnode(self,col_name, dbname="testdb"):

 vgroup_ids = []
 for k ,v in self.vnode_disbutes.items():
@@ -146,9 +120,9 @@ class TDTestCase:

 tbname_filters = tbname_ins[:-1]

-max_sql = f"select max({col_name}) from stb1 where tbname in ({tbname_filters});"
+max_sql = f"select max({col_name}) from {dbname}.stb1 where tbname in ({tbname_filters});"

-same_sql = f"select {col_name} from stb1 where tbname in ({tbname_filters}) order by {col_name} desc limit 1"
+same_sql = f"select {col_name} from {dbname}.stb1 where tbname in ({tbname_filters}) order by {col_name} desc limit 1"

 tdSql.query(max_sql)
 max_result = tdSql.queryResult
@@ -161,16 +135,16 @@ class TDTestCase:
 else:
 tdLog.info(" max function work as expected, sql : %s "% max_sql)

-def check_max_status(self):
+def check_max_status(self, dbname="testdb"):
 # check max function work status

-tdSql.query("show tables like 'ct%'")
+tdSql.query(f"show {dbname}.tables like 'ct%'")
 table_names = tdSql.queryResult
 tablenames = []
 for table_name in table_names:
-tablenames.append(table_name[0])
+tablenames.append(f"{dbname}.{table_name[0]}")

-tdSql.query("desc stb1")
+tdSql.query(f"desc {dbname}.stb1")
 col_names = tdSql.queryResult

 colnames = []
@@ -186,34 +160,33 @@ class TDTestCase:

 for colname in colnames:
 if colname.startswith("c"):
-self.check_max_distribute_diff_vnode(colname)
+self.check_max_distribute_diff_vnode(colname, dbname)
 else:
-# self.check_max_distribute_diff_vnode(colname) # bug for tag
+# self.check_max_distribute_diff_vnode(colname, dbname) # bug for tag
 pass

-def distribute_agg_query(self):
+def distribute_agg_query(self, dbname="testdb"):
 # basic filter
-tdSql.query("select max(c1) from stb1 where c1 is null")
+tdSql.query(f"select max(c1) from {dbname}.stb1 where c1 is null")
 tdSql.checkRows(0)

-tdSql.query("select max(c1) from stb1 where t1=1")
+tdSql.query(f"select max(c1) from {dbname}.stb1 where t1=1")
 tdSql.checkData(0,0,10)

-tdSql.query("select max(c1+c2) from stb1 where c1 =1 ")
+tdSql.query(f"select max(c1+c2) from {dbname}.stb1 where c1 =1 ")
 tdSql.checkData(0,0,11112.000000000)

-tdSql.query("select max(c1) from stb1 where tbname=\"ct2\"")
+tdSql.query(f"select max(c1) from {dbname}.stb1 where tbname=\"ct2\"")
 tdSql.checkData(0,0,10)

-tdSql.query("select max(c1) from stb1 partition by tbname")
+tdSql.query(f"select max(c1) from {dbname}.stb1 partition by tbname")
 tdSql.checkRows(20)

-tdSql.query("select max(c1) from stb1 where t1> 4 partition by tbname")
+tdSql.query(f"select max(c1) from {dbname}.stb1 where t1> 4 partition by tbname")
 tdSql.checkRows(15)

 # union all
-tdSql.query("select max(c1) from stb1 union all select max(c1) from stb1 ")
+tdSql.query(f"select max(c1) from {dbname}.stb1 union all select max(c1) from {dbname}.stb1 ")
 tdSql.checkRows(2)
 tdSql.checkData(0,0,28)

@@ -221,45 +194,45 @@ class TDTestCase:

 tdSql.execute(" create database if not exists db ")
 tdSql.execute(" use db ")
-tdSql.execute(" create stable st (ts timestamp , c1 int ,c2 float) tags(t1 int) ")
+tdSql.execute(" create stable db.st (ts timestamp , c1 int ,c2 float) tags(t1 int) ")
-tdSql.execute(" create table tb1 using st tags(1) ")
+tdSql.execute(" create table db.tb1 using db.st tags(1) ")
-tdSql.execute(" create table tb2 using st tags(2) ")
+tdSql.execute(" create table db.tb2 using db.st tags(2) ")


 for i in range(10):
 ts = i*10 + self.ts
-tdSql.execute(f" insert into tb1 values({ts},{i},{i}.0)")
+tdSql.execute(f" insert into db.tb1 values({ts},{i},{i}.0)")
-tdSql.execute(f" insert into tb2 values({ts},{i},{i}.0)")
+tdSql.execute(f" insert into db.tb2 values({ts},{i},{i}.0)")

-tdSql.query("select max(tb1.c1), tb2.c2 from tb1, tb2 where tb1.ts=tb2.ts")
+tdSql.query(f"select max(tb1.c1), tb2.c2 from db.tb1 tb1, db.tb2 tb2 where tb1.ts=tb2.ts")
 tdSql.checkRows(1)
 tdSql.checkData(0,0,9)
 tdSql.checkData(0,0,9.00000)

 # group by
-tdSql.execute(" use testdb ")
+tdSql.execute("use testdb ")
-tdSql.query(" select max(c1),c1 from stb1 group by t1 ")
+tdSql.query(f"select max(c1),c1 from {dbname}.stb1 group by t1 ")
 tdSql.checkRows(20)
-tdSql.query(" select max(c1),c1 from stb1 group by c1 ")
+tdSql.query(f"select max(c1),c1 from {dbname}.stb1 group by c1 ")
 tdSql.checkRows(30)
-tdSql.query(" select max(c1),c2 from stb1 group by c2 ")
+tdSql.query(f"select max(c1),c2 from {dbname}.stb1 group by c2 ")
 tdSql.checkRows(31)

 # selective common cols of datas
-tdSql.query("select max(c1),c2,c3,c5 from stb1")
+tdSql.query(f"select max(c1),c2,c3,c5 from {dbname}.stb1")
 tdSql.checkRows(1)
 tdSql.checkData(0,0,28)
 tdSql.checkData(0,1,311108)
 tdSql.checkData(0,2,3108)
 tdSql.checkData(0,3,31.08000)

-tdSql.query("select max(c1),t1,c2,t3 from stb1")
+tdSql.query(f"select max(c1),t1,c2,t3 from {dbname}.stb1")
 tdSql.checkRows(1)
 tdSql.checkData(0,0,28)
 tdSql.checkData(0,1,19)
 tdSql.checkData(0,2,311108)

-tdSql.query("select max(c1),ceil(t1),pow(c2,1)+2,abs(t3) from stb1")
+tdSql.query(f"select max(c1),ceil(t1),pow(c2,1)+2,abs(t3) from {dbname}.stb1")
 tdSql.checkRows(1)
 tdSql.checkData(0,0,28)
 tdSql.checkData(0,1,19)
@@ -267,32 +240,32 @@ class TDTestCase:
 tdSql.checkData(0,3,2109)

 # partition by tbname or partition by tag
-tdSql.query("select max(c1),tbname from stb1 partition by tbname")
+tdSql.query(f"select max(c1),tbname from {dbname}.stb1 partition by tbname")
 query_data = tdSql.queryResult

 for row in query_data:
-tbname = row[1]
+tbname = f"{dbname}.{row[1]}"
-tdSql.query(" select max(c1) from %s "%tbname)
+tdSql.query(f"select max(c1) from %s "%tbname)
 tdSql.checkData(0,0,row[0])

-tdSql.query("select max(c1),tbname from stb1 partition by t1")
+tdSql.query(f"select max(c1),tbname from {dbname}.stb1 partition by t1")
 query_data = tdSql.queryResult

 for row in query_data:
-tbname = row[1]
+tbname = f"{dbname}.{row[1]}"
-tdSql.query(" select max(c1) from %s "%tbname)
+tdSql.query(f"select max(c1) from %s "%tbname)
 tdSql.checkData(0,0,row[0])

 # nest query for support max
-tdSql.query("select abs(c2+2)+1 from (select max(c1) c2 from stb1)")
+tdSql.query(f"select abs(c2+2)+1 from (select max(c1) c2 from {dbname}.stb1)")
 tdSql.checkData(0,0,31.000000000)
-tdSql.query("select max(c1+2)+1 as c2 from (select ts ,c1 ,c2 from stb1)")
+tdSql.query(f"select max(c1+2)+1 as c2 from (select ts ,c1 ,c2 from {dbname}.stb1)")
 tdSql.checkData(0,0,31.000000000)
-tdSql.query("select max(a+2)+1 as c2 from (select ts ,abs(c1) a ,c2 from stb1)")
+tdSql.query(f"select max(a+2)+1 as c2 from (select ts ,abs(c1) a ,c2 from {dbname}.stb1)")
 tdSql.checkData(0,0,31.000000000)

 # mixup with other functions
-tdSql.query("select max(c1),count(c1),last(c2,c3) from stb1")
+tdSql.query(f"select max(c1),count(c1),last(c2,c3) from {dbname}.stb1")
 tdSql.checkData(0,0,28)
 tdSql.checkData(0,1,184)
 tdSql.checkData(0,2,-99999)

@ -6,10 +6,8 @@ import random
|
||||||
|
|
||||||
|
|
||||||
class TDTestCase:
|
class TDTestCase:
|
||||||
updatecfgDict = {'debugFlag': 143 ,"cDebugFlag":143,"uDebugFlag":143 ,"rpcDebugFlag":143 , "tmrDebugFlag":143 ,
|
|
||||||
"jniDebugFlag":143 ,"simDebugFlag":143,"dDebugFlag":143, "dDebugFlag":143,"vDebugFlag":143,"mDebugFlag":143,"qDebugFlag":143,
|
updatecfgDict = {"maxTablesPerVnode":2 ,"minTablesPerVnode":2,"tableIncStepPerVnode":2 }
|
||||||
"wDebugFlag":143,"sDebugFlag":143,"tsdbDebugFlag":143,"tqDebugFlag":143 ,"fsDebugFlag":143 ,"udfDebugFlag":143,
|
|
||||||
"maxTablesPerVnode":2 ,"minTablesPerVnode":2,"tableIncStepPerVnode":2 }
|
|
||||||
|
|
||||||
def init(self, conn, logSql):
|
def init(self, conn, logSql):
|
||||||
tdLog.debug("start to execute %s" % __file__)
|
tdLog.debug("start to execute %s" % __file__)
|
||||||
|
@ -35,76 +33,52 @@ class TDTestCase:
|
||||||
else:
|
else:
|
||||||
tdLog.info(" min function work as expected, sql : %s "% min_sql)
|
tdLog.info(" min function work as expected, sql : %s "% min_sql)
|
||||||
|
|
||||||
|
def prepare_datas_of_distribute(self, dbname="testdb"):
|
||||||
def prepare_datas_of_distribute(self):
|
|
||||||
|
|
||||||
# prepate datas for 20 tables distributed at different vgroups
|
# prepate datas for 20 tables distributed at different vgroups
|
||||||
tdSql.execute("create database if not exists testdb keep 3650 duration 1000 vgroups 5")
|
tdSql.execute(f"create database if not exists {dbname} keep 3650 duration 1000 vgroups 5")
|
||||||
tdSql.execute(" use testdb ")
|
tdSql.execute(f" use {dbname} ")
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
'''create table stb1
|
f'''create table {dbname}.stb1
|
||||||
(ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
|
(ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
|
||||||
tags (t0 timestamp, t1 int, t2 bigint, t3 smallint, t4 tinyint, t5 float, t6 double, t7 bool, t8 binary(16),t9 nchar(32))
|
tags (t0 timestamp, t1 int, t2 bigint, t3 smallint, t4 tinyint, t5 float, t6 double, t7 bool, t8 binary(16),t9 nchar(32))
|
||||||
'''
|
'''
|
||||||
)
|
)
|
||||||
|
|
||||||
tdSql.execute(
|
|
||||||
'''
|
|
||||||
create table t1
|
|
||||||
(ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
|
|
||||||
'''
|
|
||||||
)
|
|
||||||
for i in range(20):
|
for i in range(20):
|
||||||
tdSql.execute(f'create table ct{i+1} using stb1 tags ( now(), {1*i}, {11111*i}, {111*i}, {1*i}, {1.11*i}, {11.11*i}, {i%2}, "binary{i}", "nchar{i}" )')
|
tdSql.execute(f'create table {dbname}.ct{i+1} using {dbname}.stb1 tags ( now(), {1*i}, {11111*i}, {111*i}, {1*i}, {1.11*i}, {11.11*i}, {i%2}, "binary{i}", "nchar{i}" )')
|
||||||
|
|
||||||
for i in range(9):
|
for i in range(9):
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f"insert into ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
|
f"insert into {dbname}.ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
|
||||||
)
|
)
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f"insert into ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
|
f"insert into {dbname}.ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
|
||||||
)
|
)
|
||||||
|
|
||||||
for i in range(1,21):
|
for i in range(1,21):
|
||||||
if i ==1 or i == 4:
|
if i ==1 or i == 4:
|
||||||
continue
|
continue
|
||||||
else:
|
else:
|
||||||
tbname = "ct"+f'{i}'
|
tbname = f"{dbname}.ct{i}"
|
||||||
for j in range(9):
|
for j in range(9):
|
||||||
tdSql.execute(
|
tdSql.execute(
|
||||||
f"insert into {tbname} values ( now()-{(i+j)*10}s, {1*(j+i)}, {11111*(j+i)}, {111*(j+i)}, {11*(j)}, {1.11*(j+i)}, {11.11*(j+i)}, {(j+i)%2}, 'binary{j}', 'nchar{j}', now()+{1*j}a )"
|
f"insert into {tbname} values ( now()-{(i+j)*10}s, {1*(j+i)}, {11111*(j+i)}, {111*(j+i)}, {11*(j)}, {1.11*(j+i)}, {11.11*(j+i)}, {(j+i)%2}, 'binary{j}', 'nchar{j}', now()+{1*j}a )"
|
||||||
)
|
)
|
||||||
tdSql.execute("insert into ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar0', now()+8a )")
|
tdSql.execute(f"insert into {dbname}.ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar0', now()+8a )")
|
||||||
tdSql.execute("insert into ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
|
tdSql.execute(f"insert into {dbname}.ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
|
||||||
tdSql.execute("insert into ct1 values (now()+15s, 9, -99999, -999, -99, -9.99, NULL, 1, 'binary9', 'nchar9', now()+9a )")
|
tdSql.execute(f"insert into {dbname}.ct1 values (now()+15s, 9, -99999, -999, -99, -9.99, NULL, 1, 'binary9', 'nchar9', now()+9a )")
|
||||||
tdSql.execute("insert into ct1 values (now()+20s, 9, -99999, -999, NULL, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
|
tdSql.execute(f"insert into {dbname}.ct1 values (now()+20s, 9, -99999, -999, NULL, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
|
||||||
|
|
||||||
tdSql.execute("insert into ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
|
tdSql.execute(f"insert into {dbname}.ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
|
||||||
tdSql.execute("insert into ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
|
tdSql.execute(f"insert into {dbname}.ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
|
||||||
tdSql.execute("insert into ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
|
tdSql.execute(f"insert into {dbname}.ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
|
||||||
|
|
||||||
tdSql.execute(
|
|
||||||
f'''insert into t1 values
|
|
||||||
( '2020-04-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
|
||||||
( '2020-10-21 01:01:01.000', 1, 11111, 111, 11, 1.11, 11.11, 1, "binary1", "nchar1", now()+1a )
|
|
||||||
( '2020-12-31 01:01:01.000', 2, 22222, 222, 22, 2.22, 22.22, 0, "binary2", "nchar2", now()+2a )
|
|
||||||
( '2021-01-01 01:01:06.000', 3, 33333, 333, 33, 3.33, 33.33, 0, "binary3", "nchar3", now()+3a )
|
|
||||||
( '2021-05-07 01:01:10.000', 4, 44444, 444, 44, 4.44, 44.44, 1, "binary4", "nchar4", now()+4a )
|
|
||||||
( '2021-07-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
|
||||||
( '2021-09-30 01:01:16.000', 5, 55555, 555, 55, 5.55, 55.55, 0, "binary5", "nchar5", now()+5a )
|
|
||||||
( '2022-02-01 01:01:20.000', 6, 66666, 666, 66, 6.66, 66.66, 1, "binary6", "nchar6", now()+6a )
|
|
||||||
( '2022-10-28 01:01:26.000', 7, 00000, 000, 00, 0.00, 00.00, 1, "binary7", "nchar7", "1970-01-01 08:00:00.000" )
|
|
||||||
( '2022-12-01 01:01:30.000', 8, -88888, -888, -88, -8.88, -88.88, 0, "binary8", "nchar8", "1969-01-01 01:00:00.000" )
|
|
||||||
( '2022-12-31 01:01:36.000', 9, -99999999999999999, -999, -99, -9.99, -999999999999999999999.99, 1, "binary9", "nchar9", "1900-01-01 00:00:00.000" )
|
|
||||||
( '2023-02-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
|
|
||||||
'''
|
|
||||||
)
|
|
||||||
|
|
||||||
tdLog.info(" prepare data for distributed_aggregate done! ")
|
tdLog.info(" prepare data for distributed_aggregate done! ")
|
||||||
|
|
||||||
def check_distribute_datas(self):
|
def check_distribute_datas(self, dbname="testdb"):
|
||||||
# get vgroup_ids of all
|
# get vgroup_ids of all
|
||||||
tdSql.query("show vgroups ")
|
tdSql.query(f"show {dbname}.vgroups ")
|
||||||
vgroups = tdSql.queryResult
|
vgroups = tdSql.queryResult
|
||||||
|
|
||||||
vnode_tables={}
|
vnode_tables={}
|
||||||
|
@ -112,9 +86,8 @@ class TDTestCase:
|
||||||
for vgroup_id in vgroups:
|
for vgroup_id in vgroups:
|
||||||
vnode_tables[vgroup_id[0]]=[]
|
vnode_tables[vgroup_id[0]]=[]
|
||||||
|
|
||||||
|
|
||||||
# check sub_table of per vnode ,make sure sub_table has been distributed
|
# check sub_table of per vnode ,make sure sub_table has been distributed
|
||||||
tdSql.query("show tables like 'ct%'")
|
tdSql.query(f"show {dbname}.tables like 'ct%'")
|
||||||
table_names = tdSql.queryResult
|
table_names = tdSql.queryResult
|
||||||
tablenames = []
|
tablenames = []
|
||||||
for table_name in table_names:
|
for table_name in table_names:
|
||||||
|
@ -128,7 +101,7 @@ class TDTestCase:
|
||||||
if count < 2:
|
if count < 2:
|
||||||
tdLog.exit(" the datas of all not satisfy sub_table has been distributed ")
|
tdLog.exit(" the datas of all not satisfy sub_table has been distributed ")
|
||||||
|
|
||||||
def check_min_distribute_diff_vnode(self,col_name):
|
def check_min_distribute_diff_vnode(self,col_name, dbname="testdb"):
|
||||||
|
|
||||||
vgroup_ids = []
|
vgroup_ids = []
|
||||||
for k ,v in self.vnode_disbutes.items():
|
for k ,v in self.vnode_disbutes.items():
|
||||||
|
@ -146,9 +119,9 @@ class TDTestCase:
|
||||||
|
|
||||||
tbname_filters = tbname_ins[:-1]
|
tbname_filters = tbname_ins[:-1]
|
||||||
|
|
||||||
min_sql = f"select min({col_name}) from stb1 where tbname in ({tbname_filters});"
|
min_sql = f"select min({col_name}) from {dbname}.stb1 where tbname in ({tbname_filters});"
|
||||||
|
|
||||||
same_sql = f"select {col_name} from stb1 where tbname in ({tbname_filters}) and {col_name} is not null order by {col_name} asc limit 1"
|
same_sql = f"select {col_name} from {dbname}.stb1 where tbname in ({tbname_filters}) and {col_name} is not null order by {col_name} asc limit 1"
|
||||||
|
|
||||||
tdSql.query(min_sql)
|
tdSql.query(min_sql)
|
||||||
min_result = tdSql.queryResult
|
min_result = tdSql.queryResult
|
||||||
|
@ -161,16 +134,16 @@ class TDTestCase:
|
||||||
else:
|
else:
|
||||||
tdLog.info(" min function work as expected, sql : %s "% min_sql)
|
tdLog.info(" min function work as expected, sql : %s "% min_sql)
|
||||||
|
|
||||||
def check_min_status(self):
|
def check_min_status(self, dbname="testdb"):
|
||||||
# check max function work status
|
# check min function work status
|
||||||
|
|
||||||
tdSql.query("show tables like 'ct%'")
|
tdSql.query(f"show {dbname}.tables like 'ct%'")
|
||||||
table_names = tdSql.queryResult
|
table_names = tdSql.queryResult
|
||||||
tablenames = []
|
tablenames = []
|
||||||
for table_name in table_names:
|
for table_name in table_names:
|
||||||
tablenames.append(table_name[0])
|
tablenames.append(f"{dbname}.{table_name[0]}")
|
||||||
|
|
||||||
tdSql.query("desc stb1")
|
tdSql.query(f"desc {dbname}.stb1")
|
||||||
col_names = tdSql.queryResult
|
col_names = tdSql.queryResult
|
||||||
|
|
||||||
colnames = []
|
colnames = []
|
||||||
|
@ -182,119 +155,117 @@ class TDTestCase:
|
||||||
for colname in colnames:
|
for colname in colnames:
|
||||||
self.check_min_functions(tablename,colname)
|
self.check_min_functions(tablename,colname)
|
||||||
|
|
||||||
# check max function for different vnode
|
# check min function for different vnode
|
||||||
|
|
||||||
for colname in colnames:
|
for colname in colnames:
|
||||||
if colname.startswith("c"):
|
if colname.startswith("c"):
|
||||||
self.check_min_distribute_diff_vnode(colname)
|
self.check_min_distribute_diff_vnode(colname, dbname)
|
||||||
else:
|
else:
|
||||||
# self.check_min_distribute_diff_vnode(colname) # bug for tag
|
# self.check_min_distribute_diff_vnode(colname, dbname) # bug for tag
|
||||||
pass
|
pass
|
||||||
|
|
||||||
|
def distribute_agg_query(self, dbname="testdb"):
|
||||||
def distribute_agg_query(self):
|
|
||||||
# basic filter
|
# basic filter
|
||||||
tdSql.query("select min(c1) from stb1 where c1 is null")
|
tdSql.query(f"select min(c1) from {dbname}.stb1 where c1 is null")
|
||||||
tdSql.checkRows(0)
|
tdSql.checkRows(0)
|
||||||
|
|
||||||
tdSql.query("select min(c1) from stb1 where t1=1")
|
tdSql.query(f"select min(c1) from {dbname}.stb1 where t1=1")
|
||||||
tdSql.checkData(0,0,2)
|
tdSql.checkData(0,0,2)
|
||||||
|
|
||||||
tdSql.query("select min(c1+c2) from stb1 where c1 =1 ")
|
tdSql.query(f"select min(c1+c2) from {dbname}.stb1 where c1 =1 ")
|
||||||
tdSql.checkData(0,0,11112.000000000)
|
tdSql.checkData(0,0,11112.000000000)
|
||||||
|
|
||||||
tdSql.query("select min(c1) from stb1 where tbname=\"ct2\"")
|
tdSql.query(f"select min(c1) from {dbname}.stb1 where tbname=\"ct2\"")
|
||||||
tdSql.checkData(0,0,2)
|
tdSql.checkData(0, 0, 2)
|
||||||
|
|
||||||
tdSql.query("select min(c1) from stb1 partition by tbname")
|
tdSql.query(f"select min(c1) from {dbname}.stb1 partition by tbname")
|
||||||
tdSql.checkRows(20)
|
tdSql.checkRows(20)
|
||||||
|
|
||||||
tdSql.query("select min(c1) from stb1 where t1> 4 partition by tbname")
|
tdSql.query(f"select min(c1) from {dbname}.stb1 where t1> 4 partition by tbname")
|
||||||
tdSql.checkRows(15)
|
tdSql.checkRows(15)
|
||||||
|
|
||||||
# union all
|
# union all
|
||||||
tdSql.query("select min(c1) from stb1 union all select min(c1) from stb1 ")
|
tdSql.query(f"select min(c1) from {dbname}.stb1 union all select min(c1) from {dbname}.stb1 ")
|
||||||
tdSql.checkRows(2)
|
tdSql.checkRows(2)
|
||||||
tdSql.checkData(0,0,0)
|
tdSql.checkData(0, 0, 0)
|
||||||
|
|
||||||
# join
|
# join
|
||||||
|
|
||||||
tdSql.execute(" create database if not exists db ")
|
tdSql.execute(" create database if not exists db ")
|
||||||
tdSql.execute(" use db ")
|
tdSql.execute(" use db ")
|
||||||
tdSql.execute(" create stable st (ts timestamp , c1 int ,c2 float) tags(t1 int) ")
|
tdSql.execute(" create stable db.st (ts timestamp , c1 int ,c2 float) tags(t1 int) ")
|
||||||
tdSql.execute(" create table tb1 using st tags(1) ")
|
tdSql.execute(" create table db.tb1 using db.st tags(1) ")
|
||||||
tdSql.execute(" create table tb2 using st tags(2) ")
|
tdSql.execute(" create table db.tb2 using db.st tags(2) ")
|
||||||
|
|
||||||
|
|
||||||
for i in range(10):
|
for i in range(10):
|
||||||
ts = i*10 + self.ts
|
ts = i*10 + self.ts
|
||||||
tdSql.execute(f" insert into tb1 values({ts},{i},{i}.0)")
|
tdSql.execute(f" insert into db.tb1 values({ts},{i},{i}.0)")
|
||||||
tdSql.execute(f" insert into tb2 values({ts},{i},{i}.0)")
|
tdSql.execute(f" insert into db.tb2 values({ts},{i},{i}.0)")
|
||||||
|
|
||||||
tdSql.query("select min(tb1.c1), tb2.c2 from tb1, tb2 where tb1.ts=tb2.ts")
|
tdSql.query(f"select min(tb1.c1), tb2.c2 from db.tb1 tb1, db.tb2 tb2 where tb1.ts=tb2.ts")
|
||||||
tdSql.checkRows(1)
|
tdSql.checkRows(1)
|
||||||
tdSql.checkData(0,0,0)
|
tdSql.checkData(0,0,0)
|
||||||
tdSql.checkData(0,0,0.00000)
|
tdSql.checkData(0,0,0.00000)
|
||||||
|
|
||||||
# group by
|
# group by
|
||||||
tdSql.execute(" use testdb ")
|
tdSql.execute(f"use {dbname} ")
|
||||||
tdSql.query(" select min(c1),c1 from stb1 group by t1 ")
|
tdSql.query(f"select min(c1),c1 from {dbname}.stb1 group by t1 ")
|
||||||
tdSql.checkRows(20)
|
tdSql.checkRows(20)
|
||||||
tdSql.query(" select min(c1),c1 from stb1 group by c1 ")
|
tdSql.query(f"select min(c1),c1 from {dbname}.stb1 group by c1 ")
|
||||||
tdSql.checkRows(30)
|
tdSql.checkRows(30)
|
||||||
tdSql.query(" select min(c1),c2 from stb1 group by c2 ")
|
tdSql.query(f"select min(c1),c2 from {dbname}.stb1 group by c2 ")
|
||||||
tdSql.checkRows(31)
|
tdSql.checkRows(31)
|
||||||
|
|
||||||
# selective common cols of datas
|
# selective common cols of datas
|
||||||
tdSql.query("select min(c1),c2,c3,c5 from stb1")
|
tdSql.query(f"select min(c1),c2,c3,c5 from {dbname}.stb1")
|
||||||
tdSql.checkRows(1)
|
tdSql.checkRows(1)
|
||||||
tdSql.checkData(0,0,0)
|
tdSql.checkData(0,0,0)
|
||||||
tdSql.checkData(0,1,0)
|
tdSql.checkData(0,1,0)
|
||||||
tdSql.checkData(0,2,0)
|
tdSql.checkData(0,2,0)
|
||||||
tdSql.checkData(0,3,0)
|
tdSql.checkData(0,3,0)
|
||||||
|
|
||||||
tdSql.query("select min(c1),t1,c2,t3 from stb1 where c1 >5")
|
tdSql.query(f"select min(c1),t1,c2,t3 from {dbname}.stb1 where c1 > 5")
|
||||||
tdSql.checkRows(1)
|
tdSql.checkRows(1)
|
||||||
tdSql.checkData(0,0,6)
|
tdSql.checkData(0,0,6)
|
||||||
tdSql.checkData(0,2,66666)
|
tdSql.checkData(0,2,66666)
|
||||||
|
|
||||||
tdSql.query("select min(c1),ceil(t1),pow(c2,1)+2,abs(t3) from stb1 where c1>12")
|
tdSql.query(f"select min(c1),ceil(t1),pow(c2,1)+2,abs(t3) from {dbname}.stb1 where c1 > 12")
|
||||||
tdSql.checkRows(1)
|
tdSql.checkRows(1)
|
||||||
tdSql.checkData(0,0,13)
|
tdSql.checkData(0,0,13)
|
||||||
tdSql.checkData(0,2,144445.000000000)
|
tdSql.checkData(0,2,144445.000000000)
|
||||||
|
|
||||||
# partition by tbname or partition by tag
|
# partition by tbname or partition by tag
|
||||||
tdSql.query("select min(c1),tbname from stb1 partition by tbname")
|
tdSql.query(f"select min(c1),tbname from {dbname}.stb1 partition by tbname")
|
||||||
query_data = tdSql.queryResult
|
query_data = tdSql.queryResult
|
||||||
|
|
||||||
for row in query_data:
|
for row in query_data:
|
||||||
tbname = row[1]
|
tbname = f"{dbname}.{row[1]}"
|
||||||
tdSql.query(" select min(c1) from %s "%tbname)
|
tdSql.query(f"select min(c1) from %s "%tbname)
|
||||||
tdSql.checkData(0,0,row[0])
|
tdSql.checkData(0,0,row[0])
|
||||||
|
|
||||||
tdSql.query("select min(c1),tbname from stb1 partition by t1")
|
tdSql.query(f"select min(c1),tbname from {dbname}.stb1 partition by t1")
|
||||||
query_data = tdSql.queryResult
|
query_data = tdSql.queryResult
|
||||||
|
|
||||||
for row in query_data:
|
for row in query_data:
|
||||||
tbname = row[1]
|
tbname = f"{dbname}.{row[1]}"
|
||||||
tdSql.query(" select min(c1) from %s "%tbname)
|
tdSql.query(f"select min(c1) from %s "%tbname)
|
||||||
tdSql.checkData(0,0,row[0])
|
tdSql.checkData(0,0,row[0])
|
||||||
|
|
||||||
# nest query for support max
|
# nest query for support min
|
||||||
tdSql.query("select abs(c2+2)+1 from (select min(c1) c2 from stb1)")
|
tdSql.query(f"select abs(c2+2)+1 from (select min(c1) c2 from {dbname}.stb1)")
|
||||||
tdSql.checkData(0,0,3.000000000)
|
tdSql.checkData(0,0,3.000000000)
|
||||||
tdSql.query("select min(c1+2)+1 as c2 from (select ts ,c1 ,c2 from stb1)")
|
tdSql.query(f"select min(c1+2)+1 as c2 from (select ts ,c1 ,c2 from {dbname}.stb1)")
|
||||||
tdSql.checkData(0,0,3.000000000)
|
tdSql.checkData(0,0,3.000000000)
|
||||||
tdSql.query("select min(a+2)+1 as c2 from (select ts ,abs(c1) a ,c2 from stb1)")
|
tdSql.query(f"select min(a+2)+1 as c2 from (select ts ,abs(c1) a ,c2 from {dbname}.stb1)")
|
||||||
tdSql.checkData(0,0,3.000000000)
|
tdSql.checkData(0,0,3.000000000)
|
||||||
|
|
||||||
# mixup with other functions
|
# mixup with other functions
|
||||||
tdSql.query("select max(c1),count(c1),last(c2,c3),min(c1) from stb1")
|
tdSql.query(f"select max(c1),count(c1),last(c2,c3) from {dbname}.stb1")
|
||||||
tdSql.checkData(0,0,28)
|
tdSql.checkData(0,0,28)
|
||||||
tdSql.checkData(0,1,184)
|
tdSql.checkData(0,1,184)
|
||||||
tdSql.checkData(0,2,-99999)
|
tdSql.checkData(0,2,-99999)
|
||||||
tdSql.checkData(0,3,-999)
|
tdSql.checkData(0,3,-999)
|
||||||
tdSql.checkData(0,4,0)
|
|
||||||
|
|
||||||
def run(self):
|
def run(self):
|
||||||
|
|
||||||
|
|
|
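The change repeated throughout the hunk above is that every table referenced in the test SQL is now qualified with its database (`{dbname}.stb1`, `db.tb1`, …) instead of relying on a preceding `use` statement. A minimal sketch of that pattern, assuming the usual `tdSql` test fixture from these cases and an illustrative database name:

```python
# Sketch only: assumes the standard test fixture where `tdSql` wraps a TDengine
# connection and `dbname` names the database this case created.
dbname = "testdb"  # illustrative value

# Qualify the super table with the database so the query no longer depends on
# a prior `use <db>` having selected the right database on this connection.
tdSql.query(f"select min(c1) from {dbname}.stb1 partition by tbname")

# The per-child-table check builds a fully qualified name from the previous
# result set before querying each table again.
for row in tdSql.queryResult:
    child_table = f"{dbname}.{row[1]}"
    tdSql.query(f"select min(c1) from {child_table}")
    tdSql.checkData(0, 0, row[0])
```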
@@ -27,7 +27,7 @@ python3 ./test.py -f 1-insert/alter_stable.py
 python3 ./test.py -f 1-insert/alter_table.py
 python3 ./test.py -f 1-insert/insertWithMoreVgroup.py
 python3 ./test.py -f 1-insert/table_comment.py
-#python3 ./test.py -f 1-insert/time_range_wise.py
+python3 ./test.py -f 1-insert/time_range_wise.py
 python3 ./test.py -f 1-insert/block_wise.py
 python3 ./test.py -f 1-insert/create_retentions.py
 python3 ./test.py -f 1-insert/table_param_ttl.py
@@ -59,17 +59,44 @@ python3 ./test.py -f 2-query/ceil.py
 python3 ./test.py -f 2-query/ceil.py -R
 python3 ./test.py -f 2-query/char_length.py
 python3 ./test.py -f 2-query/char_length.py -R
-#python3 ./test.py -f 2-query/check_tsdb.py
+# python3 ./test.py -f 2-query/check_tsdb.py
-#python3 ./test.py -f 2-query/check_tsdb.py -R
+# python3 ./test.py -f 2-query/check_tsdb.py -R
+python3 ./test.py -f 2-query/concat.py
+python3 ./test.py -f 2-query/concat.py -R
+python3 ./test.py -f 2-query/concat_ws.py
+python3 ./test.py -f 2-query/concat_ws.py -R
+python3 ./test.py -f 2-query/concat_ws2.py
+python3 ./test.py -f 2-query/concat_ws2.py -R
+python3 ./test.py -f 2-query/cos.py
+python3 ./test.py -f 2-query/cos.py -R
+python3 ./test.py -f 2-query/count_partition.py
+python3 ./test.py -f 2-query/count_partition.py -R
+python3 ./test.py -f 2-query/count.py
+python3 ./test.py -f 2-query/count.py -R
+python3 ./test.py -f 2-query/db.py
+python3 ./test.py -f 2-query/db.py -R
+python3 ./test.py -f 2-query/diff.py
+python3 ./test.py -f 2-query/diff.py -R
+python3 ./test.py -f 2-query/distinct.py
+python3 ./test.py -f 2-query/distinct.py -R
+python3 ./test.py -f 2-query/distribute_agg_apercentile.py
+python3 ./test.py -f 2-query/distribute_agg_apercentile.py -R
+python3 ./test.py -f 2-query/distribute_agg_avg.py
+python3 ./test.py -f 2-query/distribute_agg_avg.py -R
+python3 ./test.py -f 2-query/distribute_agg_count.py
+python3 ./test.py -f 2-query/distribute_agg_count.py -R
+python3 ./test.py -f 2-query/distribute_agg_max.py
+python3 ./test.py -f 2-query/distribute_agg_max.py -R
+python3 ./test.py -f 2-query/distribute_agg_min.py
+python3 ./test.py -f 2-query/distribute_agg_min.py -R

 python3 ./test.py -f 1-insert/update_data.py
 python3 ./test.py -f 1-insert/delete_data.py
-python3 ./test.py -f 2-query/db.py
-python3 ./test.py -f 2-query/db.py
-python3 ./test.py -f 2-query/distinct.py
 python3 ./test.py -f 2-query/varchar.py
 python3 ./test.py -f 2-query/ltrim.py
 python3 ./test.py -f 2-query/rtrim.py
@@ -81,10 +108,7 @@ python3 ./test.py -f 2-query/join2.py
 python3 ./test.py -f 2-query/substr.py
 python3 ./test.py -f 2-query/union.py
 python3 ./test.py -f 2-query/union1.py
-python3 ./test.py -f 2-query/concat.py
 python3 ./test.py -f 2-query/concat2.py
-python3 ./test.py -f 2-query/concat_ws.py
-python3 ./test.py -f 2-query/concat_ws2.py
 python3 ./test.py -f 2-query/spread.py
 python3 ./test.py -f 2-query/hyperloglog.py
 python3 ./test.py -f 2-query/explain.py
@@ -97,13 +121,11 @@ python3 ./test.py -f 2-query/Now.py
 python3 ./test.py -f 2-query/Today.py
 python3 ./test.py -f 2-query/max.py
 python3 ./test.py -f 2-query/min.py
-python3 ./test.py -f 2-query/count.py
 python3 ./test.py -f 2-query/last.py
 python3 ./test.py -f 2-query/first.py
 python3 ./test.py -f 2-query/To_iso8601.py
 python3 ./test.py -f 2-query/To_unixtimestamp.py
 python3 ./test.py -f 2-query/timetruncate.py
-python3 ./test.py -f 2-query/diff.py
 python3 ./test.py -f 2-query/Timediff.py
 python3 ./test.py -f 2-query/json_tag.py
@@ -115,7 +137,6 @@ python3 ./test.py -f 2-query/log.py
 python3 ./test.py -f 2-query/pow.py
 python3 ./test.py -f 2-query/sqrt.py
 python3 ./test.py -f 2-query/sin.py
-python3 ./test.py -f 2-query/cos.py
 python3 ./test.py -f 2-query/tan.py
 python3 ./test.py -f 2-query/query_cols_tags_and_or.py
 # python3 ./test.py -f 2-query/nestedQuery.py
@@ -126,7 +147,6 @@ python3 ./test.py -f 2-query/query_cols_tags_and_or.py
 python3 ./test.py -f 2-query/elapsed.py
 python3 ./test.py -f 2-query/csum.py
 python3 ./test.py -f 2-query/mavg.py
-python3 ./test.py -f 2-query/diff.py
 python3 ./test.py -f 2-query/sample.py
 python3 ./test.py -f 2-query/function_diff.py
 python3 ./test.py -f 2-query/unique.py
@@ -135,17 +155,11 @@ python3 ./test.py -f 2-query/function_stateduration.py
 python3 ./test.py -f 2-query/statecount.py
 python3 ./test.py -f 2-query/tail.py
 python3 ./test.py -f 2-query/ttl_comment.py
-python3 ./test.py -f 2-query/distribute_agg_count.py
-python3 ./test.py -f 2-query/distribute_agg_max.py
-python3 ./test.py -f 2-query/distribute_agg_min.py
 python3 ./test.py -f 2-query/distribute_agg_sum.py
 python3 ./test.py -f 2-query/distribute_agg_spread.py
-python3 ./test.py -f 2-query/distribute_agg_apercentile.py
-python3 ./test.py -f 2-query/distribute_agg_avg.py
 python3 ./test.py -f 2-query/distribute_agg_stddev.py
 python3 ./test.py -f 2-query/twa.py
 python3 ./test.py -f 2-query/irate.py
-python3 ./test.py -f 2-query/count_partition.py
 python3 ./test.py -f 2-query/function_null.py
 python3 ./test.py -f 2-query/queryQnode.py
 python3 ./test.py -f 2-query/max_partition.py
@@ -296,7 +310,6 @@ python3 ./test.py -f 2-query/avg.py -Q 2
 # python3 ./test.py -f 2-query/elapsed.py -Q 2
 python3 ./test.py -f 2-query/csum.py -Q 2
 python3 ./test.py -f 2-query/mavg.py -Q 2
-python3 ./test.py -f 2-query/diff.py -Q 2
 python3 ./test.py -f 2-query/sample.py -Q 2
 python3 ./test.py -f 2-query/function_diff.py -Q 2
 python3 ./test.py -f 2-query/unique.py -Q 2
@@ -384,7 +397,6 @@ python3 ./test.py -f 2-query/query_cols_tags_and_or.py -Q 3
 # python3 ./test.py -f 2-query/elapsed.py -Q 3
 python3 ./test.py -f 2-query/csum.py -Q 3
 python3 ./test.py -f 2-query/mavg.py -Q 3
-python3 ./test.py -f 2-query/diff.py -Q 3
 python3 ./test.py -f 2-query/sample.py -Q 3
 python3 ./test.py -f 2-query/function_diff.py -Q 3
 python3 ./test.py -f 2-query/unique.py -Q 3
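Taken together, the shell-script hunks above pair each 2-query case with a `-R` variant near the top of the script and drop the now-duplicate plain invocations further down, so each case is listed once per mode. A representative pair, taken verbatim from the added block (the `-R`, `-Q 2`, and `-Q 3` switches are options of the test driver; their meaning is not defined in this diff):

```bash
# Illustrative only: one case/-R pair added in the @@ -59,17 +59,44 @@ hunk.
python3 ./test.py -f 2-query/concat.py
python3 ./test.py -f 2-query/concat.py -R
```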