Merge branch '3.0' of github.com:taosdata/TDengine into 3.0

@@ -36,7 +36,7 @@ TDengine is an open source, cloud native time-series database optimized for Inte

# Documentation

-For user manual, system design and architecture, engineering blogs, refer to [TDengine Documentation](https://docs.tdengine.com) (for the Chinese version, click [here](https://docs.taosdata.com))
+For user manual, system design and architecture, please refer to [TDengine Documentation](https://docs.tdengine.com) (for the Chinese version, click [here](https://docs.taosdata.com))

# Building

Binary image file changed (8.8 KiB → 37 KiB); not shown.

@@ -1,84 +1,128 @@

---
sidebar_label: Continuous Query
description: "A continuous query is a query that runs automatically at a preset frequency, provides aggregation over time windows, and is a simplified, time-driven form of stream processing."
title: "Continuous Query"
---

A continuous query is a query that TDengine executes automatically at regular intervals, computing over a sliding window; it is a simplified, time-driven form of stream processing. For a table or supertable in the database, TDengine can run such a query periodically and either push each result to the user or write it back into TDengine. Every execution covers one time window, and the window slides forward along the time axis. When defining a continuous query, you must specify the size of the time window (parameter interval) and the forward sliding increment (parameter sliding).

Continuous queries in TDengine are time-driven and can be defined directly in TAOS SQL, with no extra tooling required. They make it easy to generate results per time window and thus downsample the raw collected data. Once a continuous query has been defined via TAOS SQL, TDengine automatically launches the query at the end of each complete time period and pushes the computed result to the user or writes it back into TDengine.

Continuous queries differ from the time-window computation of ordinary stream processing in the following ways:

- Unlike stream processing, which feeds results back in real time, a continuous query starts computing only after its time window closes. For example, with a period of one day, the result for a given day is produced only after 23:59:59.
- If historical records are written into a time interval that has already been computed, the continuous query neither recomputes the result nor pushes a new result to the user. In write-back mode, the stored result is not updated either.
- In push mode, the server does not cache the client's computation state, nor does it provide exactly-once semantics. If the client application crashes, a restarted continuous query recomputes only the most recent complete time window, counting from the restart time. In write-back mode, however, TDengine guarantees the validity and continuity of the written data.

## Continuous Query Syntax

```sql
[CREATE TABLE AS] SELECT select_expr [, select_expr ...]
    FROM {tb_name_list}
    [WHERE where_condition]
    [INTERVAL(interval_val [, interval_offset]) [SLIDING sliding_val]]
```

INTERVAL: the time window over which the continuous query operates

SLIDING: the interval by which the time window slides forward

## Using Continuous Queries

The smart-meter scenario below shows how continuous queries are used. Assume the supertable and subtables have been created with the following SQL statements:

```sql
create table meters (ts timestamp, current float, voltage int, phase float) tags (location binary(64), groupId int);
create table D1001 using meters tags ("California.SanFrancisco", 2);
create table D1002 using meters tags ("California.LosAngeles", 2);
...
```

The following statement computes the average voltage of these meters over a one-minute time window advanced in 30-second steps:

```sql
select avg(voltage) from meters interval(1m) sliding(30s);
```

Every execution of this statement recomputes all the data. If you instead want to run it every 30 seconds and incrementally compute the most recent minute of data, improve the statement as shown below, supplying a different `startTime` on each run:

```sql
select avg(voltage) from meters where ts > {startTime} interval(1m) sliding(30s);
```

This works, but TDengine offers a simpler way: just prefix the original query with `create table {tableName} as`, for example:

```sql
create table avg_vol as select avg(voltage) from meters interval(1m) sliding(30s);
```

This automatically creates a new table named `avg_vol`; every 30 seconds, TDengine incrementally executes the SQL statement after `as` and writes the result into that table, so the application only needs to query `avg_vol` afterwards. For example:

```sql
taos> select * from avg_vol;
            ts            |       avg_voltage_       |
===================================================
 2020-07-29 13:37:30.000 |              222.0000000 |
 2020-07-29 13:38:00.000 |              221.3500000 |
 2020-07-29 13:38:30.000 |              220.1700000 |
 2020-07-29 13:39:00.000 |              223.0800000 |
```

Note that the minimum query time window is 10 milliseconds; there is no upper limit on the window size.

TDengine also lets you specify start and end times for a continuous query. If no start time is given, the query starts from the time window containing the first raw record; if no end time is given, it runs indefinitely; if an end time is specified, the query stops once the system time passes it. For example, the continuous query created with the SQL below runs for one hour and then stops automatically:

```sql
create table avg_vol as select avg(voltage) from meters where ts > now and ts <= now + 1h interval(1m) sliding(30s);
```

Note that `now` in this example refers to the time the continuous query is created, not the time it executes; otherwise the query could never stop automatically. Also, to reduce problems caused by late-arriving raw data, continuous queries in TDengine compute with a certain delay: after a time window has passed, TDengine does not compute it immediately, so the result becomes queryable only after a short wait, normally within one minute.

## Managing Continuous Queries

You can list every running continuous query in the console with `show streams` and kill one with `kill stream`. Later releases will provide finer-grained and more convenient management commands.
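
A minimal management session might look like the sketch below; the exact `show streams` output columns and the stream-id format passed to `kill stream` are assumptions here and may differ between versions:

```sql
-- List the continuous queries currently running (the output is assumed
-- to include a stream id of the form <connection-id>:<stream-no>).
show streams;

-- Kill one continuous query by the id reported above (hypothetical id).
kill stream 3:1;
```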

---
sidebar_label: Stream Processing
description: "TDengine stream processing unifies data ingestion, preprocessing, complex analytics, real-time computation, and alert triggering in a single engine, reducing deployment, storage, and operations costs."
title: Stream Processing
---

When processing time-series data, the raw data usually has to be cleaned and preprocessed before being stored long-term in a time-series database. Users therefore often deploy stream processing engines such as Kafka, Flink, or Spark alongside the database, which raises development and maintenance costs.

The stream processing engine in TDengine 3.0 minimizes the dependency on such extra middleware. It truly unifies data ingestion, preprocessing, long-term storage, complex analytics, real-time computation, and real-time alert triggering, and all of these tasks are expressed in SQL alone, greatly lowering the learning and usage costs.

## Creating a Stream

```sql
CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name AS subquery
stream_options: {
    TRIGGER        [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time]
    WATERMARK      time
    IGNORE EXPIRED
}
```

For the detailed syntax rules, see [Stream Processing](../../taos-sql/stream).
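
To make the options concrete, the sketch below creates a stream that emits a window's result only when the window closes and tolerates 10 seconds of out-of-order data. The stream, database, and table names here are illustrative assumptions, not taken from this commit:

```sql
-- Hypothetical stream: per-minute average voltage, emitted on window
-- close, with a 10-second watermark for late-arriving rows.
create stream s_demo trigger window_close watermark 10s
  into demo_db.s_demo_output as
  select _wstart, avg(voltage) as avg_voltage
  from demo_db.meters interval(1m);
```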

## Example 1

Utility-meter data often runs to tens of billions of records, so cleaning or transforming such scattered, messy data takes a long time and is hard to do efficiently and in real time. In the example below, a stream discards the data from the past 12 hours in which the meter voltage exceeded 220V, aggregates the remaining rows into one-hour windows, computes the maximum current in each window, and writes the results to a designated table.

### Create the database and the raw data tables

First prepare the data: create the database, one supertable, and several subtables.

```sql
drop database if exists stream_db;
create database stream_db;

create stable stream_db.meters (ts timestamp, current float, voltage int) TAGS (location varchar(64), groupId int);

create table stream_db.d1001 using stream_db.meters tags("beijing", 1);
create table stream_db.d1002 using stream_db.meters tags("guangzhou", 2);
create table stream_db.d1003 using stream_db.meters tags("shanghai", 3);
```

### Create the stream

```sql
create stream stream1 into stream_db.stream1_output_stb as select _wstart as start, _wend as end, max(current) as max_current from stream_db.meters where voltage <= 220 and ts > now - 12h interval (1h);
```

### Write data

```sql
insert into stream_db.d1001 values(now-14h, 10.3, 210);
insert into stream_db.d1001 values(now-13h, 13.5, 216);
insert into stream_db.d1001 values(now-12h, 12.5, 219);
insert into stream_db.d1002 values(now-11h, 14.7, 221);
insert into stream_db.d1002 values(now-10h, 10.5, 218);
insert into stream_db.d1002 values(now-9h, 11.2, 220);
insert into stream_db.d1003 values(now-8h, 11.5, 217);
insert into stream_db.d1003 values(now-7h, 12.3, 227);
insert into stream_db.d1003 values(now-6h, 12.3, 215);
```

### Query to observe the result

Only the rows that fall within the last 12 hours and have a voltage of at most 220V survive the filter, so four one-hour windows are produced:

```sql
taos> select * from stream_db.stream1_output_stb;
          start          |           end           |     max_current      |   group_id   |
===================================================================================================
 2022-08-09 14:00:00.000 | 2022-08-09 15:00:00.000 |             10.50000 |            0 |
 2022-08-09 15:00:00.000 | 2022-08-09 16:00:00.000 |             11.20000 |            0 |
 2022-08-09 16:00:00.000 | 2022-08-09 17:00:00.000 |             11.50000 |            0 |
 2022-08-09 18:00:00.000 | 2022-08-09 19:00:00.000 |             12.30000 |            0 |
Query OK, 4 rows in database (0.012033s)
```

## Example 2

A carrier platform needs to collect system resource metrics (CPU, memory, network latency, and so on) from every server in its data centers. After collection, the values must be rounded, the region and server name joined with an underscore, and the results sorted by time, grouped by server name, and written to a new table.

### Create the database and the raw data tables

First prepare the data: create the database, one supertable, and several subtables.

```sql
drop database if exists stream_db;
create database stream_db;

create stable stream_db.idc (ts timestamp, cpu float, mem float, latency float) TAGS (location varchar(64), groupId int);

create table stream_db.server01 using stream_db.idc tags("beijing", 1);
create table stream_db.server02 using stream_db.idc tags("shanghai", 2);
create table stream_db.server03 using stream_db.idc tags("beijing", 2);
create table stream_db.server04 using stream_db.idc tags("tianjin", 3);
create table stream_db.server05 using stream_db.idc tags("shanghai", 1);
```

### Create the stream

```sql
create stream stream2 into stream_db.stream2_output_stb as select ts, concat_ws("_", location, tbname) as server_location, round(cpu) as cpu, round(mem) as mem, round(latency) as latency from stream_db.idc partition by tbname order by ts;
```

### Write data

```sql
insert into stream_db.server01 values(now-14h, 50.9, 654.8, 23.11);
insert into stream_db.server01 values(now-13h, 13.5, 221.2, 11.22);
insert into stream_db.server02 values(now-12h, 154.7, 218.3, 22.33);
insert into stream_db.server02 values(now-11h, 120.5, 111.5, 5.55);
insert into stream_db.server03 values(now-10h, 101.5, 125.6, 5.99);
insert into stream_db.server03 values(now-9h, 12.3, 165.6, 6.02);
insert into stream_db.server04 values(now-8h, 160.9, 120.7, 43.51);
insert into stream_db.server04 values(now-7h, 240.9, 520.7, 54.55);
insert into stream_db.server05 values(now-6h, 190.9, 320.7, 55.43);
insert into stream_db.server05 values(now-5h, 110.9, 600.7, 35.54);
```

### Query to observe the result

```sql
taos> select ts, server_location, cpu, mem, latency from stream_db.stream2_output_stb;
           ts            |   server_location    |    cpu    |    mem    |  latency  |
================================================================================================================================
 2022-08-09 21:24:56.785 | beijing_server01     |  51.00000 | 655.00000 |  23.00000 |
 2022-08-09 22:24:56.795 | beijing_server01     |  14.00000 | 221.00000 |  11.00000 |
 2022-08-09 23:24:56.806 | shanghai_server02    | 155.00000 | 218.00000 |  22.00000 |
 2022-08-10 00:24:56.815 | shanghai_server02    | 121.00000 | 112.00000 |   6.00000 |
 2022-08-10 01:24:56.826 | beijing_server03     | 102.00000 | 126.00000 |   6.00000 |
 2022-08-10 02:24:56.838 | beijing_server03     |  12.00000 | 166.00000 |   6.00000 |
 2022-08-10 03:24:56.846 | tianjin_server04     | 161.00000 | 121.00000 |  44.00000 |
 2022-08-10 04:24:56.853 | tianjin_server04     | 241.00000 | 521.00000 |  55.00000 |
 2022-08-10 05:24:56.866 | shanghai_server05    | 191.00000 | 321.00000 |  55.00000 |
 2022-08-10 06:24:57.301 | shanghai_server05    | 111.00000 | 601.00000 |  36.00000 |
Query OK, 10 rows in database (0.022950s)
```

@@ -140,10 +140,6 @@ taos> SELECT ts, ts AS primary_key_ts FROM d1001;

However, renaming a single column is not supported for `first(*)`, `last(*)`, and `last_row(*)`.

-### Implicit Result Columns
-
-`Select_exprs` can be a column name of the table, or a function expression or arithmetic expression computed from columns, with an upper limit of 256 expressions. When an `interval` or `group by tags` clause is used, the timestamp column (as the first column) and the tag columns from the group by clause are forcibly included in the final result. A later version may allow switching off the output of these implicit group by columns so that the output columns are controlled entirely by the select clause.
-
### Pseudo-Columns

**TBNAME**

@@ -152,7 +148,13 @@

To get all subtable names of a supertable together with their tag information:

```mysql
SELECT TBNAME, location FROM meters;
SELECT DISTINCT TBNAME, location FROM meters;
```

+It is recommended to use the INS_TAGS system table under INFORMATION_SCHEMA instead to query the subtable tag information of a supertable, for example, to get all subtable names and tag values of the supertable meters:
+
+```mysql
+SELECT table_name, tag_name, tag_type, tag_value FROM information_schema.ins_tags WHERE stable_name='meters';
+```
+
To count the number of subtables under a supertable:
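
The statement itself falls outside this hunk; as an assumption of what such a count typically looks like, counting distinct subtable names does the job:

```mysql
-- Hypothetical completion of the truncated example: count the
-- subtables of the supertable `meters` by their distinct names.
SELECT COUNT(*) FROM (SELECT DISTINCT TBNAME FROM meters);
```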

Binary image file changed (8.8 KiB → 37 KiB); not shown.

@@ -675,6 +675,7 @@ typedef struct SWindowRowsSup {
   TSKEY    prevTs;
   int32_t  startRowIndex;
   int32_t  numOfRows;
+  uint64_t groupId;
 } SWindowRowsSup;

 typedef struct SSessionAggOperatorInfo {

@@ -90,16 +90,18 @@ static void updateTimeWindowInfo(SColumnInfoData* pColData, STimeWindow* pWin, b
   ts[4] = pWin->ekey + delta;  // window end key
 }

-static void doKeepTuple(SWindowRowsSup* pRowSup, int64_t ts) {
+static void doKeepTuple(SWindowRowsSup* pRowSup, int64_t ts, uint64_t groupId) {
   pRowSup->win.ekey = ts;
   pRowSup->prevTs = ts;
   pRowSup->numOfRows += 1;
+  pRowSup->groupId = groupId;
 }

-static void doKeepNewWindowStartInfo(SWindowRowsSup* pRowSup, const int64_t* tsList, int32_t rowIndex) {
+static void doKeepNewWindowStartInfo(SWindowRowsSup* pRowSup, const int64_t* tsList, int32_t rowIndex, uint64_t groupId) {
   pRowSup->startRowIndex = rowIndex;
   pRowSup->numOfRows = 0;
   pRowSup->win.skey = tsList[rowIndex];
+  pRowSup->groupId = groupId;
 }

 static FORCE_INLINE int32_t getForwardStepsInBlock(int32_t numOfRows, __block_search_fn_t searchFn, TSKEY ekey,

@@ -1156,7 +1158,7 @@ static void doStateWindowAggImpl(SOperatorInfo* pOperator, SStateWindowOperatorI

   char* val = colDataGetData(pStateColInfoData, j);

-  if (!pInfo->hasKey) {
+  if (gid != pRowSup->groupId || !pInfo->hasKey) {
     // todo extract method
     if (IS_VAR_DATA_TYPE(pInfo->stateKey.type)) {
       varDataCopy(pInfo->stateKey.pData, val);

@@ -1166,10 +1168,10 @@ static void doStateWindowAggImpl(SOperatorInfo* pOperator, SStateWindowOperatorI

     pInfo->hasKey = true;

-    doKeepNewWindowStartInfo(pRowSup, tsList, j);
-    doKeepTuple(pRowSup, tsList[j]);
+    doKeepNewWindowStartInfo(pRowSup, tsList, j, gid);
+    doKeepTuple(pRowSup, tsList[j], gid);
   } else if (compareVal(val, &pInfo->stateKey)) {
-    doKeepTuple(pRowSup, tsList[j]);
+    doKeepTuple(pRowSup, tsList[j], gid);
     if (j == 0 && pRowSup->startRowIndex != 0) {
       pRowSup->startRowIndex = 0;
     }

@@ -1191,8 +1193,8 @@ static void doStateWindowAggImpl(SOperatorInfo* pOperator, SStateWindowOperatorI
       pRowSup->numOfRows, NULL, pBlock->info.rows, numOfOutput, TSDB_ORDER_ASC);

     // here we start a new session window
-    doKeepNewWindowStartInfo(pRowSup, tsList, j);
-    doKeepTuple(pRowSup, tsList[j]);
+    doKeepNewWindowStartInfo(pRowSup, tsList, j, gid);
+    doKeepTuple(pRowSup, tsList[j], gid);

     // todo extract method
     if (IS_VAR_DATA_TYPE(pInfo->stateKey.type)) {

@@ -1935,7 +1937,7 @@ _error:
   return NULL;
 }

-// todo handle multiple tables cases.
+// todo handle multiple timeline cases. assume no timeline interweaving
 static void doSessionWindowAggImpl(SOperatorInfo* pOperator, SSessionAggOperatorInfo* pInfo, SSDataBlock* pBlock) {
   SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
   SExprSupp*     pSup = &pOperator->exprSupp;

@@ -1959,12 +1961,13 @@ static void doSessionWindowAggImpl(SOperatorInfo* pOperator, SSessionAggOperator
   // In case of ascending or descending order scan data, only one time window needs to be kept for each table.
   TSKEY* tsList = (TSKEY*)pColInfoData->pData;
   for (int32_t j = 0; j < pBlock->info.rows; ++j) {
-    if (pInfo->winSup.prevTs == INT64_MIN) {
-      doKeepNewWindowStartInfo(pRowSup, tsList, j);
-      doKeepTuple(pRowSup, tsList[j]);
-    } else if (tsList[j] - pRowSup->prevTs <= gap && (tsList[j] - pRowSup->prevTs) >= 0) {
+    if (gid != pRowSup->groupId || pInfo->winSup.prevTs == INT64_MIN) {
+      doKeepNewWindowStartInfo(pRowSup, tsList, j, gid);
+      doKeepTuple(pRowSup, tsList[j], gid);
+    } else if ((tsList[j] - pRowSup->prevTs >= 0) && tsList[j] - pRowSup->prevTs <= gap ||
+               (pRowSup->prevTs - tsList[j] >= 0) && (pRowSup->prevTs - tsList[j] <= gap)) {
       // The gap is less than the threshold, so it belongs to current session window that has been opened already.
-      doKeepTuple(pRowSup, tsList[j]);
+      doKeepTuple(pRowSup, tsList[j], gid);
       if (j == 0 && pRowSup->startRowIndex != 0) {
         pRowSup->startRowIndex = 0;
       }

@@ -1987,8 +1990,8 @@ static void doSessionWindowAggImpl(SOperatorInfo* pOperator, SSessionAggOperator
       pRowSup->numOfRows, NULL, pBlock->info.rows, numOfOutput, TSDB_ORDER_ASC);

     // here we start a new session window
-    doKeepNewWindowStartInfo(pRowSup, tsList, j);
-    doKeepTuple(pRowSup, tsList[j]);
+    doKeepNewWindowStartInfo(pRowSup, tsList, j, gid);
+    doKeepTuple(pRowSup, tsList[j], gid);
   }
 }
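
Across the state-window and session-window paths above, the new gid checks all enforce the same rule: a window may only be extended by rows from the same group, and a session row must additionally lie within the gap in either direction. A compact restatement of that predicate, as an illustration rather than the actual TDengine code:

```c
#include <stdbool.h>
#include <stdint.h>

// Does row (ts, gid) belong to the currently open session window?
// Mirrors the changed condition: same group, and |ts - prevTs| <= gap.
static bool rowExtendsSessionWindow(uint64_t windowGroupId, int64_t prevTs,
                                    uint64_t gid, int64_t ts, int64_t gap) {
  if (gid != windowGroupId) {
    return false;  // a row from a different group always opens a new window
  }
  int64_t delta = ts - prevTs;
  if (delta < 0) {
    delta = -delta;  // the new code also tolerates slightly out-of-order rows
  }
  return delta <= gap;
}
```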

@@ -3586,9 +3589,7 @@ SOperatorInfo* createStreamSessionAggOperatorInfo(SOperatorInfo* downstream, SPh
   _hash_fn_t hashFn = taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY);
   pInfo->pStDeleted = taosHashInit(64, hashFn, true, HASH_NO_LOCK);
   pInfo->pDelIterator = NULL;
-  // pInfo->pDelRes = createSpecialDataBlock(STREAM_DELETE_RESULT);
-  pInfo->pDelRes = createOneDataBlock(pInfo->binfo.pRes, false);  // todo(liuyao) for delete
-  pInfo->pDelRes->info.type = STREAM_DELETE_RESULT;  // todo(liuyao) for delete
+  pInfo->pDelRes = createSpecialDataBlock(STREAM_DELETE_RESULT);
   pInfo->pChildren = NULL;
   pInfo->isFinal = false;
   pInfo->pPhyNode = pPhyNode;

@@ -748,6 +748,7 @@ int32_t sumCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) {
     pDBuf->dsum += pSBuf->dsum;
   }
   pDResInfo->numOfRes = TMAX(pDResInfo->numOfRes, pSResInfo->numOfRes);
+  pDResInfo->isNullRes &= pSResInfo->isNullRes;
   return TSDB_CODE_SUCCESS;
 }
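
The same one-line addition recurs in every combine function below: when two partial aggregate states are merged, the destination is marked null only if both partials are null. A minimal sketch of the pattern, with the struct layout assumed for illustration rather than copied from TDengine:

```c
#include <stdbool.h>
#include <stdint.h>

// Simplified stand-in for the result-row entry info used by the
// combine functions (the real TDengine struct has more fields).
typedef struct {
  int32_t numOfRes;
  bool    isNullRes;
} ResultEntry;

// Merge a source partial result into the destination partial result.
static void combineEntry(ResultEntry* dst, const ResultEntry* src) {
  // keep the larger row count, as TMAX does in the diff
  if (src->numOfRes > dst->numOfRes) {
    dst->numOfRes = src->numOfRes;
  }
  // null AND null => null; any non-null partial makes the result non-null
  dst->isNullRes &= src->isNullRes;
}
```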

@@ -1746,6 +1747,7 @@ int32_t minMaxCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx, int3
     }
   }
   pDResInfo->numOfRes = TMAX(pDResInfo->numOfRes, pSResInfo->numOfRes);
+  pDResInfo->isNullRes &= pSResInfo->isNullRes;
   return TSDB_CODE_SUCCESS;
 }

@@ -2121,6 +2123,7 @@ int32_t stddevCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) {
   }
   pDBuf->count += pSBuf->count;
   pDResInfo->numOfRes = TMAX(pDResInfo->numOfRes, pSResInfo->numOfRes);
+  pDResInfo->isNullRes &= pSResInfo->isNullRes;
   return TSDB_CODE_SUCCESS;
 }

@@ -2311,6 +2314,7 @@ int32_t leastSQRCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) {
   pDparam[1][2] += pSparam[1][2];
   pDBuf->num += pSBuf->num;
   pDResInfo->numOfRes = TMAX(pDResInfo->numOfRes, pSResInfo->numOfRes);
+  pDResInfo->isNullRes &= pSResInfo->isNullRes;
   return TSDB_CODE_SUCCESS;
 }

@@ -2707,6 +2711,7 @@ int32_t apercentileCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx)

   apercentileTransferInfo(pSBuf, pDBuf);
   pDResInfo->numOfRes = TMAX(pDResInfo->numOfRes, pSResInfo->numOfRes);
+  pDResInfo->isNullRes &= pSResInfo->isNullRes;
   return TSDB_CODE_SUCCESS;
 }

@@ -3890,6 +3895,7 @@ int32_t spreadCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) {
   SSpreadInfo* pSBuf = GET_ROWCELL_INTERBUF(pSResInfo);
   spreadTransferInfo(pSBuf, pDBuf);
   pDResInfo->numOfRes = TMAX(pDResInfo->numOfRes, pSResInfo->numOfRes);
+  pDResInfo->isNullRes &= pSResInfo->isNullRes;
   return TSDB_CODE_SUCCESS;
 }

@@ -4062,6 +4068,7 @@ int32_t elapsedCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) {

   elapsedTransferInfo(pSBuf, pDBuf);
   pDResInfo->numOfRes = TMAX(pDResInfo->numOfRes, pSResInfo->numOfRes);
+  pDResInfo->isNullRes &= pSResInfo->isNullRes;
   return TSDB_CODE_SUCCESS;
 }

@@ -4379,6 +4386,7 @@ int32_t histogramCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) {

   histogramTransferInfo(pSBuf, pDBuf);
   pDResInfo->numOfRes = TMAX(pDResInfo->numOfRes, pSResInfo->numOfRes);
+  pDResInfo->isNullRes &= pSResInfo->isNullRes;
   return TSDB_CODE_SUCCESS;
 }

@@ -4576,6 +4584,7 @@ int32_t hllCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) {

   hllTransferInfo(pSBuf, pDBuf);
   pDResInfo->numOfRes = TMAX(pDResInfo->numOfRes, pSResInfo->numOfRes);
+  pDResInfo->isNullRes &= pSResInfo->isNullRes;
   return TSDB_CODE_SUCCESS;
 }

@@ -7,7 +7,7 @@
 ./test.sh -f tsim/user/privilege_db.sim
 ./test.sh -f tsim/user/privilege_sysinfo.sim

-# ---- db
+# ---- db ----
 ./test.sh -f tsim/db/alter_option.sim
 # unsupport ./test.sh -f tsim/db/alter_replica_13.sim
 # unsupport ./test.sh -f tsim/db/alter_replica_31.sim

@@ -316,6 +316,7 @@
 ./test.sh -f tsim/valgrind/checkError5.sim
 ./test.sh -f tsim/valgrind/checkError6.sim
 ./test.sh -f tsim/valgrind/checkError7.sim
+./test.sh -f tsim/valgrind/checkError8.sim
 ./test.sh -f tsim/valgrind/checkUdf.sim

 # --- vnode ----
@@ -82,8 +82,6 @@ if $data26 != 21600m then
   return -1
 endi

-return
-
 print =============== step6
 $i = $i + 1
 while $i < 5

@@ -251,7 +249,7 @@ if $rows != 5 then
   return -1
 endi

-sql select * from $stb
+sql select * from $st
 if $rows != 5 then
   return -1
 endi

@@ -306,7 +304,7 @@ if $rows != 5 then
   return -1
 endi

-sql select * from $stb
+sql select * from $st
 if $rows != 5 then
   return -1
 endi

@@ -5,15 +5,15 @@ sleep 50
 sql connect

 print =============== create database
-sql create database test vgroups 1
-sql show databases
+sql create database test vgroups 1;
+sql show databases;
 if $rows != 3 then
   return -1
 endi

 print $data00 $data01 $data02

-sql use test
+sql use test;

 sql create table t1(ts timestamp, a int, b int , c int, d double,id int);

@@ -53,6 +53,16 @@ print =============== step3: tb

+sql select sum(tbcol) from stb partition by tbname interval(1s) slimit 1 soffset 1;
+sql select sum(tbcol) from stb partition by tbname interval(1s) slimit 2 soffset 4 limit 10 offset 1;
+sql select sum(tbcol) from stb partition by tbname interval(1s) slimit 1 soffset 1;
+sql select sum(tbcol) from stb partition by tbname interval(1s) slimit 2 soffset 4 limit 10 offset 1;
+sql select sum(tbcol) from stb partition by tbname interval(1s) slimit 1 soffset 1;
+sql select sum(tbcol) from stb partition by tbname interval(1s) slimit 2 soffset 4 limit 10 offset 1;
+sql select sum(tbcol) from stb partition by tbname interval(1s) slimit 1 soffset 1;
+sql select sum(tbcol) from stb partition by tbname interval(1s) slimit 2 soffset 4 limit 10 offset 1;
+sql select sum(tbcol) from stb partition by tbname interval(1s) slimit 1 soffset 1;
+sql select sum(tbcol) from stb partition by tbname interval(1s) slimit 2 soffset 4 limit 10 offset 1;

 sleep 1000

 _OVER:
 system sh/exec.sh -n dnode1 -s stop -x SIGINT

@@ -113,6 +113,9 @@ sql select tbcol5 - tbcol3 from stb

 sql select spread( tbcol2 )/44, spread(tbcol2), 0.204545455 * 44 from stb;
 sql select min(tbcol) * max(tbcol) /4, sum(tbcol2) * apercentile(tbcol2, 20), apercentile(tbcol2, 33) + 52/9 from stb;
+sql select distinct(tbname), tgcol from stb;
+#sql select sum(tbcol) from stb partition by tbname interval(1s) slimit 1 soffset 1;
+#sql select sum(tbcol) from stb partition by tbname interval(1s) slimit 2 soffset 4 limit 10 offset 1;

 print =============== step5: explain
 sql explain analyze select ts from stb where -2;