Merge branch '2.0' of github.com:taosdata/TDengine into szhou/fix/udf
This commit is contained in commit 56f067229c.
@ -6,53 +6,85 @@ description: "Create and drop databases; view and modify database parameters"

## Create a Database

```sql
CREATE DATABASE [IF NOT EXISTS] db_name [database_options]

database_options:
    database_option ...

database_option: {
    BUFFER value
  | CACHEMODEL {'none' | 'last_row' | 'last_value' | 'both'}
  | CACHESIZE value
  | COMP {0 | 1 | 2}
  | DURATION value
  | FSYNC value
  | MAXROWS value
  | MINROWS value
  | KEEP value
  | PAGES value
  | PAGESIZE value
  | PRECISION {'ms' | 'us' | 'ns'}
  | REPLICA value
  | RETENTIONS ingestion_duration:keep_duration ...
  | STRICT {'off' | 'on'}
  | WAL {1 | 2}
  | VGROUPS value
  | SINGLE_STABLE {0 | 1}
  | WAL_RETENTION_PERIOD value
  | WAL_ROLL_PERIOD value
  | WAL_RETENTION_SIZE value
  | WAL_SEGMENT_SIZE value
}
```
### Parameter Descriptions

- BUFFER: size of the write memory pool of one vnode, in MB. Default 96, minimum 3, maximum 16384.
- CACHEMODEL: whether to cache the most recent data of subtables in memory. Default is none.
  - none: no caching.
  - last_row: cache the most recent row of each subtable. This significantly improves the performance of the LAST_ROW function.
  - last_value: cache the most recent non-NULL value of each column of each subtable. This significantly improves the performance of the LAST function when no special clause (WHERE, ORDER BY, GROUP BY, INTERVAL) is involved.
  - both: enable caching of both the most recent row and the most recent column values.
- CACHESIZE: amount of memory used to cache the most recent subtable data. Default 1, range [1, 65536], unit MB.
- COMP: database file compression flag. Default 2, range [0, 2].
  - 0: no compression.
  - 1: one-stage compression.
  - 2: two-stage compression.
- DURATION: time span of the data stored in a single data file. A unit may be appended, e.g. DURATION 100h or DURATION 10d; the supported units are m (minutes), h (hours), and d (days). Without a unit the default is days, so DURATION 50 means 50 days.
- FSYNC: when the WAL parameter is set to 2, the period at which the WAL is flushed to disk. Default 3000, in milliseconds. Minimum 0, meaning every write is flushed immediately; maximum 180000, i.e. three minutes.
- MAXROWS: maximum number of records in a file block. Default 4096.
- MINROWS: minimum number of records in a file block. Default 100.
- KEEP: number of days data files are retained. Default 3650, range [1, 365000]; it must be greater than or equal to the DURATION value. Data older than KEEP is deleted automatically. KEEP also accepts a unit, e.g. KEEP 100h or KEEP 10d, with units m (minutes), h (hours), and d (days); without a unit, as in KEEP 50, the default is days.
- PAGES: number of cache pages of the metadata storage engine in one vnode. Default 256, minimum 64. Metadata storage in a vnode occupies PAGESIZE * PAGES, i.e. 1 MB of memory by default.
- PAGESIZE: page size of the metadata storage engine in one vnode, in KB. Default 4 KB; range 1 to 16384, i.e. 1 KB to 16 MB.
- PRECISION: timestamp precision of the database. ms means milliseconds, us microseconds, ns nanoseconds. Default is ms.
- REPLICA: number of database replicas, 1 or 3; default 1. In a cluster, the number of replicas must be less than or equal to the number of dnodes.
- RETENTIONS: aggregation intervals and retention durations of the data. For example, RETENTIONS 15s:7d,1m:21d,15m:50d means raw data is collected every 15 seconds and kept for 7 days; data aggregated per 1 minute is kept for 21 days; data aggregated per 15 minutes is kept for 50 days. Currently exactly three retention levels are supported.
- STRICT: consistency requirement of data replication. Default is off.
  - on: strong consistency, i.e. the standard Raft protocol; a write returns success after a majority commit.
  - off: weak consistency; a write returns success after the local commit.
- WAL: WAL level. Default 1.
  - 1: write the WAL without calling fsync.
  - 2: write the WAL and call fsync.
- VGROUPS: initial number of vgroups in the database.
- SINGLE_STABLE: whether only a single supertable may be created in this database, intended for supertables with very many columns.
  - 0: multiple supertables may be created.
  - 1: only one supertable may be created.
- WAL_RETENTION_PERIOD: extra WAL retention policy for data subscription. The duration WAL files are kept, in seconds. Default 0, meaning a WAL file is deleted as soon as it is flushed; -1 means never delete.
- WAL_RETENTION_SIZE: extra WAL retention policy for data subscription. The maximum total size of retained WAL files, in KB. Default 0, meaning a WAL file is deleted as soon as it is flushed; -1 means never delete.
- WAL_ROLL_PERIOD: WAL file rotation period, in seconds. After a WAL file has been created and written for this long, a new WAL file is created automatically. Default 0, meaning a new file is created only at flush time.
- WAL_SEGMENT_SIZE: size of a single WAL file, in KB. When the current file exceeds this limit, a new WAL file is created automatically. Default 0, meaning a new file is created only at flush time.
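As a sketch of how several of these options combine in one statement, consider the following (the database name `power` and the specific values are illustrative, not taken from the original text):

```sql
-- Illustrative only: nanosecond precision, 10-day data files,
-- one year of retention, last-row caching, two-stage compression.
CREATE DATABASE IF NOT EXISTS power
  PRECISION 'ns'
  DURATION 10
  KEEP 365
  CACHEMODEL 'last_row'
  COMP 2;
```

Each option here corresponds to an entry in the parameter list above; options not specified take their documented defaults.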
### Create Database Example

```sql
create database if not exists db vgroups 10 buffer 10
```

The statement above creates a database named db with 10 vgroups, where each vnode is allocated a 10 MB write cache.

### Use a Database

```sql
USE db_name;
```
@ -63,61 +95,42 @@ USE db_name;

## Drop a Database

```sql
DROP DATABASE [IF EXISTS] db_name
```

Drops the database. All tables contained in the specified database are deleted, and all vgroups of the database are destroyed as well; use with caution!

## Modify Database Parameters

```sql
ALTER DATABASE db_name [alter_database_options]

alter_database_options:
    alter_database_option ...

alter_database_option: {
    CACHEMODEL {'none' | 'last_row' | 'last_value' | 'both'}
  | CACHESIZE value
  | FSYNC value
  | KEEP value
  | WAL value
}
```

:::note
Other parameters cannot be modified in version 3.0.0.0.
:::
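For illustration, each of the modifiable parameters listed above is changed with its own ALTER statement; the database name `db` below is an assumed example, not from the original text:

```sql
-- Illustrative only: extend retention and enable both cache modes
-- on an existing database named db.
ALTER DATABASE db KEEP 36500;
ALTER DATABASE db CACHEMODEL 'both';
```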
## View Databases

### View All Databases in the System

```sql
SHOW DATABASES;
```

### Show the CREATE Statement of a Database

```sql
SHOW CREATE DATABASE db_name;
```

@ -125,3 +138,4 @@ SHOW CREATE DATABASE db_name;

Commonly used for database migration: for an existing database, this returns its CREATE statement; executing that statement in another cluster produces a database with exactly the same settings.

### View Database Parameters
@ -2,13 +2,45 @@
title: Table Management
---

## Create Tables

The `CREATE TABLE` statement creates regular tables, and creates subtables using a supertable as the template.

```sql
CREATE TABLE [IF NOT EXISTS] [db_name.]tb_name (create_definition [, create_definition] ...) [table_options]

CREATE TABLE create_subtable_clause

CREATE TABLE [IF NOT EXISTS] [db_name.]tb_name (create_definition [, create_definition] ...)
    [TAGS (create_definition [, create_definition] ...)]
    [table_options]

create_subtable_clause: {
    create_subtable_clause [create_subtable_clause] ...
  | [IF NOT EXISTS] [db_name.]tb_name USING [db_name.]stb_name [(tag_name [, tag_name] ...)] TAGS (tag_value [, tag_value] ...)
}

create_definition:
    col_name column_definition

column_definition:
    type_name [comment 'string_value']

table_options:
    table_option ...

table_option: {
    COMMENT 'string_value'
  | WATERMARK duration[,duration]
  | MAX_DELAY duration[,duration]
  | ROLLUP(func_name [, func_name] ...)
  | SMA(col_name [, col_name] ...)
  | TTL value
}
```

**Usage Notes**

1. The first column of a table must be of type TIMESTAMP, and the system automatically sets it as the primary key.
2. The maximum length of a table name is 192 characters.
@ -18,101 +50,112 @@ CREATE TABLE [IF NOT EXISTS] tb_name (timestamp_field_name TIMESTAMP, field1_nam

6. To support more forms of table names, TDengine introduces the escape character "\`". Escaping a table name lets it avoid conflicts with keywords and exempts it from the validity checks above, although the length limit still applies. Content inside the escape characters is no longer case-normalized.
   For example: \`aBc\` and \`abc\` are different table names, whereas abc and aBc are the same table name.
   Note that the content inside the escape characters must consist of printable characters.

**Parameter Descriptions**

1. COMMENT: table comment. Applicable to supertables, subtables, and regular tables.
2. WATERMARK: specifies the window close time. Default 5 seconds, minimum unit milliseconds, range 0 to 15 minutes; multiple values are separated by commas. Only applicable to supertables, and only when the database uses the RETENTIONS parameter.
3. MAX_DELAY: controls the maximum delay before pushing computed results. Defaults to the interval value (but no more than the maximum), minimum unit milliseconds, range 1 millisecond to 15 minutes; multiple values are separated by commas. Note: do not set MAX_DELAY too small, or results will be pushed too frequently and hurt storage and query performance; unless you have a specific need, use the default. Only applicable to supertables, and only when the database uses the RETENTIONS parameter.
4. ROLLUP: the aggregate function used for rollup, providing down-sampled aggregates across storage levels. Only applicable to supertables, and only when the database uses the RETENTIONS parameter. It applies to all columns of the supertable except the timestamp column, and only one aggregate function may be specified, among avg, sum, min, max, last, and first.
5. SMA: Small Materialized Aggregates, user-defined pre-computation per data block. Supported types are MAX, MIN, and SUM. Applicable to supertables and regular tables.
6. TTL: Time to Live, the parameter specifying a table's lifetime. If no data is written to the table for the entire TTL duration, TDengine deletes the table automatically. The TTL time is approximate: the system guarantees only that this mechanism exists, not that deletion happens exactly on time. TTL is measured in days; the default is 0, meaning no limit. Note that TTL takes precedence over KEEP: when the TTL deletion condition is met, the table is deleted even if its data is younger than KEEP. Only applicable to subtables and regular tables.
## Create Subtables

### Create a Subtable

```sql
CREATE TABLE [IF NOT EXISTS] tb_name USING stb_name TAGS (tag_value1, ...);
```

Creates a table using the specified supertable as the template and the given TAGS values.

### Create a Subtable and Specify Tag Values

```sql
CREATE TABLE [IF NOT EXISTS] tb_name USING stb_name (tag_name1, ...) TAGS (tag_value1, ...);
```

Creates a table using the specified supertable as the template; values may be given for only a subset of the TAGS columns (TAGS columns not specified are set to NULL).

### Create Subtables in Batch

```sql
CREATE TABLE [IF NOT EXISTS] tb_name1 USING stb_name TAGS (tag_value1, ...) [IF NOT EXISTS] tb_name2 USING stb_name TAGS (tag_value2, ...) ...;
```

Batch creation requires that the tables use a supertable as the template. Within the SQL statement length limit, keeping the number of tables created per statement between 1000 and 3000 yields good creation speed.
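As an illustration of the first form above, assuming a supertable `meters` already exists with two tags (a group id and a location; both the supertable and the tag layout are hypothetical, not from the original text):

```sql
-- Illustrative only: create subtable d1001 from the assumed
-- supertable meters, supplying values for both of its tags.
CREATE TABLE IF NOT EXISTS d1001 USING meters TAGS (2, 'California.SanFrancisco');
```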
## Modify a Regular Table

```sql
ALTER TABLE [db_name.]tb_name alter_table_clause

alter_table_clause: {
    alter_table_options
  | ADD COLUMN col_name column_type
  | DROP COLUMN col_name
  | MODIFY COLUMN col_name column_type
  | RENAME COLUMN old_col_name new_col_name
}

alter_table_options:
    alter_table_option ...

alter_table_option: {
    TTL value
  | COMMENT 'string_value'
}
```

**Usage Notes**

The following modifications can be performed on a regular table:

1. ADD COLUMN: add a column.
2. DROP COLUMN: drop a column.
3. MODIFY COLUMN: modify a column definition. If the data column has a variable-length type, this statement can be used to change its width; the width can only be increased, never decreased.
4. RENAME COLUMN: rename a column.
### Add a Column

```sql
ALTER TABLE tb_name ADD COLUMN field_name data_type;
```

### Drop a Column

```sql
ALTER TABLE tb_name DROP COLUMN field_name;
```

### Widen a Column

```sql
ALTER TABLE tb_name MODIFY COLUMN field_name data_type(length);
```

### Rename a Column

```sql
ALTER TABLE tb_name RENAME COLUMN old_col_name new_col_name
```
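Putting the regular-table modifications together on a concrete (hypothetical) table named `sensor_data` with a variable-length `note` column — the table and column names here are assumptions for illustration only:

```sql
-- Illustrative only: add a column, widen a BINARY column
-- (width can only grow), then rename it.
ALTER TABLE sensor_data ADD COLUMN humidity FLOAT;
ALTER TABLE sensor_data MODIFY COLUMN note BINARY(64);
ALTER TABLE sensor_data RENAME COLUMN note remark;
```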
## Modify a Subtable

```sql
ALTER TABLE [db_name.]tb_name alter_table_clause

alter_table_clause: {
    alter_table_options
  | SET TAG tag_name = new_tag_value
}

alter_table_options:
    alter_table_option ...

alter_table_option: {
    TTL value
  | COMMENT 'string_value'
}
```

**Usage Notes**

1. Except for changing a tag value, all modifications to a subtable's columns and tags must be performed through its supertable.

### Change the Tag Value of a Subtable
@ -120,4 +163,34 @@ ALTER TABLE tb_name MODIFY COLUMN field_name data_type(length);

```sql
ALTER TABLE tb_name SET TAG tag_name=new_tag_value;
```

## Drop Tables

One or more regular tables or subtables can be dropped in a single SQL statement.

```sql
DROP TABLE [IF EXISTS] [db_name.]tb_name [, [IF EXISTS] [db_name.]tb_name] ...
```

## View Table Information

### Show All Tables

The following SQL statement lists all table names in the current database.

```sql
SHOW TABLES [LIKE tb_name_wildchar];
```

### Show the CREATE Statement of a Table

```sql
SHOW CREATE TABLE tb_name;
```

Commonly used for database migration: for an existing table, this returns its CREATE statement; executing that statement in another cluster produces a table with exactly the same structure.

### Get the Table Schema

```sql
DESCRIBE tb_name;
```
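Combining the tag-update and drop statements above into a short session, with hypothetical subtables `d1001` and `d1002` created from a supertable that has a `location` tag (names assumed for illustration):

```sql
-- Illustrative only: update one subtable's tag value,
-- then drop two subtables in a single statement.
ALTER TABLE d1001 SET TAG location='California.LosAngeles';
DROP TABLE IF EXISTS d1001, d1002;
```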
@ -3,38 +3,31 @@ sidebar_label: Supertable Management
title: Supertable (STable) Management
---

## Create a Supertable

```sql
CREATE STABLE [IF NOT EXISTS] stb_name (create_definition [, create_definition] ...) TAGS (create_definition [, create_definition] ...) [table_options]

create_definition:
    col_name column_definition

column_definition:
    type_name [COMMENT 'string_value']
```

**Usage Notes**

- A supertable may have at most 4096 columns; note that this count includes the TAG columns. The minimum is 3: a timestamp primary key, one TAG column, and one data column.
- A comment can be attached to a column or tag at creation time.
- The TAGS clause specifies the supertable's tag columns, which must follow these conventions:
  - A TIMESTAMP column in TAGS requires an explicit value on write; arithmetic expressions such as NOW + 10s are not yet supported.
  - TAG column names must not duplicate other column names.
  - TAG column names must not be reserved keywords.
  - At most 128 TAGS are allowed, at least 1, with a total length not exceeding 16 KB.
- For details on the table parameters, see the description under CREATE TABLE.
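A concrete instance of the grammar above might look as follows; the supertable name `meters` and its column and tag layout are illustrative assumptions, not from the original text:

```sql
-- Illustrative only: a timestamp primary key, three data columns,
-- and two tag columns (satisfying the minimum-3-columns rule).
CREATE STABLE IF NOT EXISTS meters
  (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT)
  TAGS (group_id INT, location BINARY(64));
```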
## View Supertables

### Show All Supertables in the Current Database

```sql
SHOW STABLES [LIKE tb_name_wildcard];
```

@ -42,7 +35,7 @@ SHOW STABLES [LIKE tb_name_wildcard];

Shows all STables in the database and related information, including each STable's name, creation time, number of columns, number of tags (TAG), and the number of tables created from it.

### Show the CREATE Statement of a Supertable

```sql
SHOW CREATE STABLE stb_name;
```
@ -50,40 +43,81 @@ SHOW CREATE STABLE stb_name;

Commonly used for database migration: for an existing supertable, this returns its CREATE statement; executing that statement in another cluster produces a supertable with exactly the same structure.

### Get the Supertable Schema

```sql
DESCRIBE stb_name;
```

## Drop a Supertable

```sql
DROP STABLE [IF EXISTS] [db_name.]stb_name
```

Dropping an STable automatically drops the subtables created from it, along with all data in those subtables.

## Modify a Supertable

```sql
ALTER STABLE [db_name.]tb_name alter_table_clause

alter_table_clause: {
    alter_table_options
  | ADD COLUMN col_name column_type
  | DROP COLUMN col_name
  | MODIFY COLUMN col_name column_type
  | ADD TAG tag_name tag_type
  | DROP TAG tag_name
  | MODIFY TAG tag_name tag_type
  | RENAME TAG old_tag_name new_tag_name
}

alter_table_options:
    alter_table_option ...

alter_table_option: {
    COMMENT 'string_value'
}
```

**Usage Notes**

Modifying the structure of a supertable takes effect on all of its subtables; the structure cannot be modified on an individual subtable. Tag structure changes must be issued on the supertable, and TDengine automatically applies them to all subtables of that supertable.

- ADD COLUMN: add a column.
- DROP COLUMN: drop a column.
- MODIFY COLUMN: modify a column definition. If the data column has a variable-length type, this statement can change its width; the width can only be increased, never decreased.
- ADD TAG: add a tag to the supertable.
- DROP TAG: drop a tag from the supertable. When a tag is dropped from a supertable, it is automatically removed from all of its subtables.
- MODIFY TAG: modify the definition of one of the supertable's tags. If the tag has a variable-length type, this statement can change its width; the width can only be increased, never decreased.
- RENAME TAG: rename one of the supertable's tags. When a tag is renamed on a supertable, the tag name is automatically updated on all of its subtables.
### Add a Column

```sql
ALTER STABLE stb_name ADD COLUMN col_name column_type;
```

### Drop a Column

```sql
ALTER STABLE stb_name DROP COLUMN col_name;
```

### Widen a Column

```sql
ALTER STABLE stb_name MODIFY COLUMN col_name data_type(length);
```

If a data column has a variable-length type (BINARY or NCHAR), this statement can be used to change its width (it can only be increased, never decreased).

### Add a Tag

```sql
ALTER STABLE stb_name ADD TAG tag_name tag_type;
```

Adds a new tag to the STable and specifies its type. The total number of tags must not exceed 128, with a total length of at most 16 KB.

@ -99,7 +133,7 @@ ALTER STABLE stb_name DROP TAG tag_name;

### Rename a Tag

```sql
ALTER STABLE stb_name RENAME TAG old_tag_name new_tag_name;
```

Renames one of the supertable's tags. After a tag is renamed on the supertable, all of its subtables automatically update the tag name.
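The tag-management statements above compose naturally into a lifecycle; the supertable name `meters` and tag names below are hypothetical, for illustration only:

```sql
-- Illustrative only: add a tag, rename it, then drop it.
-- Each change propagates automatically to all subtables of meters.
ALTER STABLE meters ADD TAG region BINARY(32);
ALTER STABLE meters RENAME TAG region area;
ALTER STABLE meters DROP TAG area;
```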
@ -1154,6 +1154,10 @@ typedef struct {
  int32_t numOfRetensions;
  SArray* pRetensions;  // SRetention
  void*   pTsma;
  int32_t walRetentionPeriod;
  int64_t walRetentionSize;
  int32_t walRollPeriod;
  int64_t walSegmentSize;
} SCreateVnodeReq;

int32_t tSerializeSCreateVnodeReq(void* buf, int32_t bufLen, SCreateVnodeReq* pReq);
@ -143,6 +143,7 @@ typedef struct SqlFunctionCtx {
  struct SExprInfo     *pExpr;
  struct SDiskbasedBuf *pBuf;
  struct SSDataBlock   *pSrcBlock;
  struct SSDataBlock   *pDstBlock;  // used by indefinite rows function to set selectivity
  int32_t               curBufPage;
  bool                  increase;
@ -103,8 +103,8 @@ typedef struct SWal {
  int32_t   fsyncSeq;
  // meta
  SWalVer   vers;
  TdFilePtr pLogFile;
  TdFilePtr pIdxFile;
  int32_t   writeCur;
  SArray   *fileInfoSet;  // SArray<SWalFileInfo>
  // status
@ -2019,7 +2019,7 @@ int32_t transferTableNameList(const char* tbList, int32_t acctId, char* dbName,
    }

    if (('a' <= *(tbList + i) && 'z' >= *(tbList + i)) || ('A' <= *(tbList + i) && 'Z' >= *(tbList + i)) ||
        ('0' <= *(tbList + i) && '9' >= *(tbList + i)) || ('_' == *(tbList + i))) {
      if (vLen[vIdx] > 0) {
        goto _return;
      }
@ -973,7 +973,7 @@ int taos_load_table_info(TAOS *taos, const char *tableNameList) {

  conn.mgmtEps = getEpSet_s(&pTscObj->pAppInfo->mgmtEp);

  code = catalogAsyncGetAllMeta(pCtg, &conn, &catalogReq, syncCatalogFn, pRequest->body.param, NULL);
  if (code) {
    goto _return;
  }
@ -1763,9 +1763,9 @@ char* dumpBlockData(SSDataBlock* pDataBlock, const char* flag, char** pDataBuf)
  int32_t colNum = taosArrayGetSize(pDataBlock->pDataBlock);
  int32_t rows = pDataBlock->info.rows;
  int32_t len = 0;
  len += snprintf(dumpBuf + len, size - len, "===stream===%s |block type %d|child id %d|group id:%" PRIu64 "|uid:%ld|rows:%d\n", flag,
                  (int32_t)pDataBlock->info.type, pDataBlock->info.childId, pDataBlock->info.groupId,
                  pDataBlock->info.uid, pDataBlock->info.rows);
  if (len >= size - 1) return dumpBuf;

  for (int32_t j = 0; j < rows; j++) {
@ -3750,6 +3750,10 @@ int32_t tSerializeSCreateVnodeReq(void *buf, int32_t bufLen, SCreateVnodeReq *pR
    uint32_t tsmaLen = (uint32_t)(htonl(((SMsgHead *)pReq->pTsma)->contLen));
    if (tEncodeBinary(&encoder, (const uint8_t *)pReq->pTsma, tsmaLen) < 0) return -1;
  }
  if (tEncodeI32(&encoder, pReq->walRetentionPeriod) < 0) return -1;
  if (tEncodeI64(&encoder, pReq->walRetentionSize) < 0) return -1;
  if (tEncodeI32(&encoder, pReq->walRollPeriod) < 0) return -1;
  if (tEncodeI64(&encoder, pReq->walSegmentSize) < 0) return -1;

  tEndEncode(&encoder);

@ -3818,6 +3822,11 @@ int32_t tDeserializeSCreateVnodeReq(void *buf, int32_t bufLen, SCreateVnodeReq *
    if (tDecodeBinary(&decoder, (uint8_t **)&pReq->pTsma, NULL) < 0) return -1;
  }

  if (tDecodeI32(&decoder, &pReq->walRetentionPeriod) < 0) return -1;
  if (tDecodeI64(&decoder, &pReq->walRetentionSize) < 0) return -1;
  if (tDecodeI32(&decoder, &pReq->walRollPeriod) < 0) return -1;
  if (tDecodeI64(&decoder, &pReq->walSegmentSize) < 0) return -1;

  tEndDecode(&decoder);
  tDecoderClear(&decoder);
  return 0;
@ -160,6 +160,13 @@ static void vmGenerateVnodeCfg(SCreateVnodeReq *pCreate, SVnodeCfg *pCfg) {
  }

  pCfg->walCfg.vgId = pCreate->vgId;
  pCfg->walCfg.fsyncPeriod = pCreate->fsyncPeriod;
  pCfg->walCfg.retentionPeriod = pCreate->walRetentionPeriod;
  pCfg->walCfg.rollPeriod = pCreate->walRollPeriod;
  pCfg->walCfg.retentionSize = pCreate->walRetentionSize;
  pCfg->walCfg.segSize = pCreate->walSegmentSize;
  pCfg->walCfg.level = pCreate->walLevel;

  pCfg->hashBegin = pCreate->hashBegin;
  pCfg->hashEnd = pCreate->hashEnd;
  pCfg->hashMethod = pCreate->hashMethod;
@ -302,9 +302,13 @@ typedef struct {
  int8_t  strict;
  int8_t  hashMethod;  // default is 1
  int8_t  cacheLast;
  int8_t  schemaless;
  int32_t numOfRetensions;
  SArray* pRetensions;
  int32_t walRetentionPeriod;
  int64_t walRetentionSize;
  int32_t walRollPeriod;
  int64_t walSegmentSize;
} SDbCfg;

typedef struct {
@ -120,6 +120,10 @@ static SSdbRaw *mndDbActionEncode(SDbObj *pDb) {
    SDB_SET_INT8(pRaw, dataPos, pRetension->keepUnit, _OVER)
  }
  SDB_SET_INT8(pRaw, dataPos, pDb->cfg.schemaless, _OVER)
  SDB_SET_INT32(pRaw, dataPos, pDb->cfg.walRetentionPeriod, _OVER)
  SDB_SET_INT64(pRaw, dataPos, pDb->cfg.walRetentionSize, _OVER)
  SDB_SET_INT32(pRaw, dataPos, pDb->cfg.walRollPeriod, _OVER)
  SDB_SET_INT64(pRaw, dataPos, pDb->cfg.walSegmentSize, _OVER)

  SDB_SET_RESERVE(pRaw, dataPos, DB_RESERVE_SIZE, _OVER)
  SDB_SET_DATALEN(pRaw, dataPos, _OVER)

@ -199,6 +203,10 @@ static SSdbRow *mndDbActionDecode(SSdbRaw *pRaw) {
    }
  }
  SDB_GET_INT8(pRaw, dataPos, &pDb->cfg.schemaless, _OVER)
  SDB_GET_INT32(pRaw, dataPos, &pDb->cfg.walRetentionPeriod, _OVER)
  SDB_GET_INT64(pRaw, dataPos, &pDb->cfg.walRetentionSize, _OVER)
  SDB_GET_INT32(pRaw, dataPos, &pDb->cfg.walRollPeriod, _OVER)
  SDB_GET_INT64(pRaw, dataPos, &pDb->cfg.walSegmentSize, _OVER)

  SDB_GET_RESERVE(pRaw, dataPos, DB_RESERVE_SIZE, _OVER)
  taosInitRWLatch(&pDb->lock);
@@ -318,6 +326,10 @@ static int32_t mndCheckDbCfg(SMnode *pMnode, SDbCfg *pCfg) {
     terrno = TSDB_CODE_MND_NO_ENOUGH_DNODES;
     return -1;
   }
+  if (pCfg->walRetentionPeriod < TSDB_DB_MIN_WAL_RETENTION_PERIOD) return -1;
+  if (pCfg->walRetentionSize < TSDB_DB_MIN_WAL_RETENTION_SIZE) return -1;
+  if (pCfg->walRollPeriod < TSDB_DB_MIN_WAL_ROLL_PERIOD) return -1;
+  if (pCfg->walSegmentSize < TSDB_DB_MIN_WAL_SEGMENT_SIZE) return -1;
 
   terrno = 0;
   return terrno;
@@ -345,6 +357,12 @@ static void mndSetDefaultDbCfg(SDbCfg *pCfg) {
   if (pCfg->cacheLastSize <= 0) pCfg->cacheLastSize = TSDB_DEFAULT_CACHE_SIZE;
   if (pCfg->numOfRetensions < 0) pCfg->numOfRetensions = 0;
   if (pCfg->schemaless < 0) pCfg->schemaless = TSDB_DB_SCHEMALESS_OFF;
+  if (pCfg->walRetentionPeriod < 0 && pCfg->walRetentionPeriod != -1)
+    pCfg->walRetentionPeriod = TSDB_DEFAULT_DB_WAL_RETENTION_PERIOD;
+  if (pCfg->walRetentionSize < 0 && pCfg->walRetentionSize != -1)
+    pCfg->walRetentionSize = TSDB_DEFAULT_DB_WAL_RETENTION_SIZE;
+  if (pCfg->walRollPeriod < 0) pCfg->walRollPeriod = TSDB_DEFAULT_DB_WAL_ROLL_PERIOD;
+  if (pCfg->walSegmentSize < 0) pCfg->walSegmentSize = TSDB_DEFAULT_DB_WAL_SEGMENT_SIZE;
 }
 
 static int32_t mndSetCreateDbRedoLogs(SMnode *pMnode, STrans *pTrans, SDbObj *pDb, SVgObj *pVgroups) {
@@ -457,6 +475,10 @@ static int32_t mndCreateDb(SMnode *pMnode, SRpcMsg *pReq, SCreateDbReq *pCreate,
       .cacheLast = pCreate->cacheLast,
       .hashMethod = 1,
       .schemaless = pCreate->schemaless,
+      .walRetentionPeriod = pCreate->walRetentionPeriod,
+      .walRetentionSize = pCreate->walRetentionSize,
+      .walRollPeriod = pCreate->walRollPeriod,
+      .walSegmentSize = pCreate->walSegmentSize,
   };
 
   dbObj.cfg.numOfRetensions = pCreate->numOfRetensions;
@@ -230,6 +230,10 @@ void *mndBuildCreateVnodeReq(SMnode *pMnode, SDnodeObj *pDnode, SDbObj *pDb, SVg
   createReq.standby = standby;
   createReq.isTsma = pVgroup->isTsma;
   createReq.pTsma = pVgroup->pTsma;
+  createReq.walRetentionPeriod = pDb->cfg.walRetentionPeriod;
+  createReq.walRetentionSize = pDb->cfg.walRetentionSize;
+  createReq.walRollPeriod = pDb->cfg.walRollPeriod;
+  createReq.walSegmentSize = pDb->cfg.walSegmentSize;
 
   for (int32_t v = 0; v < pVgroup->replica; ++v) {
     SReplica *pReplica = &createReq.replicas[v];
@@ -371,8 +371,8 @@ struct SBlockIdx {
 
 struct SMapData {
   int32_t  nItem;
-  int32_t *aOffset;
   int32_t  nData;
+  int32_t *aOffset;
   uint8_t *pData;
 };
@@ -180,11 +180,41 @@ int metaClose(SMeta *pMeta) {
   return 0;
 }
 
-int32_t metaRLock(SMeta *pMeta) { return taosThreadRwlockRdlock(&pMeta->lock); }
+int32_t metaRLock(SMeta *pMeta) {
+  int32_t ret = 0;
+
+  metaDebug("meta rlock %p B", &pMeta->lock);
+
+  ret = taosThreadRwlockRdlock(&pMeta->lock);
+
+  metaDebug("meta rlock %p E", &pMeta->lock);
+
+  return ret;
+}
 
-int32_t metaWLock(SMeta *pMeta) { return taosThreadRwlockWrlock(&pMeta->lock); }
+int32_t metaWLock(SMeta *pMeta) {
+  int32_t ret = 0;
+
+  metaDebug("meta wlock %p B", &pMeta->lock);
+
+  ret = taosThreadRwlockWrlock(&pMeta->lock);
+
+  metaDebug("meta wlock %p E", &pMeta->lock);
+
+  return ret;
+}
 
-int32_t metaULock(SMeta *pMeta) { return taosThreadRwlockUnlock(&pMeta->lock); }
+int32_t metaULock(SMeta *pMeta) {
+  int32_t ret = 0;
+
+  metaDebug("meta ulock %p B", &pMeta->lock);
+
+  ret = taosThreadRwlockUnlock(&pMeta->lock);
+
+  metaDebug("meta ulock %p E", &pMeta->lock);
+
+  return ret;
+}
 
 static int tbDbKeyCmpr(const void *pKey1, int kLen1, const void *pKey2, int kLen2) {
   STbDbKey *pTbDbKey1 = (STbDbKey *)pKey1;
@@ -259,7 +289,7 @@ static int ctbIdxKeyCmpr(const void *pKey1, int kLen1, const void *pKey2, int kLen2) {
 static int tagIdxKeyCmpr(const void *pKey1, int kLen1, const void *pKey2, int kLen2) {
   STagIdxKey *pTagIdxKey1 = (STagIdxKey *)pKey1;
   STagIdxKey *pTagIdxKey2 = (STagIdxKey *)pKey2;
-  tb_uid_t    uid1, uid2;
+  tb_uid_t    uid1 = 0, uid2 = 0;
   int         c;
 
   // compare suid
@@ -287,14 +317,15 @@ static int tagIdxKeyCmpr(const void *pKey1, int kLen1, const void *pKey2, int kLen2) {
     // all not NULL, compr tag vals
     c = doCompare(pTagIdxKey1->data, pTagIdxKey2->data, pTagIdxKey1->type, 0);
     if (c) return c;
+  }
 
+  // both null or tag values are equal, then continue to compare uids
   if (IS_VAR_DATA_TYPE(pTagIdxKey1->type)) {
     uid1 = *(tb_uid_t *)(pTagIdxKey1->data + varDataTLen(pTagIdxKey1->data));
     uid2 = *(tb_uid_t *)(pTagIdxKey2->data + varDataTLen(pTagIdxKey2->data));
   } else {
     uid1 = *(tb_uid_t *)(pTagIdxKey1->data + tDataTypes[pTagIdxKey1->type].bytes);
     uid2 = *(tb_uid_t *)(pTagIdxKey2->data + tDataTypes[pTagIdxKey2->type].bytes);
   }
-  }
 
   // compare uid
@@ -583,7 +583,7 @@ int32_t tqProcessVgChangeReq(STQ* pTq, char* msg, int32_t msgLen) {
     pHandle->execHandle.execTb.suid = req.suid;
     SArray* tbUidList = taosArrayInit(0, sizeof(int64_t));
     vnodeGetCtbIdList(pTq->pVnode, req.suid, tbUidList);
-    tqDebug("vgId:%d, tq try get suid:%" PRId64, pTq->pVnode->config.vgId, req.suid);
+    tqDebug("vgId:%d, tq try to get all ctb, suid:%" PRId64, pTq->pVnode->config.vgId, req.suid);
     for (int32_t i = 0; i < taosArrayGetSize(tbUidList); i++) {
       int64_t tbUid = *(int64_t*)taosArrayGet(tbUidList, i);
       tqDebug("vgId:%d, idx %d, uid:%" PRId64, TD_VID(pTq->pVnode), i, tbUid);
@@ -31,7 +31,7 @@ typedef struct {
 typedef struct STableBlockScanInfo {
   uint64_t  uid;
   TSKEY     lastKey;
-  SBlockIdx blockIdx;
+  SMapData  mapData;     // block info (compressed)
   SArray*   pBlockList;  // block data index list
   SIterInfo iter;        // mem buffer skip list iterator
   SIterInfo iiter;       // imem buffer skip list iterator
@@ -42,7 +42,7 @@ typedef struct STableBlockScanInfo {
 
 typedef struct SBlockOrderWrapper {
   int64_t uid;
-  SBlock* pBlock;
+  int64_t offset;
 } SBlockOrderWrapper;
 
 typedef struct SBlockOrderSupporter {
@@ -53,11 +53,13 @@ typedef struct SBlockOrderSupporter {
 } SBlockOrderSupporter;
 
 typedef struct SIOCostSummary {
-  int64_t blockLoadTime;
-  int64_t smaLoadTime;
-  int64_t checkForNextTime;
+  int64_t numOfBlocks;
+  double  blockLoadTime;
+  double  buildmemBlock;
   int64_t headFileLoad;
-  int64_t headFileLoadTime;
+  double  headFileLoadTime;
+  int64_t smaData;
+  double  smaLoadTime;
 } SIOCostSummary;
 
 typedef struct SBlockLoadSuppInfo {
@@ -86,6 +88,8 @@ typedef struct SDataBlockIter {
   int32_t   index;
   SArray*   blockList;  // SArray<SFileDataBlockInfo>
   int32_t   order;
+  SBlock    block;      // current SBlock data
+  SHashObj* pTableMap;
 } SDataBlockIter;
 
 typedef struct SFileBlockDumpInfo {
@@ -183,7 +187,7 @@ static int32_t setColumnIdSlotList(STsdbReader* pReader, SSDataBlock* pBlock) {
 
 static SHashObj* createDataBlockScanInfo(STsdbReader* pTsdbReader, const STableKeyInfo* idList, int32_t numOfTables) {
   // allocate buffer in order to load data blocks from file
-  // todo use simple hash instead
+  // todo use simple hash instead, optimize the memory consumption
   SHashObj* pTableMap =
       taosHashInit(numOfTables, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), false, HASH_NO_LOCK);
   if (pTableMap == NULL) {
@@ -244,6 +248,7 @@ static void destroyBlockScanInfo(SHashObj* pTableMap) {
 
     p->delSkyline = taosArrayDestroy(p->delSkyline);
     p->pBlockList = taosArrayDestroy(p->pBlockList);
+    tMapDataClear(&p->mapData);
   }
 
   taosHashCleanup(pTableMap);
@@ -320,6 +325,8 @@ static bool filesetIteratorNext(SFilesetIter* pIter, STsdbReader* pReader) {
       goto _err;
     }
 
+    pReader->cost.headFileLoad += 1;
+
     int32_t fid = pReader->status.pCurrentFileset->fid;
     tsdbFidKeyRange(fid, pReader->pTsdb->keepCfg.days, pReader->pTsdb->keepCfg.precision, &win.skey, &win.ekey);
 
@@ -347,7 +354,7 @@ _err:
   return false;
 }
 
-static void resetDataBlockIterator(SDataBlockIter* pIter, int32_t order) {
+static void resetDataBlockIterator(SDataBlockIter* pIter, int32_t order, SHashObj* pTableMap) {
   pIter->order = order;
   pIter->index = -1;
   pIter->numOfBlocks = -1;
@@ -356,6 +363,7 @@ static void resetDataBlockIterator(SDataBlockIter* pIter, int32_t order, SHashObj* pTableMap) {
   } else {
     taosArrayClear(pIter->blockList);
   }
+  pIter->pTableMap = pTableMap;
 }
 
 static void cleanupDataBlockIterator(SDataBlockIter* pIter) { taosArrayDestroy(pIter->blockList); }
@@ -521,7 +529,7 @@ _end:
 // }
 
 static int32_t doLoadBlockIndex(STsdbReader* pReader, SDataFReader* pFileReader, SArray* pIndexList) {
-  SArray* aBlockIdx = taosArrayInit(0, sizeof(SBlockIdx));
+  SArray* aBlockIdx = taosArrayInit(8, sizeof(SBlockIdx));
 
   int64_t st = taosGetTimestampUs();
   int32_t code = tsdbReadBlockIdx(pFileReader, aBlockIdx, NULL);
@@ -554,16 +562,18 @@ static int32_t doLoadBlockIndex(STsdbReader* pReader, SDataFReader* pFileReader,
 
     STableBlockScanInfo* pScanInfo = p;
     if (pScanInfo->pBlockList == NULL) {
-      pScanInfo->pBlockList = taosArrayInit(16, sizeof(SBlock));
+      pScanInfo->pBlockList = taosArrayInit(4, sizeof(int32_t));
     }
 
-    pScanInfo->blockIdx = *pBlockIdx;
     taosArrayPush(pIndexList, pBlockIdx);
   }
 
   int64_t et2 = taosGetTimestampUs();
-  tsdbDebug("load block index for %d tables completed, elapsed time:%.2f ms, set blockIdx:%.2f ms, size:%d bytes %s",
-            (int32_t)num, (et1 - st)/1000.0, (et2-et1)/1000.0, num * sizeof(SBlockIdx), pReader->idStr);
+  tsdbDebug("load block index for %d tables completed, elapsed time:%.2f ms, set blockIdx:%.2f ms, size:%.2f Kb %s",
+            (int32_t)num, (et1 - st)/1000.0, (et2-et1)/1000.0, num * sizeof(SBlockIdx)/1024.0, pReader->idStr);
 
+  pReader->cost.headFileLoadTime += (et1 - st) / 1000.0;
+
 _end:
   taosArrayDestroy(aBlockIdx);
   return code;
@@ -584,23 +594,22 @@ static int32_t doLoadFileBlock(STsdbReader* pReader, SArray* pIndexList, uint32_
       break;
     }
 
+    tMapDataClear(&px->mapData);
     taosArrayClear(px->pBlockList);
   }
 
   for (int32_t i = 0; i < numOfTables; ++i) {
     SBlockIdx* pBlockIdx = taosArrayGet(pIndexList, i);
 
-    SMapData mapData = {0};
-    tMapDataReset(&mapData);
-    tsdbReadBlock(pReader->pFileReader, pBlockIdx, &mapData, NULL);
-
-    size += mapData.nData;
-
     STableBlockScanInfo* pScanInfo = taosHashGet(pReader->status.pTableMap, &pBlockIdx->uid, sizeof(int64_t));
-    for (int32_t j = 0; j < mapData.nItem; ++j) {
-      SBlock block = {0};
 
-      tMapDataGetItemByIdx(&mapData, j, &block, tGetBlock);
+    tMapDataReset(&pScanInfo->mapData);
+    tsdbReadBlock(pReader->pFileReader, pBlockIdx, &pScanInfo->mapData, NULL);
+
+    size += pScanInfo->mapData.nData;
+    for (int32_t j = 0; j < pScanInfo->mapData.nItem; ++j) {
+      SBlock block = {0};
+      tMapDataGetItemByIdx(&pScanInfo->mapData, j, &block, tGetBlock);
 
       // 1. time range check
       if (block.minKey.ts > pReader->window.ekey || block.maxKey.ts < pReader->window.skey) {
@@ -612,24 +621,26 @@ static int32_t doLoadFileBlock(STsdbReader* pReader, SArray* pIndexList, uint32_
         continue;
       }
 
-      void* p = taosArrayPush(pScanInfo->pBlockList, &block);
+      void* p = taosArrayPush(pScanInfo->pBlockList, &j);
       if (p == NULL) {
-        tMapDataClear(&mapData);
+        tMapDataClear(&pScanInfo->mapData);
         return TSDB_CODE_OUT_OF_MEMORY;
       }
 
      (*numOfBlocks) += 1;
    }
 
-    tMapDataClear(&mapData);
    if (pScanInfo->pBlockList != NULL && taosArrayGetSize(pScanInfo->pBlockList) > 0) {
      (*numOfValidTables) += 1;
    }
  }
 
-  int64_t et = taosGetTimestampUs();
+  double el = (taosGetTimestampUs() - st)/1000.0;
   tsdbDebug("load block of %d tables completed, blocks:%d in %d tables, size:%.2f Kb, elapsed time:%.2f ms %s",
-            numOfTables, *numOfBlocks, *numOfValidTables, size/1000.0, (et-st)/1000.0, pReader->idStr);
+            numOfTables, *numOfBlocks, *numOfValidTables, size/1000.0, el, pReader->idStr);
 
+  pReader->cost.numOfBlocks += (*numOfBlocks);
+  pReader->cost.headFileLoadTime += el;
+
   return TSDB_CODE_SUCCESS;
 }
@@ -657,13 +668,22 @@ static void doCopyColVal(SColumnInfoData* pColInfoData, int32_t rowIndex, int32_
   }
 }
 
+static SFileDataBlockInfo* getCurrentBlockInfo(SDataBlockIter* pBlockIter) {
+  SFileDataBlockInfo* pFBlockInfo = taosArrayGet(pBlockIter->blockList, pBlockIter->index);
+  return pFBlockInfo;
+}
+
+static SBlock* getCurrentBlock(SDataBlockIter* pBlockIter) {
+  return &pBlockIter->block;
+}
+
 static int32_t copyBlockDataToSDataBlock(STsdbReader* pReader, STableBlockScanInfo* pBlockScanInfo) {
   SReaderStatus*  pStatus = &pReader->status;
   SDataBlockIter* pBlockIter = &pStatus->blockIter;
 
   SBlockData*         pBlockData = &pStatus->fileBlockData;
   SFileDataBlockInfo* pFBlock = getCurrentBlockInfo(pBlockIter);
-  SBlock*             pBlock = taosArrayGet(pBlockScanInfo->pBlockList, pFBlock->tbBlockIdx);
+  SBlock*             pBlock = getCurrentBlock(pBlockIter);
   SSDataBlock*        pResBlock = pReader->pResBlock;
   int32_t             numOfCols = blockDataGetNumOfCols(pResBlock);
@@ -729,12 +749,12 @@ static int32_t copyBlockDataToSDataBlock(STsdbReader* pReader, STableBlockScanIn
 
   setBlockAllDumped(pDumpInfo, pBlock, pReader->order);
 
-  int64_t elapsedTime = (taosGetTimestampUs() - st);
+  double elapsedTime = (taosGetTimestampUs() - st) / 1000.0;
   pReader->cost.blockLoadTime += elapsedTime;
 
   int32_t unDumpedRows = asc ? pBlock->nRow - pDumpInfo->rowIndex : pDumpInfo->rowIndex + 1;
   tsdbDebug("%p load file block into buffer, global index:%d, table index:%d, brange:%" PRId64 "-%" PRId64
-            ", rows:%d, remain:%d, minVer:%" PRId64 ", maxVer:%" PRId64 ", elapsed time:%" PRId64 " us, %s",
+            ", rows:%d, remain:%d, minVer:%" PRId64 ", maxVer:%" PRId64 ", elapsed time:%.2f ms, %s",
             pReader, pBlockIter->index, pFBlock->tbBlockIdx, pBlock->minKey.ts, pBlock->maxKey.ts, remain, unDumpedRows,
             pBlock->minVersion, pBlock->maxVersion, elapsedTime, pReader->idStr);
@@ -746,27 +766,30 @@ static int32_t doLoadFileBlockData(STsdbReader* pReader, SDataBlockIter* pBlockI
   int64_t st = taosGetTimestampUs();
 
   SFileDataBlockInfo* pFBlock = getCurrentBlockInfo(pBlockIter);
-  SBlock*             pBlock = taosArrayGet(pBlockScanInfo->pBlockList, pFBlock->tbBlockIdx);
+  SBlock*             pBlock = getCurrentBlock(pBlockIter);
 
   SSDataBlock* pResBlock = pReader->pResBlock;
   int32_t      numOfCols = blockDataGetNumOfCols(pResBlock);
 
   SBlockLoadSuppInfo* pSupInfo = &pReader->suppInfo;
   SFileBlockDumpInfo* pDumpInfo = &pReader->status.fBlockDumpInfo;
 
-  int32_t code = tsdbReadColData(pReader->pFileReader, &pBlockScanInfo->blockIdx, pBlock, pSupInfo->colIds, numOfCols,
+  SBlockIdx blockIdx = {.suid = pReader->suid, .uid = pBlockScanInfo->uid};
+  int32_t   code = tsdbReadColData(pReader->pFileReader, &blockIdx, pBlock, pSupInfo->colIds, numOfCols,
                                    pBlockData, NULL, NULL);
   if (code != TSDB_CODE_SUCCESS) {
     goto _error;
   }
 
-  int64_t elapsedTime = (taosGetTimestampUs() - st);
+  double elapsedTime = (taosGetTimestampUs() - st) / 1000.0;
   pReader->cost.blockLoadTime += elapsedTime;
 
   pDumpInfo->allDumped = false;
   tsdbDebug("%p load file block into buffer, global index:%d, table index:%d, brange:%" PRId64 "-%" PRId64
-            ", rows:%d, minVer:%" PRId64 ", maxVer:%" PRId64 ", elapsed time:%" PRId64 " us, %s",
+            ", rows:%d, minVer:%" PRId64 ", maxVer:%" PRId64 ", elapsed time:%.2f ms, %s",
             pReader, pBlockIter->index, pFBlock->tbBlockIdx, pBlock->minKey.ts, pBlock->maxKey.ts, pBlock->nRow,
             pBlock->minVersion, pBlock->maxVersion, elapsedTime, pReader->idStr);
 
   return TSDB_CODE_SUCCESS;
 
 _error:
@@ -824,7 +847,21 @@ static int32_t fileDataBlockOrderCompar(const void* pLeft, const void* pRight, v
   SBlockOrderWrapper* pLeftBlock = &pSupporter->pDataBlockInfo[leftIndex][leftTableBlockIndex];
   SBlockOrderWrapper* pRightBlock = &pSupporter->pDataBlockInfo[rightIndex][rightTableBlockIndex];
 
-  return pLeftBlock->pBlock->aSubBlock[0].offset > pRightBlock->pBlock->aSubBlock[0].offset ? 1 : -1;
+  return pLeftBlock->offset > pRightBlock->offset ? 1 : -1;
+}
+
+static int32_t doSetCurrentBlock(SDataBlockIter* pBlockIter) {
+  SFileDataBlockInfo*  pFBlock = getCurrentBlockInfo(pBlockIter);
+  STableBlockScanInfo* pScanInfo = taosHashGet(pBlockIter->pTableMap, &pFBlock->uid, sizeof(pFBlock->uid));
+
+  int32_t* mapDataIndex = taosArrayGet(pScanInfo->pBlockList, pFBlock->tbBlockIdx);
+  tMapDataGetItemByIdx(&pScanInfo->mapData, *mapDataIndex, &pBlockIter->block, tGetBlock);
+
+#if 0
+  qDebug("check file block, table uid:%"PRIu64" index:%d offset:%"PRId64", ", pScanInfo->uid, *mapDataIndex, pBlockIter->block.aSubBlock[0].offset);
+#endif
+
+  return TSDB_CODE_SUCCESS;
 }
 
 static int32_t initBlockIterator(STsdbReader* pReader, SDataBlockIter* pBlockIter, int32_t numOfBlocks) {
@@ -867,10 +904,15 @@ static int32_t initBlockIterator(STsdbReader* pReader, SDataBlockIter* pBlockIte
     }
 
     sup.pDataBlockInfo[sup.numOfTables] = (SBlockOrderWrapper*)buf;
+    SBlock block = {0};
     for (int32_t k = 0; k < num; ++k) {
       SBlockOrderWrapper wrapper = {0};
-      wrapper.pBlock = (SBlock*)taosArrayGet(pTableScanInfo->pBlockList, k);
+
+      int32_t* mapDataIndex = taosArrayGet(pTableScanInfo->pBlockList, k);
+      tMapDataGetItemByIdx(&pTableScanInfo->mapData, *mapDataIndex, &block, tGetBlock);
+
       wrapper.uid = pTableScanInfo->uid;
+      wrapper.offset = block.aSubBlock[0].offset;
 
       sup.pDataBlockInfo[sup.numOfTables][k] = wrapper;
       cnt++;
@@ -894,6 +936,7 @@ static int32_t initBlockIterator(STsdbReader* pReader, SDataBlockIter* pBlockIte
 
     pBlockIter->index = asc ? 0 : (numOfBlocks - 1);
     cleanupBlockOrderSupporter(&sup);
+    doSetCurrentBlock(pBlockIter);
     return TSDB_CODE_SUCCESS;
   }
@@ -932,6 +975,8 @@ static int32_t initBlockIterator(STsdbReader* pReader, SDataBlockIter* pBlockIte
   taosMemoryFree(pTree);
 
   pBlockIter->index = asc ? 0 : (numOfBlocks - 1);
+  doSetCurrentBlock(pBlockIter);
+
   return TSDB_CODE_SUCCESS;
 }
@@ -944,6 +989,8 @@ static bool blockIteratorNext(SDataBlockIter* pBlockIter) {
   }
 
   pBlockIter->index += step;
+  doSetCurrentBlock(pBlockIter);
+
   return true;
 }
@@ -957,11 +1004,6 @@ static int32_t dataBlockPartiallyRequired(STimeWindow* pWindow, SVersionRange* p
          (pVerRange->maxVer < pBlock->maxVersion && pVerRange->maxVer >= pBlock->minVersion);
 }
 
-static SFileDataBlockInfo* getCurrentBlockInfo(SDataBlockIter* pBlockIter) {
-  SFileDataBlockInfo* pFBlockInfo = taosArrayGet(pBlockIter->blockList, pBlockIter->index);
-  return pFBlockInfo;
-}
-
 static SBlock* getNeighborBlockOfSameTable(SFileDataBlockInfo* pFBlockInfo, STableBlockScanInfo* pTableBlockScanInfo,
                                            int32_t* nextIndex, int32_t order) {
   bool asc = ASCENDING_TRAVERSE(order);
@@ -974,10 +1016,13 @@ static SBlock* getNeighborBlockOfSameTable(SFileDataBlockInfo* pFBlockInfo, STab
   }
 
   int32_t step = asc ? 1 : -1;
 
   *nextIndex = pFBlockInfo->tbBlockIdx + step;
-  SBlock* pNext = taosArrayGet(pTableBlockScanInfo->pBlockList, *nextIndex);
-  return pNext;
+  SBlock*  pBlock = taosMemoryCalloc(1, sizeof(SBlock));
+  int32_t* indexInMapdata = taosArrayGet(pTableBlockScanInfo->pBlockList, *nextIndex);
+
+  tMapDataGetItemByIdx(&pTableBlockScanInfo->mapData, *indexInMapdata, pBlock, tGetBlock);
+  return pBlock;
 }
 
 static int32_t findFileBlockInfoIndex(SDataBlockIter* pBlockIter, SFileDataBlockInfo* pFBlockInfo) {
@@ -1015,6 +1060,7 @@ static int32_t setFileBlockActiveInBlockIter(SDataBlockIter* pBlockIter, int32_t
     ASSERT(pBlockInfo->uid == fblock.uid && pBlockInfo->tbBlockIdx == fblock.tbBlockIdx);
   }
 
+  doSetCurrentBlock(pBlockIter);
   return TSDB_CODE_SUCCESS;
 }
 
@@ -1117,6 +1163,7 @@ static bool fileBlockShouldLoad(STsdbReader* pReader, SFileDataBlockInfo* pFBloc
   bool overlapWithNeighbor = false;
   if (pNeighbor) {
     overlapWithNeighbor = overlapWithNeighborBlock(pBlock, pNeighbor, pReader->order);
+    taosMemoryFree(pNeighbor);
   }
 
   // has duplicated ts of different version in this block
@@ -1142,11 +1189,13 @@ static int32_t buildDataBlockFromBuf(STsdbReader* pReader, STableBlockScanInfo*
 
   setComposedBlockFlag(pReader, true);
 
-  int64_t elapsedTime = taosGetTimestampUs() - st;
-  tsdbDebug("%p build data block from cache completed, elapsed time:%" PRId64
-            " us, numOfRows:%d, numOfCols:%d, brange: %" PRId64 " - %" PRId64 " %s",
-            pReader, elapsedTime, pBlock->info.rows, (int32_t)blockDataGetNumOfCols(pBlock), pBlock->info.window.skey,
-            pBlock->info.window.ekey, pReader->idStr);
+  double elapsedTime = (taosGetTimestampUs() - st) / 1000.0;
+  tsdbDebug(
+      "%p build data block from cache completed, elapsed time:%.2f ms, numOfRows:%d, brange: %" PRId64
+      " - %" PRId64 " %s",
+      pReader, elapsedTime, pBlock->info.rows, pBlock->info.window.skey, pBlock->info.window.ekey, pReader->idStr);
 
+  pReader->cost.buildmemBlock += elapsedTime;
   return code;
 }
 
@@ -1362,7 +1411,6 @@ static int32_t buildComposedDataBlockImpl(STsdbReader* pReader, STableBlockScanI
   SFileBlockDumpInfo* pDumpInfo = &pReader->status.fBlockDumpInfo;
   SBlockData*         pBlockData = &pReader->status.fileBlockData;
 
-  SRowMerger merge = {0};
   STSRow*    pTSRow = NULL;
 
   int64_t key = pBlockData->aTSKEY[pDumpInfo->rowIndex];
@@ -1384,6 +1432,8 @@ static int32_t buildComposedDataBlockImpl(STsdbReader* pReader, STableBlockScanI
 
     // imem & mem are all empty, only file exist
     TSDBROW fRow = tsdbRowFromBlockData(pBlockData, pDumpInfo->rowIndex);
 
+    SRowMerger merge = {0};
+
     tRowMergerInit(&merge, &fRow, pReader->pSchema);
     doMergeRowsInFileBlocks(pBlockData, pBlockScanInfo, pReader, &merge);
     tRowMergerGetRow(&merge, &pTSRow);
@@ -1408,9 +1458,7 @@ static int32_t buildComposedDataBlock(STsdbReader* pReader, STableBlockScanInfo*
     if (!isValidFileBlockRow(pBlockData, pDumpInfo, pBlockScanInfo, pReader)) {
       pDumpInfo->rowIndex += step;
 
-      SFileDataBlockInfo* pFBlock = getCurrentBlockInfo(&pReader->status.blockIter);
-      SBlock*             pBlock = taosArrayGet(pBlockScanInfo->pBlockList, pFBlock->tbBlockIdx);
-
+      SBlock* pBlock = getCurrentBlock(&pReader->status.blockIter);
       if (pDumpInfo->rowIndex >= pBlock->nRow || pDumpInfo->rowIndex < 0) {
         setBlockAllDumped(pDumpInfo, pBlock, pReader->order);
         break;
@@ -1421,9 +1469,7 @@ static int32_t buildComposedDataBlock(STsdbReader* pReader, STableBlockScanInfo*
     }
 
     buildComposedDataBlockImpl(pReader, pBlockScanInfo);
-
-    SFileDataBlockInfo* pFBlock = getCurrentBlockInfo(&pReader->status.blockIter);
-    SBlock*             pBlock = taosArrayGet(pBlockScanInfo->pBlockList, pFBlock->tbBlockIdx);
+    SBlock* pBlock = getCurrentBlock(&pReader->status.blockIter);
 
     // currently loaded file data block is consumed
     if (pDumpInfo->rowIndex >= pBlock->nRow || pDumpInfo->rowIndex < 0) {
@@ -1666,7 +1712,7 @@ static int32_t doBuildDataBlock(STsdbReader* pReader) {
   SFileDataBlockInfo*  pFBlock = getCurrentBlockInfo(pBlockIter);
   STableBlockScanInfo* pScanInfo = taosHashGet(pStatus->pTableMap, &pFBlock->uid, sizeof(pFBlock->uid));
-  SBlock*              pBlock = taosArrayGet(pScanInfo->pBlockList, pFBlock->tbBlockIdx);
+  SBlock*              pBlock = getCurrentBlock(pBlockIter);
 
   TSDBKEY key = getCurrentKeyInBuf(pBlockIter, pReader);
   if (fileBlockShouldLoad(pReader, pFBlock, pBlock, pScanInfo, key)) {
@@ -1729,9 +1775,7 @@ static int32_t buildBlockFromBufferSequentially(STsdbReader* pReader) {
 
 // set the correct start position in case of the first/last file block, according to the query time window
 static void initBlockDumpInfo(STsdbReader* pReader, SDataBlockIter* pBlockIter) {
-  SFileDataBlockInfo*  pFBlock = getCurrentBlockInfo(pBlockIter);
-  STableBlockScanInfo* pScanInfo = taosHashGet(pReader->status.pTableMap, &pFBlock->uid, sizeof(pFBlock->uid));
-  SBlock*              pBlock = taosArrayGet(pScanInfo->pBlockList, pFBlock->tbBlockIdx);
+  SBlock* pBlock = getCurrentBlock(pBlockIter);
 
   SReaderStatus* pStatus = &pReader->status;
 
@@ -2102,6 +2146,8 @@ static int32_t checkForNeighborFileBlock(STsdbReader* pReader, STableBlockScanIn
   }
 
   bool overlap = overlapWithNeighborBlock(pBlock, pNeighborBlock, pReader->order);
+  taosMemoryFree(pNeighborBlock);
+
   if (overlap) {  // load next block
     SReaderStatus*  pStatus = &pReader->status;
     SDataBlockIter* pBlockIter = &pStatus->blockIter;
@@ -2152,7 +2198,7 @@ int32_t doMergeRowsInFileBlocks(SBlockData* pBlockData, STableBlockScanInfo* pSc
       CHECK_FILEBLOCK_STATE st;
 
       SFileDataBlockInfo* pFileBlockInfo = getCurrentBlockInfo(&pReader->status.blockIter);
-      SBlock*             pCurrentBlock = taosArrayGet(pScanInfo->pBlockList, pFileBlockInfo->tbBlockIdx);
+      SBlock*             pCurrentBlock = getCurrentBlock(&pReader->status.blockIter);
       checkForNeighborFileBlock(pReader, pScanInfo, pCurrentBlock, pFileBlockInfo, pMerger, key, &st);
       if (st == CHECK_FILEBLOCK_QUIT) {
         break;
@@ -2461,7 +2507,7 @@ int32_t tsdbReaderOpen(SVnode* pVnode, SQueryTableDataCond* pCond, SArray* pTabl
     SDataBlockIter* pBlockIter = &pReader->status.blockIter;
 
     initFilesetIterator(&pReader->status.fileIter, pReader->pReadSnap->fs.aDFileSet, pReader->order, pReader->idStr);
-    resetDataBlockIterator(&pReader->status.blockIter, pReader->order);
+    resetDataBlockIterator(&pReader->status.blockIter, pReader->order, pReader->status.pTableMap);
 
     // no data in files, let's try buffer in memory
     if (pReader->status.fileIter.numOfFiles == 0) {
@@ -2477,7 +2523,7 @@ int32_t tsdbReaderOpen(SVnode* pVnode, SQueryTableDataCond* pCond, SArray* pTabl
     SDataBlockIter* pBlockIter = &pPrevReader->status.blockIter;
 
     initFilesetIterator(&pPrevReader->status.fileIter, pPrevReader->pReadSnap->fs.aDFileSet, pPrevReader->order, pPrevReader->idStr);
-    resetDataBlockIterator(&pPrevReader->status.blockIter, pPrevReader->order);
+    resetDataBlockIterator(&pPrevReader->status.blockIter, pPrevReader->order, pReader->status.pTableMap);
 
     // no data in files, let's try buffer in memory
     if (pPrevReader->status.fileIter.numOfFiles == 0) {
@@ -2519,6 +2565,8 @@ void tsdbReaderClose(STsdbReader* pReader) {
   taosMemoryFree(pSupInfo->buildBuf);
 
   cleanupDataBlockIterator(&pReader->status.blockIter);
 
+  size_t numOfTables = taosHashGetSize(pReader->status.pTableMap);
   destroyBlockScanInfo(pReader->status.pTableMap);
   blockDataDestroy(pReader->pResBlock);
 
@@ -2528,10 +2576,12 @@ void tsdbReaderClose(STsdbReader* pReader) {
 
   SIOCostSummary* pCost = &pReader->cost;
 
-  tsdbDebug("%p :io-cost summary: head-file read cnt:%" PRIu64 ", head-file time:%" PRIu64 " us, statis-info:%" PRId64
-            " us, datablock:%" PRId64 " us, check data:%" PRId64 " us, %s",
-            pReader, pCost->headFileLoad, pCost->headFileLoadTime, pCost->smaLoadTime, pCost->blockLoadTime,
-            pCost->checkForNextTime, pReader->idStr);
+  tsdbDebug("%p :io-cost summary: head-file:%" PRIu64 ", head-file time:%.2f ms, SMA:%" PRId64 " SMA-time:%.2f ms, "
+            "fileBlocks:%" PRId64 ", fileBlocks-time:%.2f ms, build in-memory-block-time:%.2f ms, STableBlockScanInfo "
+            "size:%.2f Kb %s",
+            pReader, pCost->headFileLoad, pCost->headFileLoadTime, pCost->smaData, pCost->smaLoadTime,
+            pCost->numOfBlocks, pCost->blockLoadTime, pCost->buildmemBlock,
+            numOfTables * sizeof(STableBlockScanInfo) / 1000.0, pReader->idStr);
 
   taosMemoryFree(pReader->idStr);
   taosMemoryFree(pReader->pSchema);
@@ -2543,7 +2593,6 @@ static bool doTsdbNextDataBlock(STsdbReader* pReader) {
   SSDataBlock* pBlock = pReader->pResBlock;
   blockDataCleanup(pBlock);
 
-  int64_t stime = taosGetTimestampUs();
   SReaderStatus* pStatus = &pReader->status;
 
   if (pStatus->loadFromFile) {
@@ -2639,9 +2688,8 @@ int32_t tsdbRetrieveDatablockSMA(STsdbReader* pReader, SColumnDataAgg*** pBlockS
   }
 
   SFileDataBlockInfo* pFBlock = getCurrentBlockInfo(&pReader->status.blockIter);
-  STableBlockScanInfo* pBlockScanInfo = taosHashGet(pReader->status.pTableMap, &pFBlock->uid, sizeof(pFBlock->uid));
-  SBlock*              pBlock = taosArrayGet(pBlockScanInfo->pBlockList, pFBlock->tbBlockIdx);
-
+  SBlock* pBlock = getCurrentBlock(&pReader->status.blockIter);
   int64_t stime = taosGetTimestampUs();
 
   SBlockLoadSuppInfo* pSup = &pReader->suppInfo;
@@ -2690,12 +2738,13 @@ int32_t tsdbRetrieveDatablockSMA(STsdbReader* pReader, SColumnDataAgg*** pBlockS
     }
   }
 
-  int64_t elapsed = taosGetTimestampUs() - stime;
+  double elapsed = (taosGetTimestampUs() - stime) / 1000.0;
   pReader->cost.smaLoadTime += elapsed;
+  pReader->cost.smaData += 1;
 
   *pBlockStatis = pSup->plist;
 
-  tsdbDebug("vgId:%d, succeed to load block SMA for uid %" PRIu64 ", elapsed time:%" PRId64 "us, %s", 0, pFBlock->uid,
+  tsdbDebug("vgId:%d, succeed to load block SMA for uid %" PRIu64 ", elapsed time:%.2f ms, %s", 0, pFBlock->uid,
             elapsed, pReader->idStr);
 
   return code;
@@ -2764,7 +2813,7 @@ int32_t tsdbReaderReset(STsdbReader* pReader, SQueryTableDataCond* pCond) {
   tsdbDataFReaderClose(&pReader->pFileReader);
 
   initFilesetIterator(&pReader->status.fileIter, pReader->pReadSnap->fs.aDFileSet, pReader->order, pReader->idStr);
-  resetDataBlockIterator(&pReader->status.blockIter, pReader->order);
+  resetDataBlockIterator(&pReader->status.blockIter, pReader->order, pReader->status.pTableMap);
   resetDataBlockScanInfo(pReader->status.pTableMap);
 
   int32_t code = 0;
@@ -24,6 +24,8 @@ void tMapDataReset(SMapData *pMapData) {
 void tMapDataClear(SMapData *pMapData) {
   tFree((uint8_t *)pMapData->aOffset);
   tFree(pMapData->pData);
+  pMapData->pData = NULL;
+  pMapData->aOffset = NULL;
 }
 
 int32_t tMapDataPutItem(SMapData *pMapData, void *pItem, int32_t (*tPutItemFn)(uint8_t *, void *)) {
@@ -40,8 +40,8 @@ const SVnodeCfg vnodeCfgDefault = {.vgId = -1,
                                        .vgId = -1,
                                        .fsyncPeriod = 0,
                                        .retentionPeriod = -1,
-                                       .rollPeriod = -1,
-                                       .segSize = -1,
+                                       .rollPeriod = 0,
+                                       .segSize = 0,
                                        .retentionSize = -1,
                                        .level = TAOS_WAL_WRITE,
                                    },
@@ -206,13 +206,13 @@ static void inline vnodeProposeBatchMsg(SVnode *pVnode, SRpcMsg **pMsgArr, bool
 }
 
 void vnodeProposeWriteMsg(SQueueInfo *pInfo, STaosQall *qall, int32_t numOfMsgs) {
   SVnode   *pVnode = pInfo->ahandle;
   int32_t   vgId = pVnode->config.vgId;
   int32_t   code = 0;
   SRpcMsg  *pMsg = NULL;
   int32_t   arrayPos = 0;
-  SRpcMsg **pMsgArr = taosMemoryCalloc(numOfMsgs, sizeof(SRpcMsg*));
+  SRpcMsg **pMsgArr = taosMemoryCalloc(numOfMsgs, sizeof(SRpcMsg *));
   bool     *pIsWeakArr = taosMemoryCalloc(numOfMsgs, sizeof(bool));
   vTrace("vgId:%d, get %d msgs from vnode-write queue", vgId, numOfMsgs);
 
   for (int32_t msg = 0; msg < numOfMsgs; msg++) {
@@ -506,7 +506,7 @@ static void vnodeSyncCommitMsg(SSyncFSM *pFsm, const SRpcMsg *pMsg, SFsmCbMeta c
          syncGetVgId(pVnode->sync), pFsm, cbMeta.index, cbMeta.isWeak, cbMeta.code, cbMeta.state,
          syncUtilState2String(cbMeta.state), pMsg->msgType, TMSG_INFO(pMsg->msgType));
 
-  if (cbMeta.code == 0) {
+  if (cbMeta.code == 0 && cbMeta.isWeak == 0) {
     SRpcMsg rpcMsg = {.msgType = pMsg->msgType, .contLen = pMsg->contLen};
     rpcMsg.pCont = rpcMallocCont(rpcMsg.contLen);
     memcpy(rpcMsg.pCont, pMsg->pCont, pMsg->contLen);
@@ -529,6 +529,23 @@ static void vnodeSyncPreCommitMsg(SSyncFSM *pFsm, const SRpcMsg *pMsg, SFsmCbMet
   vTrace("vgId:%d, pre-commit-cb is excuted, fsm:%p, index:%" PRId64 ", isWeak:%d, code:%d, state:%d %s, msgtype:%d %s",
          syncGetVgId(pVnode->sync), pFsm, cbMeta.index, cbMeta.isWeak, cbMeta.code, cbMeta.state,
          syncUtilState2String(cbMeta.state), pMsg->msgType, TMSG_INFO(pMsg->msgType));
+
+  if (cbMeta.code == 0 && cbMeta.isWeak == 1) {
+    SRpcMsg rpcMsg = {.msgType = pMsg->msgType, .contLen = pMsg->contLen};
+    rpcMsg.pCont = rpcMallocCont(rpcMsg.contLen);
+    memcpy(rpcMsg.pCont, pMsg->pCont, pMsg->contLen);
+    syncGetAndDelRespRpc(pVnode->sync, cbMeta.seqNum, &rpcMsg.info);
+    rpcMsg.info.conn.applyIndex = cbMeta.index;
+    rpcMsg.info.conn.applyTerm = cbMeta.term;
+    tmsgPutToQueue(&pVnode->msgCb, APPLY_QUEUE, &rpcMsg);
+  } else {
+    SRpcMsg rsp = {.code = cbMeta.code, .info = pMsg->info};
+    vError("vgId:%d, sync pre-commit error, msgtype:%d,%s, error:0x%X, errmsg:%s", syncGetVgId(pVnode->sync),
+           pMsg->msgType, TMSG_INFO(pMsg->msgType), cbMeta.code, tstrerror(cbMeta.code));
+    if (rsp.info.handle != NULL) {
+      tmsgSendRsp(&rsp);
+    }
+  }
 }
 
 static void vnodeSyncRollBackMsg(SSyncFSM *pFsm, const SRpcMsg *pMsg, SFsmCbMeta cbMeta) {
@@ -1003,6 +1003,7 @@ int32_t updateSessionWindowInfo(SResultWindowInfo* pWinInfo, TSKEY* pStartTs,
 bool functionNeedToExecute(SqlFunctionCtx* pCtx);
 bool isCloseWindow(STimeWindow* pWin, STimeWindowAggSupp* pSup);
 void appendOneRow(SSDataBlock* pBlock, TSKEY* pStartTs, TSKEY* pEndTs, uint64_t* pUid);
+void printDataBlock(SSDataBlock* pBlock, const char* flag);
 
 int32_t finalizeResultRowIntoResultDataBlock(SDiskbasedBuf* pBuf, SResultRowPosition* resultRowPosition,
                                              SqlFunctionCtx* pCtx, SExprInfo* pExprInfo, int32_t numOfExprs, const int32_t* rowCellOffset,
@@ -182,7 +182,8 @@ static int32_t getDataBlock(SDataSinkHandle* pHandle, SOutputData* pOutput) {
     return TSDB_CODE_SUCCESS;
   }
   SDataCacheEntry* pEntry = (SDataCacheEntry*)(pDeleter->nextOutput.pData);
   memcpy(pOutput->pData, pEntry->data, pEntry->dataLen);
+  pDeleter->pParam->pUidList = NULL;
   pOutput->numOfRows = pEntry->numOfRows;
   pOutput->numOfCols = pEntry->numOfCols;
   pOutput->compressed = pEntry->compressed;
@@ -205,6 +206,8 @@ static int32_t destroyDataSinker(SDataSinkHandle* pHandle) {
   SDataDeleterHandle* pDeleter = (SDataDeleterHandle*)pHandle;
   atomic_sub_fetch_64(&gDataSinkStat.cachedSize, pDeleter->cachedSize);
   taosMemoryFreeClear(pDeleter->nextOutput.pData);
+  taosArrayDestroy(pDeleter->pParam->pUidList);
+  taosMemoryFree(pDeleter->pParam);
   while (!taosQueueEmpty(pDeleter->pDataBlocks)) {
     SDataDeleterBuf* pBuf = NULL;
     taosReadQitem(pDeleter->pDataBlocks, (void**)&pBuf);
@@ -666,6 +666,11 @@ int32_t projectApplyFunctions(SExprInfo* pExpr, SSDataBlock* pResult, SSDataBloc
         pfCtx->pTsOutput = (SColumnInfoData*)pCtx[*outputColIndex].pOutput;
       }
 
+      // link pDstBlock to set selectivity value
+      if (pfCtx->subsidiaries.num > 0) {
+        pfCtx->pDstBlock = pResult;
+      }
+
       numOfRows = pfCtx->fpSet.process(pfCtx);
     } else if (fmIsAggFunc(pfCtx->functionId)) {
       // _group_key function for "partition by tbname" + csum(col_name) query
@@ -1616,6 +1616,7 @@ static SSDataBlock* doStreamIntervalAgg(SOperatorInfo* pOperator) {
       pInfo->twAggSup.maxTs = TMAX(pInfo->twAggSup.maxTs, pBlock->info.window.ekey);
       hashIntervalAgg(pOperator, &pInfo->binfo.resultRowInfo, pBlock, MAIN_SCAN, pUpdated);
     }
 
     pOperator->status = OP_RES_TO_RETURN;
     closeIntervalWindow(pInfo->aggSup.pResultRowHashTable, &pInfo->twAggSup, &pInfo->interval, NULL, pUpdated,
                         pInfo->pRecycledPages, pInfo->aggSup.pResultBuf);
@@ -1628,6 +1629,7 @@ static SSDataBlock* doStreamIntervalAgg(SOperatorInfo* pOperator) {
     if (pInfo->pDelRes->info.rows > 0) {
       return pInfo->pDelRes;
     }
 
     doBuildResultDatablock(pOperator, &pInfo->binfo, &pInfo->groupResInfo, pInfo->aggSup.pResultBuf);
     printDataBlock(pInfo->binfo.pRes, "single interval");
     return pInfo->binfo.pRes->info.rows == 0 ? NULL : pInfo->binfo.pRes;
@@ -2769,7 +2771,7 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
 
   SExprSupp* pSup = &pOperator->exprSupp;
 
-  qDebug("interval status %d %s", pOperator->status, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
+  qDebug("interval status %d %s", pOperator->status, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
 
   if (pOperator->status == OP_EXEC_DONE) {
     return NULL;
@@ -2778,7 +2780,7 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
     if (pInfo->pPullDataRes->info.rows != 0) {
       // process the rest of the data
       ASSERT(IS_FINAL_OP(pInfo));
-      printDataBlock(pInfo->pPullDataRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
+      printDataBlock(pInfo->pPullDataRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
       return pInfo->pPullDataRes;
     }
 
@@ -2793,20 +2795,20 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
       }
       return NULL;
     }
-    printDataBlock(pInfo->binfo.pRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
+    printDataBlock(pInfo->binfo.pRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
     return pInfo->binfo.pRes;
   } else {
     if (!IS_FINAL_OP(pInfo)) {
       doBuildResultDatablock(pOperator, &pInfo->binfo, &pInfo->groupResInfo, pInfo->aggSup.pResultBuf);
       if (pInfo->binfo.pRes->info.rows != 0) {
-        printDataBlock(pInfo->binfo.pRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
+        printDataBlock(pInfo->binfo.pRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
        return pInfo->binfo.pRes;
       }
     }
     if (pInfo->pUpdateRes->info.rows != 0 && pInfo->returnUpdate) {
       pInfo->returnUpdate = false;
       ASSERT(!IS_FINAL_OP(pInfo));
-      printDataBlock(pInfo->pUpdateRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
+      printDataBlock(pInfo->pUpdateRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
       // process the rest of the data
       return pInfo->pUpdateRes;
     }
@@ -2814,13 +2816,13 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
     // if (pInfo->pPullDataRes->info.rows != 0) {
     //   // process the rest of the data
     //   ASSERT(IS_FINAL_OP(pInfo));
-    //   printDataBlock(pInfo->pPullDataRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
+    //   printDataBlock(pInfo->pPullDataRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
     //   return pInfo->pPullDataRes;
     // }
     doBuildDeleteResult(pInfo->pDelWins, &pInfo->delIndex, pInfo->pDelRes);
     if (pInfo->pDelRes->info.rows != 0) {
       // process the rest of the data
-      printDataBlock(pInfo->pDelRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
+      printDataBlock(pInfo->pDelRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
       return pInfo->pDelRes;
     }
   }
@@ -2831,10 +2833,10 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
       clearSpecialDataBlock(pInfo->pUpdateRes);
       removeDeleteResults(pUpdated, pInfo->pDelWins);
       pOperator->status = OP_RES_TO_RETURN;
-      qDebug("%s return data", IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
+      qDebug("%s return data", IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
       break;
     }
-    printDataBlock(pBlock, IS_FINAL_OP(pInfo) ? "interval Final recv" : "interval Semi recv");
+    printDataBlock(pBlock, IS_FINAL_OP(pInfo) ? "interval final recv" : "interval semi recv");
     maxTs = TMAX(maxTs, pBlock->info.window.ekey);
 
     if (pBlock->info.type == STREAM_NORMAL || pBlock->info.type == STREAM_PULL_DATA ||
@@ -2934,20 +2936,20 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
   if (pInfo->pPullDataRes->info.rows != 0) {
     // process the rest of the data
     ASSERT(IS_FINAL_OP(pInfo));
-    printDataBlock(pInfo->pPullDataRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
+    printDataBlock(pInfo->pPullDataRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
     return pInfo->pPullDataRes;
   }
 
   doBuildResultDatablock(pOperator, &pInfo->binfo, &pInfo->groupResInfo, pInfo->aggSup.pResultBuf);
   if (pInfo->binfo.pRes->info.rows != 0) {
-    printDataBlock(pInfo->binfo.pRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
+    printDataBlock(pInfo->binfo.pRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
     return pInfo->binfo.pRes;
   }
 
   if (pInfo->pUpdateRes->info.rows != 0 && pInfo->returnUpdate) {
     pInfo->returnUpdate = false;
     ASSERT(!IS_FINAL_OP(pInfo));
-    printDataBlock(pInfo->pUpdateRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
+    printDataBlock(pInfo->pUpdateRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
     // process the rest of the data
     return pInfo->pUpdateRes;
   }
@@ -2955,7 +2957,7 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
   doBuildDeleteResult(pInfo->pDelWins, &pInfo->delIndex, pInfo->pDelRes);
   if (pInfo->pDelRes->info.rows != 0) {
     // process the rest of the data
-    printDataBlock(pInfo->pDelRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
+    printDataBlock(pInfo->pDelRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
     return pInfo->pDelRes;
   }
   // ASSERT(false);
@@ -3815,14 +3817,14 @@ static SSDataBlock* doStreamSessionAgg(SOperatorInfo* pOperator) {
   } else if (pOperator->status == OP_RES_TO_RETURN) {
     doBuildDeleteDataBlock(pInfo->pStDeleted, pInfo->pDelRes, &pInfo->pDelIterator);
     if (pInfo->pDelRes->info.rows > 0) {
-      printDataBlock(pInfo->pDelRes, IS_FINAL_OP(pInfo) ? "Final Session" : "Single Session");
+      printDataBlock(pInfo->pDelRes, IS_FINAL_OP(pInfo) ? "final session" : "single session");
       return pInfo->pDelRes;
     }
     doBuildResultDatablock(pOperator, pBInfo, &pInfo->groupResInfo, pInfo->streamAggSup.pResultBuf);
     if (pBInfo->pRes->info.rows == 0 || !hasDataInGroupInfo(&pInfo->groupResInfo)) {
       doSetOperatorCompleted(pOperator);
     }
-    printDataBlock(pBInfo->pRes, IS_FINAL_OP(pInfo) ? "Final Session" : "Single Session");
+    printDataBlock(pBInfo->pRes, IS_FINAL_OP(pInfo) ? "final session" : "single session");
     return pBInfo->pRes->info.rows == 0 ? NULL : pBInfo->pRes;
   }
 
@ -3835,7 +3837,7 @@ static SSDataBlock* doStreamSessionAgg(SOperatorInfo* pOperator) {
|
||||||
if (pBlock == NULL) {
|
if (pBlock == NULL) {
|
||||||
break;
|
break;
|
||||||
}
|
}
|
||||||
printDataBlock(pBlock, IS_FINAL_OP(pInfo) ? "Final Session Recv" : "Single Session Recv");
|
printDataBlock(pBlock, IS_FINAL_OP(pInfo) ? "final session recv" : "single session recv");
|
||||||
|
|
||||||
if (pBlock->info.type == STREAM_CLEAR) {
|
if (pBlock->info.type == STREAM_CLEAR) {
|
||||||
SArray* pWins = taosArrayInit(16, sizeof(SResultWindowInfo));
|
SArray* pWins = taosArrayInit(16, sizeof(SResultWindowInfo));
|
||||||
|
@ -3912,11 +3914,11 @@ static SSDataBlock* doStreamSessionAgg(SOperatorInfo* pOperator) {
|
||||||
blockDataEnsureCapacity(pInfo->binfo.pRes, pOperator->resultInfo.capacity);
|
blockDataEnsureCapacity(pInfo->binfo.pRes, pOperator->resultInfo.capacity);
|
||||||
doBuildDeleteDataBlock(pInfo->pStDeleted, pInfo->pDelRes, &pInfo->pDelIterator);
|
doBuildDeleteDataBlock(pInfo->pStDeleted, pInfo->pDelRes, &pInfo->pDelIterator);
|
||||||
if (pInfo->pDelRes->info.rows > 0) {
|
if (pInfo->pDelRes->info.rows > 0) {
|
||||||
printDataBlock(pInfo->pDelRes, IS_FINAL_OP(pInfo) ? "Final Session" : "Single Session");
|
printDataBlock(pInfo->pDelRes, IS_FINAL_OP(pInfo) ? "final session" : "single session");
|
||||||
return pInfo->pDelRes;
|
return pInfo->pDelRes;
|
||||||
}
|
}
|
||||||
doBuildResultDatablock(pOperator, &pInfo->binfo, &pInfo->groupResInfo, pInfo->streamAggSup.pResultBuf);
|
doBuildResultDatablock(pOperator, &pInfo->binfo, &pInfo->groupResInfo, pInfo->streamAggSup.pResultBuf);
|
||||||
printDataBlock(pBInfo->pRes, IS_FINAL_OP(pInfo) ? "Final Session" : "Single Session");
|
printDataBlock(pBInfo->pRes, IS_FINAL_OP(pInfo) ? "final session" : "single session");
|
||||||
return pBInfo->pRes->info.rows == 0 ? NULL : pBInfo->pRes;
|
return pBInfo->pRes->info.rows == 0 ? NULL : pBInfo->pRes;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@@ -3955,21 +3957,21 @@ static SSDataBlock* doStreamSessionSemiAgg(SOperatorInfo* pOperator) {
  } else if (pOperator->status == OP_RES_TO_RETURN) {
    doBuildResultDatablock(pOperator, pBInfo, &pInfo->groupResInfo, pInfo->streamAggSup.pResultBuf);
    if (pBInfo->pRes->info.rows > 0) {
-      printDataBlock(pBInfo->pRes, "Semi Session");
+      printDataBlock(pBInfo->pRes, "sems session");
      return pBInfo->pRes;
    }

    // doBuildDeleteDataBlock(pInfo->pStDeleted, pInfo->pDelRes, &pInfo->pDelIterator);
    if (pInfo->pDelRes->info.rows > 0 && !pInfo->returnDelete) {
      pInfo->returnDelete = true;
-      printDataBlock(pInfo->pDelRes, "Semi Session");
+      printDataBlock(pInfo->pDelRes, "sems session");
      return pInfo->pDelRes;
    }

    if (pInfo->pUpdateRes->info.rows > 0) {
      // process the rest of the data
      pOperator->status = OP_OPENED;
-      printDataBlock(pInfo->pUpdateRes, "Semi Session");
+      printDataBlock(pInfo->pUpdateRes, "sems session");
      return pInfo->pUpdateRes;
    }
    // semi interval operator clear disk buffer

@@ -4033,21 +4035,21 @@ static SSDataBlock* doStreamSessionSemiAgg(SOperatorInfo* pOperator) {

  doBuildResultDatablock(pOperator, pBInfo, &pInfo->groupResInfo, pInfo->streamAggSup.pResultBuf);
  if (pBInfo->pRes->info.rows > 0) {
-    printDataBlock(pBInfo->pRes, "Semi Session");
+    printDataBlock(pBInfo->pRes, "sems session");
    return pBInfo->pRes;
  }

  // doBuildDeleteDataBlock(pInfo->pStDeleted, pInfo->pDelRes, &pInfo->pDelIterator);
  if (pInfo->pDelRes->info.rows > 0 && !pInfo->returnDelete) {
    pInfo->returnDelete = true;
-    printDataBlock(pInfo->pDelRes, "Semi Session");
+    printDataBlock(pInfo->pDelRes, "sems session");
    return pInfo->pDelRes;
  }

  if (pInfo->pUpdateRes->info.rows > 0) {
    // process the rest of the data
    pOperator->status = OP_OPENED;
-    printDataBlock(pInfo->pUpdateRes, "Semi Session");
+    printDataBlock(pInfo->pUpdateRes, "sems session");
    return pInfo->pUpdateRes;
  }

@@ -2231,7 +2231,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
  {
    .name = "derivative",
    .type = FUNCTION_TYPE_DERIVATIVE,
-    .classification = FUNC_MGT_INDEFINITE_ROWS_FUNC | FUNC_MGT_TIMELINE_FUNC | FUNC_MGT_IMPLICIT_TS_FUNC,
+    .classification = FUNC_MGT_INDEFINITE_ROWS_FUNC | FUNC_MGT_SELECT_FUNC | FUNC_MGT_TIMELINE_FUNC | FUNC_MGT_IMPLICIT_TS_FUNC,
    .translateFunc = translateDerivative,
    .getEnvFunc   = getDerivativeFuncEnv,
    .initFunc     = derivativeFuncSetup,

@@ -2436,7 +2436,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
  {
    .name = "diff",
    .type = FUNCTION_TYPE_DIFF,
-    .classification = FUNC_MGT_INDEFINITE_ROWS_FUNC | FUNC_MGT_TIMELINE_FUNC | FUNC_MGT_FORBID_STREAM_FUNC,
+    .classification = FUNC_MGT_INDEFINITE_ROWS_FUNC | FUNC_MGT_SELECT_FUNC | FUNC_MGT_TIMELINE_FUNC | FUNC_MGT_FORBID_STREAM_FUNC,
    .translateFunc = translateDiff,
    .getEnvFunc   = getDiffFuncEnv,
    .initFunc     = diffFunctionSetup,

@@ -1624,6 +1624,10 @@ int32_t minmaxFunctionFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
 }

 void setNullSelectivityValue(SqlFunctionCtx* pCtx, SSDataBlock* pBlock, int32_t rowIndex) {
+  if (pCtx->subsidiaries.num <= 0) {
+    return;
+  }
+
  for (int32_t j = 0; j < pCtx->subsidiaries.num; ++j) {
    SqlFunctionCtx* pc = pCtx->subsidiaries.pCtx[j];
    int32_t dstSlotId = pc->pExpr->base.resSchema.slotId;

@@ -1655,8 +1659,6 @@ void setSelectivityValue(SqlFunctionCtx* pCtx, SSDataBlock* pBlock, const STuple
    SFunctParam* pFuncParam = &pc->pExpr->base.pParam[0];
    int32_t dstSlotId = pc->pExpr->base.resSchema.slotId;

-    int32_t ps = 0;
-
    SColumnInfoData* pDstCol = taosArrayGet(pBlock->pDataBlock, dstSlotId);
    ASSERT(pc->pExpr->base.resSchema.bytes == pDstCol->info.bytes);
    if (nullList[j]) {

@@ -1678,6 +1680,39 @@ void releaseSource(STuplePos* pPos) {
  // Todo(liuyao) relase row
 }

+// This function append the selectivity to subsidiaries function context directly, without fetching data
+// from intermediate disk based buf page
+void appendSelectivityValue(SqlFunctionCtx* pCtx, int32_t rowIndex, int32_t pos) {
+  if (pCtx->subsidiaries.num <= 0) {
+    return;
+  }
+
+  for (int32_t j = 0; j < pCtx->subsidiaries.num; ++j) {
+    SqlFunctionCtx* pc = pCtx->subsidiaries.pCtx[j];
+
+    // get data from source col
+    SFunctParam* pFuncParam = &pc->pExpr->base.pParam[0];
+    int32_t srcSlotId = pFuncParam->pCol->slotId;
+
+    SColumnInfoData* pSrcCol = taosArrayGet(pCtx->pSrcBlock->pDataBlock, srcSlotId);
+
+    char* pData = colDataGetData(pSrcCol, rowIndex);
+
+    // append to dest col
+    int32_t dstSlotId = pc->pExpr->base.resSchema.slotId;
+
+    SColumnInfoData* pDstCol = taosArrayGet(pCtx->pDstBlock->pDataBlock, dstSlotId);
+    ASSERT(pc->pExpr->base.resSchema.bytes == pDstCol->info.bytes);
+
+    if (colDataIsNull_s(pSrcCol, rowIndex) == true) {
+      colDataAppendNULL(pDstCol, pos);
+    } else {
+      colDataAppend(pDstCol, pos, pData, false);
+    }
+  }
+}
+
 void replaceTupleData(STuplePos* pDestPos, STuplePos* pSourcePos) {
  releaseSource(pDestPos);
  *pDestPos = *pSourcePos;

@@ -3155,6 +3190,7 @@ static void doHandleDiff(SDiffInfo* pDiffInfo, int32_t type, const char* pv, SCo
        colDataAppendInt64(pOutput, pos, &delta);
      }
      pDiffInfo->prev.i64 = v;

      break;
    }
    case TSDB_DATA_TYPE_BOOL:

@@ -3248,6 +3284,10 @@ int32_t diffFunction(SqlFunctionCtx* pCtx) {

      if (pDiffInfo->hasPrev) {
        doHandleDiff(pDiffInfo, pInputCol->info.type, pv, pOutput, pos, pCtx->order);
+        // handle selectivity
+        if (pCtx->subsidiaries.num > 0) {
+          appendSelectivityValue(pCtx, i, pos);
+        }

        numOfElems++;
      } else {

@@ -3274,6 +3314,10 @@ int32_t diffFunction(SqlFunctionCtx* pCtx) {
      // there is a row of previous data block to be handled in the first place.
      if (pDiffInfo->hasPrev) {
        doHandleDiff(pDiffInfo, pInputCol->info.type, pv, pOutput, pos, pCtx->order);
+        // handle selectivity
+        if (pCtx->subsidiaries.num > 0) {
+          appendSelectivityValue(pCtx, i, pos);
+        }

        numOfElems++;
      } else {

@@ -5724,6 +5768,12 @@ int32_t derivativeFunction(SqlFunctionCtx* pCtx) {
        if (pTsOutput != NULL) {
          colDataAppendInt64(pTsOutput, pos, &tsList[i]);
        }

+        // handle selectivity
+        if (pCtx->subsidiaries.num > 0) {
+          appendSelectivityValue(pCtx, i, pos);
+        }
+
        numOfElems++;
      }
    }

@@ -5756,6 +5806,12 @@ int32_t derivativeFunction(SqlFunctionCtx* pCtx) {
        if (pTsOutput != NULL) {
          colDataAppendInt64(pTsOutput, pos, &pDerivInfo->prevTs);
        }

+        // handle selectivity
+        if (pCtx->subsidiaries.num > 0) {
+          appendSelectivityValue(pCtx, i, pos);
+        }
+
        numOfElems++;
      }
    }

@@ -144,9 +144,9 @@ TEST_F(ParserSelectTest, IndefiniteRowsFunc) {
 TEST_F(ParserSelectTest, IndefiniteRowsFuncSemanticCheck) {
  useDb("root", "test");

-  run("SELECT DIFF(c1), c2 FROM t1", TSDB_CODE_PAR_NOT_SINGLE_GROUP);
+  run("SELECT DIFF(c1), c2 FROM t1");

-  run("SELECT DIFF(c1), tbname FROM t1", TSDB_CODE_PAR_NOT_SINGLE_GROUP);
+  run("SELECT DIFF(c1), tbname FROM t1");

  run("SELECT DIFF(c1), count(*) FROM t1", TSDB_CODE_PAR_NOT_ALLOWED_FUNC);

@@ -283,6 +283,8 @@ int32_t qwGetDeleteResFromSink(QW_FPARAMS_DEF, SQWTaskCtx *ctx, SDeleteRes *pRes
  pRes->skey = pDelRes->skey;
  pRes->ekey = pDelRes->ekey;
  pRes->affectedRows = pDelRes->affectedRows;

+  taosMemoryFree(output.pData);
+
  return TSDB_CODE_SUCCESS;
 }

@@ -238,6 +238,7 @@ int32_t syncNodeGetPreIndexTerm(SSyncNode* pSyncNode, SyncIndex index, SyncInd

 bool syncNodeIsOptimizedOneReplica(SSyncNode* ths, SRpcMsg* pMsg);
 int32_t syncNodeCommit(SSyncNode* ths, SyncIndex beginIndex, SyncIndex endIndex, uint64_t flag);
+int32_t syncNodePreCommit(SSyncNode* ths, SSyncRaftEntry* pEntry, int32_t code);

 int32_t syncNodeUpdateNewConfigIndex(SSyncNode* ths, SSyncCfg* pNewCfg);

@@ -244,22 +244,7 @@ int32_t syncNodeOnAppendEntriesCb(SSyncNode* ths, SyncAppendEntries* pMsg) {
      ths->pLogStore->appendEntry(ths->pLogStore, pAppendEntry);

      // pre commit
-      SRpcMsg rpcMsg;
-      syncEntry2OriginalRpc(pAppendEntry, &rpcMsg);
-      if (ths->pFsm != NULL) {
-        // if (ths->pFsm->FpPreCommitCb != NULL && pAppendEntry->originalRpcType != TDMT_SYNC_NOOP) {
-        if (ths->pFsm->FpPreCommitCb != NULL && syncUtilUserPreCommit(pAppendEntry->originalRpcType)) {
-          SFsmCbMeta cbMeta = {0};
-          cbMeta.index = pAppendEntry->index;
-          cbMeta.lastConfigIndex = syncNodeGetSnapshotConfigIndex(ths, cbMeta.index);
-          cbMeta.isWeak = pAppendEntry->isWeak;
-          cbMeta.code = 2;
-          cbMeta.state = ths->state;
-          cbMeta.seqNum = pAppendEntry->seqNum;
-          ths->pFsm->FpPreCommitCb(ths->pFsm, &rpcMsg, cbMeta);
-        }
-      }
-      rpcFreeCont(rpcMsg.pCont);
+      syncNodePreCommit(ths, pAppendEntry, 0);
    }

    // free memory

@@ -280,22 +265,7 @@ int32_t syncNodeOnAppendEntriesCb(SSyncNode* ths, SyncAppendEntries* pMsg) {
      ths->pLogStore->appendEntry(ths->pLogStore, pAppendEntry);

      // pre commit
-      SRpcMsg rpcMsg;
-      syncEntry2OriginalRpc(pAppendEntry, &rpcMsg);
-      if (ths->pFsm != NULL) {
-        // if (ths->pFsm->FpPreCommitCb != NULL && pAppendEntry->originalRpcType != TDMT_SYNC_NOOP) {
-        if (ths->pFsm->FpPreCommitCb != NULL && syncUtilUserPreCommit(pAppendEntry->originalRpcType)) {
-          SFsmCbMeta cbMeta = {0};
-          cbMeta.index = pAppendEntry->index;
-          cbMeta.lastConfigIndex = syncNodeGetSnapshotConfigIndex(ths, cbMeta.index);
-          cbMeta.isWeak = pAppendEntry->isWeak;
-          cbMeta.code = 3;
-          cbMeta.state = ths->state;
-          cbMeta.seqNum = pAppendEntry->seqNum;
-          ths->pFsm->FpPreCommitCb(ths->pFsm, &rpcMsg, cbMeta);
-        }
-      }
-      rpcFreeCont(rpcMsg.pCont);
+      syncNodePreCommit(ths, pAppendEntry, 0);

      // free memory
      syncEntryDestory(pAppendEntry);

@@ -440,7 +410,7 @@ static int32_t syncNodeDoMakeLogSame(SSyncNode* ths, SyncIndex FromIndex) {
  return code;
 }

-static int32_t syncNodePreCommit(SSyncNode* ths, SSyncRaftEntry* pEntry) {
+int32_t syncNodePreCommit(SSyncNode* ths, SSyncRaftEntry* pEntry, int32_t code) {
  SRpcMsg rpcMsg;
  syncEntry2OriginalRpc(pEntry, &rpcMsg);

@@ -456,7 +426,7 @@ static int32_t syncNodePreCommit(SSyncNode* ths, SSyncRaftEntry* pEntry) {
      cbMeta.index = pEntry->index;
      cbMeta.lastConfigIndex = syncNodeGetSnapshotConfigIndex(ths, cbMeta.index);
      cbMeta.isWeak = pEntry->isWeak;
-      cbMeta.code = 2;
+      cbMeta.code = code;
      cbMeta.state = ths->state;
      cbMeta.seqNum = pEntry->seqNum;
      ths->pFsm->FpPreCommitCb(ths->pFsm, &rpcMsg, cbMeta);

@@ -594,7 +564,7 @@ int32_t syncNodeOnAppendEntriesSnapshot2Cb(SSyncNode* ths, SyncAppendEntriesBatc
      return -1;
    }

-    code = syncNodePreCommit(ths, pAppendEntry);
+    code = syncNodePreCommit(ths, pAppendEntry, 0);
    ASSERT(code == 0);

    // syncEntryDestory(pAppendEntry);

@@ -715,7 +685,7 @@ int32_t syncNodeOnAppendEntriesSnapshot2Cb(SSyncNode* ths, SyncAppendEntriesBatc
      return -1;
    }

-    code = syncNodePreCommit(ths, pAppendEntry);
+    code = syncNodePreCommit(ths, pAppendEntry, 0);
    ASSERT(code == 0);

    // syncEntryDestory(pAppendEntry);

@@ -919,7 +889,7 @@ int32_t syncNodeOnAppendEntriesSnapshotCb(SSyncNode* ths, SyncAppendEntries* pMs
    }

    // pre commit
-    code = syncNodePreCommit(ths, pAppendEntry);
+    code = syncNodePreCommit(ths, pAppendEntry, 0);
    ASSERT(code == 0);

    // update match index

@@ -1032,7 +1002,7 @@ int32_t syncNodeOnAppendEntriesSnapshotCb(SSyncNode* ths, SyncAppendEntries* pMs
    }

    // pre commit
-    code = syncNodePreCommit(ths, pAppendEntry);
+    code = syncNodePreCommit(ths, pAppendEntry, 0);
    ASSERT(code == 0);

    syncEntryDestory(pAppendEntry);

@@ -67,11 +67,6 @@ void syncMaybeAdvanceCommitIndex(SSyncNode* pSyncNode) {
  for (SyncIndex index = syncNodeGetLastIndex(pSyncNode); index > pSyncNode->commitIndex; --index) {
    bool agree = syncAgree(pSyncNode, index);

-    if (gRaftDetailLog) {
-      sTrace("syncMaybeAdvanceCommitIndex syncAgree:%d, index:%" PRId64 ", pSyncNode->commitIndex:%" PRId64, agree,
-             index, pSyncNode->commitIndex);
-    }
-
    if (agree) {
      // term
      SSyncRaftEntry* pEntry = pSyncNode->pLogStore->getEntry(pSyncNode->pLogStore, index);

@@ -82,20 +77,15 @@ void syncMaybeAdvanceCommitIndex(SSyncNode* pSyncNode) {
      // update commit index
      newCommitIndex = index;

-      if (gRaftDetailLog) {
-        sTrace("syncMaybeAdvanceCommitIndex maybe to update, newCommitIndex:%" PRId64
-               " commit, pSyncNode->commitIndex:%" PRId64,
-               newCommitIndex, pSyncNode->commitIndex);
-      }
-
      syncEntryDestory(pEntry);
      break;
    } else {
-      if (gRaftDetailLog) {
-        sTrace("syncMaybeAdvanceCommitIndex can not commit due to term not equal, pEntry->term:%" PRIu64
-               ", pSyncNode->pRaftStore->currentTerm:%" PRIu64,
-               pEntry->term, pSyncNode->pRaftStore->currentTerm);
-      }
+      do {
+        char logBuf[128];
+        snprintf(logBuf, sizeof(logBuf), "can not commit due to term not equal, index:%ld, term:%lu", pEntry->index,
+                 pEntry->term);
+        syncNodeEventLog(pSyncNode, logBuf);
+      } while (0);
    }

    syncEntryDestory(pEntry);

@@ -107,10 +97,6 @@ void syncMaybeAdvanceCommitIndex(SSyncNode* pSyncNode) {
    SyncIndex beginIndex = pSyncNode->commitIndex + 1;
    SyncIndex endIndex = newCommitIndex;

-    if (gRaftDetailLog) {
-      sTrace("syncMaybeAdvanceCommitIndex sync commit %" PRId64, newCommitIndex);
-    }
-
    // update commit index
    pSyncNode->commitIndex = newCommitIndex;

@@ -2504,22 +2504,7 @@ int32_t syncNodeOnClientRequestCb(SSyncNode* ths, SyncClientRequest* pMsg, SyncI
    }

    // pre commit
-    SRpcMsg rpcMsg;
-    syncEntry2OriginalRpc(pEntry, &rpcMsg);
-
-    if (ths->pFsm != NULL) {
-      if (ths->pFsm->FpPreCommitCb != NULL && syncUtilUserPreCommit(pEntry->originalRpcType)) {
-        SFsmCbMeta cbMeta = {0};
-        cbMeta.index = pEntry->index;
-        cbMeta.lastConfigIndex = syncNodeGetSnapshotConfigIndex(ths, cbMeta.index);
-        cbMeta.isWeak = pEntry->isWeak;
-        cbMeta.code = 0;
-        cbMeta.state = ths->state;
-        cbMeta.seqNum = pEntry->seqNum;
-        ths->pFsm->FpPreCommitCb(ths->pFsm, &rpcMsg, cbMeta);
-      }
-    }
-    rpcFreeCont(rpcMsg.pCont);
+    syncNodePreCommit(ths, pEntry, 0);

    // if only myself, maybe commit right now
    if (ths->replicaNum == 1) {

@@ -2528,22 +2513,7 @@ int32_t syncNodeOnClientRequestCb(SSyncNode* ths, SyncClientRequest* pMsg, SyncI

    } else {
      // pre commit
-      SRpcMsg rpcMsg;
-      syncEntry2OriginalRpc(pEntry, &rpcMsg);
-
-      if (ths->pFsm != NULL) {
-        if (ths->pFsm->FpPreCommitCb != NULL && syncUtilUserPreCommit(pEntry->originalRpcType)) {
-          SFsmCbMeta cbMeta = {0};
-          cbMeta.index = pEntry->index;
-          cbMeta.lastConfigIndex = syncNodeGetSnapshotConfigIndex(ths, cbMeta.index);
-          cbMeta.isWeak = pEntry->isWeak;
-          cbMeta.code = 1;
-          cbMeta.state = ths->state;
-          cbMeta.seqNum = pEntry->seqNum;
-          ths->pFsm->FpPreCommitCb(ths->pFsm, &rpcMsg, cbMeta);
-        }
-      }
-      rpcFreeCont(rpcMsg.pCont);
+      syncNodePreCommit(ths, pEntry, 0);
    }

    if (pRetIndex != NULL) {

@@ -101,8 +101,8 @@ SWal *walOpen(const char *path, SWalCfg *pCfg) {

  // open meta
  walResetVer(&pWal->vers);
-  pWal->pWriteLogTFile = NULL;
-  pWal->pWriteIdxTFile = NULL;
+  pWal->pLogFile = NULL;
+  pWal->pIdxFile = NULL;
  pWal->writeCur = -1;
  pWal->fileInfoSet = taosArrayInit(8, sizeof(SWalFileInfo));
  if (pWal->fileInfoSet == NULL) {

@@ -179,10 +179,10 @@ int32_t walAlter(SWal *pWal, SWalCfg *pCfg) {

 void walClose(SWal *pWal) {
  taosThreadMutexLock(&pWal->mutex);
-  taosCloseFile(&pWal->pWriteLogTFile);
-  pWal->pWriteLogTFile = NULL;
-  taosCloseFile(&pWal->pWriteIdxTFile);
-  pWal->pWriteIdxTFile = NULL;
+  taosCloseFile(&pWal->pLogFile);
+  pWal->pLogFile = NULL;
+  taosCloseFile(&pWal->pIdxFile);
+  pWal->pIdxFile = NULL;
  walSaveMeta(pWal);
  taosArrayDestroy(pWal->fileInfoSet);
  pWal->fileInfoSet = NULL;

@@ -223,7 +223,7 @@ static void walFsyncAll() {
    if (walNeedFsync(pWal)) {
      wTrace("vgId:%d, do fsync, level:%d seq:%d rseq:%d", pWal->cfg.vgId, pWal->cfg.level, pWal->fsyncSeq,
             atomic_load_32(&tsWal.seq));
-      int32_t code = taosFsyncFile(pWal->pWriteLogTFile);
+      int32_t code = taosFsyncFile(pWal->pLogFile);
      if (code != 0) {
        wError("vgId:%d, file:%" PRId64 ".log, failed to fsync since %s", pWal->cfg.vgId, walGetLastFileFirstVer(pWal),
               strerror(code));

@@ -22,8 +22,8 @@
 static int64_t walSeekWritePos(SWal* pWal, int64_t ver) {
  int64_t code = 0;

-  TdFilePtr pIdxTFile = pWal->pWriteIdxTFile;
-  TdFilePtr pLogTFile = pWal->pWriteLogTFile;
+  TdFilePtr pIdxTFile = pWal->pIdxFile;
+  TdFilePtr pLogTFile = pWal->pLogFile;

  // seek position
  int64_t idxOff = walGetVerIdxOffset(pWal, ver);

@@ -68,8 +68,8 @@ int walInitWriteFile(SWal* pWal) {
    return -1;
  }
  // switch file
-  pWal->pWriteIdxTFile = pIdxTFile;
-  pWal->pWriteLogTFile = pLogTFile;
+  pWal->pIdxFile = pIdxTFile;
+  pWal->pLogFile = pLogTFile;
  pWal->writeCur = taosArrayGetSize(pWal->fileInfoSet) - 1;
  return 0;
 }

@@ -78,15 +78,15 @@ int walChangeWrite(SWal* pWal, int64_t ver) {
  int code;
  TdFilePtr pIdxTFile, pLogTFile;
  char fnameStr[WAL_FILE_LEN];
-  if (pWal->pWriteLogTFile != NULL) {
-    code = taosCloseFile(&pWal->pWriteLogTFile);
+  if (pWal->pLogFile != NULL) {
+    code = taosCloseFile(&pWal->pLogFile);
    if (code != 0) {
      terrno = TAOS_SYSTEM_ERROR(errno);
      return -1;
    }
  }
-  if (pWal->pWriteIdxTFile != NULL) {
-    code = taosCloseFile(&pWal->pWriteIdxTFile);
+  if (pWal->pIdxFile != NULL) {
+    code = taosCloseFile(&pWal->pIdxFile);
    if (code != 0) {
      terrno = TAOS_SYSTEM_ERROR(errno);
      return -1;

@@ -106,7 +106,7 @@ int walChangeWrite(SWal* pWal, int64_t ver) {
  pIdxTFile = taosOpenFile(fnameStr, TD_FILE_CREATE | TD_FILE_WRITE | TD_FILE_APPEND);
  if (pIdxTFile == NULL) {
    terrno = TAOS_SYSTEM_ERROR(errno);
-    pWal->pWriteIdxTFile = NULL;
+    pWal->pIdxFile = NULL;
    return -1;
  }
  walBuildLogName(pWal, fileFirstVer, fnameStr);

@@ -114,12 +114,12 @@ int walChangeWrite(SWal* pWal, int64_t ver) {
  if (pLogTFile == NULL) {
    taosCloseFile(&pIdxTFile);
    terrno = TAOS_SYSTEM_ERROR(errno);
-    pWal->pWriteLogTFile = NULL;
+    pWal->pLogFile = NULL;
    return -1;
  }

-  pWal->pWriteLogTFile = pLogTFile;
-  pWal->pWriteIdxTFile = pIdxTFile;
+  pWal->pLogFile = pLogTFile;
+  pWal->pIdxFile = pIdxTFile;
  pWal->writeCur = idx;
  return fileFirstVer;
 }

@@ -32,8 +32,8 @@ int32_t walRestoreFromSnapshot(SWal *pWal, int64_t ver) {
    }
  }

-  taosCloseFile(&pWal->pWriteLogTFile);
-  taosCloseFile(&pWal->pWriteIdxTFile);
+  taosCloseFile(&pWal->pLogFile);
+  taosCloseFile(&pWal->pIdxFile);

  if (pWal->vers.firstVer != -1) {
    int32_t fileSetSize = taosArrayGetSize(pWal->fileInfoSet);

@@ -261,14 +261,13 @@ int32_t walEndSnapshot(SWal *pWal) {
  pWal->vers.snapshotVer = ver;
  int ts = taosGetTimestampSec();

-  int64_t minVerToDelete = ver;
-  void *pIter = NULL;
+  void *pIter = NULL;
  while (1) {
    pIter = taosHashIterate(pWal->pRefHash, pIter);
    if (pIter == NULL) break;
    SWalRef *pRef = *(SWalRef **)pIter;
    if (pRef->refVer == -1) continue;
-    minVerToDelete = TMIN(minVerToDelete, pRef->refVer);
+    ver = TMIN(ver, pRef->refVer);
  }

  int deleteCnt = 0;

@@ -277,35 +276,37 @@ int32_t walEndSnapshot(SWal *pWal) {
  tmp.firstVer = ver;
  // find files safe to delete
  SWalFileInfo *pInfo = taosArraySearch(pWal->fileInfoSet, &tmp, compareWalFileInfo, TD_LE);
-  if (ver >= pInfo->lastVer) {
-    pInfo++;
-  }
-  // iterate files, until the searched result
-  for (SWalFileInfo *iter = pWal->fileInfoSet->pData; iter < pInfo; iter++) {
-    if ((pWal->cfg.retentionSize != -1 && newTotSize > pWal->cfg.retentionSize) ||
-        (pWal->cfg.retentionPeriod != -1 && iter->closeTs + pWal->cfg.retentionPeriod > ts)) {
-      // delete according to file size or close time
-      deleteCnt++;
-      newTotSize -= iter->fileSize;
-    }
-  }
-  char fnameStr[WAL_FILE_LEN];
-  // remove file
-  for (int i = 0; i < deleteCnt; i++) {
-    pInfo = taosArrayGet(pWal->fileInfoSet, i);
-    walBuildLogName(pWal, pInfo->firstVer, fnameStr);
-    taosRemoveFile(fnameStr);
-    walBuildIdxName(pWal, pInfo->firstVer, fnameStr);
-    taosRemoveFile(fnameStr);
-  }
+  if (pInfo) {
+    if (ver >= pInfo->lastVer) {
+      pInfo++;
+    }
+    // iterate files, until the searched result
+    for (SWalFileInfo *iter = pWal->fileInfoSet->pData; iter < pInfo; iter++) {
+      if ((pWal->cfg.retentionSize != -1 && newTotSize > pWal->cfg.retentionSize) ||
+          (pWal->cfg.retentionPeriod != -1 && iter->closeTs + pWal->cfg.retentionPeriod > ts)) {
+        // delete according to file size or close time
+        deleteCnt++;
+        newTotSize -= iter->fileSize;
+      }
+    }
+    char fnameStr[WAL_FILE_LEN];
+    // remove file
+    for (int i = 0; i < deleteCnt; i++) {
+      pInfo = taosArrayGet(pWal->fileInfoSet, i);
+      walBuildLogName(pWal, pInfo->firstVer, fnameStr);
+      taosRemoveFile(fnameStr);
+      walBuildIdxName(pWal, pInfo->firstVer, fnameStr);
+      taosRemoveFile(fnameStr);
    }
  }

  // make new array, remove files
|
// make new array, remove files
|
||||||
taosArrayPopFrontBatch(pWal->fileInfoSet, deleteCnt);
|
taosArrayPopFrontBatch(pWal->fileInfoSet, deleteCnt);
|
||||||
if (taosArrayGetSize(pWal->fileInfoSet) == 0) {
|
if (taosArrayGetSize(pWal->fileInfoSet) == 0) {
|
||||||
pWal->writeCur = -1;
|
pWal->writeCur = -1;
|
||||||
pWal->vers.firstVer = -1;
|
pWal->vers.firstVer = -1;
|
||||||
} else {
|
} else {
|
||||||
pWal->vers.firstVer = ((SWalFileInfo *)taosArrayGet(pWal->fileInfoSet, 0))->firstVer;
|
pWal->vers.firstVer = ((SWalFileInfo *)taosArrayGet(pWal->fileInfoSet, 0))->firstVer;
|
||||||
|
}
|
||||||
}
|
}
|
||||||
pWal->writeCur = taosArrayGetSize(pWal->fileInfoSet) - 1;
|
pWal->writeCur = taosArrayGetSize(pWal->fileInfoSet) - 1;
|
||||||
pWal->totSize = newTotSize;
|
pWal->totSize = newTotSize;
|
||||||
|
@ -324,34 +325,34 @@ END:
|
||||||
|
|
||||||
int32_t walRollImpl(SWal *pWal) {
|
int32_t walRollImpl(SWal *pWal) {
|
||||||
int32_t code = 0;
|
int32_t code = 0;
|
||||||
if (pWal->pWriteIdxTFile != NULL) {
|
if (pWal->pIdxFile != NULL) {
|
||||||
code = taosCloseFile(&pWal->pWriteIdxTFile);
|
code = taosCloseFile(&pWal->pIdxFile);
|
||||||
if (code != 0) {
|
if (code != 0) {
|
||||||
terrno = TAOS_SYSTEM_ERROR(errno);
|
terrno = TAOS_SYSTEM_ERROR(errno);
|
||||||
goto END;
|
goto END;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
if (pWal->pWriteLogTFile != NULL) {
|
if (pWal->pLogFile != NULL) {
|
||||||
code = taosCloseFile(&pWal->pWriteLogTFile);
|
code = taosCloseFile(&pWal->pLogFile);
|
||||||
if (code != 0) {
|
if (code != 0) {
|
||||||
terrno = TAOS_SYSTEM_ERROR(errno);
|
terrno = TAOS_SYSTEM_ERROR(errno);
|
||||||
goto END;
|
goto END;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
TdFilePtr pIdxTFile, pLogTFile;
|
TdFilePtr pIdxFile, pLogFile;
|
||||||
// create new file
|
// create new file
|
||||||
int64_t newFileFirstVersion = pWal->vers.lastVer + 1;
|
int64_t newFileFirstVer = pWal->vers.lastVer + 1;
|
||||||
char fnameStr[WAL_FILE_LEN];
|
char fnameStr[WAL_FILE_LEN];
|
||||||
walBuildIdxName(pWal, newFileFirstVersion, fnameStr);
|
walBuildIdxName(pWal, newFileFirstVer, fnameStr);
|
||||||
pIdxTFile = taosOpenFile(fnameStr, TD_FILE_CREATE | TD_FILE_WRITE | TD_FILE_APPEND);
|
pIdxFile = taosOpenFile(fnameStr, TD_FILE_CREATE | TD_FILE_WRITE | TD_FILE_APPEND);
|
||||||
if (pIdxTFile == NULL) {
|
if (pIdxFile == NULL) {
|
||||||
terrno = TAOS_SYSTEM_ERROR(errno);
|
terrno = TAOS_SYSTEM_ERROR(errno);
|
||||||
code = -1;
|
code = -1;
|
||||||
goto END;
|
goto END;
|
||||||
}
|
}
|
||||||
walBuildLogName(pWal, newFileFirstVersion, fnameStr);
|
walBuildLogName(pWal, newFileFirstVer, fnameStr);
|
||||||
pLogTFile = taosOpenFile(fnameStr, TD_FILE_CREATE | TD_FILE_WRITE | TD_FILE_APPEND);
|
pLogFile = taosOpenFile(fnameStr, TD_FILE_CREATE | TD_FILE_WRITE | TD_FILE_APPEND);
|
||||||
if (pLogTFile == NULL) {
|
if (pLogFile == NULL) {
|
||||||
terrno = TAOS_SYSTEM_ERROR(errno);
|
terrno = TAOS_SYSTEM_ERROR(errno);
|
||||||
code = -1;
|
code = -1;
|
||||||
goto END;
|
goto END;
|
||||||
|
@ -363,8 +364,8 @@ int32_t walRollImpl(SWal *pWal) {
|
||||||
}
|
}
|
||||||
|
|
||||||
// switch file
|
// switch file
|
||||||
pWal->pWriteIdxTFile = pIdxTFile;
|
pWal->pIdxFile = pIdxFile;
|
||||||
pWal->pWriteLogTFile = pLogTFile;
|
pWal->pLogFile = pLogFile;
|
||||||
pWal->writeCur = taosArrayGetSize(pWal->fileInfoSet) - 1;
|
pWal->writeCur = taosArrayGetSize(pWal->fileInfoSet) - 1;
|
||||||
ASSERT(pWal->writeCur >= 0);
|
ASSERT(pWal->writeCur >= 0);
|
||||||
|
|
||||||
|
@ -378,10 +379,10 @@ END:
|
||||||
|
|
||||||
static int32_t walWriteIndex(SWal *pWal, int64_t ver, int64_t offset) {
|
static int32_t walWriteIndex(SWal *pWal, int64_t ver, int64_t offset) {
|
||||||
SWalIdxEntry entry = {.ver = ver, .offset = offset};
|
SWalIdxEntry entry = {.ver = ver, .offset = offset};
|
||||||
int64_t idxOffset = taosLSeekFile(pWal->pWriteIdxTFile, 0, SEEK_END);
|
int64_t idxOffset = taosLSeekFile(pWal->pIdxFile, 0, SEEK_END);
|
||||||
wDebug("vgId:%d, write index, index:%" PRId64 ", offset:%" PRId64 ", at %" PRId64, pWal->cfg.vgId, ver, offset,
|
wDebug("vgId:%d, write index, index:%" PRId64 ", offset:%" PRId64 ", at %" PRId64, pWal->cfg.vgId, ver, offset,
|
||||||
idxOffset);
|
idxOffset);
|
||||||
int64_t size = taosWriteFile(pWal->pWriteIdxTFile, &entry, sizeof(SWalIdxEntry));
|
int64_t size = taosWriteFile(pWal->pIdxFile, &entry, sizeof(SWalIdxEntry));
|
||||||
if (size != sizeof(SWalIdxEntry)) {
|
if (size != sizeof(SWalIdxEntry)) {
|
||||||
terrno = TAOS_SYSTEM_ERROR(errno);
|
terrno = TAOS_SYSTEM_ERROR(errno);
|
||||||
// TODO truncate
|
// TODO truncate
|
||||||
|
@ -407,7 +408,7 @@ static FORCE_INLINE int32_t walWriteImpl(SWal *pWal, int64_t index, tmsg_t msgTy
|
||||||
pWal->writeHead.cksumHead = walCalcHeadCksum(&pWal->writeHead);
|
pWal->writeHead.cksumHead = walCalcHeadCksum(&pWal->writeHead);
|
||||||
pWal->writeHead.cksumBody = walCalcBodyCksum(body, bodyLen);
|
pWal->writeHead.cksumBody = walCalcBodyCksum(body, bodyLen);
|
||||||
|
|
||||||
if (taosWriteFile(pWal->pWriteLogTFile, &pWal->writeHead, sizeof(SWalCkHead)) != sizeof(SWalCkHead)) {
|
if (taosWriteFile(pWal->pLogFile, &pWal->writeHead, sizeof(SWalCkHead)) != sizeof(SWalCkHead)) {
|
||||||
// TODO ftruncate
|
// TODO ftruncate
|
||||||
terrno = TAOS_SYSTEM_ERROR(errno);
|
terrno = TAOS_SYSTEM_ERROR(errno);
|
||||||
wError("vgId:%d, file:%" PRId64 ".log, failed to write since %s", pWal->cfg.vgId, walGetLastFileFirstVer(pWal),
|
wError("vgId:%d, file:%" PRId64 ".log, failed to write since %s", pWal->cfg.vgId, walGetLastFileFirstVer(pWal),
|
||||||
|
@ -416,7 +417,7 @@ static FORCE_INLINE int32_t walWriteImpl(SWal *pWal, int64_t index, tmsg_t msgTy
|
||||||
goto END;
|
goto END;
|
||||||
}
|
}
|
||||||
|
|
||||||
if (taosWriteFile(pWal->pWriteLogTFile, (char *)body, bodyLen) != bodyLen) {
|
if (taosWriteFile(pWal->pLogFile, (char *)body, bodyLen) != bodyLen) {
|
||||||
// TODO ftruncate
|
// TODO ftruncate
|
||||||
terrno = TAOS_SYSTEM_ERROR(errno);
|
terrno = TAOS_SYSTEM_ERROR(errno);
|
||||||
wError("vgId:%d, file:%" PRId64 ".log, failed to write since %s", pWal->cfg.vgId, walGetLastFileFirstVer(pWal),
|
wError("vgId:%d, file:%" PRId64 ".log, failed to write since %s", pWal->cfg.vgId, walGetLastFileFirstVer(pWal),
|
||||||
|
@ -456,14 +457,14 @@ int64_t walAppendLog(SWal *pWal, tmsg_t msgType, SWalSyncInfo syncMeta, const vo
|
||||||
return -1;
|
return -1;
|
||||||
}
|
}
|
||||||
|
|
||||||
if (pWal->pWriteIdxTFile == NULL || pWal->pWriteIdxTFile == NULL || pWal->writeCur < 0) {
|
if (pWal->pIdxFile == NULL || pWal->pIdxFile == NULL || pWal->writeCur < 0) {
|
||||||
if (walInitWriteFile(pWal) < 0) {
|
if (walInitWriteFile(pWal) < 0) {
|
||||||
taosThreadMutexUnlock(&pWal->mutex);
|
taosThreadMutexUnlock(&pWal->mutex);
|
||||||
return -1;
|
return -1;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
ASSERT(pWal->pWriteIdxTFile != NULL && pWal->pWriteLogTFile != NULL && pWal->writeCur >= 0);
|
ASSERT(pWal->pIdxFile != NULL && pWal->pLogFile != NULL && pWal->writeCur >= 0);
|
||||||
|
|
||||||
if (walWriteImpl(pWal, index, msgType, syncMeta, body, bodyLen) < 0) {
|
if (walWriteImpl(pWal, index, msgType, syncMeta, body, bodyLen) < 0) {
|
||||||
taosThreadMutexUnlock(&pWal->mutex);
|
taosThreadMutexUnlock(&pWal->mutex);
|
||||||
|
@ -494,14 +495,14 @@ int32_t walWriteWithSyncInfo(SWal *pWal, int64_t index, tmsg_t msgType, SWalSync
|
||||||
return -1;
|
return -1;
|
||||||
}
|
}
|
||||||
|
|
||||||
if (pWal->pWriteIdxTFile == NULL || pWal->pWriteIdxTFile == NULL || pWal->writeCur < 0) {
|
if (pWal->pIdxFile == NULL || pWal->pIdxFile == NULL || pWal->writeCur < 0) {
|
||||||
if (walInitWriteFile(pWal) < 0) {
|
if (walInitWriteFile(pWal) < 0) {
|
||||||
taosThreadMutexUnlock(&pWal->mutex);
|
taosThreadMutexUnlock(&pWal->mutex);
|
||||||
return -1;
|
return -1;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
ASSERT(pWal->pWriteIdxTFile != NULL && pWal->pWriteLogTFile != NULL && pWal->writeCur >= 0);
|
ASSERT(pWal->pIdxFile != NULL && pWal->pLogFile != NULL && pWal->writeCur >= 0);
|
||||||
|
|
||||||
if (walWriteImpl(pWal, index, msgType, syncMeta, body, bodyLen) < 0) {
|
if (walWriteImpl(pWal, index, msgType, syncMeta, body, bodyLen) < 0) {
|
||||||
taosThreadMutexUnlock(&pWal->mutex);
|
taosThreadMutexUnlock(&pWal->mutex);
|
||||||
|
@ -524,7 +525,7 @@ int32_t walWrite(SWal *pWal, int64_t index, tmsg_t msgType, const void *body, in
|
||||||
void walFsync(SWal *pWal, bool forceFsync) {
|
void walFsync(SWal *pWal, bool forceFsync) {
|
||||||
if (forceFsync || (pWal->cfg.level == TAOS_WAL_FSYNC && pWal->cfg.fsyncPeriod == 0)) {
|
if (forceFsync || (pWal->cfg.level == TAOS_WAL_FSYNC && pWal->cfg.fsyncPeriod == 0)) {
|
||||||
wTrace("vgId:%d, fileId:%" PRId64 ".log, do fsync", pWal->cfg.vgId, walGetCurFileFirstVer(pWal));
|
wTrace("vgId:%d, fileId:%" PRId64 ".log, do fsync", pWal->cfg.vgId, walGetCurFileFirstVer(pWal));
|
||||||
if (taosFsyncFile(pWal->pWriteLogTFile) < 0) {
|
if (taosFsyncFile(pWal->pLogFile) < 0) {
|
||||||
wError("vgId:%d, file:%" PRId64 ".log, fsync failed since %s", pWal->cfg.vgId, walGetCurFileFirstVer(pWal),
|
wError("vgId:%d, file:%" PRId64 ".log, fsync failed since %s", pWal->cfg.vgId, walGetCurFileFirstVer(pWal),
|
||||||
strerror(errno));
|
strerror(errno));
|
||||||
}
|
}
|
||||||
|
|
|
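The walEndSnapshot hunk above clamps the deletion horizon to the oldest live reader reference (`ver = TMIN(ver, pRef->refVer)`) before scanning the file set for deletable WAL files. The decision logic can be sketched in Python; this is a simplified illustration with hypothetical names, not the actual TDengine API:

```python
def files_to_delete(snapshot_ver, ref_vers, files, tot_size,
                    retention_size=-1, retention_period=-1, now=0):
    """files: list of (first_ver, last_ver, file_size, close_ts) sorted by first_ver.
    Returns (delete_count, new_total_size)."""
    ver = snapshot_ver
    for rv in ref_vers:
        if rv != -1:             # -1 means the ref is not pinned, like pRef->refVer == -1
            ver = min(ver, rv)   # clamp to oldest live reader, like TMIN(ver, pRef->refVer)
    delete_cnt, new_tot = 0, tot_size
    for first, last, size, close_ts in files:
        if last > ver:           # file still holds versions a reader may need
            break
        # retention check mirrors the size/close-time condition in the C loop
        if (retention_size != -1 and new_tot > retention_size) or \
           (retention_period != -1 and close_ts + retention_period > now):
            delete_cnt += 1
            new_tot -= size
    return delete_cnt, new_tot
```

A file is only a deletion candidate once no snapshot or reader can still reference its version range; the retention limits then decide whether it is actually removed.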
@@ -371,7 +371,9 @@ class ThreadCoordinator:
 if isinstance(err, CrashGenError): # our own transition failure
 Logging.info("State transition error")
 # TODO: saw an error here once, let's print out stack info for err?
-traceback.print_stack()
+traceback.print_stack() # Stack frame to here.
+Logging.info("Caused by:")
+traceback.print_exception(*sys.exc_info()) # Ref: https://www.geeksforgeeks.org/how-to-print-exception-stack-trace-in-python/
 transitionFailed = True
 self._te = None # Not running any more
 self._execStats.registerFailure("State transition error: {}".format(err))
@@ -741,7 +743,8 @@ class AnyState:
 sCnt += 1
 if (sCnt >= 2):
 raise CrashGenError(
-"Unexpected more than 1 success with task: {}, in task set: {}".format(
+"Unexpected more than 1 success at state: {}, with task: {}, in task set: {}".format(
+self.__class__.__name__,
 cls.__name__, # verified just now that isinstance(task, cls)
 [c.__class__.__name__ for c in tasks]
 ))
@@ -756,8 +759,11 @@ class AnyState:
 if task.isSuccess():
 sCnt += 1
 if (exists and sCnt <= 0):
-raise CrashGenError("Unexpected zero success for task type: {}, from tasks: {}"
-.format(cls, tasks))
+raise CrashGenError("Unexpected zero success at state: {}, with task: {}, in task set: {}".format(
+self.__class__.__name__,
+cls.__name__, # verified just now that isinstance(task, cls)
+[c.__class__.__name__ for c in tasks]
+))

 def assertNoTask(self, tasks, cls):
 for task in tasks:
@@ -809,8 +815,6 @@ class StateEmpty(AnyState):
 ]

 def verifyTasksToState(self, tasks, newState):
-if Config.getConfig().ignore_errors: # if we are asked to ignore certain errors, let's not verify CreateDB success.
-return
 if (self.hasSuccess(tasks, TaskCreateDb)
 ): # at EMPTY, if there's succes in creating DB
 if (not self.hasTask(tasks, TaskDropDb)): # and no drop_db tasks
@@ -995,16 +999,17 @@ class StateMechine:
 dbc.execute("show dnodes")

 # Generic Checks, first based on the start state
-if self._curState.canCreateDb():
-self._curState.assertIfExistThenSuccess(tasks, TaskCreateDb)
-# self.assertAtMostOneSuccess(tasks, CreateDbTask) # not really, in
-# case of multiple creation and drops
+if not Config.getConfig().ignore_errors: # verify state, only if we are asked not to ignore certain errors.
+if self._curState.canCreateDb():
+self._curState.assertIfExistThenSuccess(tasks, TaskCreateDb)
+# self.assertAtMostOneSuccess(tasks, CreateDbTask) # not really, in
+# case of multiple creation and drops

 if self._curState.canDropDb():
 if gSvcMgr == None: # only if we are running as client-only
 self._curState.assertIfExistThenSuccess(tasks, TaskDropDb)
 # self.assertAtMostOneSuccess(tasks, DropDbTask) # not really in
 # case of drop-create-drop

 # if self._state.canCreateFixedTable():
 # self.assertIfExistThenSuccess(tasks, CreateFixedTableTask) # Not true, DB may be dropped
@@ -1026,7 +1031,8 @@ class StateMechine:
 newState = self._findCurrentState(dbc)
 Logging.debug("[STT] New DB state determined: {}".format(newState))
 # can old state move to new state through the tasks?
-self._curState.verifyTasksToState(tasks, newState)
+if not Config.getConfig().ignore_errors: # verify state, only if we are asked not to ignore certain errors.
+self._curState.verifyTasksToState(tasks, newState)
 self._curState = newState

 def pickTaskType(self):
@@ -2231,16 +2237,14 @@ class TaskAddData(StateTransitionTask):
 class ThreadStacks: # stack info for all threads
 def __init__(self):
 self._allStacks = {}
-allFrames = sys._current_frames() # All current stack frames
+allFrames = sys._current_frames() # All current stack frames, keyed with "ident"
 for th in threading.enumerate(): # For each thread
-if th.ident is None:
-continue
-stack = traceback.extract_stack(allFrames[th.ident]) # Get stack for a thread
-shortTid = th.ident % 10000
+stack = traceback.extract_stack(allFrames[th.ident]) #type: ignore # Get stack for a thread
+shortTid = th.native_id % 10000 #type: ignore
 self._allStacks[shortTid] = stack # Was using th.native_id

 def print(self, filteredEndName = None, filterInternal = False):
-for tIdent, stack in self._allStacks.items(): # for each thread, stack frames top to bottom
+for shortTid, stack in self._allStacks.items(): # for each thread, stack frames top to bottom
 lastFrame = stack[-1]
 if filteredEndName: # we need to filter out stacks that match this name
 if lastFrame.name == filteredEndName : # end did not match
@@ -2252,7 +2256,9 @@ class ThreadStacks: # stack info for all threads
 '__init__']: # the thread that extracted the stack
 continue # ignore
 # Now print
-print("\n<----- Thread Info for LWP/ID: {} (most recent call last) <-----".format(tIdent))
+print("\n<----- Thread Info for LWP/ID: {} (most recent call last) <-----".format(shortTid))
+lastSqlForThread = DbConn.fetchSqlForThread(shortTid)
+print("Last SQL statement attempted from thread {} is: {}".format(shortTid, lastSqlForThread))
 stackFrame = 0
 for frame in stack: # was using: reversed(stack)
 # print(frame)
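The ThreadStacks change above keys each captured stack by a short id derived from `th.native_id % 10000` rather than `th.ident`, so the printed id matches the OS-level LWP id used elsewhere in the logs. A standalone sketch of that capture pattern (illustrative, not the crash_gen class itself; `Thread.native_id` requires Python 3.8+):

```python
import sys
import threading
import traceback

def capture_stacks():
    """Map a short thread id (native_id % 10000) to that thread's extracted stack."""
    all_frames = sys._current_frames()    # snapshot of frames, keyed by thread ident
    stacks = {}
    for th in threading.enumerate():
        if th.ident not in all_frames:    # thread may have exited since the snapshot
            continue
        stack = traceback.extract_stack(all_frames[th.ident])
        short_tid = th.native_id % 10000  # short OS-level id, as in the diff above
        stacks[short_tid] = stack
    return stacks
```

The truncation to four digits trades uniqueness for readability; it is adequate for a test harness where only a handful of threads run at once.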
@@ -27,6 +27,26 @@ class DbConn:
 TYPE_REST = "rest-api"
 TYPE_INVALID = "invalid"

+# class variables
+lastSqlFromThreads : dict[int, str] = {} # stored by thread id, obtained from threading.current_thread().ident%10000
+
+@classmethod
+def saveSqlForCurrentThread(cls, sql: str):
+'''
+Let us save the last SQL statement on a per-thread basis, so that when later we
+run into a dead-lock situation, we can pick out the deadlocked thread, and use
+that information to find what what SQL statement is stuck.
+'''
+th = threading.current_thread()
+shortTid = th.native_id % 10000 #type: ignore
+cls.lastSqlFromThreads[shortTid] = sql # Save this for later
+
+@classmethod
+def fetchSqlForThread(cls, shortTid : int) -> str :
+if shortTid not in cls.lastSqlFromThreads:
+raise CrashGenError("No last-attempted-SQL found for thread id: {}".format(shortTid))
+return cls.lastSqlFromThreads[shortTid]
+
 @classmethod
 def create(cls, connType, dbTarget):
 if connType == cls.TYPE_NATIVE:
@@ -163,6 +183,7 @@ class DbConnRest(DbConn):

 def _doSql(self, sql):
 self._lastSql = sql # remember this, last SQL attempted
+self.saveSqlForCurrentThread(sql) # Save in global structure too. #TODO: combine with above
 try:
 r = requests.post(self._url,
 data = sql,
@@ -392,6 +413,7 @@ class DbConnNative(DbConn):
 "Cannot exec SQL unless db connection is open", CrashGenError.DB_CONNECTION_NOT_OPEN)
 Logging.debug("[SQL] Executing SQL: {}".format(sql))
 self._lastSql = sql
+self.saveSqlForCurrentThread(sql) # Save in global structure too. #TODO: combine with above
 nRows = self._tdSql.execute(sql)
 cls = self.__class__
 cls.totalRequests += 1
@@ -407,6 +429,7 @@ class DbConnNative(DbConn):
 "Cannot query database until connection is open, restarting?", CrashGenError.DB_CONNECTION_NOT_OPEN)
 Logging.debug("[SQL] Executing SQL: {}".format(sql))
 self._lastSql = sql
+self.saveSqlForCurrentThread(sql) # Save in global structure too. #TODO: combine with above
 nRows = self._tdSql.query(sql)
 cls = self.__class__
 cls.totalRequests += 1
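The `DbConn` additions above keep a class-level registry mapping a short thread id to the last SQL statement that thread attempted, so a deadlock report can show what each stuck thread was executing. The same pattern in a self-contained sketch (class and method names are illustrative):

```python
import threading

class SqlRegistry:
    # short thread id -> last SQL statement attempted by that thread
    last_sql: dict = {}

    @classmethod
    def save_for_current_thread(cls, sql: str):
        # same short-id scheme as the diff above: native_id % 10000
        short_tid = threading.current_thread().native_id % 10000
        cls.last_sql[short_tid] = sql

    @classmethod
    def fetch_for_thread(cls, short_tid: int) -> str:
        if short_tid not in cls.last_sql:
            raise KeyError("No last-attempted SQL for thread id: {}".format(short_tid))
        return cls.last_sql[short_tid]
```

Because the dict is keyed per thread and each thread only writes its own slot, no lock is needed for this diagnostic use; a plain dict assignment is atomic enough under the GIL.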
@@ -224,7 +224,7 @@

 # ---- stream
 ./test.sh -f tsim/stream/basic0.sim
-#./test.sh -f tsim/stream/basic1.sim
+./test.sh -f tsim/stream/basic1.sim
 ./test.sh -f tsim/stream/basic2.sim
 ./test.sh -f tsim/stream/drop_stream.sim
 ./test.sh -f tsim/stream/distributeInterval0.sim
@@ -3,6 +3,7 @@ system sh/deploy.sh -n dnode1 -i 1
 system sh/deploy.sh -n dnode2 -i 2
 system sh/deploy.sh -n dnode3 -i 3
 system sh/deploy.sh -n dnode4 -i 4
+system sh/cfg.sh -n dnode1 -c supportVnodes -v 0
 system sh/exec.sh -n dnode1 -s start
 system sh/exec.sh -n dnode2 -s start
 system sh/exec.sh -n dnode3 -s start
@@ -44,6 +45,8 @@ if $data(4)[4] != ready then
 goto step1
 endi

+return
+
 print =============== step2: create database
 sql create database db vgroups 1 replica 3
 sql show databases
@ -183,7 +183,7 @@ if $rows != 12800 then
|
||||||
return -1
|
return -1
|
||||||
endi
|
endi
|
||||||
|
|
||||||
sql select _rowts, top(c1, 80), tbname, t1, t2 from select_tags_mt0;
|
sql select ts, top(c1, 80), tbname, t1, t2 from select_tags_mt0 order by ts;
|
||||||
if $rows != 80 then
|
if $rows != 80 then
|
||||||
return -1
|
return -1
|
||||||
endi
|
endi
|
||||||
|
@ -212,7 +212,7 @@ if $data04 != @abc12@ then
|
||||||
return -1
|
return -1
|
||||||
endi
|
endi
|
||||||
|
|
||||||
sql select top(c1, 80), tbname, t1, t2 from select_tags_mt0;
|
sql select ts, top(c1, 80), tbname, t1, t2 from select_tags_mt0 order by ts;
|
||||||
if $rows != 80 then
|
if $rows != 80 then
|
||||||
return -1
|
return -1
|
||||||
endi
|
endi
|
||||||
|
@ -241,7 +241,7 @@ if $data04 != @abc12@ then
|
||||||
return -1
|
return -1
|
||||||
endi
|
endi
|
||||||
|
|
||||||
sql select bottom(c1, 72), tbname, t1, t2 from select_tags_mt0;
|
sql select ts, bottom(c1, 72), tbname, t1, t2 from select_tags_mt0 order by ts;
|
||||||
if $rows != 72 then
|
if $rows != 72 then
|
||||||
return -1
|
return -1
|
||||||
endi
|
endi
|
||||||
|
@ -293,7 +293,7 @@ if $data03 != 15 then
|
||||||
endi
|
endi
|
||||||
|
|
||||||
print ====== selectivity+tags+group by tags=======================
|
print ====== selectivity+tags+group by tags=======================
|
||||||
sql select first(c1), tbname, t1, t2 from select_tags_mt0 group by tbname;
|
sql select first(c1), tbname, t1, t2, tbname from select_tags_mt0 group by tbname order by t1;
|
||||||
if $rows != 16 then
|
if $rows != 16 then
|
||||||
return -1
|
return -1
|
||||||
endi
|
endi
|
||||||
|
@ -327,7 +327,7 @@ if $data04 != @select_tags_tb0@ then
|
||||||
return -1
|
return -1
|
||||||
endi
|
endi
|
||||||
|
|
||||||
sql select last_row(ts,c1), tbname, t1, t2 from select_tags_mt0 group by tbname;
|
sql select last_row(ts,c1), tbname, t1, t2, tbname from select_tags_mt0 group by tbname order by t1;
|
||||||
if $rows != 16 then
|
if $rows != 16 then
|
||||||
return -1
|
return -1
|
||||||
endi
|
endi
|
||||||
|
@ -361,7 +361,7 @@ if $data04 != @abc0@ then
|
||||||
return -1
|
return -1
|
||||||
endi
|
endi
|
||||||
|
|
||||||
sql select tbname,t1,t2 from select_tags_mt0;
|
sql select distinct tbname,t1,t2 from select_tags_mt0;
|
||||||
if $row != 16 then
|
if $row != 16 then
|
||||||
return -1
|
return -1
|
||||||
endi
|
endi
|
||||||
|
@ -411,7 +411,7 @@ if $data11 != @70-01-01 08:01:40.001@ then
|
||||||
return -1
|
return -1
|
||||||
endi
|
endi
|
||||||
|
|
||||||
sql select top(c1, 100), tbname, t1, t2 from select_tags_mt0 where tbname in ('select_tags_tb0', 'select_tags_tb1') group by tbname;
|
sql select ts, top(c1, 100), tbname, t1, t2 from select_tags_mt0 where tbname in ('select_tags_tb0', 'select_tags_tb1') group by tbname order by ts;
|
||||||
if $row != 200 then
|
if $row != 200 then
|
||||||
return -1
|
return -1
|
||||||
endi
|
endi
|
||||||
|
@ -448,7 +448,7 @@ if $data04 != @abc0@ then
|
||||||
return -1
|
return -1
|
||||||
endi
|
endi
|
||||||
|
|
||||||
sql select top(c1, 2), t2 from select_tags_mt0 where tbname in ('select_tags_tb0', 'select_tags_tb1') group by tbname,t2;
|
sql select ts, top(c1, 2), t2, tbname, t2 from select_tags_mt0 where tbname in ('select_tags_tb0', 'select_tags_tb1') group by tbname,t2 order by ts;
|
||||||
if $row != 4 then
|
if $row != 4 then
|
||||||
return -1
|
return -1
|
||||||
endi
|
endi
|
||||||
|
@ -535,33 +535,13 @@ endi
|
||||||
|
|
||||||
|
|
||||||
# slimit /limit
|
# slimit /limit
|
||||||
sql select top(c1, 2), t2 from select_tags_mt0 where tbname in ('select_tags_tb0', 'select_tags_tb1') group by tbname,t2 limit 2 offset 1;
|
sql select ts, top(c1, 2), t2 from select_tags_mt0 where tbname in ('select_tags_tb0', 'select_tags_tb1') group by tbname,t2 limit 2 offset 1;
|
||||||
if $row != 2 then
|
if $row != 2 then
|
||||||
return -1
|
return -1
|
||||||
endi
|
endi
|
||||||
|
|
||||||
if $data00 != @70-01-01 08:01:40.199@ then
|
|
||||||
return -1
|
|
||||||
endi
|
|
||||||
|
|
||||||
if $data01 != 99 then
|
|
||||||
return -1
|
|
||||||
endi
|
|
||||||
|
|
||||||
if $data02 != @abc0@ then
|
|
||||||
return -1
|
|
||||||
endi
|
|
||||||
|
|
||||||
if $data03 != @select_tags_tb0@ then
|
|
||||||
return -1
|
|
||||||
endi
|
|
||||||
|
|
||||||
if $data04 != @abc0@ then
|
|
||||||
return -1
|
|
||||||
endi
|
|
||||||
|
|
||||||
print ======= selectivity + tags + group by + tags + filter ===========================
|
print ======= selectivity + tags + group by + tags + filter ===========================
|
||||||
sql select first(c1), t1 from select_tags_mt0 where c1<=2 group by tbname;
|
sql select first(c1), t1, tbname from select_tags_mt0 where c1<=2 group by tbname order by t1;
|
||||||
if $row != 3 then
|
if $row != 3 then
|
||||||
return -1
|
return -1
|
||||||
endi
|
endi
|
||||||
|
@ -602,7 +582,7 @@ if $data22 != @select_tags_tb2@ then
|
||||||
return -1
|
return -1
|
||||||
endi
|
endi
|
||||||
|
|
||||||
sql select first(c1), tbname from select_tags_mt0 where c1<=2 interval(1s);
|
sql select _wstart, first(c1), tbname from select_tags_mt0 where c1<=2 interval(1s);
|
||||||
if $row != 3 then
|
if $row != 3 then
|
||||||
return -1
|
return -1
|
||||||
endi
|
endi
|
||||||
|
@ -671,7 +651,7 @@ if $data01 != @70-01-01 08:01:50.001@ then
|
||||||
endi
|
endi
|
||||||
|
|
||||||
print ======= selectivity + tags + group by + tags + filter + interval ================
|
print ======= selectivity + tags + group by + tags + filter + interval ================
|
||||||
sql select first(c1), t2, t1, tbname from select_tags_mt0 where c1<=2 interval(1d) group by tbname;
|
sql select _wstart,first(c1), t2, t1, tbname, tbname from select_tags_mt0 where c1<=2 partition by tbname interval(1d) order by t1;
|
||||||
if $row != 3 then
|
if $row != 3 then
|
||||||
return -1
|
return -1
|
||||||
endi
|
endi
|
||||||
|
@ -708,7 +688,7 @@ if $data25 != @select_tags_tb2@ then
|
||||||
return -1
|
return -1
|
||||||
endi
|
endi
|
||||||
|
|
||||||
sql select top(c1, 5), t2 from select_tags_mt0 where c1<=2 interval(1d) group by tbname;
|
sql select ts, top(c1, 5), t2, tbname from select_tags_mt0 where c1<=2 partition by tbname interval(1d) order by ts, t2;
|
||||||
if $row != 15 then
|
if $row != 15 then
|
||||||
return -1
|
return -1
|
||||||
endi
|
endi
|
||||||
|
@ -746,7 +726,7 @@ if $data93 != @select_tags_tb1@ then
|
||||||
endi
|
endi
|
||||||
|
|
||||||
#if data
|
#if data
|
||||||
sql select top(c1, 50), t2, t1, tbname from select_tags_mt0 where c1<=2 interval(1d) group by tbname;
|
sql select ts, top(c1, 50), t2, t1, tbname, tbname from select_tags_mt0 where c1<=2 partition by tbname interval(1d) order by ts, t2;
|
||||||
if $row != 48 then
|
if $row != 48 then
|
||||||
return -1
|
return -1
|
||||||
endi
|
endi
|
||||||
|
@ -831,7 +811,7 @@ endi
|
||||||
print TODO ======= selectivity + tags+ group by + tags + filter + interval + join===========
|
print TODO ======= selectivity + tags+ group by + tags + filter + interval + join===========
|
||||||
|
|
||||||
print ==========================mix tag columns and group by columns======================
|
print ==========================mix tag columns and group by columns======================
|
||||||
sql select top(c1, 100), tbname from select_tags_mt0 where tbname in ('select_tags_tb0', 'select_tags_tb1') group by t3
|
sql select ts, top(c1, 100), tbname, t3 from select_tags_mt0 where tbname in ('select_tags_tb0', 'select_tags_tb1') group by t3 order by ts, tbname;
|
||||||
if $rows != 100 then
|
if $rows != 100 then
|
||||||
return -1
|
return -1
|
||||||
endi
|
endi
|
||||||
|
@@ -887,9 +867,9 @@ sql_error select twa(c2), tbname from select_tags_mt0;
 sql_error select interp(c2), tbname from select_tags_mt0 where ts=100001;
 
 sql_error select t1,t2,tbname from select_tags_mt0 group by tbname;
-sql_error select count(tbname) from select_tags_mt0 interval(1d);
-sql_error select count(tbname) from select_tags_mt0 group by t1;
-sql_error select count(tbname),SUM(T1) from select_tags_mt0 interval(1d);
+sql select count(tbname) from select_tags_mt0 interval(1d);
+sql select count(tbname) from select_tags_mt0 group by t1;
+sql select count(tbname),SUM(T1) from select_tags_mt0 interval(1d);
 sql_error select first(c1), count(*), t2, t1, tbname from select_tags_mt0 where c1<=2 interval(1d) group by tbname;
 sql_error select ts from select_tags_mt0 interval(1y);
 sql_error select count(*), tbname from select_tags_mt0 interval(1y);
@@ -902,8 +882,8 @@ sql_error select tbname, t1 from select_tags_mt0 interval(1y);
 #valid sql: select first(c1), tbname, t1 from select_tags_mt0 group by t2;
 
 print ==================================>TD-4231
-sql_error select t1,tbname from select_tags_mt0 where c1<0
-sql_error select t1,tbname from select_tags_mt0 where c1<0 and tbname in ('select_tags_tb12')
+sql select t1,tbname from select_tags_mt0 where c1<0
+sql select t1,tbname from select_tags_mt0 where c1<0 and tbname in ('select_tags_tb12')
 
 sql select tbname from select_tags_mt0 where tbname in ('select_tags_tb12');
 
@@ -1,6 +1,5 @@
 system sh/stop_dnodes.sh
 system sh/deploy.sh -n dnode1 -i 1
-system sh/cfg.sh -n dnode1 -c debugflag -v 131
 system sh/exec.sh -n dnode1 -s start -v
 sql connect
 
@@ -22,78 +21,31 @@ if $data(1)[4] != ready then
 goto step1
 endi
 
-print =============== step2: create db
-sql create database db
+$tbPrefix = tb
+$tbNum = 5
+$rowNum = 10
+
+print =============== step2: prepare data
+sql create database db vgroups 2
 sql use db
-sql create table db.stb (ts timestamp, c1 int, c2 binary(4)) tags(t1 int, t2 float, t3 binary(16)) comment "abd"
-sql create table db.c1 using db.stb tags(101, 102, "103")
+sql create table if not exists stb (ts timestamp, tbcol int, tbcol2 float, tbcol3 double) tags (tgcol int unsigned)
 
-print =============== step3: alter stb
-sql_error alter table db.stb add column ts int
-sql alter table db.stb add column c3 int
-sql alter table db.stb add column c4 bigint
-sql alter table db.stb add column c5 binary(12)
-sql alter table db.stb drop column c1
-sql alter table db.stb drop column c4
-sql alter table db.stb MODIFY column c2 binary(32)
-sql alter table db.stb add tag t4 bigint
-sql alter table db.stb add tag c1 int
-sql alter table db.stb add tag t5 binary(12)
-sql alter table db.stb drop tag c1
-sql alter table db.stb drop tag t5
-sql alter table db.stb MODIFY tag t3 binary(32)
-sql alter table db.stb rename tag t1 tx
-sql alter table db.stb comment 'abcde' ;
-sql drop table db.stb
+$i = 0
+while $i < $tbNum
+$tb = $tbPrefix . $i
+sql create table $tb using stb tags( $i )
+$x = 0
+while $x < $rowNum
+$cc = $x * 60000
+$ms = 1601481600000 + $cc
+sql insert into $tb values ($ms , $x , $x , $x )
+$x = $x + 1
+endw
+$i = $i + 1
+endw
 
-print =============== step4: alter tb
-sql create table tb (ts timestamp, a int)
-sql insert into tb values(now-28d, -28)
-sql select count(a) from tb
-sql alter table tb add column b smallint
-sql insert into tb values(now-25d, -25, 0)
-sql select count(b) from tb
-sql alter table tb add column c tinyint
-sql insert into tb values(now-22d, -22, 3, 0)
-sql select count(c) from tb
-sql alter table tb add column d int
-sql insert into tb values(now-19d, -19, 6, 0, 0)
-sql select count(d) from tb
-sql alter table tb add column e bigint
-sql alter table tb add column f float
-sql alter table tb add column g double
-sql alter table tb add column h binary(10)
-sql select count(a), count(b), count(c), count(d), count(e), count(f), count(g), count(h) from tb
-sql select * from tb order by ts desc
-
-print =============== step5: alter stb and insert data
-sql create table stb (ts timestamp, c1 int, c2 binary(4)) tags(t1 int, t2 float, t3 binary(16)) comment "abd"
-sql show db.stables
-sql describe stb
-sql_error alter table stb add column ts int
-
-sql create table db.ctb using db.stb tags(101, 102, "103")
-sql insert into db.ctb values(now, 1, "2")
-sql show db.tables
-sql select * from db.stb
-sql select * from tb
-
-sql alter table stb add column c3 int
-sql describe stb
-sql select * from db.stb
-sql select * from tb
-sql insert into db.ctb values(now+1s, 1, 2, 3)
-sql select * from db.stb
-
-sql alter table db.stb add column c4 bigint
-sql select * from db.stb
-sql insert into db.ctb values(now+2s, 1, 2, 3, 4)
-
-sql alter table db.stb drop column c1
-sql reset query cache
-sql select * from tb
-sql insert into db.ctb values(now+3s, 2, 3, 4)
-sql select * from db.stb
+print =============== step3: tb
+sql select count(1) from tb1
 
 _OVER:
 system sh/exec.sh -n dnode1 -s stop -x SIGINT
@@ -1,6 +1,7 @@
 system sh/stop_dnodes.sh
 system sh/deploy.sh -n dnode1 -i 1
-system sh/exec.sh -n dnode1 -s start -v
+system sh/cfg.sh -n dnode1 -c debugflag -v 131
+system sh/exec.sh -n dnode1 -s start
 sql connect
 
 print =============== step1: create drop show dnodes
@@ -21,88 +22,73 @@ if $data(1)[4] != ready then
 goto step1
 endi
 
-print =============== step2: create db
-sql create database db
+$tbPrefix = tb
+$tbNum = 5
+$rowNum = 10
+
+print =============== step2: prepare data
+sql create database db vgroups 2
 sql use db
-sql create table db.stb (ts timestamp, c1 int, c2 binary(4)) tags(t1 int, t2 float, t3 binary(16)) comment "abd"
-sql create table db.c1 using db.stb tags(101, 102, "103")
-
-print =============== step3: alter stb
-sql_error alter table db.stb add column ts int
-sql alter table db.stb add column c3 int
-sql alter table db.stb add column c4 bigint
-sql alter table db.stb add column c5 binary(12)
-sql alter table db.stb drop column c1
-sql alter table db.stb drop column c4
-sql alter table db.stb MODIFY column c2 binary(32)
-sql alter table db.stb add tag t4 bigint
-sql alter table db.stb add tag c1 int
-sql alter table db.stb add tag t5 binary(12)
-sql alter table db.stb drop tag c1
-sql alter table db.stb drop tag t5
-sql alter table db.stb MODIFY tag t3 binary(32)
-sql alter table db.stb rename tag t1 tx
-sql alter table db.stb comment 'abcde' ;
-sql drop table db.stb
-
-print =============== step4: alter tb
-sql create table tb (ts timestamp, a int)
-sql insert into tb values(now-28d, -28)
-sql select count(a) from tb
-sql alter table tb add column b smallint
-sql insert into tb values(now-25d, -25, 0)
-sql select count(b) from tb
-sql alter table tb add column c tinyint
-sql insert into tb values(now-22d, -22, 3, 0)
-sql select count(c) from tb
-sql alter table tb add column d int
-sql insert into tb values(now-19d, -19, 6, 0, 0)
-sql select count(d) from tb
-sql alter table tb add column e bigint
-sql alter table tb add column f float
-sql alter table tb add column g double
-sql alter table tb add column h binary(10)
-sql select count(a), count(b), count(c), count(d), count(e), count(f), count(g), count(h) from tb
-sql select * from tb order by ts desc
-
-print =============== step5: alter stb and insert data
-sql create table stb (ts timestamp, c1 int, c2 binary(4)) tags(t1 int, t2 float, t3 binary(16)) comment "abd"
-sql show db.stables
-sql describe stb
-sql_error alter table stb add column ts int
-
-sql create table db.ctb using db.stb tags(101, 102, "103")
-sql insert into db.ctb values(now, 1, "2")
-sql show db.tables
-sql select * from db.stb
-sql select * from tb
-
-sql alter table stb add column c3 int
-sql describe stb
-sql select * from db.stb
-sql select * from tb
-sql insert into db.ctb values(now+1s, 1, 2, 3)
-sql select * from db.stb
-
-sql alter table db.stb add column c4 bigint
-sql select * from db.stb
-sql insert into db.ctb values(now+2s, 1, 2, 3, 4)
-
-sql alter table db.stb drop column c1
-sql reset query cache
-sql select * from tb
-sql insert into db.ctb values(now+3s, 2, 3, 4)
-sql select * from db.stb
-
-sql alter table db.stb add tag t4 bigint
-sql select * from db.stb
-sql select * from db.stb
-sql_error create table db.ctb2 using db.stb tags(101, "102")
-sql create table db.ctb2 using db.stb tags(101, 102, "103", 104)
-sql insert into db.ctb2 values(now, 1, 2, 3)
-
-print =============== step6: query data
-sql select * from db.stb where tbname = 'ctb2';
+sql create table if not exists stb (ts timestamp, tbcol int, tbcol2 float, tbcol3 double) tags (tgcol int unsigned)
+
+$i = 0
+while $i < $tbNum
+$tb = $tbPrefix . $i
+sql create table $tb using stb tags( $i )
+$x = 0
+while $x < $rowNum
+$cc = $x * 60000
+$ms = 1601481600000 + $cc
+sql insert into $tb values ($ms , $x , $x , $x )
+$x = $x + 1
+endw
+$cc = $x * 60000
+$ms = 1601481600000 + $cc
+sql insert into $tb values ($ms , NULL , NULL , NULL )
+$i = $i + 1
+endw
+
+system sh/exec.sh -n dnode1 -s stop -x SIGINT
+system sh/exec.sh -n dnode1 -s start -v
+
+print =============== step3: tb
+sql select avg(tbcol) from tb1
+sql select avg(tbcol) from tb1 where ts <= 1601481840000
+sql select avg(tbcol) as b from tb1
+sql select avg(tbcol) as b from tb1 interval(1d)
+sql select avg(tbcol) as b from tb1 where ts <= 1601481840000 interval(1m)
+sql select bottom(tbcol, 2) from tb1 where ts <= 1601481840000
+sql select top(tbcol, 2) from tb1 where ts <= 1601481840000
+sql select percentile(tbcol, 2) from tb1 where ts <= 1601481840000
+sql select leastsquares(tbcol, 1, 1) as b from tb1 where ts <= 1601481840000
+sql show table distributed tb1
+sql select count(tbcol) as b from tb1 where ts <= 1601481840000 interval(1m)
+sql select diff(tbcol) from tb1 where ts <= 1601481840000
+sql select diff(tbcol) from tb1 where tbcol > 5 and tbcol < 20
+sql select first(tbcol), last(tbcol) as b from tb1 where ts <= 1601481840000 interval(1m)
+sql select count(tbcol), avg(tbcol), max(tbcol), min(tbcol), sum(tbcol), stddev(tbcol) from tb1 where ts <= 1601481840000 partition by tgcol interval(1m)
+#sql select count(tbcol), avg(tbcol), max(tbcol), min(tbcol), count(tbcol) from tb1 where ts <= 1601481840000 and ts >= 1601481800000 partition by tgcol interval(1m) fill(value, 0)
+sql select last_row(*) from tb1 where tbcol > 5 and tbcol < 20
+
+print =============== step4: stb
+sql select avg(tbcol) as c from stb
+sql select avg(tbcol) as c from stb where ts <= 1601481840000
+sql select avg(tbcol) as c from stb where tgcol < 5 and ts <= 1601481840000
+sql select avg(tbcol) as c from stb interval(1m)
+sql select avg(tbcol) as c from stb interval(1d)
+sql select avg(tbcol) as b from stb where ts <= 1601481840000 interval(1m)
+sql select avg(tbcol) as c from stb group by tgcol
+sql select avg(tbcol) as b from stb where ts <= 1601481840000 partition by tgcol interval(1m)
+sql show table distributed stb
+sql select count(tbcol) as b from stb where ts <= 1601481840000 partition by tgcol interval(1m)
+sql select diff(tbcol) from stb where ts <= 1601481840000
+sql select first(tbcol), last(tbcol) as c from stb group by tgcol
+sql select first(tbcol), last(tbcol) as b from stb where ts <= 1601481840000 and tbcol2 is null partition by tgcol interval(1m)
+sql select first(tbcol), last(tbcol) as b from stb where ts <= 1601481840000 partition by tgcol interval(1m)
+sql select count(tbcol), avg(tbcol), max(tbcol), min(tbcol), sum(tbcol), stddev(tbcol) from stb where ts <= 1601481840000 partition by tgcol interval(1m)
+#sql select count(tbcol), avg(tbcol), max(tbcol), min(tbcol), count(tbcol) from stb where ts <= 1601481840000 and ts >= 1601481800000 partition by tgcol interval(1m) fill(value, 0)
+sql select last_row(tbcol), stddev(tbcol) from stb where tbcol > 5 and tbcol < 20 group by tgcol
 
 _OVER:
 system sh/exec.sh -n dnode1 -s stop -x SIGINT
@@ -1,5 +1,6 @@
 system sh/stop_dnodes.sh
 system sh/deploy.sh -n dnode1 -i 1
+system sh/cfg.sh -n dnode1 -c debugflag -v 131
 system sh/exec.sh -n dnode1 -s start -v
 sql connect
 
@@ -44,20 +45,11 @@ while $i < $tbNum
 $i = $i + 1
 endw
 
-print =============== step3: avg
-sql select avg(tbcol) from tb1
-sql select avg(tbcol) from tb1 where ts <= 1601481840000
-sql select avg(tbcol) as b from tb1
-sql select avg(tbcol) as b from tb1 interval(1d)
-sql select avg(tbcol) as b from tb1 where ts <= 1601481840000s interval(1m)
-sql select avg(tbcol) as c from stb
-sql select avg(tbcol) as c from stb where ts <= 1601481840000
-sql select avg(tbcol) as c from stb where tgcol < 5 and ts <= 1601481840000
-sql select avg(tbcol) as c from stb interval(1m)
-sql select avg(tbcol) as c from stb interval(1d)
-sql select avg(tbcol) as b from stb where ts <= 1601481840000s interval(1m)
-sql select avg(tbcol) as c from stb group by tgcol
-sql select avg(tbcol) as b from stb where ts <= 1601481840000s partition by tgcol interval(1m)
+print =============== step3: tb
+sql select count(tbcol), avg(tbcol), max(tbcol), min(tbcol), count(tbcol) from tb1 where ts <= 1601481840000 and ts >= 1601481800000 partition by tgcol interval(1m) fill(value, 0)
+
+print =============== step4: stb
+sql select count(tbcol), avg(tbcol), max(tbcol), min(tbcol), count(tbcol) from stb where ts <= 1601481840000 and ts >= 1601481800000 partition by tgcol interval(1m) fill(value, 0)
 
 _OVER:
 system sh/exec.sh -n dnode1 -s stop -x SIGINT
@@ -0,0 +1,74 @@
+system sh/stop_dnodes.sh
+system sh/deploy.sh -n dnode1 -i 1
+system sh/cfg.sh -n dnode1 -c debugflag -v 131
+system sh/exec.sh -n dnode1 -s start -v
+sql connect
+
+print =============== step1: create drop show dnodes
+$x = 0
+step1:
+$x = $x + 1
+sleep 1000
+if $x == 10 then
+print ---> dnode not ready!
+return -1
+endi
+sql show dnodes
+print ---> $data00 $data01 $data02 $data03 $data04 $data05
+if $rows != 1 then
+return -1
+endi
+if $data(1)[4] != ready then
+goto step1
+endi
+
+print =============== step2: create db
+sql create database d1 vgroups 2 buffer 3
+sql show databases
+sql use d1
+sql show vgroups
+
+print =============== step3: create show stable
+sql create table if not exists stb (ts timestamp, c1 int, c2 float, c3 double) tags (t1 int unsigned)
+sql show stables
+if $rows != 1 then
+return -1
+endi
+
+print =============== step4: create show table
+sql create table ct1 using stb tags(1000)
+sql create table ct2 using stb tags(2000)
+sql create table ct3 using stb tags(3000)
+sql show tables
+if $rows != 3 then
+return -1
+endi
+
+print =============== step5: insert data (null / update)
+sql insert into ct1 values(now+0s, 10, 2.0, 3.0)
+sql insert into ct1 values(now+1s, 11, 2.1, NULL)(now+2s, -12, -2.2, -3.2)(now+3s, -13, -2.3, -3.3)
+sql insert into ct2 values(now+0s, 10, 2.0, 3.0)
+sql insert into ct2 values(now+1s, 11, 2.1, 3.1)(now+2s, -12, -2.2, -3.2)(now+3s, -13, -2.3, -3.3)
+sql insert into ct3 values('2021-01-01 00:00:00.000', NULL, NULL, 3.0)
+sql insert into ct3 values('2022-03-02 16:59:00.010', 3 , 4, 5), ('2022-03-02 16:59:00.010', 33 , 4, 5), ('2022-04-01 16:59:00.011', 4, 4, 5), ('2022-04-01 16:59:00.011', 6, 4, 5), ('2022-03-06 16:59:00.013', 8, 4, 5);
+sql insert into ct3 values('2022-03-02 16:59:00.010', 103, 1, 2), ('2022-03-02 16:59:00.010', 303, 3, 4), ('2022-04-01 16:59:00.011', 40, 5, 6), ('2022-04-01 16:59:00.011', 60, 4, 5), ('2022-03-06 16:59:00.013', 80, 4, 5);
+
+print =============== step6: query data=
+
+sql select * from stb where t1 between 1000 and 2500
+
+
+_OVER:
+system sh/exec.sh -n dnode1 -s stop -x SIGINT
+print =============== check
+$null=
+
+system_content sh/checkValgrind.sh -n dnode1
+print cmd return result ----> [ $system_content ]
+if $system_content > 0 then
+return -1
+endi
+
+if $system_content == $null then
+return -1
+endi
@@ -37,6 +37,7 @@ sql show stables
 if $rows != 4 then
 return -1
 endi
+sql show stables like 'stb'
 
 print =============== step4: ccreate child table
 sql create table c1 using stb tags(true, -1, -2, -3, -4, -6.0, -7.0, 'child tbl 1', 'child tbl 1', '2022-02-25 18:00:00.000', 10, 20, 30, 40)
@@ -105,6 +105,39 @@ sql insert into db.ctb2 values(now, 1, 2, 3)
 print =============== step6: query data
 sql select * from db.stb where tbname = 'ctb2';
 
+
+print =============== step7: normal table
+sql create database d1 replica 1 duration 7 keep 50
+sql use d1
+sql create table tb (ts timestamp, a int)
+sql insert into tb values(now-28d, -28)
+sql alter table tb add column b smallint
+sql insert into tb values(now-25d, -25, 0)
+sql alter table tb add column c tinyint
+sql insert into tb values(now-22d, -22, 3, 0)
+sql alter table tb add column d int
+sql insert into tb values(now-19d, -19, 6, 0, 0)
+sql alter table tb add column e bigint
+sql insert into tb values(now-16d, -16, 9, 0, 0, 0)
+sql alter table tb add column f float
+sql insert into tb values(now-13d, -13, 12, 0, 0, 0, 0)
+sql alter table tb add column g double
+sql insert into tb values(now-10d, -10, 15, 0, 0, 0, 0, 0)
+sql alter table tb add column h binary(10)
+sql insert into tb values(now-7d, -7, 18, 0, 0, 0, 0, 0, '0')
+sql select count(a), count(b), count(c), count(d), count(e), count(f), count(g), count(h) from d1.tb;
+sql alter table tb drop column a
+sql insert into tb values(now-4d, 1, 1, 1, 1, 1, 1, '1')
+sql alter table tb drop column b
+sql insert into tb values(now-3d, 1, 1, 1, 1, 1, '1')
+sql alter table tb drop column c
+sql insert into tb values(now-2d, 1, 1, 1, 1, '1')
+sql alter table tb drop column d
+sql insert into tb values(now-1d, 1, 1, 1, '1')
+sql alter table tb drop column e
+sql insert into tb values(now, 1, 1, '1')
+sql select count(h) from tb
+
 _OVER:
 system sh/exec.sh -n dnode1 -s stop -x SIGINT
 print =============== check
@@ -42,23 +42,50 @@ while $i < $tbNum
 sql insert into $tb values ($ms , $x , $x , $x )
 $x = $x + 1
 endw
 
+$cc = $x * 60000
+$ms = 1601481600000 + $cc
+sql insert into $tb values ($ms , NULL , NULL , NULL )
 $i = $i + 1
 endw
 
-print =============== step3: avg
+print =============== step3: tb
 sql select avg(tbcol) from tb1
 sql select avg(tbcol) from tb1 where ts <= 1601481840000
 sql select avg(tbcol) as b from tb1
 sql select avg(tbcol) as b from tb1 interval(1d)
-sql select avg(tbcol) as b from tb1 where ts <= 1601481840000s interval(1m)
+sql select avg(tbcol) as b from tb1 where ts <= 1601481840000 interval(1m)
+sql select bottom(tbcol, 2) from tb1 where ts <= 1601481840000
+sql select top(tbcol, 2) from tb1 where ts <= 1601481840000
+sql select percentile(tbcol, 2) from tb1 where ts <= 1601481840000
+sql select leastsquares(tbcol, 1, 1) as b from tb1 where ts <= 1601481840000
+sql show table distributed tb1
+sql select count(tbcol) as b from tb1 where ts <= 1601481840000 interval(1m)
+sql select diff(tbcol) from tb1 where ts <= 1601481840000
+sql select diff(tbcol) from tb1 where tbcol > 5 and tbcol < 20
+sql select first(tbcol), last(tbcol) as b from tb1 where ts <= 1601481840000 interval(1m)
+sql select count(tbcol), avg(tbcol), max(tbcol), min(tbcol), sum(tbcol), stddev(tbcol) from tb1 where ts <= 1601481840000 partition by tgcol interval(1m)
+#sql select count(tbcol), avg(tbcol), max(tbcol), min(tbcol), count(tbcol) from tb1 where ts <= 1601481840000 and ts >= 1601481800000 partition by tgcol interval(1m) fill(value, 0)
+sql select last_row(*) from tb1 where tbcol > 5 and tbcol < 20
+
+print =============== step4: stb
 sql select avg(tbcol) as c from stb
 sql select avg(tbcol) as c from stb where ts <= 1601481840000
 sql select avg(tbcol) as c from stb where tgcol < 5 and ts <= 1601481840000
 sql select avg(tbcol) as c from stb interval(1m)
 sql select avg(tbcol) as c from stb interval(1d)
-sql select avg(tbcol) as b from stb where ts <= 1601481840000s interval(1m)
+sql select avg(tbcol) as b from stb where ts <= 1601481840000 interval(1m)
 sql select avg(tbcol) as c from stb group by tgcol
-sql select avg(tbcol) as b from stb where ts <= 1601481840000s partition by tgcol interval(1m)
+sql select avg(tbcol) as b from stb where ts <= 1601481840000 partition by tgcol interval(1m)
+sql show table distributed stb
+sql select count(tbcol) as b from stb where ts <= 1601481840000 partition by tgcol interval(1m)
+sql select diff(tbcol) from stb where ts <= 1601481840000
+sql select first(tbcol), last(tbcol) as c from stb group by tgcol
+sql select first(tbcol), last(tbcol) as b from stb where ts <= 1601481840000 and tbcol2 is null partition by tgcol interval(1m)
+sql select first(tbcol), last(tbcol) as b from stb where ts <= 1601481840000 partition by tgcol interval(1m)
+sql select count(tbcol), avg(tbcol), max(tbcol), min(tbcol), sum(tbcol), stddev(tbcol) from stb where ts <= 1601481840000 partition by tgcol interval(1m)
+#sql select count(tbcol), avg(tbcol), max(tbcol), min(tbcol), count(tbcol) from stb where ts <= 1601481840000 and ts >= 1601481800000 partition by tgcol interval(1m) fill(value, 0)
+sql select last_row(tbcol), stddev(tbcol) from stb where tbcol > 5 and tbcol < 20 group by tgcol
 
 _OVER:
 system sh/exec.sh -n dnode1 -s stop -x SIGINT
@@ -0,0 +1,75 @@
+system sh/stop_dnodes.sh
+system sh/deploy.sh -n dnode1 -i 1
+system sh/exec.sh -n dnode1 -s start -v
+sql connect
+
+print ======================== create stable
+sql create database d1
+sql use d1
+
+$x = 0
+while $x < 128
+$tb = d1.s . $x
+sql create table $tb (ts timestamp, i int) tags (j int)
+$x = $x + 1
+endw
+
+print ======================== describe stables
+# TODO : create stable error
+$m = 0
+while $m < 128
+$tb = s . $m
+$filter = ' . $tb
+$filter = $filter . '
+sql show stables like $filter
+print sql : show stables like $filter
+if $rows != 1 then
+print expect 1, actual: $rows
+return -1
+endi
+$m = $m + 1
+endw
+
+
+print ======================== show stables
+
+sql show d1.stables
+
+print num of stables is $rows
+if $rows != 128 then
+return -1
+endi
+
+print ======================== create table
+
+$x = 0
+while $x < 424
+$tb = d1.t . $x
+sql create table $tb using d1.s0 tags( $x )
+$x = $x + 1
+endw
+
+print ======================== show stables
+
+sql show d1.tables
+
+print num of tables is $rows
+if $rows != 424 then
+return -1
+endi
+
+
+_OVER:
+system sh/exec.sh -n dnode1 -s stop -x SIGINT
+print =============== check
+$null=
+
+system_content sh/checkValgrind.sh -n dnode1
+print cmd return result ----> [ $system_content ]
+if $system_content > 2 then
+return -1
+endi
+
+if $system_content == $null then
+return -1
+endi
@@ -95,7 +95,6 @@ class TDTestCase:
         tdSql.error("select diff(col12) from stb_1")
         tdSql.error("select diff(col13) from stb_1")
         tdSql.error("select diff(col14) from stb_1")
-        tdSql.error("select ts,diff(col1),ts from stb_1")
 
         tdSql.query("select diff(col1) from stb_1")
         tdSql.checkRows(10)
@ -115,6 +114,79 @@ class TDTestCase:
|
||||||
tdSql.query("select diff(col6) from stb_1")
|
tdSql.query("select diff(col6) from stb_1")
|
||||||
tdSql.checkRows(10)
|
tdSql.checkRows(10)
|
||||||
|
|
||||||
|
# check selectivity
|
||||||
|
        tdSql.query("select ts, diff(col1), col2 from stb_1")
        tdSql.checkRows(10)
        tdSql.checkData(0, 0, "2018-09-17 09:00:00.000")
        tdSql.checkData(1, 0, "2018-09-17 09:00:00.001")
        tdSql.checkData(2, 0, "2018-09-17 09:00:00.002")
        tdSql.checkData(3, 0, "2018-09-17 09:00:00.003")
        tdSql.checkData(4, 0, "2018-09-17 09:00:00.004")
        tdSql.checkData(5, 0, "2018-09-17 09:00:00.005")
        tdSql.checkData(6, 0, "2018-09-17 09:00:00.006")
        tdSql.checkData(7, 0, "2018-09-17 09:00:00.007")
        tdSql.checkData(8, 0, "2018-09-17 09:00:00.008")
        tdSql.checkData(9, 0, "2018-09-17 09:00:00.009")

        tdSql.checkData(0, 1, 1)
        tdSql.checkData(1, 1, 1)
        tdSql.checkData(2, 1, 1)
        tdSql.checkData(3, 1, 1)
        tdSql.checkData(4, 1, 1)
        tdSql.checkData(5, 1, 1)
        tdSql.checkData(6, 1, 1)
        tdSql.checkData(7, 1, 1)
        tdSql.checkData(8, 1, 1)
        tdSql.checkData(9, 1, 1)

        tdSql.checkData(0, 2, 0)
        tdSql.checkData(1, 2, 1)
        tdSql.checkData(2, 2, 2)
        tdSql.checkData(3, 2, 3)
        tdSql.checkData(4, 2, 4)
        tdSql.checkData(5, 2, 5)
        tdSql.checkData(6, 2, 6)
        tdSql.checkData(7, 2, 7)
        tdSql.checkData(8, 2, 8)
        tdSql.checkData(9, 2, 9)

        tdSql.query("select ts, diff(col1), col2 from stb order by ts")
        tdSql.checkRows(10)

        tdSql.checkData(0, 0, "2018-09-17 09:00:00.000")
        tdSql.checkData(1, 0, "2018-09-17 09:00:00.001")
        tdSql.checkData(2, 0, "2018-09-17 09:00:00.002")
        tdSql.checkData(3, 0, "2018-09-17 09:00:00.003")
        tdSql.checkData(4, 0, "2018-09-17 09:00:00.004")
        tdSql.checkData(5, 0, "2018-09-17 09:00:00.005")
        tdSql.checkData(6, 0, "2018-09-17 09:00:00.006")
        tdSql.checkData(7, 0, "2018-09-17 09:00:00.007")
        tdSql.checkData(8, 0, "2018-09-17 09:00:00.008")
        tdSql.checkData(9, 0, "2018-09-17 09:00:00.009")

        tdSql.checkData(0, 1, 1)
        tdSql.checkData(1, 1, 1)
        tdSql.checkData(2, 1, 1)
        tdSql.checkData(3, 1, 1)
        tdSql.checkData(4, 1, 1)
        tdSql.checkData(5, 1, 1)
        tdSql.checkData(6, 1, 1)
        tdSql.checkData(7, 1, 1)
        tdSql.checkData(8, 1, 1)
        tdSql.checkData(9, 1, 1)

        tdSql.checkData(0, 2, 0)
        tdSql.checkData(1, 2, 1)
        tdSql.checkData(2, 2, 2)
        tdSql.checkData(3, 2, 3)
        tdSql.checkData(4, 2, 4)
        tdSql.checkData(5, 2, 5)
        tdSql.checkData(6, 2, 6)
        tdSql.checkData(7, 2, 7)
        tdSql.checkData(8, 2, 8)
        tdSql.checkData(9, 2, 9)
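The assertions above can be read against a plain first-order difference. The sketch below is an illustration, not part of the commit: it mimics what TDengine's `diff()` returns for a column that grows by 1 per row, which is why every value in column 1 of the result is expected to be 1. (The exact result row count in the test depends on how `diff()` treats the first row, which this sketch does not model.)

```python
def first_order_diff(values):
    # difference between each pair of consecutive rows, as diff() computes per column
    return [b - a for a, b in zip(values, values[1:])]

col1 = list(range(1, 11))       # col1 increases by exactly 1 per row
print(first_order_diff(col1))   # every consecutive difference is 1
```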
        tdSql.execute('''create table stb1(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
                col7 bool, col8 binary(20), col9 nchar(20), col11 tinyint unsigned, col12 smallint unsigned, col13 int unsigned, col14 bigint unsigned) tags(loc nchar(20))''')
        tdSql.execute("create table stb1_1 using stb tags('shanghai')")
@@ -283,14 +283,14 @@ class TDTestCase:
         tdSql.error(self.diff_query_form(alias=", diff(c1)"))  # mix with calculation function 2
         # tdSql.error(self.diff_query_form(alias=" + 2"))  # mix with arithmetic 1
         tdSql.error(self.diff_query_form(alias=" + avg(c1)"))  # mix with arithmetic 2
-        tdSql.error(self.diff_query_form(alias=", c2"))  # mix with other 1
+        tdSql.query(self.diff_query_form(alias=", c2"))  # mix with other 1
         # tdSql.error(self.diff_query_form(table_expr="stb1"))  # select stb directly
         stb_join = {
             "col": "stb1.c1",
             "table_expr": "stb1, stb2",
             "condition": "where stb1.ts=stb2.ts and stb1.st1=stb2.st2 order by stb1.ts"
         }
-        tdSql.error(self.diff_query_form(**stb_join))  # stb join
+        tdSql.query(self.diff_query_form(**stb_join))  # stb join
         interval_sql = {
             "condition": "where ts>0 and ts < now interval(1h) fill(next)"
         }
@@ -193,20 +193,11 @@ class TDTestCase:
         tdSql.query("select c1 , DERIVATIVE(c1,2,1) from stb partition by c1 order by c1")
         tdSql.checkRows(90)
         # bug need fix
-        # tdSql.checkData(0,1,None)
+        tdSql.checkData(0,1,None)
+
+        tdSql.query(" select tbname , max(c1) from stb partition by tbname order by tbname slimit 5 soffset 0 ")
+        tdSql.checkRows(10)
-
-        # bug need fix
-        # tdSql.query(" select tbname , max(c1) from stb partition by tbname order by tbname slimit 5 soffset 0 ")
-        # tdSql.checkRows(5)
-
-        # tdSql.query(" select tbname , max(c1) from stb partition by tbname order by tbname slimit 5 soffset 1 ")
-        # tdSql.checkRows(5)

         tdSql.query(" select tbname , max(c1) from sub_stb_1 partition by tbname interval(10s) sliding(5s) ")
@@ -1,198 +0,0 @@
from distutils.log import error
import taos
import sys
import time
import socket
import os
import threading
import subprocess
import platform

from util.log import *
from util.sql import *
from util.cases import *
from util.dnodes import *
from util.common import *
sys.path.append("./7-tmq")
from tmqCommon import *


class TDTestCase:
    def __init__(self):
        self.snapshot = 0
        self.replica = 3
        self.vgroups = 3
        self.ctbNum = 2
        self.rowsPerTbl = 2

    def init(self, conn, logSql):
        tdLog.debug(f"start to excute {__file__}")
        tdSql.init(conn.cursor())
        #tdSql.init(conn.cursor(), logSql)  # output sql.txt file

    def checkFileContent(self, consumerId, queryString):
        buildPath = tdCom.getBuildPath()
        cfgPath = tdCom.getClientCfgPath()
        dstFile = '%s/../log/dstrows_%d.txt'%(cfgPath, consumerId)
        cmdStr = '%s/build/bin/taos -c %s -s "%s >> %s"'%(buildPath, cfgPath, queryString, dstFile)
        tdLog.info(cmdStr)
        os.system(cmdStr)

        consumeRowsFile = '%s/../log/consumerid_%d.txt'%(cfgPath, consumerId)
        tdLog.info("rows file: %s, %s"%(consumeRowsFile, dstFile))

        consumeFile = open(consumeRowsFile, mode='r')
        queryFile = open(dstFile, mode='r')

        # skip first line for it is schema
        queryFile.readline()

        while True:
            dst = queryFile.readline()
            src = consumeFile.readline()

            if dst:
                if dst != src:
                    tdLog.exit("consumerId %d consume rows is not match the rows by direct query"%consumerId)
            else:
                break
        return

    def prepareTestEnv(self):
        tdLog.printNoPrefix("======== prepare test env include database, stable, ctables, and insert data: ")
        paraDict = {'dbName': 'dbt',
                    'dropFlag': 1,
                    'event': '',
                    'vgroups': 4,
                    'stbName': 'stb',
                    'colPrefix': 'c',
                    'tagPrefix': 't',
                    'colSchema': [{'type': 'INT', 'count':1},{'type': 'BIGINT', 'count':1}],
                    'tagSchema': [{'type': 'INT', 'count':1},{'type': 'BIGINT', 'count':1}],
                    'ctbPrefix': 'ctb',
                    'ctbStartIdx': 0,
                    'ctbNum': 2,
                    'rowsPerTbl': 1000,
                    'batchNum': 10,
                    'startTs': 1640966400000,  # 2022-01-01 00:00:00.000
                    'pollDelay': 3,
                    'showMsg': 1,
                    'showRow': 1,
                    'snapshot': 0}

        paraDict['vgroups'] = self.vgroups
        paraDict['ctbNum'] = self.ctbNum
        paraDict['rowsPerTbl'] = self.rowsPerTbl

        tmqCom.initConsumerTable()
        tdCom.create_database(tdSql, paraDict["dbName"],paraDict["dropFlag"], vgroups=paraDict["vgroups"],replica=self.replica)
        tdLog.info("create stb")
        tmqCom.create_stable(tdSql, dbName=paraDict["dbName"],stbName=paraDict["stbName"])
        tdLog.info("create ctb")
        tmqCom.create_ctable(tdSql, dbName=paraDict["dbName"],stbName=paraDict["stbName"],ctbPrefix=paraDict['ctbPrefix'],
                             ctbNum=paraDict["ctbNum"],ctbStartIdx=paraDict['ctbStartIdx'])
        tdLog.info("insert data")
        tmqCom.insert_data_interlaceByMultiTbl(tsql=tdSql,dbName=paraDict["dbName"],ctbPrefix=paraDict["ctbPrefix"],
                                               ctbNum=paraDict["ctbNum"],rowsPerTbl=paraDict["rowsPerTbl"],batchNum=paraDict["batchNum"],
                                               startTs=paraDict["startTs"],ctbStartIdx=paraDict['ctbStartIdx'])
        # tmqCom.insert_data_with_autoCreateTbl(tsql=tdSql,dbName=paraDict["dbName"],stbName=paraDict["stbName"],ctbPrefix="ctbx",
        #                                       ctbNum=paraDict["ctbNum"],rowsPerTbl=paraDict["rowsPerTbl"],batchNum=paraDict["batchNum"],
        #                                       startTs=paraDict["startTs"],ctbStartIdx=paraDict['ctbStartIdx'])
        # tmqCom.asyncInsertDataByInterlace(paraDict)
        tdLog.printNoPrefix("11111111111111111111111")
        tmqCom.create_ntable(tdSql, dbname=paraDict["dbName"], tbname_prefix="ntb", tbname_index_start_num = 1, column_elm_list=paraDict["colSchema"], colPrefix='c', tblNum=1)
        tdLog.printNoPrefix("222222222222222")
        tmqCom.insert_rows_into_ntbl(tdSql, dbname=paraDict["dbName"], tbname_prefix="ntb", tbname_index_start_num = 1, column_ele_list=paraDict["colSchema"], startTs=paraDict["startTs"], tblNum=1, rows=2)  # tdLog.info("restart taosd to ensure that the data falls into the disk")

        tdLog.printNoPrefix("333333333333333333333")
        tdSql.query("drop database %s"%paraDict["dbName"])
        tdLog.printNoPrefix("44444444444444444")
        return

    def tmqCase1(self):
        tdLog.printNoPrefix("======== test case 1: ")

        # create and start thread
        paraDict = {'dbName': 'dbt',
                    'dropFlag': 1,
                    'event': '',
                    'vgroups': 4,
                    'stbName': 'stb',
                    'colPrefix': 'c',
                    'tagPrefix': 't',
                    'colSchema': [{'type': 'INT', 'count':1},{'type': 'BIGINT', 'count':1},{'type': 'DOUBLE', 'count':1},{'type': 'BINARY', 'len':32, 'count':1},{'type': 'NCHAR', 'len':32, 'count':1},{'type': 'TIMESTAMP', 'count':1}],
                    'tagSchema': [{'type': 'INT', 'count':1},{'type': 'BIGINT', 'count':1},{'type': 'DOUBLE', 'count':1},{'type': 'BINARY', 'len':32, 'count':1},{'type': 'NCHAR', 'len':32, 'count':1}],
                    'ctbPrefix': 'ctb',
                    'ctbStartIdx': 0,
                    'ctbNum': 100,
                    'rowsPerTbl': 1000,
                    'batchNum': 100,
                    'startTs': 1640966400000,  # 2022-01-01 00:00:00.000
                    'pollDelay': 3,
                    'showMsg': 1,
                    'showRow': 1,
                    'snapshot': 1}

        paraDict['vgroups'] = self.vgroups
        paraDict['ctbNum'] = self.ctbNum
        paraDict['rowsPerTbl'] = self.rowsPerTbl

        tdLog.info("create topics from stb1")
        topicFromStb1 = 'topic_stb1'
        queryString = "select ts, c1, c2 from %s.%s where t4 == 'beijing' or t4 == 'changsha' "%(paraDict['dbName'], paraDict['stbName'])
        sqlString = "create topic %s as %s" %(topicFromStb1, queryString)
        tdLog.info("create topic sql: %s"%sqlString)
        tdSql.execute(sqlString)

        consumerId = 0
        expectrowcnt = paraDict["rowsPerTbl"] * paraDict["ctbNum"]
        topicList = topicFromStb1
        ifcheckdata = 0
        ifManualCommit = 0
        keyList = 'group.id:cgrp1,\
                   enable.auto.commit:false,\
                   auto.commit.interval.ms:6000,\
                   auto.offset.reset:earliest'
        tmqCom.insertConsumerInfo(consumerId, expectrowcnt,topicList,keyList,ifcheckdata,ifManualCommit)

        tdLog.info("start consume processor")
        pollDelay = 100
        showMsg = 1
        showRow = 1
        tmqCom.startTmqSimProcess(pollDelay=paraDict['pollDelay'],dbName=paraDict["dbName"],showMsg=paraDict['showMsg'], showRow=paraDict['showRow'],snapshot=paraDict['snapshot'])

        tdLog.info("start to check consume result")
        expectRows = 1
        resultList = tmqCom.selectConsumeResult(expectRows)
        totalConsumeRows = 0
        for i in range(expectRows):
            totalConsumeRows += resultList[i]

        tdSql.query(queryString)
        totalRowsInserted = tdSql.getRows()

        tdLog.info("act consume rows: %d, act insert rows: %d, expect consume rows: %d, "%(totalConsumeRows, totalRowsInserted, expectrowcnt))

        if totalConsumeRows != expectrowcnt:
            tdLog.exit("tmq consume rows error!")

        # tmqCom.checkFileContent(consumerId, queryString)

        tmqCom.waitSubscriptionExit(tdSql, topicFromStb1)
        tdSql.query("drop topic %s"%topicFromStb1)

        tdLog.printNoPrefix("======== test case 1 end ...... ")

    def run(self):
        self.prepareTestEnv()
        # self.tmqCase1()

    def stop(self):
        tdSql.close()
        tdLog.success(f"{__file__} successfully executed")


event = threading.Event()

tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())
@@ -118,7 +118,7 @@ class TDTestCase:
         if dropFlag == 1:
             tsql.execute("drop database if exists %s"%(dbName))

-        tsql.execute("create database if not exists %s vgroups %d replica %d"%(dbName, vgroups, replica))
+        tsql.execute("create database if not exists %s vgroups %d replica %d wal_retention_period -1 wal_retention_size -1"%(dbName, vgroups, replica))
         tdLog.debug("complete to create database %s"%(dbName))
         return
@@ -52,7 +52,7 @@ class TDTestCase:
         paraDict['rowsPerTbl'] = self.rowsPerTbl

         tmqCom.initConsumerTable()
-        tdCom.create_database(tdSql, paraDict["dbName"],paraDict["dropFlag"], vgroups=paraDict["vgroups"],replica=1)
+        tdCom.create_database(tdSql, paraDict["dbName"],paraDict["dropFlag"], vgroups=paraDict["vgroups"],replica=1,wal_retention_size=-1, wal_retention_period=-1)
         tdLog.info("create stb")
         tmqCom.create_stable(tdSql, dbName=paraDict["dbName"],stbName=paraDict["stbName"])
         tdLog.info("create ctb")
@@ -52,7 +52,7 @@ class TDTestCase:
         paraDict['rowsPerTbl'] = self.rowsPerTbl

         tmqCom.initConsumerTable()
-        tdCom.create_database(tdSql, paraDict["dbName"],paraDict["dropFlag"], vgroups=paraDict["vgroups"],replica=1)
+        tdCom.create_database(tdSql, paraDict["dbName"],paraDict["dropFlag"], vgroups=paraDict["vgroups"],replica=1,wal_retention_size=-1, wal_retention_period=-1)
         tdLog.info("create stb")
         tmqCom.create_stable(tdSql, dbName=paraDict["dbName"],stbName=paraDict["stbName"])
         tdLog.info("create ctb")
@@ -53,7 +53,7 @@ class TDTestCase:
         paraDict['rowsPerTbl'] = self.rowsPerTbl

         tmqCom.initConsumerTable()
-        tdCom.create_database(tdSql, paraDict["dbName"],paraDict["dropFlag"], vgroups=paraDict["vgroups"],replica=1)
+        tdCom.create_database(tdSql, paraDict["dbName"],paraDict["dropFlag"], vgroups=paraDict["vgroups"],replica=1,wal_retention_size=-1, wal_retention_period=-1)
         tdLog.info("create stb")
         tmqCom.create_stable(tdSql, dbName=paraDict["dbName"],stbName=paraDict["stbName"])
         tdLog.info("create ctb")
@@ -52,7 +52,7 @@ class TDTestCase:
         paraDict['rowsPerTbl'] = self.rowsPerTbl

         tmqCom.initConsumerTable()
-        tdCom.create_database(tdSql, paraDict["dbName"],paraDict["dropFlag"], vgroups=paraDict["vgroups"],replica=1)
+        tdCom.create_database(tdSql, paraDict["dbName"],paraDict["dropFlag"], vgroups=paraDict["vgroups"],replica=1, wal_retention_size=-1, wal_retention_period=-1)
         tdLog.info("create stb")
         tmqCom.create_stable(tdSql, dbName=paraDict["dbName"],stbName=paraDict["stbName"])
         tdLog.info("create ctb")
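The recurring change in the hunks above appends `wal_retention_period -1` and `wal_retention_size -1` to the CREATE DATABASE statements used by the TMQ tests, so that WAL files are retained without the time or size limits that would otherwise let them be reclaimed before a consumer reads them. As a standalone sketch (the helper name here is hypothetical; the real helper lives in the test framework's `tdCom`), this is the SQL string the patched code builds:

```python
def build_create_db_sql(db_name, vgroups, replica):
    # Mirrors the string formatting in the patched helper: the two WAL
    # retention options are appended so WAL data is kept for TMQ consumers.
    return ("create database if not exists %s vgroups %d replica %d "
            "wal_retention_period -1 wal_retention_size -1"
            % (db_name, vgroups, replica))

print(build_create_db_sql("dbt", 4, 1))
```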