Merge branch '3.0' into feature/3_liaohj

This commit is contained in:
Haojun Liao 2022-07-26 19:09:37 +08:00
commit 6c355184a9
150 changed files with 9274 additions and 5551 deletions


@@ -3,11 +3,31 @@
sidebar_label: Docker
title: Quick Experience with TDengine in Docker
---

This section first describes how to get a TDengine instance running quickly in Docker, and then how to try out TDengine's write and query capabilities in that Docker environment.

## Start TDengine

If Docker is already installed, just run the following command:

```shell
docker run -d -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
```

Confirm that the container is up and running normally:

```shell
docker ps
```

Enter the container and start a bash shell:

```shell
docker exec -it <container name> bash
```

You can then run Linux commands inside the container and work with TDengine.

:::info
To install the Docker tool itself, see the [official Docker documentation](https://docs.docker.com/get-docker/).
@@ -18,95 +38,49 @@

```shell
$ docker -v
Docker version 20.10.3, build 48d30b5
```

:::
## Run the TDengine CLI

There are two ways to access TDengine with the TDengine CLI (taos) in a Docker environment:

- enter the container and run taos, or
- access it from the host through the ports mapped out of the container: `taos -h <hostname> -P <port>`

```
$ taos

Welcome to the TDengine shell from Linux, Client Version:3.0.0.0
Copyright (c) 2022 by TAOS Data, Inc. All rights reserved.

Server is Enterprise trial Edition, ver:3.0.0.0 and will expire at 2022-09-24 15:29:46.

taos>
```
## Start the REST Service

taosAdapter is the TDengine component that provides the REST service. The following command starts both the `taosd` and `taosadapter` services in the container:

```bash
docker run -d --name tdengine-all -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
```

To start only `taosadapter`:

```bash
docker run -d --name tdengine-taosa -p 6041-6049:6041-6049 -p 6041-6049:6041-6049/udp -e TAOS_FIRST_EP=tdengine-all tdengine/tdengine:3.0.0.0 taosadapter
```

To start only `taosd`:

```bash
docker run -d --name tdengine-taosd -p 6030-6042:6030-6042 -p 6030-6042:6030-6042/udp -e TAOS_DISABLE_ADAPTER=true tdengine/tdengine:3.0.0.0
```

## Access the REST Interface

You can use curl on the host to reach the TDengine server inside the Docker container through the RESTful port:

```
curl -L -u root:taosdata -d "show databases" 127.0.0.1:6041/rest/sql
```
Sample output:

```
{"code":0,"column_meta":[["name","VARCHAR",64],["create_time","TIMESTAMP",8],["vgroups","SMALLINT",2],["ntables","BIGINT",8],["replica","TINYINT",1],["strict","VARCHAR",4],["duration","VARCHAR",10],["keep","VARCHAR",32],["buffer","INT",4],["pagesize","INT",4],["pages","INT",4],["minrows","INT",4],["maxrows","INT",4],["wal","TINYINT",1],["fsync","INT",4],["comp","TINYINT",1],["cacheModel","VARCHAR",11],["precision","VARCHAR",2],["single_stable","BOOL",1],["status","VARCHAR",10],["retention","VARCHAR",60]],"data":[["information_schema",null,null,14,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,"ready"],["performance_schema",null,null,3,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,"ready"]],"rows":2}
```

This command accesses the TDengine server through the REST API; the connection here goes to port 6041, which is mapped from the container to the host.

For details of the TDengine REST API, see the [official documentation](/reference/rest-api/).
## Write Data

You can use taosBenchmark, a tool bundled with TDengine, to quickly try out data ingestion.

Assuming that port 6030 of the container was mapped to port 6030 of the host when the container was started, you can launch taosBenchmark directly from the host command line; you can also run it inside the container:

```bash
$ taosBenchmark
```

This command automatically creates a super table meters in the database test, with 10,000 child tables named "d0" through "d9999". Each table has 10,000 records, and each record has four fields (ts, current, voltage, phase). The timestamps range from "2017-07-14 10:40:00 000" to "2017-07-14 10:40:09 999". Each table carries two tags, location and groupId: groupId is set to 1 through 10, and location is set to "California.SanFrancisco" or "California.LosAngeles".

This command finishes inserting the 100 million records quickly; the exact time depends on the hardware.

taosBenchmark itself offers many options to configure the number of tables, the number of records, and so on; run `taosBenchmark --help` for the full list. For detailed usage, see the [taosBenchmark reference](../reference/taosbenchmark).
## Run Queries

After inserting data with taosBenchmark as above, you can issue queries in the TDengine CLI to experience the query speed. The CLI can be run directly on the host or inside the container.

Count the total number of records under the super table:

```sql
taos> select count(*) from test.meters;
```

Compute the average, maximum, and minimum over the 100 million records:

```sql
taos> select avg(current), max(voltage), min(phase) from test.meters;
```

Count the records with location = "California.SanFrancisco":

```sql
taos> select count(*) from test.meters where location="California.SanFrancisco";
```

Compute the average, maximum, and minimum over all records with groupId = 10:

```sql
taos> select avg(current), max(voltage), min(phase) from test.meters where groupId=10;
```

Aggregate the average, maximum, and minimum for table d10 in 10-second windows:

```sql
taos> select avg(current), max(voltage), min(phase) from test.d10 interval(10s);
```


@@ -3,6 +3,7 @@
title: Get Started
description: 'Quickly set up a TDengine environment and experience its efficient writing and querying'
---

The complete TDengine software package includes the server (taosd), taosAdapter for interfacing with third-party systems and providing a RESTful interface, the application driver (taosc), the command-line interface (CLI, taos), and several utilities. Besides connectors for multiple programming languages, TDengine also provides a [RESTful interface](/reference/rest-api) through [taosAdapter](/reference/taosadapter).

This chapter describes how to quickly set up a TDengine environment with Docker or an installation package and experience its efficient writing and querying.


@@ -6,53 +6,85 @@
description: "Create and drop databases; view and modify database parameters"
## Create a Database

```sql
CREATE DATABASE [IF NOT EXISTS] db_name [database_options]

database_options:
    database_option ...

database_option: {
    BUFFER value
  | CACHEMODEL {'none' | 'last_row' | 'last_value' | 'both'}
  | CACHESIZE value
  | COMP {0 | 1 | 2}
  | DURATION value
  | FSYNC value
  | MAXROWS value
  | MINROWS value
  | KEEP value
  | PAGES value
  | PAGESIZE value
  | PRECISION {'ms' | 'us' | 'ns'}
  | REPLICA value
  | RETENTIONS ingestion_duration:keep_duration ...
  | STRICT {'off' | 'on'}
  | WAL {1 | 2}
  | VGROUPS value
  | SINGLE_STABLE {0 | 1}
  | WAL_RETENTION_PERIOD value
  | WAL_ROLL_PERIOD value
  | WAL_RETENTION_SIZE value
  | WAL_SEGMENT_SIZE value
}
```

### Parameter Description

- BUFFER: size of the write memory pool of one vnode, in MB. Default 96, minimum 3, maximum 16384.
- CACHEMODEL: whether the most recent data of child tables is cached in memory. Default: none.
  - none: no caching.
  - last_row: cache the most recent row of each child table. This significantly improves the performance of the LAST_ROW function.
  - last_value: cache the most recent non-NULL value of each column of each child table. This significantly improves the performance of the LAST function in ordinary cases (no WHERE, ORDER BY, GROUP BY, or INTERVAL involved).
  - both: cache both the most recent rows and column values.
- CACHESIZE: amount of memory used to cache the most recent child-table data. Default 1, range [1, 65536], in MB.
- COMP: compression flag of database files. Default 2, range [0, 2].
  - 0: no compression.
  - 1: one-stage compression.
  - 2: two-stage compression.
- DURATION: time span of the data stored in one data file. A unit may be appended, e.g. DURATION 100h or DURATION 10d; the supported units are m (minutes), h (hours), and d (days). Without a unit the value is in days, so DURATION 50 means 50 days.
- FSYNC: when WAL is set to 2, the period for flushing to disk. Default 3000, in milliseconds. The minimum is 0, meaning every write is flushed immediately; the maximum is 180000 (three minutes).
- MAXROWS: maximum number of records in a file block. Default 4096.
- MINROWS: minimum number of records in a file block. Default 100.
- KEEP: number of days the data files are kept. Default 3650, range [1, 365000]; it must be greater than or equal to the DURATION value. Data whose age exceeds KEEP is deleted automatically. KEEP also accepts a unit, e.g. KEEP 100h or KEEP 10d, with the units m (minutes), h (hours), and d (days); without a unit, as in KEEP 50, the value is in days.
- PAGES: number of cache pages of the metadata storage engine in one vnode. Default 256, minimum 64. The metadata storage of one vnode occupies PAGESIZE * PAGES, which is 1 MB by default.
- PAGESIZE: page size of the metadata storage engine in one vnode, in KB. Default 4 KB; range [1, 16384], i.e. 1 KB to 16 MB.
- PRECISION: timestamp precision of the database: 'ms' for milliseconds, 'us' for microseconds, 'ns' for nanoseconds. Default 'ms'.
- REPLICA: number of database replicas, 1 or 3; default 1. In a cluster, the replica count must be less than or equal to the number of dnodes.
- RETENTIONS: aggregation periods and retention durations of the data. For example, RETENTIONS 15s:7d,1m:21d,15m:50d means that raw data collected every 15 seconds is kept for 7 days, 1-minute aggregates are kept for 21 days, and 15-minute aggregates are kept for 50 days. Currently exactly three retention levels are supported.
- STRICT: consistency requirement of data replication. Default 'off'.
  - on: strong consistency, i.e. the standard raft protocol; a write returns success after a majority commit.
  - off: weak consistency; a write returns success after the local commit.
- WAL: WAL level. Default 1.
  - 1: write the WAL without executing fsync.
  - 2: write the WAL and execute fsync.
- VGROUPS: number of initial vgroups in the database.
- SINGLE_STABLE: whether only one super table can be created in this database, intended for super tables with very many columns.
  - 0: multiple super tables may be created.
  - 1: only one super table may be created.
- WAL_RETENTION_PERIOD: extra retention policy for WAL files, used by data subscription: how long the WAL is kept, in seconds. Default 0, i.e. a WAL file is deleted as soon as its data is flushed to disk; -1 means never delete.
- WAL_RETENTION_SIZE: extra retention policy for WAL files, used by data subscription: upper bound on the retained WAL size, in KB. Default 0, i.e. deleted as soon as flushed; -1 means never delete.
- WAL_ROLL_PERIOD: rotation interval of WAL files, in seconds. After a WAL file has been created and written for this long, a new WAL file is created automatically. Default 0, i.e. a new file is only created on flush.
- WAL_SEGMENT_SIZE: size of a single WAL file, in KB. Once the current file exceeds this size, a new WAL file is created automatically. Default 0, i.e. a new file is only created on flush.
### Database Creation Example

```sql
create database if not exists db vgroups 10 buffer 10
```

The statement above creates a database named db with 10 vgroups, where each vnode is allocated a 10 MB write buffer.

### Use a Database

```
USE db_name;
```

@@ -63,61 +95,42 @@
## Drop a Database

```
DROP DATABASE [IF EXISTS] db_name
```

Drops the database. All tables contained in the database are deleted, and all vgroups of the database are destroyed; use with caution!

## Modify Database Parameters

```sql
ALTER DATABASE db_name [alter_database_options]

alter_database_options:
    alter_database_option ...

alter_database_option: {
    CACHEMODEL {'none' | 'last_row' | 'last_value' | 'both'}
  | CACHESIZE value
  | FSYNC value
  | KEEP value
  | WAL value
}
```

:::note
The other parameters cannot be modified in 3.0.0.0.

:::

## View Databases

### View All Databases in the System

```
SHOW DATABASES;
```

### Show the Creation Statement of a Database

```
SHOW CREATE DATABASE db_name;
```

This is commonly used for database migration. For an existing database it returns its creation statement; executing that statement in another cluster produces a database with identical settings.

### View Database Parameters


@@ -2,13 +2,45 @@
title: Table Management
---

## Create a Table

The `CREATE TABLE` statement creates normal tables and creates child tables using a super table as the template.

```sql
CREATE TABLE [IF NOT EXISTS] [db_name.]tb_name (create_definition [, create_definition] ...) [table_options]

CREATE TABLE create_subtable_clause

CREATE TABLE [IF NOT EXISTS] [db_name.]tb_name (create_definition [, create_definition] ...)
    [TAGS (create_definition [, create_definition] ...)]
    [table_options]

create_subtable_clause: {
    create_subtable_clause [create_subtable_clause] ...
  | [IF NOT EXISTS] [db_name.]tb_name USING [db_name.]stb_name [(tag_name [, tag_name] ...)] TAGS (tag_value [, tag_value] ...)
}

create_definition:
    col_name column_definition

column_definition:
    type_name [comment 'string_value']

table_options:
    table_option ...

table_option: {
    COMMENT 'string_value'
  | WATERMARK duration[,duration]
  | MAX_DELAY duration[,duration]
  | ROLLUP(func_name [, func_name] ...)
  | SMA(col_name [, col_name] ...)
  | TTL value
}
```
**Usage notes**

1. The first column of a table must be of TIMESTAMP type; the system automatically makes it the primary key.
2. The maximum length of a table name is 192 characters.
@@ -18,101 +50,112 @@
6. For compatibility with more table-name forms, TDengine introduces the escape character "\`", which lets a table name avoid clashing with keywords and bypass the validity checks above, though the length limit still applies. Once escape characters are used, the content is no longer case-folded.
   For example, \`aBc\` and \`abc\` are different table names, whereas abc and aBc are the same table name.
   Note that the content inside the escape characters must consist of printable characters. An example follows.
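
To illustrate note 6, a minimal sketch (the table names here are hypothetical):

```sql
-- With escape characters, `aBc` and `abc` are two distinct tables;
-- without them, abc and aBc would refer to the same table.
CREATE TABLE `aBc` (ts TIMESTAMP, v INT);
CREATE TABLE `abc` (ts TIMESTAMP, v INT);
```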
**Parameter description**

1. COMMENT: table comment. Applicable to super tables, child tables, and normal tables.
2. WATERMARK: specifies the closing time of a window. The default is 5 seconds; the minimum unit is milliseconds, the range is 0 to 15 minutes, and multiple values are comma-separated. Applicable only to super tables, and only when the database was created with the RETENTIONS option.
3. MAX_DELAY: controls the maximum delay before pushing computed results. The default is the value of interval (but no more than the maximum); the minimum unit is milliseconds, the range is 1 millisecond to 15 minutes, and multiple values are comma-separated. Note: do not set MAX_DELAY too small, otherwise results are pushed so frequently that storage and query performance suffer; without special needs, keep the default. Applicable only to super tables, and only when the database was created with the RETENTIONS option.
4. ROLLUP: the aggregate function used to produce multi-level downsampled results. Applicable only to super tables, and only when the database was created with the RETENTIONS option. It applies to all columns of the super table except the timestamp column, and only one aggregate function can be specified. The supported aggregate functions are avg, sum, min, max, last, first.
5. SMA: Small Materialized Aggregates, a per-data-block custom pre-computation facility. The pre-computation types are MAX, MIN, and SUM. Applicable to super tables and normal tables.
6. TTL (Time to Live): specifies the lifetime of a table. If no data is written to the table for a continuous TTL period, TDengine automatically deletes the table. The TTL time is approximate: the system does not guarantee deletion exactly on time, only that such a mechanism exists. TTL is in days; the default 0 means unlimited. Note that TTL takes precedence over KEEP: when the TTL deletion condition is met, the table is deleted even if its data is younger than KEEP. Applicable only to child tables and normal tables.
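
A sketch of how these options might combine (the table names are illustrative; as noted above, ROLLUP is valid only in a database created with the RETENTIONS option):

```sql
-- Super table whose coarser retention levels store avg() rollups
CREATE TABLE meters_rollup (ts TIMESTAMP, current FLOAT, voltage INT)
  TAGS (location BINARY(64)) ROLLUP(avg);

-- Normal table with a comment, MAX/MIN/SUM pre-computation on one column,
-- and automatic deletion after roughly 30 days without writes
CREATE TABLE sensor_tmp (ts TIMESTAMP, temperature FLOAT)
  COMMENT 'temporary sensor data' SMA(temperature) TTL 30;
```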
## Create Child Tables

### Create a Child Table

```sql
CREATE TABLE [IF NOT EXISTS] tb_name USING stb_name TAGS (tag_value1, ...);
```

Creates a data table using the specified super table as the template, supplying the TAGS values.

### Create a Child Table Specifying Some Tag Values

```sql
CREATE TABLE [IF NOT EXISTS] tb_name USING stb_name (tag_name1, ...) TAGS (tag_value1, ...);
```

Creates a data table using the specified super table as the template; only part of the TAGS column values may be specified, and the unspecified TAGS columns are set to NULL.
### Create Child Tables in Batch

```sql
CREATE TABLE [IF NOT EXISTS] tb_name1 USING stb_name TAGS (tag_value1, ...) [IF NOT EXISTS] tb_name2 USING stb_name TAGS (tag_value2, ...) ...;
```

Creates a large number of data tables at a faster speed. Batch creation requires the data tables to use a super table as the template. Provided the SQL statement length limit is not exceeded, keeping the number of tables per statement between 1000 and 3000 yields a reasonably ideal creation speed.

## Modify a Normal Table

```sql
ALTER TABLE [db_name.]tb_name alter_table_clause

alter_table_clause: {
    alter_table_options
  | ADD COLUMN col_name column_type
  | DROP COLUMN col_name
  | MODIFY COLUMN col_name column_type
  | RENAME COLUMN old_col_name new_col_name
}

alter_table_options:
    alter_table_option ...

alter_table_option: {
    TTL value
  | COMMENT 'string_value'
}
```
**Usage notes**

The following modifications can be performed on a normal table:

1. ADD COLUMN: add a column.
2. DROP COLUMN: drop a column.
3. MODIFY COLUMN: modify the column definition; if the column has a variable-length type, this statement can be used to increase its width (it can only grow, never shrink).
4. RENAME COLUMN: rename a column.
### Add a Column

```sql
ALTER TABLE tb_name ADD COLUMN field_name data_type;
```
### Drop a Column

```sql
ALTER TABLE tb_name DROP COLUMN field_name;
```
### Modify the Column Width

```sql
ALTER TABLE tb_name MODIFY COLUMN field_name data_type(length);
```
### Rename a Column

```sql
ALTER TABLE tb_name RENAME COLUMN old_col_name new_col_name
```
## Modify a Child Table

```sql
ALTER TABLE [db_name.]tb_name alter_table_clause

alter_table_clause: {
    alter_table_options
  | SET TAG tag_name = new_tag_value
}

alter_table_options:
    alter_table_option ...

alter_table_option: {
    TTL value
  | COMMENT 'string_value'
}
```

**Usage notes**

1. Apart from changing tag values, all modifications to the columns and tags of a child table must be performed through its super table.
### Change the Tag Value of a Child Table

```
ALTER TABLE tb_name SET TAG tag_name=new_tag_value;
```

If a table was created from a super table, this statement can be used to change its tag values.
## Drop Tables

One or more normal tables or child tables can be dropped in a single SQL statement.

```sql
DROP TABLE [IF EXISTS] [db_name.]tb_name [, [IF EXISTS] [db_name.]tb_name] ...
```

## View Table Information

### Show All Tables

The following SQL statement lists all table names in the current database.

```sql
SHOW TABLES [LIKE tb_name_wildchar];
```

### Show the Creation Statement of a Table

```
SHOW CREATE TABLE tb_name;
```

This is commonly used for database migration. For an existing table it returns its creation statement; executing that statement in another cluster produces a table with an identical structure.

### Get the Table Structure

```
DESCRIBE tb_name;
```


@@ -3,38 +3,31 @@
sidebar_label: Super Table Management
title: Super Table (STable) Management
---

## Create a Super Table

```sql
CREATE STABLE [IF NOT EXISTS] stb_name (create_definition [, create_definition] ...) TAGS (create_definition [, create_definition] ...) [table_options]

create_definition:
    col_name column_definition

column_definition:
    type_name [COMMENT 'string_value']
```

Creating an STable uses SQL syntax similar to creating a table, but the names and types of the TAGS fields must also be specified.
**Usage notes**

- The maximum number of columns in a super table is 4096, and this includes the TAG columns; the minimum is 3: a timestamp primary key, one TAG column, and one data column.
- Comments can be attached to columns or tags at creation time.
- The TAGS syntax specifies the tag columns of the super table, which must obey the following conventions:
  - A TIMESTAMP column in TAGS requires an explicit value when data is written; arithmetic expressions such as NOW + 10s are not yet supported.
  - TAGS column names must not duplicate other column names.
  - TAGS column names must not be reserved keywords.
  - TAGS allows at most 128 tags and at least 1, with a total length of no more than 16 KB.
- For details of the table options, see the description under CREATE TABLE.
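
A concrete example following this syntax, using the smart-meter schema that appears throughout these docs:

```sql
-- One timestamp primary key, three data columns, and two tag columns
CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT)
  TAGS (location BINARY(64), groupId INT);
```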
## View Super Tables
### Show All Super Table Information in the Current Database

```
SHOW STABLES [LIKE tb_name_wildcard];
```

Lists all STables in the database and related information, including the STable name, creation time, number of columns, number of tags, and number of tables created from the STable.

### Show the Creation Statement of a Super Table

```
SHOW CREATE STABLE stb_name;
```

This is commonly used for database migration. For an existing super table it returns its creation statement; executing that statement in another cluster produces a super table with an identical structure.

### Get the Structure of a Super Table

```
DESCRIBE stb_name;
```
## Drop a Super Table

```
DROP STABLE [IF EXISTS] [db_name.]stb_name
```

Dropping an STable automatically drops the child tables created from it, together with all data in those child tables.
## Modify a Super Table

```sql
ALTER STABLE [db_name.]tb_name alter_table_clause

alter_table_clause: {
    alter_table_options
  | ADD COLUMN col_name column_type
  | DROP COLUMN col_name
  | MODIFY COLUMN col_name column_type
  | ADD TAG tag_name tag_type
  | DROP TAG tag_name
  | MODIFY TAG tag_name tag_type
  | RENAME TAG old_tag_name new_tag_name
}

alter_table_options:
    alter_table_option ...

alter_table_option: {
    COMMENT 'string_value'
}
```

**Usage notes**

Modifying the structure of a super table takes effect on all of its child tables; the structure cannot be changed for one particular child table. Tag-structure changes must be issued against the super table, and TDengine automatically applies them to all child tables of that super table.

- ADD COLUMN: add a column.
- DROP COLUMN: drop a column.
- MODIFY COLUMN: modify the column definition; if the column has a variable-length type, this statement can be used to increase its width (it can only grow, never shrink).
- ADD TAG: add a tag to the super table.
- DROP TAG: drop a tag from the super table. When a tag is dropped from the super table, it is automatically dropped from all its child tables as well.
- MODIFY TAG: modify the definition of a tag of the super table; if the tag has a variable-length type, this statement can be used to increase its width (it can only grow, never shrink).
- RENAME TAG: rename a tag of the super table. When a tag is renamed on the super table, all its child tables automatically pick up the new tag name.
### Add a Column

```
ALTER STABLE stb_name ADD COLUMN col_name column_type;
```
### Drop a Column

```
ALTER STABLE stb_name DROP COLUMN col_name;
```
### Modify the Column Width

```
ALTER STABLE stb_name MODIFY COLUMN col_name data_type(length);
```

If the column has a variable-length type (BINARY or NCHAR), this statement can be used to increase its width (it can only grow, never shrink).
### Add a Tag

```
ALTER STABLE stb_name ADD TAG tag_name tag_type;
```

Adds a new tag to the STable and specifies its type. The total number of tags must not exceed 128, with a total length of no more than 16 KB.
@@ -99,7 +133,7 @@ ALTER STABLE stb_name DROP TAG tag_name;
### Rename a Tag

```
ALTER STABLE stb_name RENAME TAG old_tag_name new_tag_name;
```

Renames a tag of the super table. When a tag is renamed on the super table, all its child tables automatically pick up the new tag name.


@@ -5,7 +5,7 @@
title: Data Ingestion

## Insert Syntax

```sql
INSERT INTO
    tb_name
    [USING stb_name [(tag1_name, ...)] TAGS (tag1_value, ...)]
```

@@ -18,46 +18,64 @@

```sql
    ...];
```
**About timestamps**

1. TDengine requires every inserted record to carry a timestamp. Note the following points about insert timestamps:
2. Different timestamp formats are affected differently by precision. The string format is not affected by the time precision of the containing DATABASE, whereas the long-integer format is. For example, the UNIX time of "2021-07-13 16:16:48" in seconds is 1626164208; with millisecond precision it must be written as 1626164208000, with microsecond precision as 1626164208000000, and with nanosecond precision as 1626164208000000000.
3. When inserting multiple rows at once, do not set the first-column timestamp of every row to NOW. Otherwise multiple records in the statement share the same timestamp and may overwrite one another, so that not all rows can be stored correctly. The reason is that NOW is resolved to the client-side execution time of the SQL statement, and every NOW mark in the same statement is replaced with exactly the same timestamp value.
4. The oldest permissible record timestamp is the current server time minus the configured KEEP value (the number of days data is retained). The newest permissible record timestamp is the current server time plus the configured DURATION value (the time span of one data file, in days). KEEP and DURATION can both be specified when the database is created; the defaults are 3650 days and 10 days respectively.

**Syntax notes**

1. The USING clause is the auto-create-table syntax. If the writer is not sure whether a table exists, the missing table can be created automatically while writing by using this syntax; if the table already exists, no new table is created. Auto-creation requires a super table as the template together with the TAGS values of the data table. It is allowed to specify only some of the TAGS column values; the unspecified TAGS columns are set to NULL.
2. The columns to insert into can be specified; columns not specified are filled with NULL by the database.
3. The VALUES syntax supplies one or more rows to insert.
4. The FILE syntax reads the data from a CSV file (comma-separated, every value enclosed in single quotes; the CSV file needs no header).
5. Either syntax can insert data into multiple tables within a single INSERT statement.
6. An INSERT statement is fully parsed before execution, so for a statement like the following it can no longer happen that the data write fails while the table creation succeeds:
```sql
INSERT INTO d1001 USING meters TAGS('Beijing.Chaoyang', 2) VALUES('a');
```
7. When inserting into multiple child tables, it is still possible for some data to fail while other data is written successfully. This is because the child tables may reside on different vnodes: the client fully parses the INSERT statement and sends the data to every vnode involved, and each vnode performs its write independently. If one vnode fails to write (for example due to a network problem or disk failure), writes on the other vnodes are unaffected.
## Insert a Single Record

Specify the name of an existing child table and supply one or more rows with the VALUES keyword to write the data. For example, the following statement writes one row:

```sql
INSERT INTO d1001 VALUES (NOW, 10.2, 219, 0.32);
```

## Insert Multiple Records

Alternatively, two rows can be written with the following statement:

```sql
INSERT INTO d1001 VALUES ('2021-07-13 14:06:32.272', 10.2, 219, 0.32) (1626164208000, 10.15, 217, 0.33);
```
## Insert into Specified Columns

Whether inserting one row or many into a child table, the data can be mapped to specified columns. Columns that do not appear in the SQL statement are filled with NULL by the database. The primary-key timestamp cannot be NULL. For example:

```sql
INSERT INTO d1001 (ts, current, phase) VALUES ('2021-07-13 14:06:33.196', 10.27, 0.31);
```

:::info
If no columns are specified, i.e. in all-column mode, the VALUES part must explicitly provide data for every column of the table. All-column writes are much faster than specified-column writes, so all-column mode is recommended wherever possible; empty columns can then be filled with NULL.

:::
## Insert into Multiple Tables

A single statement can insert one or more records into several tables, and columns can also be specified along the way. For example:

```sql
INSERT INTO d1001 VALUES ('2021-07-13 14:06:34.630', 10.2, 219, 0.32) ('2021-07-13 14:06:35.779', 10.15, 217, 0.33)
            d1002 (ts, current, phase) VALUES ('2021-07-13 14:06:34.255', 10.27, 0.31);
```
@@ -66,28 +84,24 @@
## Automatic Table Creation on Insert

If the writer is not sure whether a table exists, the missing table can be created automatically while writing by using the auto-create syntax; if the table already exists, no new table is created. Auto-creation requires a super table as the template together with the TAGS values of the data table. For example:

```sql
INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) VALUES ('2021-07-13 14:06:32.272', 10.2, 219, 0.32);
```

It is also possible to specify only some of the TAGS column values during auto-creation; the unspecified TAGS columns are set to NULL. For example:

```sql
INSERT INTO d21001 USING meters (groupId) TAGS (2) VALUES ('2021-07-13 14:06:33.196', 10.15, 217, 0.33);
```

The auto-create syntax also supports inserting into multiple tables in one statement. For example:

```sql
INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) VALUES ('2021-07-13 14:06:34.630', 10.2, 219, 0.32) ('2021-07-13 14:06:35.779', 10.15, 217, 0.33)
            d21002 USING meters (groupId) TAGS (2) VALUES ('2021-07-13 14:06:34.255', 10.15, 217, 0.33)
            d21003 USING meters (groupId) TAGS (2) (ts, current, phase) VALUES ('2021-07-13 14:06:34.255', 10.27, 0.31);
```
## Insert Records from a File

Besides supplying data with the VALUES keyword, the data to write can be placed in a CSV file (comma-separated, every value enclosed in single quotes, no header) for the SQL statement to read. For example, if the file /tmp/csvfile.csv contains:

@@ -99,51 +113,19 @@

the data in the file can be written into a child table with the following statement:

```sql
INSERT INTO d1001 FILE '/tmp/csvfile.csv';
```
## Insert Records from a File with Automatic Table Creation

Data inserted from a CSV file can also use a super table as the template to automatically create the missing tables. For example:

```sql
INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) FILE '/tmp/csvfile.csv';
```

Records can also be inserted into multiple tables in one statement with automatic table creation. For example:

```sql
INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) FILE '/tmp/csvfile_21001.csv'
            d21002 USING meters (groupId) TAGS (2) FILE '/tmp/csvfile_21002.csv';
```

View File

@@ -5,121 +5,118 @@
title: Data Querying

## Query Syntax

```sql
SELECT {DATABASE() | CLIENT_VERSION() | SERVER_VERSION() | SERVER_STATUS() | NOW() | TODAY() | TIMEZONE()}

SELECT [DISTINCT] select_list
    from_clause
    [WHERE condition]
    [PARTITION BY tag_list]
    [window_clause]
    [group_by_clause]
    [order_by_clause]
    [SLIMIT limit_val [SOFFSET offset_val]]
    [LIMIT limit_val [OFFSET offset_val]]
    [>> export_file]

select_list:
    select_expr [, select_expr] ...

select_expr: {
    *
  | query_name.*
  | [schema_name.] {table_name | view_name} .*
  | t_alias.*
  | expr [[AS] c_alias]
}

from_clause: {
    table_reference [, table_reference] ...
  | join_clause [, join_clause] ...
}

table_reference:
    table_expr t_alias

table_expr: {
    table_name
  | view_name
  | ( subquery )
}

join_clause:
    table_reference [INNER] JOIN table_reference ON condition

window_clause: {
    SESSION(ts_col, tol_val)
  | STATE_WINDOW(col)
  | INTERVAL(interval_val [, interval_offset]) [SLIDING (sliding_val)] [WATERMARK(watermark_val)] [FILL(fill_mod_and_val)]
}

changes_option: {
    DURATION duration_val
  | ROWS rows_val
}

group_by_clause:
    GROUP BY expr [, expr] ... HAVING condition

order_by_clause:
    ORDER BY order_expr [, order_expr] ...

order_expr:
    {expr | position | c_alias} [DESC | ASC] [NULLS FIRST | NULLS LAST]
```
## Select List

A query can return some or all of the columns as the result. Both data columns and tag columns can appear in the select list.

### Wildcard

The wildcard \* stands for all columns. For normal tables the result contains only the normal columns; for super tables and child tables it also includes the TAG columns.
```
taos> SELECT * FROM meters;
ts | current | voltage | phase | location | groupid |
=====================================================================================================================================
2018-10-03 14:38:05.500 | 11.80000 | 221 | 0.28000 | California.LosAngeles | 2 |
2018-10-03 14:38:16.600 | 13.40000 | 223 | 0.29000 | California.LosAngeles | 2 |
2018-10-03 14:38:05.000 | 10.80000 | 223 | 0.29000 | California.LosAngeles | 3 |
2018-10-03 14:38:06.500 | 11.50000 | 221 | 0.35000 | California.LosAngeles | 3 |
2018-10-03 14:38:04.000 | 10.20000 | 220 | 0.23000 | California.SanFrancisco | 3 |
2018-10-03 14:38:16.650 | 10.30000 | 218 | 0.25000 | California.SanFrancisco | 3 |
2018-10-03 14:38:05.000 | 10.30000 | 219 | 0.31000 | California.SanFrancisco | 2 |
2018-10-03 14:38:15.000 | 12.60000 | 218 | 0.33000 | California.SanFrancisco | 2 |
2018-10-03 14:38:16.800 | 12.30000 | 221 | 0.31000 | California.SanFrancisco | 2 |
Query OK, 9 row(s) in set (0.002022s)
```sql
SELECT * FROM d1001;
```
通配符支持表名前缀,以下两个 SQL 语句均为返回全部的列:
```
```sql
SELECT * FROM d1001;
SELECT d1001.* FROM d1001;
```
在 JOIN 查询中,带前缀的\*和不带前缀\*返回的结果有差别, \*返回全部表的所有列数据(不包含标签),带前缀的通配符,则只返回该表的列数据。
在 JOIN 查询中,带表名前缀的\*和不带前缀\*返回的结果有差别, \*返回全部表的所有列数据(不包含标签),而带表名前缀的通配符,则只返回该表的列数据。
```
taos> SELECT * FROM d1001, d1003 WHERE d1001.ts=d1003.ts;
ts | current | voltage | phase | ts | current | voltage | phase |
==================================================================================================================================
2018-10-03 14:38:05.000 | 10.30000| 219 | 0.31000 | 2018-10-03 14:38:05.000 | 10.80000| 223 | 0.29000 |
Query OK, 1 row(s) in set (0.017385s)
```sql
SELECT * FROM d1001, d1003 WHERE d1001.ts=d1003.ts;
SELECT d1001.* FROM d1001,d1003 WHERE d1001.ts = d1003.ts;
```
```
taos> SELECT d1001.* FROM d1001,d1003 WHERE d1001.ts = d1003.ts;
ts | current | voltage | phase |
======================================================================================
2018-10-03 14:38:05.000 | 10.30000 | 219 | 0.31000 |
Query OK, 1 row(s) in set (0.020443s)
```
上面的查询语句中,前者返回 d1001 和 d1003 的全部列,而后者仅返回 d1001 的全部列。
Some SQL functions accept wildcard arguments, with the following difference among them: `count(*)` returns a single column, whereas `first`, `last`, and `last_row` return all columns.

### Tag Columns

Queries on super tables and child tables can select _tag columns_; the tag values are returned together with the data of the normal columns.

```sql
SELECT location, groupid, current FROM d1001 LIMIT 2;
```
### Deduplication

The `DISTINCT` keyword deduplicates one or more columns in the result set; the deduplicated columns can be tag columns or data columns.

Deduplicate tag columns:

```sql
SELECT DISTINCT tag_name [, tag_name ...] FROM stb_name;
```

Deduplicate data columns:

```sql
SELECT DISTINCT col_name [, col_name ...] FROM tb_name;
```
@@ -133,210 +130,162 @@
:::

### Result Set Column Names

In the `SELECT` clause, if no names are specified for the result-set columns, they default to the expression names of the `SELECT` clause. `AS` can be used to rename columns of the result set. For example:

```sql
taos> SELECT ts, ts AS primary_key_ts FROM d1001;
           ts            |     primary_key_ts      |
====================================================
 2018-10-03 14:38:05.000 | 2018-10-03 14:38:05.000 |
 2018-10-03 14:38:15.000 | 2018-10-03 14:38:15.000 |
 2018-10-03 14:38:16.800 | 2018-10-03 14:38:16.800 |
Query OK, 3 row(s) in set (0.001191s)
```

However, `first(*)`, `last(*)`, and `last_row(*)` do not support renaming of individual columns.

### Implicit Result Columns

`Select_exprs` can be a column name of the table, or an expression or computation based on columns, with an upper limit of 256 expressions. When an `interval` or `group by tags` clause is used, the final result is forced to include the timestamp column (as the first column) and the tag columns of the group by clause. Later versions may allow turning off the implicit group-by columns, so that column output is fully controlled by the select clause.
### Pseudo-columns

**TBNAME**

`TBNAME` can be regarded as a special tag of a super table, representing the name of a child table.

Get all child-table names of a super table together with related tag information:

```mysql
SELECT TBNAME, location FROM meters;
```

Count the child tables under a super table:

```mysql
SELECT COUNT(*) FROM (SELECT DISTINCT TBNAME FROM meters);
```

Both queries support only tag (TAGS) filter conditions in the WHERE clause.

**\_QSTART/\_QEND**

\_qstart and \_qend represent the query time range entered by the user, i.e. the time range constrained by the primary-key timestamp conditions of the WHERE clause. If the WHERE clause contains no valid primary-key timestamp condition, the range is [-2^63, 2^63-1].

\_qstart and \_qend cannot be used in the WHERE clause.

**\_WSTART/\_WEND/\_WDURATION**

The pseudo-columns \_wstart, \_wend, and \_wduration represent, respectively, the window start timestamp, the window end timestamp, and the window duration.

These three pseudo-columns can only be used in windowed (time-window sliced) queries, and must appear after the window-slicing clause.
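
A sketch of these pseudo-columns in a windowed query (assuming the meters super table used in the earlier examples):

```sql
-- One output row per 10-minute window, labeled with the window boundaries and length
SELECT _wstart, _wend, _wduration, AVG(current) FROM meters INTERVAL(10m);
```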
**\_c0/\_ROWTS**

In TDengine the first column of every table must be of timestamp type, and it is the primary key. The pseudo-columns \_rowts and \_c0 both refer to the value of this column. Compared with the literal primary-key timestamp column, the pseudo-columns are more flexible and semantically more standard; for example, they can be used together with functions such as max and min.

```sql
select _rowts, max(current) from meters;
```
## Query Objects

The FROM keyword can be followed by a list of tables (super tables) or by the result of a subquery.

If the user's current database is not specified, the database name can be prefixed to the table name to indicate which database a table belongs to, e.g. `power.d1001` for cross-database access.

TDengine supports INNER JOIN based on the primary-key timestamp, with the following rules:

**Usage notes:**

1. Both the FROM table-list syntax and the explicit JOIN clause syntax are supported.
2. For normal tables and child tables, the ON condition must consist of, and only of, the equality of the primary-key timestamps.
3. For super tables, the ON condition requires, besides the primary-key timestamp equality, tag-column equality conditions that can be matched one to one; OR conditions are not supported.
4. All tables participating in a JOIN must be of the same kind: all super tables, all child tables, or all normal tables.
5. Both sides of a JOIN support subqueries.
6. At most 10 tables can participate in one JOIN.
7. JOIN cannot be mixed with the FILL clause.
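
A sketch of these rules (d1001 and d1003 are child tables from the earlier examples; meters_backup is a hypothetical second super table with the same tags):

```sql
-- Normal/child tables: the ON condition is exactly the timestamp equality (rule 2)
SELECT a.ts, a.current, b.voltage FROM d1001 a JOIN d1003 b ON a.ts = b.ts;

-- Super tables: timestamp equality plus one-to-one tag equality (rule 3)
SELECT a.ts, a.current, b.current FROM meters a JOIN meters_backup b
  ON a.ts = b.ts AND a.groupId = b.groupId;
```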
## GROUP BY

If a GROUP BY clause is present, the SELECT list may contain only the following expressions:

1. Constants.
2. Aggregate functions.
3. Expressions identical to an expression after GROUP BY.
4. Expressions containing the above expressions.

The GROUP BY clause groups rows by the values of the expressions after GROUP BY and returns one summary row per group.

The expressions in a GROUP BY clause may include any column of the table or view; these columns do not need to appear in the SELECT list.

The clause groups rows but does not guarantee the order of the result set. To order the groups, use an ORDER BY clause.
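
A minimal sketch against the meters table used in earlier examples; the select list holds aggregate functions plus the grouped expression, per the rules above:

```sql
SELECT groupId, COUNT(*), AVG(voltage) FROM meters GROUP BY groupId;
```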
## PARTITION BY

The PARTITION BY clause is TDengine-specific syntax that slices the data by part_list and performs the computation within each slice.

See [TDengine distinguished queries](taos-sql/distinguished) for details.

## ORDER BY

The ORDER BY clause sorts the result set. Without ORDER BY, the order of the result set returned by the same statement across multiple runs is not guaranteed to be consistent.

ORDER BY accepts positional syntax: a position is a positive integer, starting from 1, that refers to the n-th expression of the SELECT list.

ASC means ascending order, DESC descending order.

The NULLS syntax specifies where NULL values are placed in the ordering. NULLS LAST is the default for ascending order, and NULLS FIRST is the default for descending order.
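
A short sketch of these forms (d1001 as in the earlier examples; position 2 refers to the second select-list expression, current):

```sql
SELECT ts, current FROM d1001 ORDER BY current DESC NULLS LAST;
SELECT ts, current FROM d1001 ORDER BY 2 DESC;
```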
## LIMIT

LIMIT controls the number of output rows, and OFFSET specifies after which row the output begins. LIMIT/OFFSET is applied to the result set after ORDER BY. `LIMIT 5 OFFSET 2` can be shortened to `LIMIT 2, 5`; both output rows 3 through 7.

With a PARTITION BY clause, LIMIT controls the output within each slice rather than the total result-set output.

## SLIMIT

SLIMIT is used together with the PARTITION BY clause to control the number of slices that are output. `SLIMIT 5 SOFFSET 2` can be shortened to `SLIMIT 2, 5`; both output the 3rd through the 7th slice.

Note that with an ORDER BY clause, only one slice is output.
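
A sketch combining the two (assuming the meters super table from earlier examples): SLIMIT bounds how many tbname slices are returned, and LIMIT bounds the rows within each slice.

```sql
SELECT AVG(current) FROM meters PARTITION BY tbname INTERVAL(1m) SLIMIT 3 LIMIT 5;
```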
## Special Functions

Some special query functions can be executed without a FROM clause.

### Get the Current Database

The following command returns the current database, database(). If no default database was specified at login and the `USE` command has not been used to switch databases, NULL is returned.

```sql
SELECT DATABASE();
```

### Get the Server and Client Versions

```sql
SELECT CLIENT_VERSION();
SELECT SERVER_VERSION();
```

### Get the Server Status

A server-status probe statement. If the server is healthy, it returns a number (e.g. 1); if the server is unhealthy, it returns an error code. This SQL syntax is compatible with connection pools checking TDengine's status and with third-party tools checking the database server's status, and it avoids the lost-connection problems caused by using a wrong heartbeat-probe SQL statement in a connection pool.

```sql
SELECT SERVER_STATUS();
```

### Get the Current Time

```sql
SELECT NOW();
```

### Get the Current Date

```sql
SELECT TODAY();
```

### Get the Current Time Zone

```sql
SELECT TIMEZONE();
```
## Regular-Expression Filtering
@@ -358,7 +307,7 @@ WHERE (column|tbname) **match/MATCH/nmatch/NMATCH** _regex_
## JOIN Clause

TDengine supports natural joins between normal tables, between super tables, and between subqueries. The main difference between a natural join and an inner join is that a natural join requires the joined fields in the different tables/super tables to have identical names; that is, TDengine expresses the join relation through equality of identically named data columns or tag columns.

In a JOIN between normal tables, only equality of the primary-key timestamps can be used. For example:

@@ -429,7 +378,7 @@ UNION ALL SELECT ...

TDengine supports the UNION ALL operator: if several SELECT clauses return result sets with exactly the same structure (column names, column types, column count, order), they can be combined with UNION ALL. Only UNION ALL is supported for now, i.e. no deduplication occurs during the merge. At most 100 UNION ALLs are supported in one SQL statement.

## SQL Examples

For the examples below, table tb1 is created with the following statement:


@@ -5,8 +5,6 @@
title: "Delete Data"
---

Deleting data is a TDengine feature for removing records from a specified table or super table within a specified time range; it makes it convenient to clean up abnormal data produced by, for example, device failures.

**Syntax:**

@@ -17,21 +15,21 @@

```sql
DELETE FROM [ db_name. ] tb_name [WHERE condition];
```

**Function:** delete data records from the specified table or super table.

**Parameters:**

- `db_name`: optional; the database of the table to delete from. If omitted, the current database is used.
- `tb_name`: required; the table to delete data from. It may be a normal table, a child table, or a super table.
- `condition`: optional; the filter condition for the data to delete. Without a filter condition, all data in the table is deleted: use with care. Note in particular that the where condition here supports only filters on the first (timestamp) column.

**Important:**

Deleted data cannot be recovered, so use this feature with caution. To make sure the data you delete really is the data you intend to delete, it is recommended to first run a `select` with the same `where` condition to inspect the data to be deleted, and to execute `delete` only after confirming it is correct.
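
As a sketch of that workflow, a count with the same condition as the example below previews how many rows the delete would remove:

```sql
SELECT COUNT(*) FROM meters WHERE ts < '2021-10-01 10:40:00.100';
```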
**Example:**

`meters` is a super table and `groupid` is an int tag column. To delete all data in the `meters` table with timestamps earlier than 2021-10-01 10:40:00.100, the SQL is:

```sql
delete from meters where ts < '2021-10-01 10:40:00.100' ;
```

The output after execution is:


@@ -12,16 +12,16 @@
TDengine's distinguished queries include partitioned (sliced) queries and windowed queries.

In super-table queries, when the data needs to be sliced by tags and a series of computations then performed within each slice, the partitioning clause is used. Its form is:

```sql
PARTITION BY part_list
```

part_list can be any scalar expression, including columns, constants, scalar functions, and combinations of them.

When PARTITION BY is used together with tags, TDengine processes the partitioning clause as follows:

- The partitioning clause comes after the `WHERE` clause and cannot be used together with a `JOIN` clause.
- The partitioning clause slices the super-table data by the specified tag combination, and the specified computation is then performed on each slice. The computation is defined by the clause that follows (a window clause, a `GROUP BY` clause, or a `SELECT` clause).
- The partitioning clause can be used together with a window-slicing clause (or a `GROUP BY` clause), in which case the subsequent clause applies to each slice. For example, the following groups the data by the tag `location` and downsamples each group to 10-minute windows, taking the maximum value.

```sql
select max(current) from meters partition by location interval(10m)


@@ -0,0 +1,122 @@
---
sidebar_label: Stream Processing
title: Stream Processing
---

When processing time-series data, it is common to clean and preprocess the raw data before storing it long-term in a time-series database. Users traditionally had to deploy stream-processing engines such as Kafka, Flink, or Spark alongside the database, which adds development and maintenance cost.

The stream-processing engine of TDengine 3.0 minimizes the dependence on such extra middleware: it truly unifies data ingestion, preprocessing, long-term storage, complex analytics, real-time computation, and real-time alert triggering, and all of these tasks are expressed in SQL, greatly lowering the learning and usage cost.

## Create a Stream

```sql
CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name AS subquery

stream_options: {
    TRIGGER    [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time]
    WATERMARK  time
}
```

where subquery is a subset of the ordinary SELECT query syntax:

```sql
subquery: SELECT [DISTINCT] select_list
    from_clause
    [WHERE condition]
    [PARTITION BY tag_list]
    [window_clause]
    [group_by_clause]
```

ORDER BY, LIMIT, SLIMIT, and FILL clauses are not supported.

For example, the following statement creates a stream and automatically creates a super table named avg_vol. Using a one-minute time window advancing every 30 seconds, the stream computes the average voltage of the meters and writes the results computed from the data of the meters table into avg_vol; data from different partitions is written into separate, automatically created child tables.

```sql
CREATE STREAM avg_vol_s INTO avg_vol AS
SELECT _wstartts, count(*), avg(voltage) FROM meters PARTITION BY tbname INTERVAL(1m) SLIDING(30s);
```
## Drop a Stream

```sql
DROP STREAM [IF NOT EXISTS] stream_name
```

This only removes the stream-processing task; data already written by the stream is not deleted.

## Show Streams

```sql
SHOW STREAMS;
```

## Trigger Modes

When creating a stream, the TRIGGER option specifies the trigger mode of the stream computation.

For non-windowed computation, triggering is real-time. For windowed computation there are currently three trigger modes:

1. AT_ONCE: triggered as soon as data is written.
2. WINDOW_CLOSE: triggered when the window closes (window closing is decided by the event time and can be combined with watermark; see "Tolerance Policy for Out-of-Order Data" below).
3. MAX_DELAY time: computation is triggered when the window closes; if the window has not closed, and the time it has remained open exceeds the time specified by max delay, computation is triggered as well.

Because window closing is driven by event time, if the event stream stops or is persistently delayed, the event time cannot advance and the latest computation results may never be produced.

For this reason, stream processing provides the MAX_DELAY trigger mode, which combines event time with processing time: MAX_DELAY triggers computation immediately when the window closes, and additionally, once data has been written, computation is triggered as soon as the elapsed time exceeds the time specified by max delay.

## Tolerance Policy for Out-of-Order Data

When creating a stream, a watermark can be specified in stream_option.

Stream processing uses the watermark to measure its tolerance for out-of-order data; the watermark defaults to 0.

T = latest event time - watermark

Each arriving batch of data updates the window-closing time using the formula above, and every open window whose end time < T is closed; if the trigger mode is WINDOW_CLOSE or MAX_DELAY, the window's aggregate result is pushed.
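
A sketch of the watermark in use (meters as in earlier examples; the stream and target-table names are illustrative). With WATERMARK 10s, a one-minute window such as [10:00:00, 10:01:00) is closed, and its result pushed, once an event with timestamp 10:01:10 or later arrives:

```sql
-- Results are pushed only when a window closes, 10 s of event time after its end
CREATE STREAM vol_1m TRIGGER WINDOW_CLOSE WATERMARK 10s INTO vol_1m_tbl AS
SELECT _wstartts, MAX(voltage) FROM meters PARTITION BY tbname INTERVAL(1m);
```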
## Handling of Expired Data

Data that falls into a window that has already been closed is marked as expired. Stream processing offers two ways of handling expired data:

1. Drop it directly: the default (and often the only) mode offered by common stream-processing engines.
2. Recompute: re-read all data of the corresponding window from the TSDB and recompute, obtaining the latest result.

In either mode the watermark should be set appropriately, to obtain correct results (direct-drop mode) or to avoid the performance overhead of frequently triggered recomputation (recompute mode).

## Fill Policy

TODO

## Streams and Session Windows

```sql
window_clause: {
    SESSION(ts_col, tol_val)
  | STATE_WINDOW(col)
  | INTERVAL(interval_val [, interval_offset]) [SLIDING (sliding_val)] [FILL(fill_mod_and_val)]
}
```

SESSION is the session window, and tol_val is the maximum gap of the time interval. Data within a gap of tol_val belongs to the same window; if the gap between two consecutive records exceeds tol_val, the next window starts automatically.
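
A sketch of a session window inside a stream (meters as in earlier examples; the stream and target names are illustrative): rows of one child table whose timestamps are no more than 12 seconds apart fall into the same window.

```sql
CREATE STREAM sess_s INTO sess_summary AS
SELECT _wstartts, COUNT(*), SUM(current) FROM meters PARTITION BY tbname SESSION(ts, 12s);
```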
## Monitoring Streams and Querying Task Distribution

TODO

## Memory Control and Separation of Storage and Compute

TODO

## Pause and Resume a Stream

```sql
STOP STREAM stream_name;

RESUME STREAM stream_name;
```


@@ -1,5 +1,5 @@
---
sidebar_label: Reserved Keywords
title: TDengine Reserved Keywords
---
@@ -58,70 +58,70 @@
### D

- DATABASE
- DATABASES
- DAYS
- DBS
- DEFERRED
- DELETE
- DELIMITERS
- DESC
- DESCRIBE
- DETACH
- DISTINCT
- DIVIDE
- DNODE
- DNODES
- DOT
- DOUBLE
- DROP

### E

- END
- EQ
- EXISTS
- EXPLAIN

### F

- FAIL
- FILE
- FILL
- FLOAT
- FOR
- FROM
- FSYNC

### G

- GE
- GLOB
- GRANTS
- GROUP
- GT

### H

- HAVING

### I

- ID
- IF
- IGNORE
- IMMEDIA
- IMPORT
- IN
- INITIAL
- INSERT
- INSTEAD
- INT
- INTEGER
- INTERVA
- INTO
- IS
- ISNULL

### J
@@ -130,190 +130,147 @@
### K
- KEEP
- KEY
- KEY
- KILL
### L
- LE
- LIKE
- LIMIT
- LE
- LIKE
- LIMIT
- LINEAR
- LOCAL
- LP
- LOCAL
- LP
- LSHIFT
- LT
- LT
### M
- MATCH
- MAXROWS
- MINROWS
- MINUS
- MNODES
- MODIFY
- MODULES
- MATCH
- MAXROWS
- MINROWS
- MINUS
- MNODES
- MODIFY
- MODULES
### N
- NE
- NONE
- NOT
- NE
- NONE
- NOT
- NOTNULL
- NOW
- NOW
- NULL
### O
- OF
- OF
- OFFSET
- OR
- ORDER
- OR
- ORDER
### P
- PARTITION
- PASS
- PLUS
- PPS
- PASS
- PLUS
- PPS
- PRECISION
- PREV
- PREV
- PRIVILEGE
### Q
- QTIME
- QTIME
- QUERIE
- QUERY
- QUERY
- QUORUM
### R
- RAISE
- REM
- RAISE
- REM
- REPLACE
- REPLICA
- RESET
- RESTRIC
- ROW
- RP
- RSHIFT
### S
- SCORES
- SELECT
- SEMI
- SESSION
- SET
- SHOW
- SLASH
- SLIDING
- SLIMIT
- SMALLIN
- SOFFSET
- STable
- STableS
- STAR
- STATE
- STATEMEN
- STATE_WI
- STORAGE
- STREAM
- STREAMS
- STRING
- SYNCDB
### T
- TABLE
- TABLES
- TAG
- TAGS
- TBNAME
- TIMES
- TIMESTAMP
- TINYINT
- TOPIC
- TOPICS
- TRIGGER
- TSERIES
### U
- UMINUS
- UNION
- UNSIGNED
- UPDATE
- UPLUS
- USE
- USER
- USERS
- USING
### V
- VALUES
- VARIABLE
- VARIABLES
- VGROUPS
- VIEW
- VNODES
### W
- WAL
- WHERE
### \_
- \_C0
- \_QSTART
- \_QSTOP
- \_QDURATION
- \_WSTART
- \_WSTOP
- \_WDURATION
## Special Notes
### TBNAME
`TBNAME` can be regarded as a special tag of a supertable, representing the name of a subtable.
To get the names of all subtables of a supertable together with their tag values:
```mysql
SELECT TBNAME, location FROM meters;
```
To count the number of subtables under a supertable:
```mysql
SELECT COUNT(TBNAME) FROM meters;
```
Both of the above queries only support filter conditions on tag (TAG) columns in the WHERE clause. For example:
```mysql
taos> SELECT TBNAME, location FROM meters;
tbname | location |
==================================================================
d1004 | California.SanFrancisco |
d1003 | California.SanFrancisco |
d1002 | California.LosAngeles |
d1001 | California.LosAngeles |
Query OK, 4 row(s) in set (0.000881s)
taos> SELECT COUNT(tbname) FROM meters WHERE groupId > 2;
count(tbname) |
========================
2 |
Query OK, 1 row(s) in set (0.001091s)
```
### _QSTART/_QSTOP/_QDURATION
The start, end, and duration of the query's filter time window.
### _WSTART/_WSTOP/_WDURATION
In window-split aggregation queries (e.g. interval, session window, state window), the start, end, and duration of each window.
### _c0/_ROWTS
_c0 and _ROWTS are equivalent; both represent the first column, i.e. the timestamp column, of a table or supertable.

View File

@ -1,5 +0,0 @@
---
sidebar_label: Information (Built-in Database)
title: Information (Built-in Database)
---

View File

@ -0,0 +1,186 @@
---
sidebar_label: Metadata Database
title: Metadata Database
---
TDengine includes a built-in database named `INFORMATION_SCHEMA` that provides access to database metadata and to system information and state, such as the names of databases and tables or the SQL statements currently being executed.
`INFORMATION_SCHEMA` is created automatically when TDengine starts and stores information about all other databases that TDengine maintains. It contains a number of read-only tables. These tables are in fact views rather than base tables, so there are no files associated with them; they can only be queried, and write operations such as INSERT are not allowed.
The USE statement can be used to make INFORMATION_SCHEMA the default database.
INFORMATION_SCHEMA is designed to provide, in a more consistent way, access to the information exposed by the various SHOW statements supported by TDengine (such as SHOW TABLES and SHOW DATABASES). Compared with SHOW statements, using SELECT ... FROM INFORMATION_SCHEMA.tablename has the following advantages:
- You can use the familiar SELECT syntax and only need to learn some table and column names.
- You can filter and sort the query results. In fact, any SELECT statement supported by TDengine can be used to query the tables in INFORMATION_SCHEMA.
- TDengine can flexibly add columns to existing INFORMATION_SCHEMA tables in future releases without worrying about breaking existing applications.
- This approach is more interoperable with other database systems. For example, Oracle users are familiar with querying the tables of the Oracle data dictionary.
Since SHOW statements are already familiar to developers and widely used, they remain available.
This chapter describes in detail the tables in the built-in metadata database `INFORMATION_SCHEMA` and their structures.
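As a hedged illustration (the column names are taken from the DNODES table described below; the filter value 'ready' is an assumption), the following pair shows a SHOW statement and an equivalent, filterable SELECT:
```sql
SHOW DNODES;
SELECT id, endpoint, vnodes, status FROM INFORMATION_SCHEMA.DNODES WHERE status = 'ready';
```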
## DNODES
Provides information about dnodes. The same information can also be obtained with SHOW DNODES.
| #   |   **Column**   | **Data Type** | **Description**                                |
| --- | :------------: | ------------- | ---------------------------------------------- |
| 1   | vnodes         | SMALLINT      | number of vnodes on the dnode                  |
| 2   | support_vnodes | SMALLINT      | number of vnodes supported                     |
| 3   | status         | BINARY(10)    | current status                                 |
| 4   | note           | BINARY(256)   | reason for going offline and other information |
| 5   | id             | SMALLINT      | dnode id                                       |
| 6   | endpoint       | BINARY(134)   | address of the dnode                           |
| 7   | create         | TIMESTAMP     | creation time                                  |
## MNODES
Provides information about mnodes. The same information can also be obtained with SHOW MNODES.
| #   | **Column**  | **Data Type** | **Description**                        |
| --- | :---------: | ------------- | -------------------------------------- |
| 1   | id          | SMALLINT      | mnode id                               |
| 2   | endpoint    | BINARY(134)   | address of the mnode                   |
| 3   | role        | BINARY(10)    | current role                           |
| 4   | role_time   | TIMESTAMP     | time when the current role was assumed |
| 5   | create_time | TIMESTAMP     | creation time                          |
## MODULES
Provides information about modules. The same information can also be obtained with SHOW MODULES.
| #   | **Column** | **Data Type** | **Description**       |
| --- | :--------: | ------------- | --------------------- |
| 1   | id         | SMALLINT      | module id             |
| 2   | endpoint   | BINARY(134)   | address of the module |
| 3   | module     | BINARY(10)    | module status         |
## QNODES
Information about the QNODEs in the current system. The same information can also be obtained with SHOW QNODES.
| #   | **Column**  | **Data Type** | **Description**      |
| --- | :---------: | ------------- | -------------------- |
| 1   | id          | SMALLINT      | qnode id             |
| 2   | endpoint    | BINARY(134)   | address of the qnode |
| 3   | create_time | TIMESTAMP     | creation time        |
## USER_DATABASES
Provides information about user-created databases. The same information can also be obtained with SHOW DATABASES.
TODO
| #   | **Column**  | **Data Type** | **Description**                                                                                    |
| --- | :---------: | ------------- | -------------------------------------------------------------------------------------------------- |
| 1   | name        | BINARY(32)    | database name                                                                                       |
| 2   | create_time | TIMESTAMP     | creation time                                                                                       |
| 3   | ntables     | INT           | number of tables in the database, including subtables and regular tables but excluding supertables  |
| 4   | vgroups     | INT           | number of vgroups in the database                                                                   |
| 5   | replica     | INT           | number of replicas                                                                                  |
| 6   | quorum      | INT           | number of confirmations required for a successful write                                             |
| 7   | days        | INT           | time span of the data stored in a single data file                                                  |
| 8   | keep        | INT           | data retention period                                                                               |
| 9   | buffer      | INT           | size of the write cache per vnode, in MB                                                            |
| 10  | minrows     | INT           | minimum number of records in a file block                                                           |
| 11  | maxrows     | INT           | maximum number of records in a file block                                                           |
| 12  | wallevel    | INT           | WAL level                                                                                           |
| 13  | fsync       | INT           | data flush (fsync) period                                                                           |
| 14  | comp        | INT           | data compression method                                                                             |
| 15  | precision   | BINARY(2)     | time precision                                                                                      |
| 16  | status      | BINARY(10)    | database status                                                                                     |
## USER_FUNCTIONS
TODO
## USER_INDEXES
Provides information about user-created indexes. The same information can also be obtained with SHOW INDEX.
| #   |   **Column**     | **Data Type** | **Description**                                                                                     |
| --- | :--------------: | ------------- | ---------------------------------------------------------------------------------------------------- |
| 1   | db_name          | BINARY(32)    | name of the database containing the indexed table                                                     |
| 2   | table_name       | BINARY(192)   | name of the table containing the index                                                                |
| 3   | index_name       | BINARY(192)   | index name                                                                                            |
| 4   | column_name      | BINARY(64)    | name of the indexed column                                                                            |
| 5   | index_type       | BINARY(10)    | currently SMA or FULLTEXT                                                                             |
| 6   | index_extensions | BINARY(256)   | additional index information: a list of function names for SMA indexes, NULL for FULLTEXT indexes     |
## USER_STABLES
Provides information about user-created supertables.
| #   |  **Column**   | **Data Type** | **Description**                                 |
| --- | :-----------: | ------------- | ----------------------------------------------- |
| 1   | stable_name   | BINARY(192)   | supertable name                                 |
| 2   | db_name       | BINARY(64)    | name of the database containing the supertable  |
| 3   | create_time   | TIMESTAMP     | creation time                                   |
| 4   | columns       | INT           | number of columns                               |
| 5   | tags          | INT           | number of tags                                  |
| 6   | last_update   | TIMESTAMP     | last update time                                |
| 7   | table_comment | BINARY(1024)  | table comment                                   |
| 8   | watermark     | BINARY(64)    | window closing time                             |
| 9   | max_delay     | BINARY(64)    | maximum delay before pushing computed results   |
| 10  | rollup        | BINARY(128)   | rollup aggregate functions                      |
## USER_STREAMS
Provides information about user-created streams.
| #   | **Column**  | **Data Type** | **Description**                                |
| --- | :---------: | ------------- | ---------------------------------------------- |
| 1   | stream_name | BINARY(192)   | stream name                                    |
| 2   | user_name   | BINARY(23)    | user who created the stream                    |
| 3   | dest_table  | BINARY(192)   | destination table the stream writes to         |
| 4   | create_time | TIMESTAMP     | creation time                                  |
| 5   | sql         | BLOB          | SQL statement provided when the stream was created |
## USER_TABLES
Provides information about user-created regular tables and subtables.
| #   |  **Column**   | **Data Type** | **Description**                              |
| --- | :-----------: | ------------- | -------------------------------------------- |
| 1   | table_name    | BINARY(192)   | table name                                   |
| 2   | db_name       | BINARY(64)    | database name                                |
| 3   | create_time   | TIMESTAMP     | creation time                                |
| 4   | columns       | INT           | number of columns                            |
| 5   | stable_name   | BINARY(192)   | name of the supertable the table belongs to  |
| 6   | uid           | BIGINT        | table id                                     |
| 7   | vgroup_id     | INT           | vgroup id                                    |
| 8   | ttl           | INT           | table time-to-live                           |
| 9   | table_comment | BINARY(1024)  | table comment                                |
| 10  | type          | BINARY(20)    | table type                                   |
## USER_USERS
Provides information about the users created in the system.
| #   | **Column**  | **Data Type** | **Description** |
| --- | :---------: | ------------- | --------------- |
| 1   | user_name   | BINARY(23)    | user name       |
| 2   | privilege   | BINARY(256)   | privileges      |
| 3   | create_time | TIMESTAMP     | creation time   |
## VGROUPS
Information about all vgroups in the system.
| #   | **Column** | **Data Type** | **Description**                           |
| --- | :--------: | ------------- | ----------------------------------------- |
| 1   | vg_id      | INT           | vgroup id                                 |
| 2   | db_name    | BINARY(32)    | database name                             |
| 3   | tables     | INT           | number of tables in the vgroup            |
| 4   | status     | BINARY(10)    | status of the vgroup                      |
| 5   | onlines    | INT           | number of online members                  |
| 6   | v1_dnode   | INT           | id of the dnode hosting the first member  |
| 7   | v1_status  | BINARY(10)    | status of the first member                |
| 8   | v2_dnode   | INT           | id of the dnode hosting the second member |
| 9   | v2_status  | BINARY(10)    | status of the second member               |
| 10  | v3_dnode   | INT           | id of the dnode hosting the third member  |
| 11  | v3_status  | BINARY(10)    | status of the third member                |
| 12  | compacting | INT           | compact status                            |

View File

@ -0,0 +1,270 @@
---
sidebar_label: SHOW Statements
title: Viewing System Metadata with SHOW Statements
---
Besides querying the tables of the `INFORMATION_SCHEMA` database with `select` statements to obtain various kinds of metadata, system information, and status, the `SHOW` statements can be used for the same purpose.
## SHOW ACCOUNTS
```sql
SHOW ACCOUNTS;
```
Shows information about all tenants (accounts) in the current system.
Note: available only in the enterprise edition.
## SHOW APPS
```sql
SHOW APPS;
```
Shows information about the applications (clients) connected to the cluster.
## SHOW BNODES
```sql
SHOW BNODES;
```
Shows information about the BNODEs (backup nodes) in the current system.
## SHOW CLUSTER
```sql
SHOW CLUSTER;
```
Shows information about the current cluster.
## SHOW CONNECTIONS
```sql
SHOW CONNECTIONS;
```
Shows information about the connections in the current system.
## SHOW CONSUMERS
```sql
SHOW CONSUMERS;
```
Shows information about all active consumers in the current database.
## SHOW CREATE DATABASE
```sql
SHOW CREATE DATABASE db_name;
```
Shows the statement used to create the database specified by db_name.
## SHOW CREATE STABLE
```sql
SHOW CREATE STABLE [db_name.]stb_name;
```
Shows the statement used to create the supertable specified by stb_name.
## SHOW CREATE TABLE
```sql
SHOW CREATE TABLE [db_name.]tb_name;
```
Shows the statement used to create the table specified by tb_name. Regular tables, supertables, and subtables are all supported.
## SHOW DATABASES
```sql
SHOW DATABASES;
```
Shows all user-defined databases.
## SHOW DNODES
```sql
SHOW DNODES;
```
Shows information about the DNODEs in the current system.
## SHOW FUNCTIONS
```sql
SHOW FUNCTIONS;
```
Shows user-defined functions.
## SHOW LICENSE
```sql
SHOW LICENSE;
SHOW GRANTS;
```
Shows information about the enterprise edition license.
Note: available only in the enterprise edition.
## SHOW INDEXES
```sql
SHOW INDEXES FROM tbl_name [FROM db_name];
```
Shows the indexes that have been created.
## SHOW LOCAL VARIABLES
```sql
SHOW LOCAL VARIABLES;
```
Shows the running values of the configuration parameters of the current client.
## SHOW MNODES
```sql
SHOW MNODES;
```
Shows information about the MNODEs in the current system.
## SHOW MODULES
```sql
SHOW MODULES;
```
Shows information about the modules installed in the current system.
## SHOW QNODES
```sql
SHOW QNODES;
```
Shows information about the QNODEs (query nodes) in the current system.
## SHOW SCORES
```sql
SHOW SCORES;
```
Shows information about the capacity licensed for the system.
Note: available only in the enterprise edition.
## SHOW SNODES
```sql
SHOW SNODES;
```
Shows information about the SNODEs (stream computing nodes) in the current system.
## SHOW STABLES
```sql
SHOW [db_name.]STABLES [LIKE 'pattern'];
```
Shows information about all supertables in the current database. LIKE can be used for fuzzy matching on table names.
## SHOW STREAMS
```sql
SHOW STREAMS;
```
Shows information about all streams in the current system.
## SHOW SUBSCRIPTIONS
```sql
SHOW SUBSCRIPTIONS;
```
Shows all subscriptions in the current database.
## SHOW TABLES
```sql
SHOW [db_name.]TABLES [LIKE 'pattern'];
```
Shows information about all regular tables and subtables in the current database. LIKE can be used for fuzzy matching on table names.
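For instance (the pattern is hypothetical, assuming subtable names such as d1001 from earlier examples):
```sql
SHOW TABLES LIKE 'd100%';
```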
## SHOW TABLE DISTRIBUTED
```sql
SHOW TABLE DISTRIBUTED table_name;
```
Shows the data distribution of a table.
## SHOW TAGS
```sql
SHOW TAGS FROM child_table_name [FROM db_name];
```
Shows the tag information of a subtable.
## SHOW TOPICS
```sql
SHOW TOPICS;
```
Shows information about all topics in the current database.
## SHOW TRANSACTIONS
```sql
SHOW TRANSACTIONS;
```
Shows information about the transactions currently being executed in the system.
## SHOW USERS
```sql
SHOW USERS;
```
Shows information about all users in the current system, including user-defined users and the default system users.
## SHOW VARIABLES
```sql
SHOW VARIABLES;
SHOW DNODE dnode_id VARIABLES;
```
Shows the running values of configuration parameters that must be identical across the nodes of the system; a DNODE can also be specified to view its configuration parameters.
## SHOW VGROUPS
```sql
SHOW [db_name.]VGROUPS;
```
Shows information about all VGROUPs in the system, or about the VGROUPs of the specified database.
## SHOW VNODES
```sql
SHOW VNODES [dnode_name];
```
Shows information about all VNODEs in the system, or about the VNODEs of the specified DNODE.

View File

@ -0,0 +1,77 @@
---
sidebar_label: Privilege Management
title: Privilege Management
---
This section describes how to perform privilege management operations in TDengine.
## Creating a User
```sql
CREATE USER user_name PASS 'password';
```
Creates a user.
user_name: at most 23 bytes.
password: at most 128 bytes. Valid characters are "a-zA-Z0-9!?$%^&*()_+={[}]:;@~#|<,>.?/"; single quotes, double quotes, apostrophes, backslashes, and spaces are not allowed, and the password must not be empty.
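A minimal example (the user name and password below are made up):
```sql
CREATE USER test PASS 'test123456';
```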
## Dropping a User
```sql
DROP USER user_name;
```
## Granting Privileges
```sql
GRANT privileges ON priv_level TO user_name
privileges : {
ALL
| priv_type [, priv_type] ...
}
priv_type : {
READ
| WRITE
}
priv_level : {
dbname.*
| *.*
}
```
Grants privileges to a user.
Privileges can be granted down to the DATABASE level; the privilege types are READ and WRITE.
TDengine has two classes of users: the superuser and regular users. The superuser, created by default with the name root, has all privileges. Users created by the superuser are regular users. Without any explicit grant, a regular user can create databases and has all privileges on the databases it creates, including dropping them, altering them, and reading and writing their time-series data. The superuser can grant a regular user read and write privileges on other databases, allowing it to read and write data there, but not to drop those databases or alter their configuration.
For objects other than databases, such as USER, DNODE, UDF, and QNODE, regular users have read-only access (generally via SHOW statements) and cannot create or modify them.
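For instance (the user test and database db1 are hypothetical), granting read and write access on a single database:
```sql
GRANT READ, WRITE ON db1.* TO test;
```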
## Revoking Privileges
```sql
REVOKE privileges ON priv_level FROM user_name
privileges : {
ALL
| priv_type [, priv_type] ...
}
priv_type : {
READ
| WRITE
}
priv_level : {
dbname.*
| *.*
}
```
Revokes previously granted privileges from a user.
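For instance (the same hypothetical user and database as above), withdrawing only the write privilege:
```sql
REVOKE WRITE ON db1.* FROM test;
```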

View File

@ -152,7 +152,10 @@ void taosCfgDynamicOptions(const char *option, const char *value);
void taosAddDataDir(int32_t index, char *v1, int32_t level, int32_t primary);
struct SConfig *taosGetCfg();
int32_t taosSetCfg(SConfig *pCfg, char* name);
void taosSetAllDebugFlag(int32_t flag);
void taosSetDebugFlag(int32_t *pFlagPtr, const char *flagName, int32_t flagVal);
int32_t taosSetCfg(SConfig *pCfg, char *name);
#ifdef __cplusplus
}

View File

@ -748,6 +748,10 @@ typedef struct {
int8_t ignoreExist;
int32_t numOfRetensions;
SArray* pRetensions; // SRetention
int32_t walRetentionPeriod;
int32_t walRetentionSize;
int32_t walRollPeriod;
int32_t walSegmentSize;
} SCreateDbReq;
int32_t tSerializeSCreateDbReq(void* buf, int32_t bufLen, SCreateDbReq* pReq);
@ -1150,6 +1154,10 @@ typedef struct {
int32_t numOfRetensions;
SArray* pRetensions; // SRetention
void* pTsma;
int32_t walRetentionPeriod;
int64_t walRetentionSize;
int32_t walRollPeriod;
int64_t walSegmentSize;
} SCreateVnodeReq;
int32_t tSerializeSCreateVnodeReq(void* buf, int32_t bufLen, SCreateVnodeReq* pReq);
@ -1977,7 +1985,7 @@ typedef struct SVCreateTbReq {
union {
struct {
char* name; // super table name
uint8_t tagNum;
uint8_t tagNum;
tb_uid_t suid;
SArray* tagName;
uint8_t* pTag;

View File

@ -16,261 +16,265 @@
#ifndef _TD_COMMON_TOKEN_H_
#define _TD_COMMON_TOKEN_H_
#define TK_OR 1
#define TK_AND 2
#define TK_UNION 3
#define TK_ALL 4
#define TK_MINUS 5
#define TK_EXCEPT 6
#define TK_INTERSECT 7
#define TK_NK_BITAND 8
#define TK_NK_BITOR 9
#define TK_NK_LSHIFT 10
#define TK_NK_RSHIFT 11
#define TK_NK_PLUS 12
#define TK_NK_MINUS 13
#define TK_NK_STAR 14
#define TK_NK_SLASH 15
#define TK_NK_REM 16
#define TK_NK_CONCAT 17
#define TK_CREATE 18
#define TK_ACCOUNT 19
#define TK_NK_ID 20
#define TK_PASS 21
#define TK_NK_STRING 22
#define TK_ALTER 23
#define TK_PPS 24
#define TK_TSERIES 25
#define TK_STORAGE 26
#define TK_STREAMS 27
#define TK_QTIME 28
#define TK_DBS 29
#define TK_USERS 30
#define TK_CONNS 31
#define TK_STATE 32
#define TK_USER 33
#define TK_ENABLE 34
#define TK_NK_INTEGER 35
#define TK_SYSINFO 36
#define TK_DROP 37
#define TK_GRANT 38
#define TK_ON 39
#define TK_TO 40
#define TK_REVOKE 41
#define TK_FROM 42
#define TK_NK_COMMA 43
#define TK_READ 44
#define TK_WRITE 45
#define TK_NK_DOT 46
#define TK_DNODE 47
#define TK_PORT 48
#define TK_DNODES 49
#define TK_NK_IPTOKEN 50
#define TK_LOCAL 51
#define TK_QNODE 52
#define TK_BNODE 53
#define TK_SNODE 54
#define TK_MNODE 55
#define TK_DATABASE 56
#define TK_USE 57
#define TK_FLUSH 58
#define TK_TRIM 59
#define TK_IF 60
#define TK_NOT 61
#define TK_EXISTS 62
#define TK_BUFFER 63
#define TK_CACHEMODEL 64
#define TK_CACHESIZE 65
#define TK_COMP 66
#define TK_DURATION 67
#define TK_NK_VARIABLE 68
#define TK_FSYNC 69
#define TK_MAXROWS 70
#define TK_MINROWS 71
#define TK_KEEP 72
#define TK_PAGES 73
#define TK_PAGESIZE 74
#define TK_PRECISION 75
#define TK_REPLICA 76
#define TK_STRICT 77
#define TK_WAL 78
#define TK_VGROUPS 79
#define TK_SINGLE_STABLE 80
#define TK_RETENTIONS 81
#define TK_SCHEMALESS 82
#define TK_NK_COLON 83
#define TK_TABLE 84
#define TK_NK_LP 85
#define TK_NK_RP 86
#define TK_STABLE 87
#define TK_ADD 88
#define TK_COLUMN 89
#define TK_MODIFY 90
#define TK_RENAME 91
#define TK_TAG 92
#define TK_SET 93
#define TK_NK_EQ 94
#define TK_USING 95
#define TK_TAGS 96
#define TK_COMMENT 97
#define TK_BOOL 98
#define TK_TINYINT 99
#define TK_SMALLINT 100
#define TK_INT 101
#define TK_INTEGER 102
#define TK_BIGINT 103
#define TK_FLOAT 104
#define TK_DOUBLE 105
#define TK_BINARY 106
#define TK_TIMESTAMP 107
#define TK_NCHAR 108
#define TK_UNSIGNED 109
#define TK_JSON 110
#define TK_VARCHAR 111
#define TK_MEDIUMBLOB 112
#define TK_BLOB 113
#define TK_VARBINARY 114
#define TK_DECIMAL 115
#define TK_MAX_DELAY 116
#define TK_WATERMARK 117
#define TK_ROLLUP 118
#define TK_TTL 119
#define TK_SMA 120
#define TK_FIRST 121
#define TK_LAST 122
#define TK_SHOW 123
#define TK_DATABASES 124
#define TK_TABLES 125
#define TK_STABLES 126
#define TK_MNODES 127
#define TK_MODULES 128
#define TK_QNODES 129
#define TK_FUNCTIONS 130
#define TK_INDEXES 131
#define TK_ACCOUNTS 132
#define TK_APPS 133
#define TK_CONNECTIONS 134
#define TK_LICENCE 135
#define TK_GRANTS 136
#define TK_QUERIES 137
#define TK_SCORES 138
#define TK_TOPICS 139
#define TK_VARIABLES 140
#define TK_BNODES 141
#define TK_SNODES 142
#define TK_CLUSTER 143
#define TK_TRANSACTIONS 144
#define TK_DISTRIBUTED 145
#define TK_CONSUMERS 146
#define TK_SUBSCRIPTIONS 147
#define TK_LIKE 148
#define TK_INDEX 149
#define TK_FUNCTION 150
#define TK_INTERVAL 151
#define TK_TOPIC 152
#define TK_AS 153
#define TK_WITH 154
#define TK_META 155
#define TK_CONSUMER 156
#define TK_GROUP 157
#define TK_DESC 158
#define TK_DESCRIBE 159
#define TK_RESET 160
#define TK_QUERY 161
#define TK_CACHE 162
#define TK_EXPLAIN 163
#define TK_ANALYZE 164
#define TK_VERBOSE 165
#define TK_NK_BOOL 166
#define TK_RATIO 167
#define TK_NK_FLOAT 168
#define TK_COMPACT 169
#define TK_VNODES 170
#define TK_IN 171
#define TK_OUTPUTTYPE 172
#define TK_AGGREGATE 173
#define TK_BUFSIZE 174
#define TK_STREAM 175
#define TK_INTO 176
#define TK_TRIGGER 177
#define TK_AT_ONCE 178
#define TK_WINDOW_CLOSE 179
#define TK_IGNORE 180
#define TK_EXPIRED 181
#define TK_KILL 182
#define TK_CONNECTION 183
#define TK_TRANSACTION 184
#define TK_BALANCE 185
#define TK_VGROUP 186
#define TK_MERGE 187
#define TK_REDISTRIBUTE 188
#define TK_SPLIT 189
#define TK_SYNCDB 190
#define TK_DELETE 191
#define TK_INSERT 192
#define TK_NULL 193
#define TK_NK_QUESTION 194
#define TK_NK_ARROW 195
#define TK_ROWTS 196
#define TK_TBNAME 197
#define TK_QSTART 198
#define TK_QEND 199
#define TK_QDURATION 200
#define TK_WSTART 201
#define TK_WEND 202
#define TK_WDURATION 203
#define TK_CAST 204
#define TK_NOW 205
#define TK_TODAY 206
#define TK_TIMEZONE 207
#define TK_CLIENT_VERSION 208
#define TK_SERVER_VERSION 209
#define TK_SERVER_STATUS 210
#define TK_CURRENT_USER 211
#define TK_COUNT 212
#define TK_LAST_ROW 213
#define TK_BETWEEN 214
#define TK_IS 215
#define TK_NK_LT 216
#define TK_NK_GT 217
#define TK_NK_LE 218
#define TK_NK_GE 219
#define TK_NK_NE 220
#define TK_MATCH 221
#define TK_NMATCH 222
#define TK_CONTAINS 223
#define TK_JOIN 224
#define TK_INNER 225
#define TK_SELECT 226
#define TK_DISTINCT 227
#define TK_WHERE 228
#define TK_PARTITION 229
#define TK_BY 230
#define TK_SESSION 231
#define TK_STATE_WINDOW 232
#define TK_SLIDING 233
#define TK_FILL 234
#define TK_VALUE 235
#define TK_NONE 236
#define TK_PREV 237
#define TK_LINEAR 238
#define TK_NEXT 239
#define TK_HAVING 240
#define TK_RANGE 241
#define TK_EVERY 242
#define TK_ORDER 243
#define TK_SLIMIT 244
#define TK_SOFFSET 245
#define TK_LIMIT 246
#define TK_OFFSET 247
#define TK_ASC 248
#define TK_NULLS 249
#define TK_ID 250
#define TK_NK_BITNOT 251
#define TK_VALUES 252
#define TK_IMPORT 253
#define TK_NK_SEMI 254
#define TK_FILE 255
#define TK_OR 1
#define TK_AND 2
#define TK_UNION 3
#define TK_ALL 4
#define TK_MINUS 5
#define TK_EXCEPT 6
#define TK_INTERSECT 7
#define TK_NK_BITAND 8
#define TK_NK_BITOR 9
#define TK_NK_LSHIFT 10
#define TK_NK_RSHIFT 11
#define TK_NK_PLUS 12
#define TK_NK_MINUS 13
#define TK_NK_STAR 14
#define TK_NK_SLASH 15
#define TK_NK_REM 16
#define TK_NK_CONCAT 17
#define TK_CREATE 18
#define TK_ACCOUNT 19
#define TK_NK_ID 20
#define TK_PASS 21
#define TK_NK_STRING 22
#define TK_ALTER 23
#define TK_PPS 24
#define TK_TSERIES 25
#define TK_STORAGE 26
#define TK_STREAMS 27
#define TK_QTIME 28
#define TK_DBS 29
#define TK_USERS 30
#define TK_CONNS 31
#define TK_STATE 32
#define TK_USER 33
#define TK_ENABLE 34
#define TK_NK_INTEGER 35
#define TK_SYSINFO 36
#define TK_DROP 37
#define TK_GRANT 38
#define TK_ON 39
#define TK_TO 40
#define TK_REVOKE 41
#define TK_FROM 42
#define TK_NK_COMMA 43
#define TK_READ 44
#define TK_WRITE 45
#define TK_NK_DOT 46
#define TK_DNODE 47
#define TK_PORT 48
#define TK_DNODES 49
#define TK_NK_IPTOKEN 50
#define TK_LOCAL 51
#define TK_QNODE 52
#define TK_BNODE 53
#define TK_SNODE 54
#define TK_MNODE 55
#define TK_DATABASE 56
#define TK_USE 57
#define TK_FLUSH 58
#define TK_TRIM 59
#define TK_IF 60
#define TK_NOT 61
#define TK_EXISTS 62
#define TK_BUFFER 63
#define TK_CACHEMODEL 64
#define TK_CACHESIZE 65
#define TK_COMP 66
#define TK_DURATION 67
#define TK_NK_VARIABLE 68
#define TK_FSYNC 69
#define TK_MAXROWS 70
#define TK_MINROWS 71
#define TK_KEEP 72
#define TK_PAGES 73
#define TK_PAGESIZE 74
#define TK_PRECISION 75
#define TK_REPLICA 76
#define TK_STRICT 77
#define TK_WAL 78
#define TK_VGROUPS 79
#define TK_SINGLE_STABLE 80
#define TK_RETENTIONS 81
#define TK_SCHEMALESS 82
#define TK_WAL_RETENTION_PERIOD 83
#define TK_WAL_RETENTION_SIZE 84
#define TK_WAL_ROLL_PERIOD 85
#define TK_WAL_SEGMENT_SIZE 86
#define TK_NK_COLON 87
#define TK_TABLE 88
#define TK_NK_LP 89
#define TK_NK_RP 90
#define TK_STABLE 91
#define TK_ADD 92
#define TK_COLUMN 93
#define TK_MODIFY 94
#define TK_RENAME 95
#define TK_TAG 96
#define TK_SET 97
#define TK_NK_EQ 98
#define TK_USING 99
#define TK_TAGS 100
#define TK_COMMENT 101
#define TK_BOOL 102
#define TK_TINYINT 103
#define TK_SMALLINT 104
#define TK_INT 105
#define TK_INTEGER 106
#define TK_BIGINT 107
#define TK_FLOAT 108
#define TK_DOUBLE 109
#define TK_BINARY 110
#define TK_TIMESTAMP 111
#define TK_NCHAR 112
#define TK_UNSIGNED 113
#define TK_JSON 114
#define TK_VARCHAR 115
#define TK_MEDIUMBLOB 116
#define TK_BLOB 117
#define TK_VARBINARY 118
#define TK_DECIMAL 119
#define TK_MAX_DELAY 120
#define TK_WATERMARK 121
#define TK_ROLLUP 122
#define TK_TTL 123
#define TK_SMA 124
#define TK_FIRST 125
#define TK_LAST 126
#define TK_SHOW 127
#define TK_DATABASES 128
#define TK_TABLES 129
#define TK_STABLES 130
#define TK_MNODES 131
#define TK_MODULES 132
#define TK_QNODES 133
#define TK_FUNCTIONS 134
#define TK_INDEXES 135
#define TK_ACCOUNTS 136
#define TK_APPS 137
#define TK_CONNECTIONS 138
#define TK_LICENCE 139
#define TK_GRANTS 140
#define TK_QUERIES 141
#define TK_SCORES 142
#define TK_TOPICS 143
#define TK_VARIABLES 144
#define TK_BNODES 145
#define TK_SNODES 146
#define TK_CLUSTER 147
#define TK_TRANSACTIONS 148
#define TK_DISTRIBUTED 149
#define TK_CONSUMERS 150
#define TK_SUBSCRIPTIONS 151
#define TK_LIKE 152
#define TK_INDEX 153
#define TK_FUNCTION 154
#define TK_INTERVAL 155
#define TK_TOPIC 156
#define TK_AS 157
#define TK_WITH 158
#define TK_META 159
#define TK_CONSUMER 160
#define TK_GROUP 161
#define TK_DESC 162
#define TK_DESCRIBE 163
#define TK_RESET 164
#define TK_QUERY 165
#define TK_CACHE 166
#define TK_EXPLAIN 167
#define TK_ANALYZE 168
#define TK_VERBOSE 169
#define TK_NK_BOOL 170
#define TK_RATIO 171
#define TK_NK_FLOAT 172
#define TK_COMPACT 173
#define TK_VNODES 174
#define TK_IN 175
#define TK_OUTPUTTYPE 176
#define TK_AGGREGATE 177
#define TK_BUFSIZE 178
#define TK_STREAM 179
#define TK_INTO 180
#define TK_TRIGGER 181
#define TK_AT_ONCE 182
#define TK_WINDOW_CLOSE 183
#define TK_IGNORE 184
#define TK_EXPIRED 185
#define TK_KILL 186
#define TK_CONNECTION 187
#define TK_TRANSACTION 188
#define TK_BALANCE 189
#define TK_VGROUP 190
#define TK_MERGE 191
#define TK_REDISTRIBUTE 192
#define TK_SPLIT 193
#define TK_SYNCDB 194
#define TK_DELETE 195
#define TK_INSERT 196
#define TK_NULL 197
#define TK_NK_QUESTION 198
#define TK_NK_ARROW 199
#define TK_ROWTS 200
#define TK_TBNAME 201
#define TK_QSTART 202
#define TK_QEND 203
#define TK_QDURATION 204
#define TK_WSTART 205
#define TK_WEND 206
#define TK_WDURATION 207
#define TK_CAST 208
#define TK_NOW 209
#define TK_TODAY 210
#define TK_TIMEZONE 211
#define TK_CLIENT_VERSION 212
#define TK_SERVER_VERSION 213
#define TK_SERVER_STATUS 214
#define TK_CURRENT_USER 215
#define TK_COUNT 216
#define TK_LAST_ROW 217
#define TK_BETWEEN 218
#define TK_IS 219
#define TK_NK_LT 220
#define TK_NK_GT 221
#define TK_NK_LE 222
#define TK_NK_GE 223
#define TK_NK_NE 224
#define TK_MATCH 225
#define TK_NMATCH 226
#define TK_CONTAINS 227
#define TK_JOIN 228
#define TK_INNER 229
#define TK_SELECT 230
#define TK_DISTINCT 231
#define TK_WHERE 232
#define TK_PARTITION 233
#define TK_BY 234
#define TK_SESSION 235
#define TK_STATE_WINDOW 236
#define TK_SLIDING 237
#define TK_FILL 238
#define TK_VALUE 239
#define TK_NONE 240
#define TK_PREV 241
#define TK_LINEAR 242
#define TK_NEXT 243
#define TK_HAVING 244
#define TK_RANGE 245
#define TK_EVERY 246
#define TK_ORDER 247
#define TK_SLIMIT 248
#define TK_SOFFSET 249
#define TK_LIMIT 250
#define TK_OFFSET 251
#define TK_ASC 252
#define TK_NULLS 253
#define TK_ID 254
#define TK_NK_BITNOT 255
#define TK_VALUES 256
#define TK_IMPORT 257
#define TK_NK_SEMI 258
#define TK_FILE 259
#define TK_NK_SPACE 300
#define TK_NK_COMMENT 301

View File

@ -143,6 +143,7 @@ typedef struct SqlFunctionCtx {
struct SExprInfo *pExpr;
struct SDiskbasedBuf *pBuf;
struct SSDataBlock *pSrcBlock;
struct SSDataBlock *pDstBlock; // used by indifinite rows function to set selectivity
int32_t curBufPage;
bool increase;

View File

@ -74,6 +74,10 @@ typedef struct SDatabaseOptions {
int8_t singleStable;
SNodeList* pRetentions;
int8_t schemaless;
int32_t walRetentionPeriod;
int32_t walRetentionSize;
int32_t walRollPeriod;
int32_t walSegmentSize;
} SDatabaseOptions;
typedef struct SCreateDatabaseStmt {

View File

@ -104,6 +104,7 @@ typedef struct SJoinLogicNode {
SNode* pMergeCondition;
SNode* pOnConditions;
bool isSingleTableJoin;
EOrder inputTsOrder;
} SJoinLogicNode;
typedef struct SAggLogicNode {
@ -201,6 +202,7 @@ typedef struct SWindowLogicNode {
int64_t watermark;
int8_t igExpired;
EWindowAlgorithm windowAlgo;
EOrder inputTsOrder;
} SWindowLogicNode;
typedef struct SFillLogicNode {
@ -356,15 +358,14 @@ typedef struct SInterpFuncPhysiNode {
SNode* pTimeSeries; // SColumnNode
} SInterpFuncPhysiNode;
typedef struct SJoinPhysiNode {
typedef struct SSortMergeJoinPhysiNode {
SPhysiNode node;
EJoinType joinType;
SNode* pMergeCondition;
SNode* pOnConditions;
SNodeList* pTargets;
} SJoinPhysiNode;
typedef SJoinPhysiNode SSortMergeJoinPhysiNode;
EOrder inputTsOrder;
} SSortMergeJoinPhysiNode;
typedef struct SAggPhysiNode {
SPhysiNode node;

View File

@ -255,6 +255,7 @@ typedef struct SSelectStmt {
int32_t selectFuncNum;
bool isEmptyResult;
bool isTimeLineResult;
bool isSubquery;
bool hasAggFuncs;
bool hasRepeatScanFuncs;
bool hasIndefiniteRowsFunc;

View File

@ -103,8 +103,8 @@ typedef struct SWal {
int32_t fsyncSeq;
// meta
SWalVer vers;
TdFilePtr pWriteLogTFile;
TdFilePtr pWriteIdxTFile;
TdFilePtr pLogFile;
TdFilePtr pIdxFile;
int32_t writeCur;
SArray *fileInfoSet; // SArray<SWalFileInfo>
// status
@ -114,21 +114,30 @@ typedef struct SWal {
int64_t refId;
TdThreadMutex mutex;
// ref
SHashObj *pRefHash; // ref -> SWalRef
SHashObj *pRefHash; // refId -> SWalRef
// path
char path[WAL_PATH_LEN];
// reusable write head
SWalCkHead writeHead;
} SWal; // WAL HANDLE
} SWal;
typedef struct {
int64_t refId;
int64_t refVer;
int64_t refFile;
SWal *pWal;
} SWalRef;
typedef struct {
int8_t scanUncommited;
int8_t scanNotApplied;
int8_t scanMeta;
int8_t enableRef;
} SWalFilterCond;
typedef struct {
SWal *pWal;
int64_t readerId;
TdFilePtr pLogFile;
TdFilePtr pIdxFile;
int64_t curFileFirstVer;
@ -138,7 +147,8 @@ typedef struct {
int8_t curStopped;
TdThreadMutex mutex;
SWalFilterCond cond;
SWalCkHead *pHead;
// TODO remove it
SWalCkHead *pHead;
} SWalReader;
// module initialization
@ -157,11 +167,7 @@ int32_t walWrite(SWal *, int64_t index, tmsg_t msgType, const void *body, int32_
int32_t walWriteWithSyncInfo(SWal *, int64_t index, tmsg_t msgType, SWalSyncInfo syncMeta, const void *body,
int32_t bodyLen);
// This interface assign version automatically and return to caller.
// When using this interface with concurrent writes,
// wal will write all logs atomically,
// but not sure which one will be actually write first,
// and then the unique index of successful writen is returned.
// Assign version automatically and return to caller,
// -1 will be returned for failed writes
int64_t walAppendLog(SWal *, tmsg_t msgType, SWalSyncInfo syncMeta, const void *body, int32_t bodyLen);
@ -191,17 +197,15 @@ void walSetReaderCapacity(SWalReader *pRead, int32_t capacity);
int32_t walFetchHead(SWalReader *pRead, int64_t ver, SWalCkHead *pHead);
int32_t walFetchBody(SWalReader *pRead, SWalCkHead **ppHead);
int32_t walSkipFetchBody(SWalReader *pRead, const SWalCkHead *pHead);
typedef struct {
int64_t refId;
int64_t ver;
} SWalRef;
SWalRef *walRefCommittedVer(SWal *);
SWalRef *walOpenRef(SWal *);
void walCloseRef(SWalRef *);
void walCloseRef(SWal *pWal, int64_t refId);
int32_t walRefVer(SWalRef *, int64_t ver);
int32_t walUnrefVer(SWal *);
void walUnrefVer(SWalRef *);
// help function for raft
// helper function for raft
bool walLogExist(SWal *, int64_t ver);
bool walIsEmpty(SWal *);

View File

@ -358,6 +358,15 @@ typedef enum ELogicConditionType {
#define TSDB_DB_SCHEMALESS_OFF 0
#define TSDB_DEFAULT_DB_SCHEMALESS TSDB_DB_SCHEMALESS_OFF
#define TSDB_DB_MIN_WAL_RETENTION_PERIOD -1
#define TSDB_DEFAULT_DB_WAL_RETENTION_PERIOD 0
#define TSDB_DB_MIN_WAL_RETENTION_SIZE -1
#define TSDB_DEFAULT_DB_WAL_RETENTION_SIZE 0
#define TSDB_DB_MIN_WAL_ROLL_PERIOD 0
#define TSDB_DEFAULT_DB_WAL_ROLL_PERIOD 0
#define TSDB_DB_MIN_WAL_SEGMENT_SIZE 0
#define TSDB_DEFAULT_DB_WAL_SEGMENT_SIZE 0
#define TSDB_MIN_ROLLUP_MAX_DELAY 1 // unit millisecond
#define TSDB_MAX_ROLLUP_MAX_DELAY (15 * 60 * 1000)
#define TSDB_MIN_ROLLUP_WATERMARK 0 // unit millisecond

View File

@ -63,11 +63,11 @@ extern int32_t metaDebugFlag;
extern int32_t udfDebugFlag;
extern int32_t smaDebugFlag;
extern int32_t idxDebugFlag;
extern int32_t tdbDebugFlag;
int32_t taosInitLog(const char *logName, int32_t maxFiles);
void taosCloseLog();
void taosResetLog();
void taosSetAllDebugFlag(int32_t flag);
void taosDumpData(uint8_t *msg, int32_t len);
void taosPrintLog(const char *flags, ELogLevel level, int32_t dflag, const char *format, ...)

View File

@ -2019,7 +2019,7 @@ int32_t transferTableNameList(const char* tbList, int32_t acctId, char* dbName,
}
if (('a' <= *(tbList + i) && 'z' >= *(tbList + i)) || ('A' <= *(tbList + i) && 'Z' >= *(tbList + i)) ||
('0' <= *(tbList + i) && '9' >= *(tbList + i))) {
('0' <= *(tbList + i) && '9' >= *(tbList + i)) || ('_' == *(tbList + i))) {
if (vLen[vIdx] > 0) {
goto _return;
}

View File

@ -973,7 +973,7 @@ int taos_load_table_info(TAOS *taos, const char *tableNameList) {
conn.mgmtEps = getEpSet_s(&pTscObj->pAppInfo->mgmtEp);
code = catalogAsyncGetAllMeta(pCtg, &conn, &catalogReq, syncCatalogFn, NULL, NULL);
code = catalogAsyncGetAllMeta(pCtg, &conn, &catalogReq, syncCatalogFn, pRequest->body.param, NULL);
if (code) {
goto _return;
}

View File

@ -1763,9 +1763,9 @@ char* dumpBlockData(SSDataBlock* pDataBlock, const char* flag, char** pDataBuf)
int32_t colNum = taosArrayGetSize(pDataBlock->pDataBlock);
int32_t rows = pDataBlock->info.rows;
int32_t len = 0;
len += snprintf(dumpBuf + len, size - len, "===stream===%s |block type %d |child id %d|group id:%" PRIu64 "| uid:%ld|\n", flag,
len += snprintf(dumpBuf + len, size - len, "===stream===%s |block type %d|child id %d|group id:%" PRIu64 "|uid:%ld|rows:%d\n", flag,
(int32_t)pDataBlock->info.type, pDataBlock->info.childId, pDataBlock->info.groupId,
pDataBlock->info.uid);
pDataBlock->info.uid, pDataBlock->info.rows);
if (len >= size - 1) return dumpBuf;
for (int32_t j = 0; j < rows; j++) {
@ -1878,7 +1878,7 @@ int32_t buildSubmitReqFromDataBlock(SSubmitReq** pReq, const SArray* pDataBlocks
msgLen += sizeof(SSubmitBlk);
int32_t dataLen = 0;
for (int32_t j = 0; j < rows; ++j) { // iterate by row
tdSRowResetBuf(&rb, POINTER_SHIFT(pDataBuf, msgLen)); // set row buf
tdSRowResetBuf(&rb, POINTER_SHIFT(pDataBuf, msgLen + dataLen)); // set row buf
bool isStartKey = false;
int32_t offset = 0;
for (int32_t k = 0; k < colNum; ++k) { // iterate by column

View File

@ -316,6 +316,7 @@ static int32_t taosAddServerLogCfg(SConfig *pCfg) {
if (cfgAddInt32(pCfg, "udfDebugFlag", udfDebugFlag, 0, 255, 0) != 0) return -1;
if (cfgAddInt32(pCfg, "smaDebugFlag", smaDebugFlag, 0, 255, 0) != 0) return -1;
if (cfgAddInt32(pCfg, "idxDebugFlag", idxDebugFlag, 0, 255, 0) != 0) return -1;
if (cfgAddInt32(pCfg, "tdbDebugFlag", tdbDebugFlag, 0, 255, 0) != 0) return -1;
return 0;
}
@ -506,6 +507,7 @@ static void taosSetServerLogCfg(SConfig *pCfg) {
udfDebugFlag = cfgGetItem(pCfg, "udfDebugFlag")->i32;
smaDebugFlag = cfgGetItem(pCfg, "smaDebugFlag")->i32;
idxDebugFlag = cfgGetItem(pCfg, "idxDebugFlag")->i32;
tdbDebugFlag = cfgGetItem(pCfg, "tdbDebugFlag")->i32;
}
static int32_t taosSetClientCfg(SConfig *pCfg) {
@ -950,6 +952,8 @@ int32_t taosSetCfg(SConfig *pCfg, char *name) {
uError("failed to create tempDir:%s since %s", tsTempDir, terrstr());
return -1;
}
} else if (strcasecmp("tdbDebugFlag", name) == 0) {
tdbDebugFlag = cfgGetItem(pCfg, "tdbDebugFlag")->i32;
} else if (strcasecmp("telemetryReporting", name) == 0) {
tsEnableTelem = cfgGetItem(pCfg, "telemetryReporting")->bval;
} else if (strcasecmp("telemetryInterval", name) == 0) {
@ -1143,18 +1147,22 @@ void taosCfgDynamicOptions(const char *option, const char *value) {
int32_t monitor = atoi(value);
uInfo("monitor set from %d to %d", tsEnableMonitor, monitor);
tsEnableMonitor = monitor;
SConfigItem *pItem = cfgGetItem(tsCfg, "monitor");
if (pItem != NULL) {
pItem->bval = tsEnableMonitor;
}
return;
}
const char *options[] = {
"dDebugFlag", "vDebugFlag", "mDebugFlag", "wDebugFlag", "sDebugFlag", "tsdbDebugFlag",
"tqDebugFlag", "fsDebugFlag", "udfDebugFlag", "smaDebugFlag", "idxDebugFlag", "tmrDebugFlag",
"uDebugFlag", "smaDebugFlag", "rpcDebugFlag", "qDebugFlag",
"dDebugFlag", "vDebugFlag", "mDebugFlag", "wDebugFlag", "sDebugFlag", "tsdbDebugFlag",
"tqDebugFlag", "fsDebugFlag", "udfDebugFlag", "smaDebugFlag", "idxDebugFlag", "tdbDebugFlag",
"tmrDebugFlag", "uDebugFlag", "smaDebugFlag", "rpcDebugFlag", "qDebugFlag",
};
int32_t *optionVars[] = {
&dDebugFlag, &vDebugFlag, &mDebugFlag, &wDebugFlag, &sDebugFlag, &tsdbDebugFlag,
&tqDebugFlag, &fsDebugFlag, &udfDebugFlag, &smaDebugFlag, &idxDebugFlag, &tmrDebugFlag,
&uDebugFlag, &smaDebugFlag, &rpcDebugFlag, &qDebugFlag,
&dDebugFlag, &vDebugFlag, &mDebugFlag, &wDebugFlag, &sDebugFlag, &tsdbDebugFlag,
&tqDebugFlag, &fsDebugFlag, &udfDebugFlag, &smaDebugFlag, &idxDebugFlag, &tdbDebugFlag,
&tmrDebugFlag, &uDebugFlag, &smaDebugFlag, &rpcDebugFlag, &qDebugFlag,
};
int32_t optionSize = tListLen(options);
@ -1166,8 +1174,40 @@ void taosCfgDynamicOptions(const char *option, const char *value) {
int32_t flag = atoi(value);
uInfo("%s set from %d to %d", optName, *optionVars[d], flag);
*optionVars[d] = flag;
taosSetDebugFlag(optionVars[d], optName, flag);
return;
}
uError("failed to cfg dynamic option:%s value:%s", option, value);
}
void taosSetDebugFlag(int32_t *pFlagPtr, const char *flagName, int32_t flagVal) {
SConfigItem *pItem = cfgGetItem(tsCfg, flagName);
if (pItem != NULL) {
pItem->i32 = flagVal;
}
*pFlagPtr = flagVal;
}
void taosSetAllDebugFlag(int32_t flag) {
if (flag <= 0) return;
taosSetDebugFlag(&uDebugFlag, "uDebugFlag", flag);
taosSetDebugFlag(&rpcDebugFlag, "rpcDebugFlag", flag);
taosSetDebugFlag(&jniDebugFlag, "jniDebugFlag", flag);
taosSetDebugFlag(&qDebugFlag, "qDebugFlag", flag);
taosSetDebugFlag(&cDebugFlag, "cDebugFlag", flag);
taosSetDebugFlag(&dDebugFlag, "dDebugFlag", flag);
taosSetDebugFlag(&vDebugFlag, "vDebugFlag", flag);
taosSetDebugFlag(&mDebugFlag, "mDebugFlag", flag);
taosSetDebugFlag(&wDebugFlag, "wDebugFlag", flag);
taosSetDebugFlag(&sDebugFlag, "sDebugFlag", flag);
taosSetDebugFlag(&tsdbDebugFlag, "tsdbDebugFlag", flag);
taosSetDebugFlag(&tqDebugFlag, "tqDebugFlag", flag);
taosSetDebugFlag(&fsDebugFlag, "fsDebugFlag", flag);
taosSetDebugFlag(&udfDebugFlag, "udfDebugFlag", flag);
taosSetDebugFlag(&smaDebugFlag, "smaDebugFlag", flag);
taosSetDebugFlag(&idxDebugFlag, "idxDebugFlag", flag);
taosSetDebugFlag(&tdbDebugFlag, "tdbDebugFlag", flag);
uInfo("all debug flag are set to %d", flag);
}

View File

@ -2018,6 +2018,10 @@ int32_t tSerializeSCreateDbReq(void *buf, int32_t bufLen, SCreateDbReq *pReq) {
if (tEncodeI8(&encoder, pReq->strict) < 0) return -1;
if (tEncodeI8(&encoder, pReq->cacheLast) < 0) return -1;
if (tEncodeI8(&encoder, pReq->schemaless) < 0) return -1;
if (tEncodeI32(&encoder, pReq->walRetentionPeriod) < 0) return -1;
if (tEncodeI32(&encoder, pReq->walRetentionSize) < 0) return -1;
if (tEncodeI32(&encoder, pReq->walRollPeriod) < 0) return -1;
if (tEncodeI32(&encoder, pReq->walSegmentSize) < 0) return -1;
if (tEncodeI8(&encoder, pReq->ignoreExist) < 0) return -1;
if (tEncodeI32(&encoder, pReq->numOfRetensions) < 0) return -1;
for (int32_t i = 0; i < pReq->numOfRetensions; ++i) {
@ -2060,6 +2064,10 @@ int32_t tDeserializeSCreateDbReq(void *buf, int32_t bufLen, SCreateDbReq *pReq)
if (tDecodeI8(&decoder, &pReq->strict) < 0) return -1;
if (tDecodeI8(&decoder, &pReq->cacheLast) < 0) return -1;
if (tDecodeI8(&decoder, &pReq->schemaless) < 0) return -1;
if (tDecodeI32(&decoder, &pReq->walRetentionPeriod) < 0) return -1;
if (tDecodeI32(&decoder, &pReq->walRetentionSize) < 0) return -1;
if (tDecodeI32(&decoder, &pReq->walRollPeriod) < 0) return -1;
if (tDecodeI32(&decoder, &pReq->walSegmentSize) < 0) return -1;
if (tDecodeI8(&decoder, &pReq->ignoreExist) < 0) return -1;
if (tDecodeI32(&decoder, &pReq->numOfRetensions) < 0) return -1;
pReq->pRetensions = taosArrayInit(pReq->numOfRetensions, sizeof(SRetention));
@ -3742,6 +3750,10 @@ int32_t tSerializeSCreateVnodeReq(void *buf, int32_t bufLen, SCreateVnodeReq *pR
uint32_t tsmaLen = (uint32_t)(htonl(((SMsgHead *)pReq->pTsma)->contLen));
if (tEncodeBinary(&encoder, (const uint8_t *)pReq->pTsma, tsmaLen) < 0) return -1;
}
if (tEncodeI32(&encoder, pReq->walRetentionPeriod) < 0) return -1;
if (tEncodeI64(&encoder, pReq->walRetentionSize) < 0) return -1;
if (tEncodeI32(&encoder, pReq->walRollPeriod) < 0) return -1;
if (tEncodeI64(&encoder, pReq->walSegmentSize) < 0) return -1;
tEndEncode(&encoder);
@ -3810,6 +3822,11 @@ int32_t tDeserializeSCreateVnodeReq(void *buf, int32_t bufLen, SCreateVnodeReq *
if (tDecodeBinary(&decoder, (uint8_t **)&pReq->pTsma, NULL) < 0) return -1;
}
if (tDecodeI32(&decoder, &pReq->walRetentionPeriod) < 0) return -1;
if (tDecodeI64(&decoder, &pReq->walRetentionSize) < 0) return -1;
if (tDecodeI32(&decoder, &pReq->walRollPeriod) < 0) return -1;
if (tDecodeI64(&decoder, &pReq->walSegmentSize) < 0) return -1;
tEndDecode(&decoder);
tDecoderClear(&decoder);
return 0;

View File

@ -160,6 +160,13 @@ static void vmGenerateVnodeCfg(SCreateVnodeReq *pCreate, SVnodeCfg *pCfg) {
}
pCfg->walCfg.vgId = pCreate->vgId;
pCfg->walCfg.fsyncPeriod = pCreate->fsyncPeriod;
pCfg->walCfg.retentionPeriod = pCreate->walRetentionPeriod;
pCfg->walCfg.rollPeriod = pCreate->walRollPeriod;
pCfg->walCfg.retentionSize = pCreate->walRetentionSize;
pCfg->walCfg.segSize = pCreate->walSegmentSize;
pCfg->walCfg.level = pCreate->walLevel;
pCfg->hashBegin = pCreate->hashBegin;
pCfg->hashEnd = pCreate->hashEnd;
pCfg->hashMethod = pCreate->hashMethod;

View File

@ -164,8 +164,8 @@ typedef struct {
int32_t lastErrorNo;
tmsg_t lastMsgType;
SEpSet lastEpset;
char dbname1[TSDB_DB_FNAME_LEN];
char dbname2[TSDB_DB_FNAME_LEN];
char dbname1[TSDB_TABLE_FNAME_LEN];
char dbname2[TSDB_TABLE_FNAME_LEN];
int32_t startFunc;
int32_t stopFunc;
int32_t paramLen;
@ -302,9 +302,13 @@ typedef struct {
int8_t strict;
int8_t hashMethod; // default is 1
int8_t cacheLast;
int8_t schemaless;
int32_t numOfRetensions;
SArray* pRetensions;
int8_t schemaless;
int32_t walRetentionPeriod;
int64_t walRetentionSize;
int32_t walRollPeriod;
int64_t walSegmentSize;
} SDbCfg;
typedef struct {

View File

@ -120,6 +120,10 @@ static SSdbRaw *mndDbActionEncode(SDbObj *pDb) {
SDB_SET_INT8(pRaw, dataPos, pRetension->keepUnit, _OVER)
}
SDB_SET_INT8(pRaw, dataPos, pDb->cfg.schemaless, _OVER)
SDB_SET_INT32(pRaw, dataPos, pDb->cfg.walRetentionPeriod, _OVER)
SDB_SET_INT64(pRaw, dataPos, pDb->cfg.walRetentionSize, _OVER)
SDB_SET_INT32(pRaw, dataPos, pDb->cfg.walRollPeriod, _OVER)
SDB_SET_INT64(pRaw, dataPos, pDb->cfg.walSegmentSize, _OVER)
SDB_SET_RESERVE(pRaw, dataPos, DB_RESERVE_SIZE, _OVER)
SDB_SET_DATALEN(pRaw, dataPos, _OVER)
@ -199,6 +203,10 @@ static SSdbRow *mndDbActionDecode(SSdbRaw *pRaw) {
}
}
SDB_GET_INT8(pRaw, dataPos, &pDb->cfg.schemaless, _OVER)
SDB_GET_INT32(pRaw, dataPos, &pDb->cfg.walRetentionPeriod, _OVER)
SDB_GET_INT64(pRaw, dataPos, &pDb->cfg.walRetentionSize, _OVER)
SDB_GET_INT32(pRaw, dataPos, &pDb->cfg.walRollPeriod, _OVER)
SDB_GET_INT64(pRaw, dataPos, &pDb->cfg.walSegmentSize, _OVER)
SDB_GET_RESERVE(pRaw, dataPos, DB_RESERVE_SIZE, _OVER)
taosInitRWLatch(&pDb->lock);
@ -318,6 +326,10 @@ static int32_t mndCheckDbCfg(SMnode *pMnode, SDbCfg *pCfg) {
terrno = TSDB_CODE_MND_NO_ENOUGH_DNODES;
return -1;
}
if (pCfg->walRetentionPeriod < TSDB_DB_MIN_WAL_RETENTION_PERIOD) return -1;
if (pCfg->walRetentionSize < TSDB_DB_MIN_WAL_RETENTION_SIZE) return -1;
if (pCfg->walRollPeriod < TSDB_DB_MIN_WAL_ROLL_PERIOD) return -1;
if (pCfg->walSegmentSize < TSDB_DB_MIN_WAL_SEGMENT_SIZE) return -1;
terrno = 0;
return terrno;
@ -345,6 +357,12 @@ static void mndSetDefaultDbCfg(SDbCfg *pCfg) {
if (pCfg->cacheLastSize <= 0) pCfg->cacheLastSize = TSDB_DEFAULT_CACHE_SIZE;
if (pCfg->numOfRetensions < 0) pCfg->numOfRetensions = 0;
if (pCfg->schemaless < 0) pCfg->schemaless = TSDB_DB_SCHEMALESS_OFF;
if (pCfg->walRetentionPeriod < 0 && pCfg->walRetentionPeriod != -1)
pCfg->walRetentionPeriod = TSDB_DEFAULT_DB_WAL_RETENTION_PERIOD;
if (pCfg->walRetentionSize < 0 && pCfg->walRetentionSize != -1)
pCfg->walRetentionSize = TSDB_DEFAULT_DB_WAL_RETENTION_SIZE;
if (pCfg->walRollPeriod < 0) pCfg->walRollPeriod = TSDB_DEFAULT_DB_WAL_ROLL_PERIOD;
if (pCfg->walSegmentSize < 0) pCfg->walSegmentSize = TSDB_DEFAULT_DB_WAL_SEGMENT_SIZE;
}
static int32_t mndSetCreateDbRedoLogs(SMnode *pMnode, STrans *pTrans, SDbObj *pDb, SVgObj *pVgroups) {
@ -457,6 +475,10 @@ static int32_t mndCreateDb(SMnode *pMnode, SRpcMsg *pReq, SCreateDbReq *pCreate,
.cacheLast = pCreate->cacheLast,
.hashMethod = 1,
.schemaless = pCreate->schemaless,
.walRetentionPeriod = pCreate->walRetentionPeriod,
.walRetentionSize = pCreate->walRetentionSize,
.walRollPeriod = pCreate->walRollPeriod,
.walSegmentSize = pCreate->walSegmentSize,
};
dbObj.cfg.numOfRetensions = pCreate->numOfRetensions;

View File

@ -788,9 +788,9 @@ _OVER:
static int32_t mndProcessConfigDnodeReq(SRpcMsg *pReq) {
SMnode *pMnode = pReq->info.node;
const char *options[] = {
"debugFlag", "dDebugFlag", "vDebugFlag", "mDebugFlag", "wDebugFlag", "sDebugFlag",
"tsdbDebugFlag", "tqDebugFlag", "fsDebugFlag", "udfDebugFlag", "smaDebugFlag", "idxDebugFlag",
"tmrDebugFlag", "uDebugFlag", "smaDebugFlag", "rpcDebugFlag", "qDebugFlag",
"debugFlag", "dDebugFlag", "vDebugFlag", "mDebugFlag", "wDebugFlag", "sDebugFlag",
"tsdbDebugFlag", "tqDebugFlag", "fsDebugFlag", "udfDebugFlag", "smaDebugFlag", "idxDebugFlag",
"tdbDebugFlag", "tmrDebugFlag", "uDebugFlag", "smaDebugFlag", "rpcDebugFlag", "qDebugFlag",
};
int32_t optionSize = tListLen(options);
@ -813,7 +813,6 @@ static int32_t mndProcessConfigDnodeReq(SRpcMsg *pReq) {
SEpSet epSet = mndGetDnodeEpset(pDnode);
mndReleaseDnode(pMnode, pDnode);
SDCfgDnodeReq dcfgReq = {0};
if (strcasecmp(cfgReq.config, "resetlog") == 0) {
strcpy(dcfgReq.config, "resetlog");
@ -839,7 +838,7 @@ static int32_t mndProcessConfigDnodeReq(SRpcMsg *pReq) {
if (strncasecmp(cfgReq.config, optName, optLen) != 0) continue;
const char *value = cfgReq.value;
int32_t flag = atoi(value);
int32_t flag = atoi(value);
if (flag <= 0) {
flag = atoi(cfgReq.config + optLen + 1);
}
@ -874,7 +873,7 @@ static int32_t mndProcessConfigDnodeReq(SRpcMsg *pReq) {
}
static int32_t mndProcessConfigDnodeRsp(SRpcMsg *pRsp) {
mInfo("config rsp from dnode, app:%p", pRsp->info.ahandle);
mInfo("config rsp from dnode");
return 0;
}

View File

@ -281,7 +281,7 @@ static int32_t mndSetDropOffsetRedoLogs(SMnode *pMnode, STrans *pTrans, SMqOffse
}
int32_t mndDropOffsetByDB(SMnode *pMnode, STrans *pTrans, SDbObj *pDb) {
int32_t code = -1;
int32_t code = 0;
SSdb *pSdb = pMnode->pSdb;
void *pIter = NULL;
@ -297,15 +297,15 @@ int32_t mndDropOffsetByDB(SMnode *pMnode, STrans *pTrans, SDbObj *pDb) {
if (mndSetDropOffsetCommitLogs(pMnode, pTrans, pOffset) < 0) {
sdbRelease(pSdb, pOffset);
goto END;
sdbCancelFetch(pSdb, pIter);
code = -1;
break;
}
sdbRelease(pSdb, pOffset);
}
code = 0;
END:
return code;
return code;
}
int32_t mndDropOffsetByTopic(SMnode *pMnode, STrans *pTrans, const char *topic) {

View File

@ -641,6 +641,7 @@ static int32_t mndSetCreateStbRedoActions(SMnode *pMnode, STrans *pTrans, SDbObj
action.contLen = contLen;
action.msgType = TDMT_VND_CREATE_STB;
action.acceptableCode = TSDB_CODE_TDB_STB_ALREADY_EXIST;
action.retryCode = TSDB_CODE_TDB_STB_NOT_EXIST;
if (mndTransAppendRedoAction(pTrans, &action) != 0) {
taosMemoryFree(pReq);
sdbCancelFetch(pSdb, pIter);
@ -805,7 +806,7 @@ _OVER:
}
int32_t mndAddStbToTrans(SMnode *pMnode, STrans *pTrans, SDbObj *pDb, SStbObj *pStb) {
mndTransSetDbName(pTrans, pDb->name, NULL);
mndTransSetDbName(pTrans, pDb->name, pStb->name);
if (mndSetCreateStbRedoLogs(pMnode, pTrans, pDb, pStb) != 0) return -1;
if (mndSetCreateStbUndoLogs(pMnode, pTrans, pDb, pStb) != 0) return -1;
if (mndSetCreateStbCommitLogs(pMnode, pTrans, pDb, pStb) != 0) return -1;
@ -1612,7 +1613,7 @@ static int32_t mndAlterStbImp(SMnode *pMnode, SRpcMsg *pReq, SDbObj *pDb, SStbOb
if (pTrans == NULL) goto _OVER;
mDebug("trans:%d, used to alter stb:%s", pTrans->id, pStb->name);
mndTransSetDbName(pTrans, pDb->name, NULL);
mndTransSetDbName(pTrans, pDb->name, pStb->name);
if (needRsp) {
void *pCont = NULL;
@ -1811,7 +1812,7 @@ static int32_t mndDropStb(SMnode *pMnode, SRpcMsg *pReq, SDbObj *pDb, SStbObj *p
if (pTrans == NULL) goto _OVER;
mDebug("trans:%d, used to drop stb:%s", pTrans->id, pStb->name);
mndTransSetDbName(pTrans, pDb->name, NULL);
mndTransSetDbName(pTrans, pDb->name, pStb->name);
if (mndSetDropStbRedoLogs(pMnode, pTrans, pStb) != 0) goto _OVER;
if (mndSetDropStbCommitLogs(pMnode, pTrans, pStb) != 0) goto _OVER;

View File

@ -824,7 +824,7 @@ int32_t mndSetDropSubCommitLogs(SMnode *pMnode, STrans *pTrans, SMqSubscribeObj
}
int32_t mndDropSubByDB(SMnode *pMnode, STrans *pTrans, SDbObj *pDb) {
int32_t code = -1;
int32_t code = 0;
SSdb *pSdb = pMnode->pSdb;
void *pIter = NULL;
@ -840,12 +840,14 @@ int32_t mndDropSubByDB(SMnode *pMnode, STrans *pTrans, SDbObj *pDb) {
if (mndSetDropSubCommitLogs(pMnode, pTrans, pSub) < 0) {
sdbRelease(pSdb, pSub);
goto END;
sdbCancelFetch(pSdb, pIter);
code = -1;
break;
}
sdbRelease(pSdb, pSub);
}
code = 0;
END:
return code;
}

View File

@ -833,7 +833,7 @@ static void mndCancelGetNextTopic(SMnode *pMnode, void *pIter) {
}
int32_t mndDropTopicByDB(SMnode *pMnode, STrans *pTrans, SDbObj *pDb) {
int32_t code = -1;
int32_t code = 0;
SSdb *pSdb = pMnode->pSdb;
void *pIter = NULL;
@ -848,11 +848,14 @@ int32_t mndDropTopicByDB(SMnode *pMnode, STrans *pTrans, SDbObj *pDb) {
}
if (mndSetDropTopicCommitLogs(pMnode, pTrans, pTopic) < 0) {
goto END;
sdbRelease(pSdb, pTopic);
sdbCancelFetch(pSdb, pIter);
code = -1;
break;
}
sdbRelease(pSdb, pTopic);
}
code = 0;
END:
return code;
}

View File

@ -127,8 +127,8 @@ static SSdbRaw *mndTransActionEncode(STrans *pTrans) {
SDB_SET_INT8(pRaw, dataPos, 0, _OVER)
SDB_SET_INT8(pRaw, dataPos, 0, _OVER)
SDB_SET_INT64(pRaw, dataPos, pTrans->createdTime, _OVER)
SDB_SET_BINARY(pRaw, dataPos, pTrans->dbname1, TSDB_DB_FNAME_LEN, _OVER)
SDB_SET_BINARY(pRaw, dataPos, pTrans->dbname2, TSDB_DB_FNAME_LEN, _OVER)
SDB_SET_BINARY(pRaw, dataPos, pTrans->dbname1, TSDB_TABLE_FNAME_LEN, _OVER)
SDB_SET_BINARY(pRaw, dataPos, pTrans->dbname2, TSDB_TABLE_FNAME_LEN, _OVER)
SDB_SET_INT32(pRaw, dataPos, pTrans->redoActionPos, _OVER)
int32_t redoActionNum = taosArrayGetSize(pTrans->redoActions);
@ -290,8 +290,8 @@ static SSdbRow *mndTransActionDecode(SSdbRaw *pRaw) {
pTrans->exec = exec;
pTrans->oper = oper;
SDB_GET_INT64(pRaw, dataPos, &pTrans->createdTime, _OVER)
SDB_GET_BINARY(pRaw, dataPos, pTrans->dbname1, TSDB_DB_FNAME_LEN, _OVER)
SDB_GET_BINARY(pRaw, dataPos, pTrans->dbname2, TSDB_DB_FNAME_LEN, _OVER)
SDB_GET_BINARY(pRaw, dataPos, pTrans->dbname1, TSDB_TABLE_FNAME_LEN, _OVER)
SDB_GET_BINARY(pRaw, dataPos, pTrans->dbname2, TSDB_TABLE_FNAME_LEN, _OVER)
SDB_GET_INT32(pRaw, dataPos, &pTrans->redoActionPos, _OVER)
SDB_GET_INT32(pRaw, dataPos, &redoActionNum, _OVER)
SDB_GET_INT32(pRaw, dataPos, &undoActionNum, _OVER)
@ -727,10 +727,10 @@ int32_t mndSetRpcInfoForDbTrans(SMnode *pMnode, SRpcMsg *pMsg, EOperType oper, c
void mndTransSetDbName(STrans *pTrans, const char *dbname1, const char *dbname2) {
if (dbname1 != NULL) {
memcpy(pTrans->dbname1, dbname1, TSDB_DB_FNAME_LEN);
tstrncpy(pTrans->dbname1, dbname1, TSDB_TABLE_FNAME_LEN);
}
if (dbname2 != NULL) {
memcpy(pTrans->dbname2, dbname2, TSDB_DB_FNAME_LEN);
tstrncpy(pTrans->dbname2, dbname2, TSDB_TABLE_FNAME_LEN);
}
}
@ -1289,6 +1289,19 @@ static bool mndTransPerformRedoActionStage(SMnode *pMnode, STrans *pTrans) {
} else {
pTrans->code = terrno;
if (pTrans->policy == TRN_POLICY_ROLLBACK) {
if (pTrans->lastAction != 0) {
STransAction *pAction = taosArrayGet(pTrans->redoActions, pTrans->lastAction);
if (pAction->retryCode != 0 && pAction->retryCode != pAction->errCode) {
if (pTrans->failedTimes < 6) {
mError("trans:%d, stage keep on redoAction since action:%d code:0x%x not 0x%x, failedTimes:%d", pTrans->id,
pTrans->lastAction, pTrans->code, pAction->retryCode, pTrans->failedTimes);
taosMsleep(1000);
continueExec = true;
return true;
}
}
}
pTrans->stage = TRN_STAGE_ROLLBACK;
mError("trans:%d, stage from redoAction to rollback since %s", pTrans->id, terrstr());
continueExec = true;

View File

@ -230,6 +230,10 @@ void *mndBuildCreateVnodeReq(SMnode *pMnode, SDnodeObj *pDnode, SDbObj *pDb, SVg
createReq.standby = standby;
createReq.isTsma = pVgroup->isTsma;
createReq.pTsma = pVgroup->pTsma;
createReq.walRetentionPeriod = pDb->cfg.walRetentionPeriod;
createReq.walRetentionSize = pDb->cfg.walRetentionSize;
createReq.walRollPeriod = pDb->cfg.walRollPeriod;
createReq.walSegmentSize = pDb->cfg.walSegmentSize;
for (int32_t v = 0; v < pVgroup->replica; ++v) {
SReplica *pReplica = &createReq.replicas[v];

View File

@ -104,6 +104,8 @@ typedef struct {
// TODO remove
SWalReader* pWalReader;
SWalRef* pRef;
// push
STqPushHandle pushHandle;

View File

@ -268,6 +268,7 @@ struct SVnode {
tsem_t canCommit;
int64_t sync;
int32_t blockCount;
bool restored;
tsem_t syncSem;
SQHandle* pQuery;
};

View File

@ -180,11 +180,41 @@ int metaClose(SMeta *pMeta) {
return 0;
}
int32_t metaRLock(SMeta *pMeta) { return taosThreadRwlockRdlock(&pMeta->lock); }
int32_t metaRLock(SMeta *pMeta) {
int32_t ret = 0;
int32_t metaWLock(SMeta *pMeta) { return taosThreadRwlockWrlock(&pMeta->lock); }
metaDebug("meta rlock %p B", &pMeta->lock);
int32_t metaULock(SMeta *pMeta) { return taosThreadRwlockUnlock(&pMeta->lock); }
ret = taosThreadRwlockRdlock(&pMeta->lock);
metaDebug("meta rlock %p E", &pMeta->lock);
return ret;
}
int32_t metaWLock(SMeta *pMeta) {
int32_t ret = 0;
metaDebug("meta wlock %p B", &pMeta->lock);
ret = taosThreadRwlockWrlock(&pMeta->lock);
metaDebug("meta wlock %p E", &pMeta->lock);
return ret;
}
int32_t metaULock(SMeta *pMeta) {
int32_t ret = 0;
metaDebug("meta ulock %p B", &pMeta->lock);
ret = taosThreadRwlockUnlock(&pMeta->lock);
metaDebug("meta ulock %p E", &pMeta->lock);
return ret;
}
static int tbDbKeyCmpr(const void *pKey1, int kLen1, const void *pKey2, int kLen2) {
STbDbKey *pTbDbKey1 = (STbDbKey *)pKey1;
@ -259,7 +289,7 @@ static int ctbIdxKeyCmpr(const void *pKey1, int kLen1, const void *pKey2, int kL
static int tagIdxKeyCmpr(const void *pKey1, int kLen1, const void *pKey2, int kLen2) {
STagIdxKey *pTagIdxKey1 = (STagIdxKey *)pKey1;
STagIdxKey *pTagIdxKey2 = (STagIdxKey *)pKey2;
tb_uid_t uid1, uid2;
tb_uid_t uid1 = 0, uid2 = 0;
int c;
// compare suid
@ -287,14 +317,15 @@ static int tagIdxKeyCmpr(const void *pKey1, int kLen1, const void *pKey2, int kL
// all not NULL, compr tag vals
c = doCompare(pTagIdxKey1->data, pTagIdxKey2->data, pTagIdxKey1->type, 0);
if (c) return c;
}
if (IS_VAR_DATA_TYPE(pTagIdxKey1->type)) {
uid1 = *(tb_uid_t *)(pTagIdxKey1->data + varDataTLen(pTagIdxKey1->data));
uid2 = *(tb_uid_t *)(pTagIdxKey2->data + varDataTLen(pTagIdxKey2->data));
} else {
uid1 = *(tb_uid_t *)(pTagIdxKey1->data + tDataTypes[pTagIdxKey1->type].bytes);
uid2 = *(tb_uid_t *)(pTagIdxKey2->data + tDataTypes[pTagIdxKey2->type].bytes);
}
// both null or tag values are equal, then continue to compare uids
if (IS_VAR_DATA_TYPE(pTagIdxKey1->type)) {
uid1 = *(tb_uid_t *)(pTagIdxKey1->data + varDataTLen(pTagIdxKey1->data));
uid2 = *(tb_uid_t *)(pTagIdxKey2->data + varDataTLen(pTagIdxKey2->data));
} else {
uid1 = *(tb_uid_t *)(pTagIdxKey1->data + tDataTypes[pTagIdxKey1->type].bytes);
uid2 = *(tb_uid_t *)(pTagIdxKey2->data + tDataTypes[pTagIdxKey2->type].bytes);
}
// compare uid

View File

@ -178,7 +178,7 @@ int metaCreateSTable(SMeta *pMeta, int64_t version, SVCreateStbReq *pReq) {
if (metaGetTableEntryByName(&mr, pReq->name) == 0) {
// TODO: just for pass case
#if 0
terrno = TSDB_CODE_TDB_TABLE_ALREADY_EXIST;
terrno = TSDB_CODE_TDB_STB_ALREADY_EXIST;
metaReaderClear(&mr);
return -1;
#else
@ -223,7 +223,7 @@ int metaDropSTable(SMeta *pMeta, int64_t verison, SVDropStbReq *pReq, SArray *tb
// check if super table exists
rc = tdbTbGet(pMeta->pNameIdx, pReq->name, strlen(pReq->name) + 1, &pData, &nData);
if (rc < 0 || *(tb_uid_t *)pData != pReq->suid) {
terrno = TSDB_CODE_VND_TABLE_NOT_EXIST;
terrno = TSDB_CODE_TDB_STB_NOT_EXIST;
return -1;
}

View File

@ -212,6 +212,15 @@ int32_t tqProcessOffsetCommitReq(STQ* pTq, char* msg, int32_t msgLen) {
ASSERT(0);
return -1;
}
if (offset.val.type == TMQ_OFFSET__LOG) {
STqHandle* pHandle = taosHashGet(pTq->handles, offset.subKey, strlen(offset.subKey));
if (walRefVer(pHandle->pRef, offset.val.version) < 0) {
ASSERT(0);
return -1;
}
}
/*}*/
/*}*/
@ -376,8 +385,8 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg) {
}
if (pHandle->execHandle.subType != TOPIC_SUB_TYPE__COLUMN) {
int64_t fetchVer = fetchOffsetNew.version + 1;
SWalCkHead* pCkHead = taosMemoryMalloc(sizeof(SWalCkHead) + 2048);
int64_t fetchVer = fetchOffsetNew.version + 1;
pCkHead = taosMemoryMalloc(sizeof(SWalCkHead) + 2048);
if (pCkHead == NULL) {
code = -1;
goto OVER;
@ -534,11 +543,14 @@ int32_t tqProcessVgChangeReq(STQ* pTq, char* msg, int32_t msgLen) {
pHandle->execHandle.subType = req.subType;
pHandle->fetchMeta = req.withMeta;
// TODO version should be assigned and refed during preprocess
SWalRef* pRef = walRefCommittedVer(pTq->pVnode->pWal);
if (pRef == NULL) {
ASSERT(0);
}
int64_t ver = pRef->refVer;
pHandle->pRef = pRef;
pHandle->pWalReader = walOpenReader(pTq->pVnode->pWal, NULL);
// TODO version should be assigned in preprocess
int64_t ver = walGetCommittedVer(pTq->pVnode->pWal);
if (pHandle->execHandle.subType == TOPIC_SUB_TYPE__COLUMN) {
pHandle->execHandle.execCol.qmsg = req.qmsg;
pHandle->snapshotVer = ver;
@ -560,14 +572,18 @@ int32_t tqProcessVgChangeReq(STQ* pTq, char* msg, int32_t msgLen) {
pHandle->execHandle.pExecReader = qExtractReaderFromStreamScanner(scanner);
ASSERT(pHandle->execHandle.pExecReader);
} else if (pHandle->execHandle.subType == TOPIC_SUB_TYPE__DB) {
pHandle->pWalReader = walOpenReader(pTq->pVnode->pWal, NULL);
pHandle->execHandle.pExecReader = tqOpenReader(pTq->pVnode);
pHandle->execHandle.execDb.pFilterOutTbUid =
taosHashInit(64, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), false, HASH_NO_LOCK);
} else if (pHandle->execHandle.subType == TOPIC_SUB_TYPE__TABLE) {
pHandle->pWalReader = walOpenReader(pTq->pVnode->pWal, NULL);
pHandle->execHandle.execTb.suid = req.suid;
SArray* tbUidList = taosArrayInit(0, sizeof(int64_t));
vnodeGetCtbIdList(pTq->pVnode, req.suid, tbUidList);
tqDebug("vgId:%d, tq try get suid:%" PRId64, pTq->pVnode->config.vgId, req.suid);
tqDebug("vgId:%d, tq try to get all ctb, suid:%" PRId64, pTq->pVnode->config.vgId, req.suid);
for (int32_t i = 0; i < taosArrayGetSize(tbUidList); i++) {
int64_t tbUid = *(int64_t*)taosArrayGet(tbUidList, i);
tqDebug("vgId:%d, idx %d, uid:%" PRId64, TD_VID(pTq->pVnode), i, tbUid);

View File

@ -52,7 +52,7 @@ int32_t tqMetaOpen(STQ* pTq) {
ASSERT(0);
}
TXN txn;
TXN txn = {0};
if (tdbTxnOpen(&txn, 0, tdbDefaultMalloc, tdbDefaultFree, NULL, 0) < 0) {
ASSERT(0);
@ -75,7 +75,13 @@ int32_t tqMetaOpen(STQ* pTq) {
STqHandle handle;
tDecoderInit(&decoder, (uint8_t*)pVal, vLen);
tDecodeSTqHandle(&decoder, &handle);
handle.pWalReader = walOpenReader(pTq->pVnode->pWal, NULL);
handle.pRef = walOpenRef(pTq->pVnode->pWal);
if (handle.pRef == NULL) {
ASSERT(0);
}
walRefVer(handle.pRef, handle.snapshotVer);
if (handle.execHandle.subType == TOPIC_SUB_TYPE__COLUMN) {
SReadHandle reader = {
.meta = pTq->pVnode->pMeta,
@ -94,6 +100,7 @@ int32_t tqMetaOpen(STQ* pTq) {
handle.execHandle.pExecReader = qExtractReaderFromStreamScanner(scanner);
ASSERT(handle.execHandle.pExecReader);
} else {
handle.pWalReader = walOpenReader(pTq->pVnode->pWal, NULL);
handle.execHandle.execDb.pFilterOutTbUid =
taosHashInit(64, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), false, HASH_NO_LOCK);
}

View File

@ -40,8 +40,8 @@ const SVnodeCfg vnodeCfgDefault = {.vgId = -1,
.vgId = -1,
.fsyncPeriod = 0,
.retentionPeriod = -1,
.rollPeriod = -1,
.segSize = -1,
.rollPeriod = 0,
.segSize = 0,
.retentionSize = -1,
.level = TAOS_WAL_WRITE,
},

View File

@ -16,23 +16,28 @@
#define _DEFAULT_SOURCE
#include "vnd.h"
#define BATCH_DISABLE 1
static inline bool vnodeIsMsgBlock(tmsg_t type) {
return (type == TDMT_VND_CREATE_TABLE) || (type == TDMT_VND_CREATE_TABLE) || (type == TDMT_VND_CREATE_TABLE) ||
(type == TDMT_VND_ALTER_TABLE) || (type == TDMT_VND_DROP_TABLE) || (type == TDMT_VND_UPDATE_TAG_VAL);
(type == TDMT_VND_ALTER_TABLE) || (type == TDMT_VND_DROP_TABLE) || (type == TDMT_VND_UPDATE_TAG_VAL) ||
(type == TDMT_VND_ALTER_REPLICA);
}
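// no message type is proposed with weak consistency yet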
static inline bool vnodeIsMsgWeak(tmsg_t type) { return false; }
static inline void vnodeWaitBlockMsg(SVnode *pVnode, const SRpcMsg *pMsg) {
if (vnodeIsMsgBlock(pMsg->msgType)) {
vTrace("vgId:%d, msg:%p wait block, type:%s", pVnode->config.vgId, pMsg, TMSG_INFO(pMsg->msgType));
const STraceId *trace = &pMsg->info.traceId;
vGTrace("vgId:%d, msg:%p wait block, type:%s", pVnode->config.vgId, pMsg, TMSG_INFO(pMsg->msgType));
tsem_wait(&pVnode->syncSem);
}
}
static inline void vnodePostBlockMsg(SVnode *pVnode, const SRpcMsg *pMsg) {
if (vnodeIsMsgBlock(pMsg->msgType)) {
vTrace("vgId:%d, msg:%p post block, type:%s", pVnode->config.vgId, pMsg, TMSG_INFO(pMsg->msgType));
const STraceId *trace = &pMsg->info.traceId;
vGTrace("vgId:%d, msg:%p post block, type:%s", pVnode->config.vgId, pMsg, TMSG_INFO(pMsg->msgType));
tsem_post(&pVnode->syncSem);
}
}
@ -124,60 +129,147 @@ void vnodeRedirectRpcMsg(SVnode *pVnode, SRpcMsg *pMsg) {
tmsgSendRedirectRsp(&rsp, &newEpSet);
}
void vnodeProposeWriteMsg(SQueueInfo *pInfo, STaosQall *qall, int32_t numOfMsgs) {
SVnode *pVnode = pInfo->ahandle;
int32_t vgId = pVnode->config.vgId;
int32_t code = 0;
SRpcMsg *pMsg = NULL;
vTrace("vgId:%d, get %d msgs from vnode-write queue", vgId, numOfMsgs);
for (int32_t m = 0; m < numOfMsgs; m++) {
if (taosGetQitem(qall, (void **)&pMsg) == 0) continue;
static void inline vnodeHandleWriteMsg(SVnode *pVnode, SRpcMsg *pMsg) {
SRpcMsg rsp = {.code = pMsg->code, .info = pMsg->info};
if (vnodeProcessWriteMsg(pVnode, pMsg, pMsg->info.conn.applyIndex, &rsp) < 0) {
rsp.code = terrno;
const STraceId *trace = &pMsg->info.traceId;
vGTrace("vgId:%d, msg:%p get from vnode-write queue handle:%p", vgId, pMsg, pMsg->info.handle);
vGError("vgId:%d, msg:%p failed to apply right now since %s", pVnode->config.vgId, pMsg, terrstr());
}
if (rsp.info.handle != NULL) {
tmsgSendRsp(&rsp);
}
}
code = vnodePreProcessWriteMsg(pVnode, pMsg);
if (code != 0) {
vError("vgId:%d, msg:%p failed to pre-process since %s", vgId, pMsg, terrstr());
} else {
if (pMsg->msgType == TDMT_VND_ALTER_REPLICA) {
code = vnodeProcessAlterReplicaReq(pVnode, pMsg);
} else {
code = syncPropose(pVnode->sync, pMsg, vnodeIsMsgWeak(pMsg->msgType));
if (code > 0) {
SRpcMsg rsp = {.code = pMsg->code, .info = pMsg->info};
if (vnodeProcessWriteMsg(pVnode, pMsg, pMsg->info.conn.applyIndex, &rsp) < 0) {
rsp.code = terrno;
vError("vgId:%d, msg:%p failed to apply right now since %s", vgId, pMsg, terrstr());
}
if (rsp.info.handle != NULL) {
tmsgSendRsp(&rsp);
}
} else if (code == 0) {
vnodeWaitBlockMsg(pVnode, pMsg);
} else {
}
}
static void vnodeHandleProposeError(SVnode *pVnode, SRpcMsg *pMsg, int32_t code) {
if (code == TSDB_CODE_SYN_NOT_LEADER) {
vnodeRedirectRpcMsg(pVnode, pMsg);
} else {
const STraceId *trace = &pMsg->info.traceId;
vGError("vgId:%d, msg:%p failed to propose since %s, code:0x%x", pVnode->config.vgId, pMsg, tstrerror(code), code);
SRpcMsg rsp = {.code = code, .info = pMsg->info};
if (rsp.info.handle != NULL) {
tmsgSendRsp(&rsp);
}
}
}
if (code < 0) {
if (terrno == TSDB_CODE_SYN_NOT_LEADER) {
vnodeRedirectRpcMsg(pVnode, pMsg);
} else {
if (terrno != 0) code = terrno;
vError("vgId:%d, msg:%p failed to propose since %s, code:0x%x", vgId, pMsg, tstrerror(code), code);
SRpcMsg rsp = {.code = code, .info = pMsg->info};
if (rsp.info.handle != NULL) {
tmsgSendRsp(&rsp);
}
}
static void vnodeHandleAlterReplicaReq(SVnode *pVnode, SRpcMsg *pMsg) {
int32_t code = vnodeProcessAlterReplicaReq(pVnode, pMsg);
if (code > 0) {
ASSERT(0);
} else if (code == 0) {
vnodeWaitBlockMsg(pVnode, pMsg);
} else {
if (terrno != 0) code = terrno;
vnodeHandleProposeError(pVnode, pMsg, code);
}
const STraceId *trace = &pMsg->info.traceId;
vGTrace("vgId:%d, msg:%p is freed, code:0x%x", pVnode->config.vgId, pMsg, code);
rpcFreeCont(pMsg->pCont);
taosFreeQitem(pMsg);
}
static void inline vnodeProposeBatchMsg(SVnode *pVnode, SRpcMsg **pMsgArr, bool *pIsWeakArr, int32_t *arrSize) {
if (*arrSize <= 0) return;
#if BATCH_DISABLE
int32_t code = syncPropose(pVnode->sync, pMsgArr[0], pIsWeakArr[0]);
#else
int32_t code = syncProposeBatch(pVnode->sync, pMsgArr, pIsWeakArr, *arrSize);
#endif
if (code > 0) {
for (int32_t i = 0; i < *arrSize; ++i) {
vnodeHandleWriteMsg(pVnode, pMsgArr[i]);
}
} else if (code == 0) {
vnodeWaitBlockMsg(pVnode, pMsgArr[*arrSize - 1]);
} else {
if (terrno != 0) code = terrno;
for (int32_t i = 0; i < *arrSize; ++i) {
vnodeHandleProposeError(pVnode, pMsgArr[i], code);
}
}
vGTrace("vgId:%d, msg:%p is freed, code:0x%x", vgId, pMsg, code);
for (int32_t i = 0; i < *arrSize; ++i) {
SRpcMsg *pMsg = pMsgArr[i];
const STraceId *trace = &pMsg->info.traceId;
vGTrace("vgId:%d, msg:%p is freed, code:0x%x", pVnode->config.vgId, pMsg, code);
rpcFreeCont(pMsg->pCont);
taosFreeQitem(pMsg);
}
*arrSize = 0;
}
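// drain the write queue: a blocking message flushes the pending batch before it is queued and is then proposed on its own, so blocking messages are waited on in order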
void vnodeProposeWriteMsg(SQueueInfo *pInfo, STaosQall *qall, int32_t numOfMsgs) {
SVnode *pVnode = pInfo->ahandle;
int32_t vgId = pVnode->config.vgId;
int32_t code = 0;
SRpcMsg *pMsg = NULL;
int32_t arrayPos = 0;
SRpcMsg **pMsgArr = taosMemoryCalloc(numOfMsgs, sizeof(SRpcMsg *));
bool *pIsWeakArr = taosMemoryCalloc(numOfMsgs, sizeof(bool));
vTrace("vgId:%d, get %d msgs from vnode-write queue", vgId, numOfMsgs);
for (int32_t msg = 0; msg < numOfMsgs; msg++) {
if (taosGetQitem(qall, (void **)&pMsg) == 0) continue;
bool isWeak = vnodeIsMsgWeak(pMsg->msgType);
bool isBlock = vnodeIsMsgBlock(pMsg->msgType);
const STraceId *trace = &pMsg->info.traceId;
vGTrace("vgId:%d, msg:%p get from vnode-write queue, weak:%d block:%d msg:%d:%d pos:%d, handle:%p", vgId, pMsg,
isWeak, isBlock, msg, numOfMsgs, arrayPos, pMsg->info.handle);
if (!pVnode->restored) {
vGError("vgId:%d, msg:%p failed to process since not leader", vgId, pMsg);
terrno = TSDB_CODE_APP_NOT_READY;
vnodeHandleProposeError(pVnode, pMsg, TSDB_CODE_APP_NOT_READY);
rpcFreeCont(pMsg->pCont);
taosFreeQitem(pMsg);
continue;
}
if (pMsgArr == NULL || pIsWeakArr == NULL) {
vGError("vgId:%d, msg:%p failed to process since out of memory", vgId, pMsg);
terrno = TSDB_CODE_OUT_OF_MEMORY;
vnodeHandleProposeError(pVnode, pMsg, terrno);
rpcFreeCont(pMsg->pCont);
taosFreeQitem(pMsg);
continue;
}
code = vnodePreProcessWriteMsg(pVnode, pMsg);
if (code != 0) {
vGError("vgId:%d, msg:%p failed to pre-process since %s", vgId, pMsg, terrstr());
rpcFreeCont(pMsg->pCont);
taosFreeQitem(pMsg);
continue;
}
if (pMsg->msgType == TDMT_VND_ALTER_REPLICA) {
vnodeHandleAlterReplicaReq(pVnode, pMsg);
continue;
}
if (isBlock || BATCH_DISABLE) {
vnodeProposeBatchMsg(pVnode, pMsgArr, pIsWeakArr, &arrayPos);
}
pMsgArr[arrayPos] = pMsg;
pIsWeakArr[arrayPos] = isWeak;
arrayPos++;
if (isBlock || msg == numOfMsgs - 1 || BATCH_DISABLE) {
vnodeProposeBatchMsg(pVnode, pMsgArr, pIsWeakArr, &arrayPos);
}
}
taosMemoryFree(pMsgArr);
taosMemoryFree(pIsWeakArr);
}
void vnodeApplyWriteMsg(SQueueInfo *pInfo, STaosQall *qall, int32_t numOfMsgs) {
@ -409,34 +501,56 @@ static void vnodeSyncReconfig(struct SSyncFSM *pFsm, const SRpcMsg *pMsg, SReCon
}
static void vnodeSyncCommitMsg(SSyncFSM *pFsm, const SRpcMsg *pMsg, SFsmCbMeta cbMeta) {
SVnode *pVnode = pFsm->data;
vTrace("vgId:%d, commit-cb is excuted, fsm:%p, index:%" PRId64 ", isWeak:%d, code:%d, state:%d %s, msgtype:%d %s",
syncGetVgId(pVnode->sync), pFsm, cbMeta.index, cbMeta.isWeak, cbMeta.code, cbMeta.state,
syncUtilState2String(cbMeta.state), pMsg->msgType, TMSG_INFO(pMsg->msgType));
if (cbMeta.isWeak == 0) {
SVnode *pVnode = pFsm->data;
vTrace("vgId:%d, commit-cb is excuted, fsm:%p, index:%" PRId64 ", isWeak:%d, code:%d, state:%d %s, msgtype:%d %s",
syncGetVgId(pVnode->sync), pFsm, cbMeta.index, cbMeta.isWeak, cbMeta.code, cbMeta.state,
syncUtilState2String(cbMeta.state), pMsg->msgType, TMSG_INFO(pMsg->msgType));
if (cbMeta.code == 0) {
SRpcMsg rpcMsg = {.msgType = pMsg->msgType, .contLen = pMsg->contLen};
rpcMsg.pCont = rpcMallocCont(rpcMsg.contLen);
memcpy(rpcMsg.pCont, pMsg->pCont, pMsg->contLen);
syncGetAndDelRespRpc(pVnode->sync, cbMeta.seqNum, &rpcMsg.info);
rpcMsg.info.conn.applyIndex = cbMeta.index;
rpcMsg.info.conn.applyTerm = cbMeta.term;
tmsgPutToQueue(&pVnode->msgCb, APPLY_QUEUE, &rpcMsg);
} else {
SRpcMsg rsp = {.code = cbMeta.code, .info = pMsg->info};
vError("vgId:%d, sync commit error, msgtype:%d,%s, error:0x%X, errmsg:%s", syncGetVgId(pVnode->sync), pMsg->msgType,
TMSG_INFO(pMsg->msgType), cbMeta.code, tstrerror(cbMeta.code));
if (rsp.info.handle != NULL) {
tmsgSendRsp(&rsp);
if (cbMeta.code == 0) {
SRpcMsg rpcMsg = {.msgType = pMsg->msgType, .contLen = pMsg->contLen};
rpcMsg.pCont = rpcMallocCont(rpcMsg.contLen);
memcpy(rpcMsg.pCont, pMsg->pCont, pMsg->contLen);
syncGetAndDelRespRpc(pVnode->sync, cbMeta.seqNum, &rpcMsg.info);
rpcMsg.info.conn.applyIndex = cbMeta.index;
rpcMsg.info.conn.applyTerm = cbMeta.term;
tmsgPutToQueue(&pVnode->msgCb, APPLY_QUEUE, &rpcMsg);
} else {
SRpcMsg rsp = {.code = cbMeta.code, .info = pMsg->info};
vError("vgId:%d, sync commit error, msgtype:%d,%s, error:0x%X, errmsg:%s", syncGetVgId(pVnode->sync),
pMsg->msgType, TMSG_INFO(pMsg->msgType), cbMeta.code, tstrerror(cbMeta.code));
if (rsp.info.handle != NULL) {
tmsgSendRsp(&rsp);
}
}
}
}
static void vnodeSyncPreCommitMsg(SSyncFSM *pFsm, const SRpcMsg *pMsg, SFsmCbMeta cbMeta) {
SVnode *pVnode = pFsm->data;
vTrace("vgId:%d, pre-commit-cb is excuted, fsm:%p, index:%" PRId64 ", isWeak:%d, code:%d, state:%d %s, msgtype:%d %s",
syncGetVgId(pVnode->sync), pFsm, cbMeta.index, cbMeta.isWeak, cbMeta.code, cbMeta.state,
syncUtilState2String(cbMeta.state), pMsg->msgType, TMSG_INFO(pMsg->msgType));
if (cbMeta.isWeak == 1) {
SVnode *pVnode = pFsm->data;
vTrace("vgId:%d, pre-commit-cb is excuted, fsm:%p, index:%" PRId64
", isWeak:%d, code:%d, state:%d %s, msgtype:%d %s",
syncGetVgId(pVnode->sync), pFsm, cbMeta.index, cbMeta.isWeak, cbMeta.code, cbMeta.state,
syncUtilState2String(cbMeta.state), pMsg->msgType, TMSG_INFO(pMsg->msgType));
if (cbMeta.code == 0) {
SRpcMsg rpcMsg = {.msgType = pMsg->msgType, .contLen = pMsg->contLen};
rpcMsg.pCont = rpcMallocCont(rpcMsg.contLen);
memcpy(rpcMsg.pCont, pMsg->pCont, pMsg->contLen);
syncGetAndDelRespRpc(pVnode->sync, cbMeta.seqNum, &rpcMsg.info);
rpcMsg.info.conn.applyIndex = cbMeta.index;
rpcMsg.info.conn.applyTerm = cbMeta.term;
tmsgPutToQueue(&pVnode->msgCb, APPLY_QUEUE, &rpcMsg);
} else {
SRpcMsg rsp = {.code = cbMeta.code, .info = pMsg->info};
vError("vgId:%d, sync pre-commit error, msgtype:%d,%s, error:0x%X, errmsg:%s", syncGetVgId(pVnode->sync),
pMsg->msgType, TMSG_INFO(pMsg->msgType), cbMeta.code, tstrerror(cbMeta.code));
if (rsp.info.handle != NULL) {
tmsgSendRsp(&rsp);
}
}
}
}
static void vnodeSyncRollBackMsg(SSyncFSM *pFsm, const SRpcMsg *pMsg, SFsmCbMeta cbMeta) {
@ -527,6 +641,12 @@ static void vnodeLeaderTransfer(struct SSyncFSM *pFsm, const SRpcMsg *pMsg, SFsm
SVnode *pVnode = pFsm->data;
}
static void vnodeRestoreFinish(struct SSyncFSM *pFsm) {
SVnode *pVnode = pFsm->data;
pVnode->restored = true;
vDebug("vgId:%d, sync restore finished", pVnode->config.vgId);
}
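// registered below as FpRestoreFinishCb; until it fires, vnodeIsLeader rejects requests with TSDB_CODE_APP_NOT_READY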
static SSyncFSM *vnodeSyncMakeFsm(SVnode *pVnode) {
SSyncFSM *pFsm = taosMemoryCalloc(1, sizeof(SSyncFSM));
pFsm->data = pVnode;
@ -534,7 +654,7 @@ static SSyncFSM *vnodeSyncMakeFsm(SVnode *pVnode) {
pFsm->FpPreCommitCb = vnodeSyncPreCommitMsg;
pFsm->FpRollBackCb = vnodeSyncRollBackMsg;
pFsm->FpGetSnapshotInfo = vnodeSyncGetSnapshot;
pFsm->FpRestoreFinishCb = NULL;
pFsm->FpRestoreFinishCb = vnodeRestoreFinish;
pFsm->FpLeaderTransferCb = vnodeLeaderTransfer;
pFsm->FpReConfigCb = vnodeSyncReconfig;
pFsm->FpSnapshotStartRead = vnodeSnapshotStartRead;
@ -588,11 +708,10 @@ bool vnodeIsLeader(SVnode *pVnode) {
return false;
}
// todo
// if (!pVnode->restored) {
// terrno = TSDB_CODE_APP_NOT_READY;
// return false;
// }
if (!pVnode->restored) {
terrno = TSDB_CODE_APP_NOT_READY;
return false;
}
return true;
}

View File

@ -135,7 +135,7 @@ int32_t qExplainGenerateResChildren(SPhysiNode *pNode, SExplainGroup *group, SNo
break;
}
case QUERY_NODE_PHYSICAL_PLAN_MERGE_JOIN: {
SJoinPhysiNode *pJoinNode = (SJoinPhysiNode *)pNode;
SSortMergeJoinPhysiNode *pJoinNode = (SSortMergeJoinPhysiNode *)pNode;
pPhysiChildren = pJoinNode->node.pChildren;
break;
}
@ -434,7 +434,8 @@ int32_t qExplainResNodeToRowsImpl(SExplainResNode *pResNode, SExplainCtx *ctx, i
case QUERY_NODE_PHYSICAL_PLAN_TABLE_SCAN: {
STableScanPhysiNode *pTblScanNode = (STableScanPhysiNode *)pNode;
EXPLAIN_ROW_NEW(level,
QUERY_NODE_PHYSICAL_PLAN_TABLE_MERGE_SCAN == pNode->type ? EXPLAIN_TBL_MERGE_SCAN_FORMAT : EXPLAIN_TBL_SCAN_FORMAT,
QUERY_NODE_PHYSICAL_PLAN_TABLE_MERGE_SCAN == pNode->type ? EXPLAIN_TBL_MERGE_SCAN_FORMAT
: EXPLAIN_TBL_SCAN_FORMAT,
pTblScanNode->scan.tableName.tname);
EXPLAIN_ROW_APPEND(EXPLAIN_LEFT_PARENTHESIS_FORMAT);
if (pResNode->pExecInfo) {
@ -551,7 +552,7 @@ int32_t qExplainResNodeToRowsImpl(SExplainResNode *pResNode, SExplainCtx *ctx, i
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
if (pSTblScanNode->scan.pScanPseudoCols) {
EXPLAIN_ROW_APPEND(EXPLAIN_PSEUDO_COLUMNS_FORMAT, pSTblScanNode->scan.pScanPseudoCols->length);
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
}
EXPLAIN_ROW_APPEND(EXPLAIN_WIDTH_FORMAT, pSTblScanNode->scan.node.pOutputDataBlockDesc->totalRowSize);
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
@ -613,7 +614,7 @@ int32_t qExplainResNodeToRowsImpl(SExplainResNode *pResNode, SExplainCtx *ctx, i
break;
}
case QUERY_NODE_PHYSICAL_PLAN_MERGE_JOIN: {
SJoinPhysiNode *pJoinNode = (SJoinPhysiNode *)pNode;
SSortMergeJoinPhysiNode *pJoinNode = (SSortMergeJoinPhysiNode *)pNode;
EXPLAIN_ROW_NEW(level, EXPLAIN_JOIN_FORMAT, EXPLAIN_JOIN_STRING(pJoinNode->joinType));
EXPLAIN_ROW_APPEND(EXPLAIN_LEFT_PARENTHESIS_FORMAT);
if (pResNode->pExecInfo) {
@ -1180,7 +1181,7 @@ int32_t qExplainResNodeToRowsImpl(SExplainResNode *pResNode, SExplainCtx *ctx, i
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
if (pDistScanNode->pScanPseudoCols) {
EXPLAIN_ROW_APPEND(EXPLAIN_PSEUDO_COLUMNS_FORMAT, pDistScanNode->pScanPseudoCols->length);
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
}
EXPLAIN_ROW_APPEND(EXPLAIN_WIDTH_FORMAT, pDistScanNode->node.pOutputDataBlockDesc->totalRowSize);
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
@ -1367,7 +1368,7 @@ int32_t qExplainResNodeToRowsImpl(SExplainResNode *pResNode, SExplainCtx *ctx, i
EXPLAIN_ROW_APPEND(EXPLAIN_FUNCTIONS_FORMAT, pInterpNode->pFuncs->length);
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
}
EXPLAIN_ROW_APPEND(EXPLAIN_MODE_FORMAT, nodesGetFillModeString(pInterpNode->fillMode));
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
@ -1419,7 +1420,7 @@ int32_t qExplainResNodeToRowsImpl(SExplainResNode *pResNode, SExplainCtx *ctx, i
}
}
break;
}
}
default:
qError("not supported physical node type %d", pNode->type);
return TSDB_CODE_QRY_APP_ERROR;

View File

@ -320,6 +320,49 @@ typedef struct STableScanInfo {
int8_t noTable;
} STableScanInfo;
typedef struct STableMergeScanInfo {
STableListInfo* tableListInfo;
int32_t tableStartIndex;
int32_t tableEndIndex;
bool hasGroupId;
uint64_t groupId;
SArray* dataReaders; // array of tsdbReaderT*
SReadHandle readHandle;
int32_t bufPageSize;
uint32_t sortBufSize; // max buffer size for in-memory sort
SArray* pSortInfo;
SSortHandle* pSortHandle;
SSDataBlock* pSortInputBlock;
int64_t startTs; // sort start time
SArray* sortSourceParams;
SFileBlockLoadRecorder readRecorder;
int64_t numOfRows;
SScanInfo scanInfo;
int32_t scanTimes;
SNode* pFilterNode; // filter info, which is pushed down by the optimizer
SqlFunctionCtx* pCtx; // belongs to the query context of the direct upstream operator
SResultRowInfo* pResultRowInfo;
int32_t* rowEntryInfoOffset;
SExprInfo* pExpr;
SSDataBlock* pResBlock;
SArray* pColMatchInfo;
int32_t numOfOutput;
SExprSupp pseudoSup;
SQueryTableDataCond cond;
int32_t scanFlag; // table scan flag to denote if it is a repeat/reverse/main scan
int32_t dataBlockLoadFlag;
// if the upstream is an interval operator, the interval info is also kept here to get the time
// window to check if the current data block needs to be loaded.
SInterval interval;
SSampleExecInfo sample; // sample execution info
SSortExecInfo sortExecInfo;
} STableMergeScanInfo;
typedef struct STagScanInfo {
SColumnInfo *pCols;
SSDataBlock *pRes;
@ -886,7 +929,7 @@ SOperatorInfo* createPartitionOperatorInfo(SOperatorInfo* downstream, SPartition
SOperatorInfo* createTimeSliceOperatorInfo(SOperatorInfo* downstream, SPhysiNode* pNode, SExecTaskInfo* pTaskInfo);
SOperatorInfo* createMergeJoinOperatorInfo(SOperatorInfo** pDownstream, int32_t numOfDownstream, SJoinPhysiNode* pJoinNode,
SOperatorInfo* createMergeJoinOperatorInfo(SOperatorInfo** pDownstream, int32_t numOfDownstream, SSortMergeJoinPhysiNode* pJoinNode,
SExecTaskInfo* pTaskInfo);
SOperatorInfo* createStreamSessionAggOperatorInfo(SOperatorInfo* downstream,
@ -959,6 +1002,7 @@ int32_t updateSessionWindowInfo(SResultWindowInfo* pWinInfo, TSKEY* pStartTs,
bool functionNeedToExecute(SqlFunctionCtx* pCtx);
bool isCloseWindow(STimeWindow* pWin, STimeWindowAggSupp* pSup);
void appendOneRow(SSDataBlock* pBlock, TSKEY* pStartTs, TSKEY* pEndTs, uint64_t* pUid);
void printDataBlock(SSDataBlock* pBlock, const char* flag);
int32_t finalizeResultRowIntoResultDataBlock(SDiskbasedBuf* pBuf, SResultRowPosition* resultRowPosition,
SqlFunctionCtx* pCtx, SExprInfo* pExprInfo, int32_t numOfExprs, const int32_t* rowCellOffset,

View File

@ -182,7 +182,8 @@ static int32_t getDataBlock(SDataSinkHandle* pHandle, SOutputData* pOutput) {
return TSDB_CODE_SUCCESS;
}
SDataCacheEntry* pEntry = (SDataCacheEntry*)(pDeleter->nextOutput.pData);
memcpy(pOutput->pData, pEntry->data, pEntry->dataLen);
memcpy(pOutput->pData, pEntry->data, pEntry->dataLen);
pDeleter->pParam->pUidList = NULL;
pOutput->numOfRows = pEntry->numOfRows;
pOutput->numOfCols = pEntry->numOfCols;
pOutput->compressed = pEntry->compressed;
@ -205,6 +206,8 @@ static int32_t destroyDataSinker(SDataSinkHandle* pHandle) {
SDataDeleterHandle* pDeleter = (SDataDeleterHandle*)pHandle;
atomic_sub_fetch_64(&gDataSinkStat.cachedSize, pDeleter->cachedSize);
taosMemoryFreeClear(pDeleter->nextOutput.pData);
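  // pUidList may already be NULL if getDataBlock handed it off to the output; taosArrayDestroy is assumed to tolerate that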
taosArrayDestroy(pDeleter->pParam->pUidList);
taosMemoryFree(pDeleter->pParam);
while (!taosQueueEmpty(pDeleter->pDataBlocks)) {
SDataDeleterBuf* pBuf = NULL;
taosReadQitem(pDeleter->pDataBlocks, (void**)&pBuf);

View File

@ -666,6 +666,11 @@ int32_t projectApplyFunctions(SExprInfo* pExpr, SSDataBlock* pResult, SSDataBloc
pfCtx->pTsOutput = (SColumnInfoData*)pCtx[*outputColIndex].pOutput;
}
// link pDstBlock to set selectivity value
if (pfCtx->subsidiaries.num > 0) {
pfCtx->pDstBlock = pResult;
}
numOfRows = pfCtx->fpSet.process(pfCtx);
} else if (fmIsAggFunc(pfCtx->functionId)) {
// _group_key function for "partition by tbname" + csum(col_name) query
@ -1325,7 +1330,7 @@ void doFilter(const SNode* pFilterNode, SSDataBlock* pBlock, const SArray* pColM
extractQualifiedTupleByFilterResult(pBlock, rowRes, keep);
if (pColMatchInfo != NULL) {
for(int32_t i = 0; i < taosArrayGetSize(pColMatchInfo); ++i) {
for (int32_t i = 0; i < taosArrayGetSize(pColMatchInfo); ++i) {
SColMatchInfo* pInfo = taosArrayGet(pColMatchInfo, i);
if (pInfo->colId == PRIMARYKEY_TIMESTAMP_COL_ID) {
SColumnInfoData* pColData = taosArrayGet(pBlock->pDataBlock, pInfo->targetSlotId);
@ -1646,10 +1651,10 @@ void queryCostStatis(SExecTaskInfo* pTaskInfo) {
SFileBlockLoadRecorder* pRecorder = pSummary->pRecoder;
if (pSummary->pRecoder != NULL) {
qDebug(
"%s :cost summary: elapsed time:%.2f ms, total blocks:%d, load block SMA:%d, load data block:%d, total rows:%"
PRId64 ", check rows:%" PRId64, GET_TASKID(pTaskInfo), pSummary->elapsedTime / 1000.0,
pRecorder->totalBlocks, pRecorder->loadBlockStatis, pRecorder->loadBlocks, pRecorder->totalRows,
pRecorder->totalCheckedRows);
"%s :cost summary: elapsed time:%.2f ms, total blocks:%d, load block SMA:%d, load data block:%d, total "
"rows:%" PRId64 ", check rows:%" PRId64,
GET_TASKID(pTaskInfo), pSummary->elapsedTime / 1000.0, pRecorder->totalBlocks, pRecorder->loadBlockStatis,
pRecorder->loadBlocks, pRecorder->totalRows, pRecorder->totalCheckedRows);
}
// qDebug("QInfo:0x%"PRIx64" :cost summary: winResPool size:%.2f Kb, numOfWin:%"PRId64", tableInfoSize:%.2f Kb,
@ -2783,11 +2788,16 @@ int32_t getTableScanInfo(SOperatorInfo* pOperator, int32_t* order, int32_t* scan
*order = TSDB_ORDER_ASC;
*scanFlag = MAIN_SCAN;
return TSDB_CODE_SUCCESS;
} else if (type == QUERY_NODE_PHYSICAL_PLAN_TABLE_SCAN || type == QUERY_NODE_PHYSICAL_PLAN_TABLE_MERGE_SCAN) {
} else if (type == QUERY_NODE_PHYSICAL_PLAN_TABLE_SCAN) {
STableScanInfo* pTableScanInfo = pOperator->info;
*order = pTableScanInfo->cond.order;
*scanFlag = pTableScanInfo->scanFlag;
return TSDB_CODE_SUCCESS;
} else if (type == QUERY_NODE_PHYSICAL_PLAN_TABLE_MERGE_SCAN) {
STableMergeScanInfo* pTableScanInfo = pOperator->info;
*order = pTableScanInfo->cond.order;
*scanFlag = pTableScanInfo->scanFlag;
return TSDB_CODE_SUCCESS;
} else {
if (pOperator->pDownstream == NULL || pOperator->pDownstream[0] == NULL) {
return TSDB_CODE_INVALID_PARA;
@ -3727,7 +3737,7 @@ SSchemaWrapper* extractQueriedColumnSchema(SScanPhysiNode* pScanNode) {
}
// these are the tag and pseudo-function columns; we only keep the tag columns
for(int32_t i = 0; i < numOfTags; ++i) {
for (int32_t i = 0; i < numOfTags; ++i) {
STargetNode* pNode = (STargetNode*)nodesListGetNode(pScanNode->pScanPseudoCols, i);
int32_t type = nodeType(pNode->pExpr);
@ -3843,7 +3853,7 @@ int32_t generateGroupIdMap(STableListInfo* pTableListInfo, SReadHandle* pHandle,
int32_t groupNum = 0;
for (int32_t i = 0; i < taosArrayGetSize(pTableListInfo->pTableList); i++) {
STableKeyInfo* info = taosArrayGet(pTableListInfo->pTableList, i);
int32_t code = getGroupIdFromTagsVal(pHandle->meta, info->uid, group, keyBuf, &info->groupId);
int32_t code = getGroupIdFromTagsVal(pHandle->meta, info->uid, group, keyBuf, &info->groupId);
if (code != TSDB_CODE_SUCCESS) {
return code;
}
@ -4164,7 +4174,7 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo
} else if (QUERY_NODE_PHYSICAL_PLAN_STREAM_STATE == type) {
pOptr = createStreamStateAggOperatorInfo(ops[0], pPhyNode, pTaskInfo);
} else if (QUERY_NODE_PHYSICAL_PLAN_MERGE_JOIN == type) {
pOptr = createMergeJoinOperatorInfo(ops, size, (SJoinPhysiNode*)pPhyNode, pTaskInfo);
pOptr = createMergeJoinOperatorInfo(ops, size, (SSortMergeJoinPhysiNode*)pPhyNode, pTaskInfo);
} else if (QUERY_NODE_PHYSICAL_PLAN_FILL == type) {
pOptr = createFillOperatorInfo(ops[0], (SFillPhysiNode*)pPhyNode, pTaskInfo);
} else if (QUERY_NODE_PHYSICAL_PLAN_INDEF_ROWS_FUNC == type) {

View File

@ -28,30 +28,30 @@ static SSDataBlock* doMergeJoin(struct SOperatorInfo* pOperator);
static void destroyMergeJoinOperator(void* param, int32_t numOfOutput);
static void extractTimeCondition(SJoinOperatorInfo* Info, SLogicConditionNode* pLogicConditionNode);
SOperatorInfo* createMergeJoinOperatorInfo(SOperatorInfo** pDownstream, int32_t numOfDownstream, SJoinPhysiNode* pJoinNode,
SExecTaskInfo* pTaskInfo) {
SOperatorInfo* createMergeJoinOperatorInfo(SOperatorInfo** pDownstream, int32_t numOfDownstream,
SSortMergeJoinPhysiNode* pJoinNode, SExecTaskInfo* pTaskInfo) {
SJoinOperatorInfo* pInfo = taosMemoryCalloc(1, sizeof(SJoinOperatorInfo));
SOperatorInfo* pOperator = taosMemoryCalloc(1, sizeof(SOperatorInfo));
if (pOperator == NULL || pInfo == NULL) {
goto _error;
}
SSDataBlock* pResBlock = createResDataBlock(pJoinNode->node.pOutputDataBlockDesc);
SSDataBlock* pResBlock = createResDataBlock(pJoinNode->node.pOutputDataBlockDesc);
int32_t numOfCols = 0;
int32_t numOfCols = 0;
SExprInfo* pExprInfo = createExprInfo(pJoinNode->pTargets, NULL, &numOfCols);
initResultSizeInfo(&pOperator->resultInfo, 4096);
pInfo->pRes = pResBlock;
pOperator->name = "MergeJoinOperator";
pInfo->pRes = pResBlock;
pOperator->name = "MergeJoinOperator";
pOperator->operatorType = QUERY_NODE_PHYSICAL_PLAN_MERGE_JOIN;
pOperator->blocking = false;
pOperator->status = OP_NOT_OPENED;
pOperator->exprSupp.pExprInfo = pExprInfo;
pOperator->exprSupp.numOfExprs = numOfCols;
pOperator->info = pInfo;
pOperator->pTaskInfo = pTaskInfo;
pOperator->blocking = false;
pOperator->status = OP_NOT_OPENED;
pOperator->exprSupp.pExprInfo = pExprInfo;
pOperator->exprSupp.numOfExprs = numOfCols;
pOperator->info = pInfo;
pOperator->pTaskInfo = pTaskInfo;
SNode* pMergeCondition = pJoinNode->pMergeCondition;
if (nodeType(pMergeCondition) == QUERY_NODE_OPERATOR) {
@ -104,7 +104,7 @@ void setJoinColumnInfo(SColumnInfo* pColumn, const SColumnNode* pColumnNode) {
void destroyMergeJoinOperator(void* param, int32_t numOfOutput) {
SJoinOperatorInfo* pJoinOperator = (SJoinOperatorInfo*)param;
nodesDestroyNode(pJoinOperator->pCondAfterMerge);
taosMemoryFreeClear(param);
}

View File

@ -68,10 +68,12 @@ SOperatorInfo* createProjectOperatorInfo(SOperatorInfo* downstream, SProjectPhys
pInfo->mergeDataBlocks = pProjPhyNode->mergeDataBlock;
// todo remove it soon
if (pTaskInfo->execModel == OPTR_EXEC_MODEL_STREAM) {
pInfo->mergeDataBlocks = false;
}
int32_t numOfRows = 4096;
size_t keyBufSize = sizeof(int64_t) + sizeof(int64_t) + POINTER_BYTES;
@ -181,6 +183,16 @@ static int32_t doIngroupLimitOffset(SLimitInfo* pLimitInfo, uint64_t groupId, SS
return PROJECT_RETRIEVE_DONE;
}
void printDataBlock1(SSDataBlock* pBlock, const char* flag) {
if (!pBlock || pBlock->info.rows == 0) {
qDebug("===stream===printDataBlock: Block is Null or Empty");
return;
}
char* pBuf = NULL;
qDebug("%s", dumpBlockData(pBlock, flag, &pBuf));
taosMemoryFreeClear(pBuf);
}
SSDataBlock* doProjectOperation(SOperatorInfo* pOperator) {
SProjectOperatorInfo* pProjectInfo = pOperator->info;
SOptrBasicInfo* pInfo = &pProjectInfo->binfo;
@ -229,6 +241,7 @@ SSDataBlock* doProjectOperation(SOperatorInfo* pOperator) {
// for stream interval
if (pBlock->info.type == STREAM_RETRIEVE) {
// printDataBlock1(pBlock, "project1");
return pBlock;
}
@ -304,7 +317,8 @@ SSDataBlock* doProjectOperation(SOperatorInfo* pOperator) {
if (pOperator->cost.openCost == 0) {
pOperator->cost.openCost = (taosGetTimestampUs() - st) / 1000.0;
}
// printDataBlock1(p, "project");
return (p->info.rows > 0) ? p : NULL;
}
@ -589,4 +603,4 @@ SSDataBlock* doGenerateSourceData(SOperatorInfo* pOperator) {
}
return (pRes->info.rows > 0) ? pRes : NULL;
}
}

View File

@ -274,7 +274,7 @@ static int32_t loadDataBlock(SOperatorInfo* pOperator, STableScanInfo* pTableSca
qDebug("%s data block filter out, brange:%" PRId64 "-%" PRId64 ", rows:%d", GET_TASKID(pTaskInfo),
pBlockInfo->window.skey, pBlockInfo->window.ekey, pBlockInfo->rows);
} else {
qDebug("%s data block filter out, elapsed time:%"PRId64, GET_TASKID(pTaskInfo), (et - st));
qDebug("%s data block filter out, elapsed time:%" PRId64, GET_TASKID(pTaskInfo), (et - st));
}
return TSDB_CODE_SUCCESS;
@ -1838,11 +1838,14 @@ static SSDataBlock* sysTableScanUserTags(SOperatorInfo* pOperator) {
int8_t tagType = smr.me.stbEntry.schemaTag.pSchema[i].type;
pColInfoData = taosArrayGet(p->pDataBlock, 4);
char tagTypeStr[VARSTR_HEADER_SIZE + 32];
int tagTypeLen = sprintf(varDataVal(tagTypeStr), "%s", tDataTypes[tagType].name);
int tagTypeLen = sprintf(varDataVal(tagTypeStr), "%s", tDataTypes[tagType].name);
if (tagType == TSDB_DATA_TYPE_VARCHAR) {
tagTypeLen += sprintf(varDataVal(tagTypeStr) + tagTypeLen, "(%d)", (int32_t)(smr.me.stbEntry.schemaTag.pSchema[i].bytes - VARSTR_HEADER_SIZE));
tagTypeLen += sprintf(varDataVal(tagTypeStr) + tagTypeLen, "(%d)",
(int32_t)(smr.me.stbEntry.schemaTag.pSchema[i].bytes - VARSTR_HEADER_SIZE));
} else if (tagType == TSDB_DATA_TYPE_NCHAR) {
tagTypeLen += sprintf(varDataVal(tagTypeStr) + tagTypeLen, "(%d)", (int32_t)((smr.me.stbEntry.schemaTag.pSchema[i].bytes - VARSTR_HEADER_SIZE) / TSDB_NCHAR_SIZE));
tagTypeLen +=
sprintf(varDataVal(tagTypeStr) + tagTypeLen, "(%d)",
(int32_t)((smr.me.stbEntry.schemaTag.pSchema[i].bytes - VARSTR_HEADER_SIZE) / TSDB_NCHAR_SIZE));
}
varDataSetLen(tagTypeStr, tagTypeLen);
colDataAppend(pColInfoData, numOfRows, (char*)tagTypeStr, false);
@ -2527,49 +2530,6 @@ _error:
return NULL;
}
typedef struct STableMergeScanInfo {
STableListInfo* tableListInfo;
int32_t tableStartIndex;
int32_t tableEndIndex;
bool hasGroupId;
uint64_t groupId;
SArray* dataReaders; // array of tsdbReaderT*
SReadHandle readHandle;
int32_t bufPageSize;
uint32_t sortBufSize; // max buffer size for in-memory sort
SArray* pSortInfo;
SSortHandle* pSortHandle;
SSDataBlock* pSortInputBlock;
int64_t startTs; // sort start time
SArray* sortSourceParams;
SFileBlockLoadRecorder readRecorder;
int64_t numOfRows;
SScanInfo scanInfo;
int32_t scanTimes;
SNode* pFilterNode; // filter info, which is pushed down by the optimizer
SqlFunctionCtx* pCtx; // belongs to the query context of the direct upstream operator
SResultRowInfo* pResultRowInfo;
int32_t* rowEntryInfoOffset;
SExprInfo* pExpr;
SSDataBlock* pResBlock;
SArray* pColMatchInfo;
int32_t numOfOutput;
SExprInfo* pPseudoExpr;
int32_t numOfPseudoExpr;
SqlFunctionCtx* pPseudoCtx;
SQueryTableDataCond cond;
int32_t scanFlag; // table scan flag to denote if it is a repeat/reverse/main scan
int32_t dataBlockLoadFlag;
// if the upstream is an interval operator, the interval info is also kept here to get the time
// window to check if current data block needs to be loaded.
SInterval interval;
SSampleExecInfo sample; // sample execution info
} STableMergeScanInfo;
int32_t createScanTableListInfo(SScanPhysiNode* pScanNode, SNodeList* pGroupTags, bool groupSort, SReadHandle* pHandle,
STableListInfo* pTableListInfo, SNode* pTagCond, SNode* pTagIndexCond,
const char* idStr) {
@ -2700,9 +2660,9 @@ static int32_t loadDataBlockFromOneTable(SOperatorInfo* pOperator, STableMergeSc
relocateColumnData(pBlock, pTableScanInfo->pColMatchInfo, pCols, true);
// currently only the tbname pseudo column
if (pTableScanInfo->numOfPseudoExpr > 0) {
int32_t code = addTagPseudoColumnData(&pTableScanInfo->readHandle, pTableScanInfo->pPseudoExpr,
pTableScanInfo->numOfPseudoExpr, pBlock, GET_TASKID(pTaskInfo));
if (pTableScanInfo->pseudoSup.numOfExprs > 0) {
int32_t code = addTagPseudoColumnData(&pTableScanInfo->readHandle, pTableScanInfo->pseudoSup.pExprInfo,
pTableScanInfo->pseudoSup.numOfExprs, pBlock, GET_TASKID(pTaskInfo));
if (code != TSDB_CODE_SUCCESS) {
longjmp(pTaskInfo->env, code);
}
@ -2869,29 +2829,38 @@ int32_t stopGroupTableMergeScan(SOperatorInfo* pOperator) {
STableMergeScanInfo* pInfo = pOperator->info;
SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
tsortDestroySortHandle(pInfo->pSortHandle);
size_t numReaders = taosArrayGetSize(pInfo->dataReaders);
SSortExecInfo sortExecInfo = tsortGetSortExecInfo(pInfo->pSortHandle);
pInfo->sortExecInfo.sortMethod = sortExecInfo.sortMethod;
pInfo->sortExecInfo.sortBuffer = sortExecInfo.sortBuffer;
pInfo->sortExecInfo.loops += sortExecInfo.loops;
pInfo->sortExecInfo.readBytes += sortExecInfo.readBytes;
pInfo->sortExecInfo.writeBytes += sortExecInfo.writeBytes;
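  // the statistics accumulated here survive the sort-handle teardown and are reported later by getTableMergeScanExplainExecInfo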
for (int32_t i = 0; i < numReaders; ++i) {
STableMergeScanSortSourceParam* param = taosArrayGet(pInfo->sortSourceParams, i);
blockDataDestroy(param->inputBlock);
}
taosArrayClear(pInfo->sortSourceParams);
for (int32_t i = 0; i < taosArrayGetSize(pInfo->dataReaders); ++i) {
tsortDestroySortHandle(pInfo->pSortHandle);
for (int32_t i = 0; i < numReaders; ++i) {
STsdbReader* reader = taosArrayGetP(pInfo->dataReaders, i);
tsdbReaderClose(reader);
}
taosArrayDestroy(pInfo->dataReaders);
pInfo->dataReaders = NULL;
return TSDB_CODE_SUCCESS;
}
SSDataBlock* getSortedTableMergeScanBlockData(SSortHandle* pHandle, int32_t capacity, SOperatorInfo* pOperator) {
SSDataBlock* getSortedTableMergeScanBlockData(SSortHandle* pHandle, SSDataBlock* pResBlock, int32_t capacity, SOperatorInfo* pOperator) {
STableMergeScanInfo* pInfo = pOperator->info;
SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
SSDataBlock* p = tsortGetSortedDataBlock(pHandle);
if (p == NULL) {
return NULL;
}
blockDataEnsureCapacity(p, capacity);
blockDataCleanup(pResBlock);
blockDataEnsureCapacity(pResBlock, capacity);
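  // sorted rows are appended into the operator-owned pResBlock, so no block needs to be allocated from the sort handle on every call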
while (1) {
STupleHandle* pTupleHandle = tsortNextTuple(pHandle);
@ -2899,14 +2868,15 @@ SSDataBlock* getSortedTableMergeScanBlockData(SSortHandle* pHandle, int32_t capa
break;
}
appendOneRowToDataBlock(p, pTupleHandle);
if (p->info.rows >= capacity) {
appendOneRowToDataBlock(pResBlock, pTupleHandle);
if (pResBlock->info.rows >= capacity) {
break;
}
}
qDebug("%s get sorted row blocks, rows:%d", GET_TASKID(pTaskInfo), p->info.rows);
return (p->info.rows > 0) ? p : NULL;
qDebug("%s get sorted row blocks, rows:%d", GET_TASKID(pTaskInfo), pResBlock->info.rows);
return (pResBlock->info.rows > 0) ? pResBlock : NULL;
}
SSDataBlock* doTableMergeScan(SOperatorInfo* pOperator) {
@ -2935,7 +2905,7 @@ SSDataBlock* doTableMergeScan(SOperatorInfo* pOperator) {
}
SSDataBlock* pBlock = NULL;
while (pInfo->tableStartIndex < tableListSize) {
pBlock = getSortedTableMergeScanBlockData(pInfo->pSortHandle, pOperator->resultInfo.capacity, pOperator);
pBlock = getSortedTableMergeScanBlockData(pInfo->pSortHandle, pInfo->pResBlock, pOperator->resultInfo.capacity, pOperator);
if (pBlock != NULL) {
pBlock->info.groupId = pInfo->groupId;
pOperator->resultInfo.totalRows += pBlock->info.rows;
@ -2959,6 +2929,7 @@ SSDataBlock* doTableMergeScan(SOperatorInfo* pOperator) {
void destroyTableMergeScanOperatorInfo(void* param, int32_t numOfOutput) {
STableMergeScanInfo* pTableScanInfo = (STableMergeScanInfo*)param;
cleanupQueryTableDataCond(&pTableScanInfo->cond);
taosArrayDestroy(pTableScanInfo->sortSourceParams);
for (int32_t i = 0; i < taosArrayGetSize(pTableScanInfo->dataReaders); ++i) {
STsdbReader* reader = taosArrayGetP(pTableScanInfo->dataReaders, i);
@ -2974,7 +2945,9 @@ void destroyTableMergeScanOperatorInfo(void* param, int32_t numOfOutput) {
pTableScanInfo->pSortInputBlock = blockDataDestroy(pTableScanInfo->pSortInputBlock);
taosArrayDestroy(pTableScanInfo->pSortInfo);
cleanupExprSupp(&pTableScanInfo->pseudoSup);
taosMemoryFreeClear(pTableScanInfo->rowEntryInfoOffset);
taosMemoryFreeClear(param);
}
@ -2989,7 +2962,7 @@ int32_t getTableMergeScanExplainExecInfo(SOperatorInfo* pOptr, void** pOptrExpla
STableMergeScanExecInfo* execInfo = taosMemoryCalloc(1, sizeof(STableMergeScanExecInfo));
STableMergeScanInfo* pInfo = pOptr->info;
execInfo->blockRecorder = pInfo->readRecorder;
execInfo->sortExecInfo = tsortGetSortExecInfo(pInfo->pSortHandle);
execInfo->sortExecInfo = pInfo->sortExecInfo;
*pOptrExplain = execInfo;
*len = sizeof(STableMergeScanExecInfo);
@ -3031,8 +3004,9 @@ SOperatorInfo* createTableMergeScanOperatorInfo(STableScanPhysiNode* pTableScanN
}
if (pTableScanNode->scan.pScanPseudoCols != NULL) {
pInfo->pPseudoExpr = createExprInfo(pTableScanNode->scan.pScanPseudoCols, NULL, &pInfo->numOfPseudoExpr);
pInfo->pPseudoCtx = createSqlFunctionCtx(pInfo->pPseudoExpr, pInfo->numOfPseudoExpr, &pInfo->rowEntryInfoOffset);
SExprSupp* pSup = &pInfo->pseudoSup;
pSup->pExprInfo = createExprInfo(pTableScanNode->scan.pScanPseudoCols, NULL, &pSup->numOfExprs);
pSup->pCtx = createSqlFunctionCtx(pSup->pExprInfo, pSup->numOfExprs, &pSup->rowEntryInfoOffset);
}
pInfo->scanInfo = (SScanInfo){.numOfAsc = pTableScanNode->scanSeq[0], .numOfDesc = pTableScanNode->scanSeq[1]};

View File

@ -2771,7 +2771,7 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
SExprSupp* pSup = &pOperator->exprSupp;
qDebug("interval status %d %s", pOperator->status, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
qDebug("interval status %d %s", pOperator->status, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
if (pOperator->status == OP_EXEC_DONE) {
return NULL;
@ -2780,7 +2780,7 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
if (pInfo->pPullDataRes->info.rows != 0) {
// process the rest of the data
ASSERT(IS_FINAL_OP(pInfo));
printDataBlock(pInfo->pPullDataRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
printDataBlock(pInfo->pPullDataRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
return pInfo->pPullDataRes;
}
@ -2795,20 +2795,20 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
}
return NULL;
}
printDataBlock(pInfo->binfo.pRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
printDataBlock(pInfo->binfo.pRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
return pInfo->binfo.pRes;
} else {
if (!IS_FINAL_OP(pInfo)) {
doBuildResultDatablock(pOperator, &pInfo->binfo, &pInfo->groupResInfo, pInfo->aggSup.pResultBuf);
if (pInfo->binfo.pRes->info.rows != 0) {
printDataBlock(pInfo->binfo.pRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
printDataBlock(pInfo->binfo.pRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
return pInfo->binfo.pRes;
}
}
if (pInfo->pUpdateRes->info.rows != 0 && pInfo->returnUpdate) {
pInfo->returnUpdate = false;
ASSERT(!IS_FINAL_OP(pInfo));
printDataBlock(pInfo->pUpdateRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
printDataBlock(pInfo->pUpdateRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
// process the rest of the data
return pInfo->pUpdateRes;
}
@ -2816,13 +2816,13 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
// if (pInfo->pPullDataRes->info.rows != 0) {
// // process the rest of the data
// ASSERT(IS_FINAL_OP(pInfo));
// printDataBlock(pInfo->pPullDataRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
// printDataBlock(pInfo->pPullDataRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
// return pInfo->pPullDataRes;
// }
doBuildDeleteResult(pInfo->pDelWins, &pInfo->delIndex, pInfo->pDelRes);
if (pInfo->pDelRes->info.rows != 0) {
// process the rest of the data
printDataBlock(pInfo->pDelRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
printDataBlock(pInfo->pDelRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
return pInfo->pDelRes;
}
}
@ -2833,10 +2833,10 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
clearSpecialDataBlock(pInfo->pUpdateRes);
removeDeleteResults(pUpdated, pInfo->pDelWins);
pOperator->status = OP_RES_TO_RETURN;
qDebug("%s return data", IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
qDebug("%s return data", IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
break;
}
printDataBlock(pBlock, IS_FINAL_OP(pInfo) ? "interval Final recv" : "interval Semi recv");
printDataBlock(pBlock, IS_FINAL_OP(pInfo) ? "interval final recv" : "interval semi recv");
maxTs = TMAX(maxTs, pBlock->info.window.ekey);
if (pBlock->info.type == STREAM_NORMAL || pBlock->info.type == STREAM_PULL_DATA ||
@ -2936,20 +2936,20 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
if (pInfo->pPullDataRes->info.rows != 0) {
// process the rest of the data
ASSERT(IS_FINAL_OP(pInfo));
printDataBlock(pInfo->pPullDataRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
printDataBlock(pInfo->pPullDataRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
return pInfo->pPullDataRes;
}
doBuildResultDatablock(pOperator, &pInfo->binfo, &pInfo->groupResInfo, pInfo->aggSup.pResultBuf);
if (pInfo->binfo.pRes->info.rows != 0) {
printDataBlock(pInfo->binfo.pRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
printDataBlock(pInfo->binfo.pRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
return pInfo->binfo.pRes;
}
if (pInfo->pUpdateRes->info.rows != 0 && pInfo->returnUpdate) {
pInfo->returnUpdate = false;
ASSERT(!IS_FINAL_OP(pInfo));
printDataBlock(pInfo->pUpdateRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
printDataBlock(pInfo->pUpdateRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
// process the rest of the data
return pInfo->pUpdateRes;
}
@ -2957,7 +2957,7 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
doBuildDeleteResult(pInfo->pDelWins, &pInfo->delIndex, pInfo->pDelRes);
if (pInfo->pDelRes->info.rows != 0) {
// process the rest of the data
printDataBlock(pInfo->pDelRes, IS_FINAL_OP(pInfo) ? "interval Final" : "interval Semi");
printDataBlock(pInfo->pDelRes, IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
return pInfo->pDelRes;
}
// ASSERT(false);
@ -3817,14 +3817,14 @@ static SSDataBlock* doStreamSessionAgg(SOperatorInfo* pOperator) {
} else if (pOperator->status == OP_RES_TO_RETURN) {
doBuildDeleteDataBlock(pInfo->pStDeleted, pInfo->pDelRes, &pInfo->pDelIterator);
if (pInfo->pDelRes->info.rows > 0) {
printDataBlock(pInfo->pDelRes, IS_FINAL_OP(pInfo) ? "Final Session" : "Single Session");
printDataBlock(pInfo->pDelRes, IS_FINAL_OP(pInfo) ? "final session" : "single session");
return pInfo->pDelRes;
}
doBuildResultDatablock(pOperator, pBInfo, &pInfo->groupResInfo, pInfo->streamAggSup.pResultBuf);
if (pBInfo->pRes->info.rows == 0 || !hasDataInGroupInfo(&pInfo->groupResInfo)) {
doSetOperatorCompleted(pOperator);
}
printDataBlock(pBInfo->pRes, IS_FINAL_OP(pInfo) ? "Final Session" : "Single Session");
printDataBlock(pBInfo->pRes, IS_FINAL_OP(pInfo) ? "final session" : "single session");
return pBInfo->pRes->info.rows == 0 ? NULL : pBInfo->pRes;
}
@ -3837,7 +3837,7 @@ static SSDataBlock* doStreamSessionAgg(SOperatorInfo* pOperator) {
if (pBlock == NULL) {
break;
}
printDataBlock(pBlock, IS_FINAL_OP(pInfo) ? "Final Session Recv" : "Single Session Recv");
printDataBlock(pBlock, IS_FINAL_OP(pInfo) ? "final session recv" : "single session recv");
if (pBlock->info.type == STREAM_CLEAR) {
SArray* pWins = taosArrayInit(16, sizeof(SResultWindowInfo));
@ -3914,11 +3914,11 @@ static SSDataBlock* doStreamSessionAgg(SOperatorInfo* pOperator) {
blockDataEnsureCapacity(pInfo->binfo.pRes, pOperator->resultInfo.capacity);
doBuildDeleteDataBlock(pInfo->pStDeleted, pInfo->pDelRes, &pInfo->pDelIterator);
if (pInfo->pDelRes->info.rows > 0) {
printDataBlock(pInfo->pDelRes, IS_FINAL_OP(pInfo) ? "Final Session" : "Single Session");
printDataBlock(pInfo->pDelRes, IS_FINAL_OP(pInfo) ? "final session" : "single session");
return pInfo->pDelRes;
}
doBuildResultDatablock(pOperator, &pInfo->binfo, &pInfo->groupResInfo, pInfo->streamAggSup.pResultBuf);
printDataBlock(pBInfo->pRes, IS_FINAL_OP(pInfo) ? "Final Session" : "Single Session");
printDataBlock(pBInfo->pRes, IS_FINAL_OP(pInfo) ? "final session" : "single session");
return pBInfo->pRes->info.rows == 0 ? NULL : pBInfo->pRes;
}
@ -3957,21 +3957,21 @@ static SSDataBlock* doStreamSessionSemiAgg(SOperatorInfo* pOperator) {
} else if (pOperator->status == OP_RES_TO_RETURN) {
doBuildResultDatablock(pOperator, pBInfo, &pInfo->groupResInfo, pInfo->streamAggSup.pResultBuf);
if (pBInfo->pRes->info.rows > 0) {
printDataBlock(pBInfo->pRes, "Semi Session");
printDataBlock(pBInfo->pRes, "sems session");
return pBInfo->pRes;
}
// doBuildDeleteDataBlock(pInfo->pStDeleted, pInfo->pDelRes, &pInfo->pDelIterator);
if (pInfo->pDelRes->info.rows > 0 && !pInfo->returnDelete) {
pInfo->returnDelete = true;
printDataBlock(pInfo->pDelRes, "Semi Session");
printDataBlock(pInfo->pDelRes, "sems session");
return pInfo->pDelRes;
}
if (pInfo->pUpdateRes->info.rows > 0) {
// process the rest of the data
pOperator->status = OP_OPENED;
printDataBlock(pInfo->pUpdateRes, "Semi Session");
printDataBlock(pInfo->pUpdateRes, "sems session");
return pInfo->pUpdateRes;
}
// semi interval operator clear disk buffer
@ -4035,21 +4035,21 @@ static SSDataBlock* doStreamSessionSemiAgg(SOperatorInfo* pOperator) {
doBuildResultDatablock(pOperator, pBInfo, &pInfo->groupResInfo, pInfo->streamAggSup.pResultBuf);
if (pBInfo->pRes->info.rows > 0) {
printDataBlock(pBInfo->pRes, "Semi Session");
printDataBlock(pBInfo->pRes, "sems session");
return pBInfo->pRes;
}
// doBuildDeleteDataBlock(pInfo->pStDeleted, pInfo->pDelRes, &pInfo->pDelIterator);
if (pInfo->pDelRes->info.rows > 0 && !pInfo->returnDelete) {
pInfo->returnDelete = true;
printDataBlock(pInfo->pDelRes, "Semi Session");
printDataBlock(pInfo->pDelRes, "sems session");
return pInfo->pDelRes;
}
if (pInfo->pUpdateRes->info.rows > 0) {
// process the rest of the data
pOperator->status = OP_OPENED;
printDataBlock(pInfo->pUpdateRes, "Semi Session");
printDataBlock(pInfo->pUpdateRes, "sems session");
return pInfo->pUpdateRes;
}

View File

@ -557,11 +557,13 @@ static int32_t translateApercentileImpl(SFunctionNode* pFunc, char* pErrBuf, int
pFunc->node.resType =
(SDataType){.bytes = getApercentileMaxSize() + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_BINARY};
} else {
if (1 != numOfParams) {
// original percent param is reserved
if (2 != numOfParams) {
return invaildFuncParaNumErrMsg(pErrBuf, len, pFunc->functionName);
}
uint8_t para1Type = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 0))->resType.type;
if (TSDB_DATA_TYPE_BINARY != para1Type) {
uint8_t para2Type = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 1))->resType.type;
if (TSDB_DATA_TYPE_BINARY != para1Type || !IS_INTEGER_TYPE(para2Type)) {
return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
}
@ -621,7 +623,7 @@ static int32_t translateTopBot(SFunctionNode* pFunc, char* pErrBuf, int32_t len)
return TSDB_CODE_SUCCESS;
}
int32_t topBotCreateMergePara(SNodeList* pRawParameters, SNode* pPartialRes, SNodeList** pParameters) {
static int32_t reserveFirstMergeParam(SNodeList* pRawParameters, SNode* pPartialRes, SNodeList** pParameters) {
int32_t code = nodesListMakeAppend(pParameters, pPartialRes);
if (TSDB_CODE_SUCCESS == code) {
code = nodesListStrictAppend(*pParameters, nodesCloneNode(nodesListGetNode(pRawParameters, 1)));
@ -629,6 +631,14 @@ int32_t topBotCreateMergePara(SNodeList* pRawParameters, SNode* pPartialRes, SNo
return TSDB_CODE_SUCCESS;
}
int32_t topBotCreateMergeParam(SNodeList* pRawParameters, SNode* pPartialRes, SNodeList** pParameters) {
return reserveFirstMergeParam(pRawParameters, pPartialRes, pParameters);
}
int32_t apercentileCreateMergeParam(SNodeList* pRawParameters, SNode* pPartialRes, SNodeList** pParameters) {
return reserveFirstMergeParam(pRawParameters, pPartialRes, pParameters);
}
static int32_t translateSpread(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
if (1 != LIST_LENGTH(pFunc->pParameterList)) {
return invaildFuncParaNumErrMsg(pErrBuf, len, pFunc->functionName);
@ -2068,7 +2078,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
.invertFunc = NULL,
.combineFunc = apercentileCombine,
.pPartialFunc = "_apercentile_partial",
.pMergeFunc = "_apercentile_merge"
.pMergeFunc = "_apercentile_merge",
.createMergeParaFuc = apercentileCreateMergeParam
},
{
.name = "_apercentile_partial",
@ -2107,7 +2118,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
.combineFunc = topCombine,
.pPartialFunc = "top",
.pMergeFunc = "top",
.createMergeParaFuc = topBotCreateMergePara
.createMergeParaFuc = topBotCreateMergeParam
},
{
.name = "bottom",
@ -2122,7 +2133,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
.combineFunc = bottomCombine,
.pPartialFunc = "bottom",
.pMergeFunc = "bottom",
.createMergeParaFuc = topBotCreateMergePara
.createMergeParaFuc = topBotCreateMergeParam
},
{
.name = "spread",
@ -2220,7 +2231,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
{
.name = "derivative",
.type = FUNCTION_TYPE_DERIVATIVE,
.classification = FUNC_MGT_INDEFINITE_ROWS_FUNC | FUNC_MGT_TIMELINE_FUNC | FUNC_MGT_IMPLICIT_TS_FUNC,
.classification = FUNC_MGT_INDEFINITE_ROWS_FUNC | FUNC_MGT_SELECT_FUNC | FUNC_MGT_TIMELINE_FUNC | FUNC_MGT_IMPLICIT_TS_FUNC,
.translateFunc = translateDerivative,
.getEnvFunc = getDerivativeFuncEnv,
.initFunc = derivativeFuncSetup,
@ -2425,7 +2436,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
{
.name = "diff",
.type = FUNCTION_TYPE_DIFF,
.classification = FUNC_MGT_INDEFINITE_ROWS_FUNC | FUNC_MGT_TIMELINE_FUNC | FUNC_MGT_FORBID_STREAM_FUNC,
.classification = FUNC_MGT_INDEFINITE_ROWS_FUNC | FUNC_MGT_SELECT_FUNC | FUNC_MGT_TIMELINE_FUNC | FUNC_MGT_FORBID_STREAM_FUNC,
.translateFunc = translateDiff,
.getEnvFunc = getDiffFuncEnv,
.initFunc = diffFunctionSetup,

View File

@ -1624,6 +1624,10 @@ int32_t minmaxFunctionFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
}
void setNullSelectivityValue(SqlFunctionCtx* pCtx, SSDataBlock* pBlock, int32_t rowIndex) {
if (pCtx->subsidiaries.num <= 0) {
return;
}
for (int32_t j = 0; j < pCtx->subsidiaries.num; ++j) {
SqlFunctionCtx* pc = pCtx->subsidiaries.pCtx[j];
int32_t dstSlotId = pc->pExpr->base.resSchema.slotId;
@ -1655,8 +1659,6 @@ void setSelectivityValue(SqlFunctionCtx* pCtx, SSDataBlock* pBlock, const STuple
SFunctParam* pFuncParam = &pc->pExpr->base.pParam[0];
int32_t dstSlotId = pc->pExpr->base.resSchema.slotId;
int32_t ps = 0;
SColumnInfoData* pDstCol = taosArrayGet(pBlock->pDataBlock, dstSlotId);
ASSERT(pc->pExpr->base.resSchema.bytes == pDstCol->info.bytes);
if (nullList[j]) {
@ -1678,6 +1680,39 @@ void releaseSource(STuplePos* pPos) {
// TODO(liuyao): release row
}
// This function appends the selectivity values to the subsidiary function contexts directly, without fetching data
// from the intermediate disk-based buffer page
void appendSelectivityValue(SqlFunctionCtx* pCtx, int32_t rowIndex, int32_t pos) {
if (pCtx->subsidiaries.num <= 0) {
return;
}
for (int32_t j = 0; j < pCtx->subsidiaries.num; ++j) {
SqlFunctionCtx* pc = pCtx->subsidiaries.pCtx[j];
// get data from source col
SFunctParam* pFuncParam = &pc->pExpr->base.pParam[0];
int32_t srcSlotId = pFuncParam->pCol->slotId;
SColumnInfoData* pSrcCol = taosArrayGet(pCtx->pSrcBlock->pDataBlock, srcSlotId);
char* pData = colDataGetData(pSrcCol, rowIndex);
// append to dest col
int32_t dstSlotId = pc->pExpr->base.resSchema.slotId;
SColumnInfoData* pDstCol = taosArrayGet(pCtx->pDstBlock->pDataBlock, dstSlotId);
ASSERT(pc->pExpr->base.resSchema.bytes == pDstCol->info.bytes);
if (colDataIsNull_s(pSrcCol, rowIndex) == true) {
colDataAppendNULL(pDstCol, pos);
} else {
colDataAppend(pDstCol, pos, pData, false);
}
}
}
void replaceTupleData(STuplePos* pDestPos, STuplePos* pSourcePos) {
releaseSource(pDestPos);
*pDestPos = *pSourcePos;
@ -2218,6 +2253,7 @@ int32_t leastSQRFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
int32_t currentRow = pBlock->info.rows;
if (0 == pInfo->num) {
colDataAppendNULL(pCol, currentRow);
return 0;
}
@ -3154,6 +3190,7 @@ static void doHandleDiff(SDiffInfo* pDiffInfo, int32_t type, const char* pv, SCo
colDataAppendInt64(pOutput, pos, &delta);
}
pDiffInfo->prev.i64 = v;
break;
}
case TSDB_DATA_TYPE_BOOL:
@ -3247,6 +3284,10 @@ int32_t diffFunction(SqlFunctionCtx* pCtx) {
if (pDiffInfo->hasPrev) {
doHandleDiff(pDiffInfo, pInputCol->info.type, pv, pOutput, pos, pCtx->order);
// handle selectivity
if (pCtx->subsidiaries.num > 0) {
appendSelectivityValue(pCtx, i, pos);
}
numOfElems++;
} else {
@ -3273,6 +3314,10 @@ int32_t diffFunction(SqlFunctionCtx* pCtx) {
// there is a row from the previous data block that must be handled first.
if (pDiffInfo->hasPrev) {
doHandleDiff(pDiffInfo, pInputCol->info.type, pv, pOutput, pos, pCtx->order);
// handle selectivity
if (pCtx->subsidiaries.num > 0) {
appendSelectivityValue(pCtx, i, pos);
}
numOfElems++;
} else {
@ -5723,6 +5768,12 @@ int32_t derivativeFunction(SqlFunctionCtx* pCtx) {
if (pTsOutput != NULL) {
colDataAppendInt64(pTsOutput, pos, &tsList[i]);
}
// handle selectivity
if (pCtx->subsidiaries.num > 0) {
appendSelectivityValue(pCtx, i, pos);
}
numOfElems++;
}
}
@ -5755,6 +5806,12 @@ int32_t derivativeFunction(SqlFunctionCtx* pCtx) {
if (pTsOutput != NULL) {
colDataAppendInt64(pTsOutput, pos, &pDerivInfo->prevTs);
}
// handle selectivity
if (pCtx->subsidiaries.num > 0) {
appendSelectivityValue(pCtx, i, pos);
}
numOfElems++;
}
}

View File

@ -877,7 +877,7 @@ void udfcUvHandleError(SClientUvConn *conn);
void onUdfcPipeRead(uv_stream_t *client, ssize_t nread, const uv_buf_t *buf);
void onUdfcPipeWrite(uv_write_t *write, int status);
void onUdfcPipeConnect(uv_connect_t *connect, int status);
int32_t udfcCreateUvTask(SClientUdfTask *task, int8_t uvTaskType, SClientUvTaskNode **pUvTask);
int32_t udfcInitializeUvTask(SClientUdfTask *task, int8_t uvTaskType, SClientUvTaskNode *uvTask);
int32_t udfcQueueUvTask(SClientUvTaskNode *uvTask);
int32_t udfcStartUvTask(SClientUvTaskNode *uvTask);
void udfcAsyncTaskCb(uv_async_t *async);
@ -1376,8 +1376,7 @@ void onUdfcPipeConnect(uv_connect_t *connect, int status) {
uv_sem_post(&uvTask->taskSem);
}
int32_t udfcCreateUvTask(SClientUdfTask *task, int8_t uvTaskType, SClientUvTaskNode **pUvTask) {
SClientUvTaskNode *uvTask = taosMemoryCalloc(1, sizeof(SClientUvTaskNode));
int32_t udfcInitializeUvTask(SClientUdfTask *task, int8_t uvTaskType, SClientUvTaskNode *uvTask) {
uvTask->type = uvTaskType;
uvTask->udfc = task->session->udfc;
@ -1412,7 +1411,6 @@ int32_t udfcCreateUvTask(SClientUdfTask *task, int8_t uvTaskType, SClientUvTaskN
}
uv_sem_init(&uvTask->taskSem, 0);
*pUvTask = uvTask;
return 0;
}
@ -1615,10 +1613,10 @@ int32_t udfcClose() {
}
int32_t udfcRunUdfUvTask(SClientUdfTask *task, int8_t uvTaskType) {
SClientUvTaskNode *uvTask = NULL;
udfcCreateUvTask(task, uvTaskType, &uvTask);
SClientUvTaskNode *uvTask = taosMemoryCalloc(1, sizeof(SClientUvTaskNode));
fnDebug("udfc client task: %p created uvTask: %p. pipe: %p", task, uvTask, task->session->udfUvPipe);
udfcInitializeUvTask(task, uvTaskType, uvTask);
udfcQueueUvTask(uvTask);
udfcGetUdfTaskResultFromUvTask(task, uvTask);
if (uvTaskType == UV_TASK_CONNECT) {
@ -1629,6 +1627,8 @@ int32_t udfcRunUdfUvTask(SClientUdfTask *task, int8_t uvTaskType) {
taosMemoryFree(uvTask->reqBuf.base);
uvTask->reqBuf.base = NULL;
taosMemoryFree(uvTask);
fnDebug("udfc freed uvTask: %p", task);
uvTask = NULL;
return task->errCode;
}

View File

@ -1,7 +1,12 @@
#include <string.h>
#include <stdlib.h>
#include <stdio.h>
#ifdef LINUX
#include <unistd.h>
#endif
#ifdef WINDOWS
#include <windows.h>
#endif
#include "taosudf.h"
@ -35,6 +40,12 @@ DLL_EXPORT int32_t udf1(SUdfDataBlock* block, SUdfColumn *resultCol) {
udfColDataSet(resultCol, i, (char *)&luckyNum, false);
}
}
// to simulate the processing delay of an actual udf
#ifdef LINUX
usleep(1 * 1000); // usleep takes sleep time in us (1 millionth of a second)
#endif
#ifdef WINDOWS
Sleep(1);
#endif
return 0;
}

View File

@ -375,6 +375,7 @@ static int32_t logicJoinCopy(const SJoinLogicNode* pSrc, SJoinLogicNode* pDst) {
CLONE_NODE_FIELD(pMergeCondition);
CLONE_NODE_FIELD(pOnConditions);
COPY_SCALAR_FIELD(isSingleTableJoin);
COPY_SCALAR_FIELD(inputTsOrder);
return TSDB_CODE_SUCCESS;
}
@ -440,6 +441,7 @@ static int32_t logicWindowCopy(const SWindowLogicNode* pSrc, SWindowLogicNode* p
COPY_SCALAR_FIELD(watermark);
COPY_SCALAR_FIELD(igExpired);
COPY_SCALAR_FIELD(windowAlgo);
COPY_SCALAR_FIELD(inputTsOrder);
return TSDB_CODE_SUCCESS;
}

View File

@ -1717,7 +1717,7 @@ static const char* jkJoinPhysiPlanOnConditions = "OnConditions";
static const char* jkJoinPhysiPlanTargets = "Targets";
static int32_t physiJoinNodeToJson(const void* pObj, SJson* pJson) {
const SJoinPhysiNode* pNode = (const SJoinPhysiNode*)pObj;
const SSortMergeJoinPhysiNode* pNode = (const SSortMergeJoinPhysiNode*)pObj;
int32_t code = physicPlanNodeToJson(pObj, pJson);
if (TSDB_CODE_SUCCESS == code) {
@ -1737,7 +1737,7 @@ static int32_t physiJoinNodeToJson(const void* pObj, SJson* pJson) {
}
static int32_t jsonToPhysiJoinNode(const SJson* pJson, void* pObj) {
SJoinPhysiNode* pNode = (SJoinPhysiNode*)pObj;
SSortMergeJoinPhysiNode* pNode = (SSortMergeJoinPhysiNode*)pObj;
int32_t code = jsonToPhysicPlanNode(pJson, pObj);
if (TSDB_CODE_SUCCESS == code) {

View File

@ -468,7 +468,7 @@ static EDealRes dispatchPhysiPlan(SNode* pNode, ETraversalOrder order, FNodeWalk
break;
}
case QUERY_NODE_PHYSICAL_PLAN_MERGE_JOIN: {
SJoinPhysiNode* pJoin = (SJoinPhysiNode*)pNode;
SSortMergeJoinPhysiNode* pJoin = (SSortMergeJoinPhysiNode*)pNode;
res = walkPhysiNode((SPhysiNode*)pNode, order, walker, pContext);
if (DEAL_RES_ERROR != res && DEAL_RES_END != res) {
res = walkPhysiPlan(pJoin->pMergeCondition, order, walker, pContext);

View File

@ -287,7 +287,7 @@ SNode* nodesMakeNode(ENodeType type) {
case QUERY_NODE_PHYSICAL_PLAN_PROJECT:
return makeNode(type, sizeof(SProjectPhysiNode));
case QUERY_NODE_PHYSICAL_PLAN_MERGE_JOIN:
return makeNode(type, sizeof(SJoinPhysiNode));
return makeNode(type, sizeof(SSortMergeJoinPhysiNode));
case QUERY_NODE_PHYSICAL_PLAN_HASH_AGG:
return makeNode(type, sizeof(SAggPhysiNode));
case QUERY_NODE_PHYSICAL_PLAN_EXCHANGE:
@ -883,7 +883,7 @@ void nodesDestroyNode(SNode* pNode) {
break;
}
case QUERY_NODE_PHYSICAL_PLAN_MERGE_JOIN: {
SJoinPhysiNode* pPhyNode = (SJoinPhysiNode*)pNode;
SSortMergeJoinPhysiNode* pPhyNode = (SSortMergeJoinPhysiNode*)pNode;
destroyPhysiNode((SPhysiNode*)pPhyNode);
nodesDestroyNode(pPhyNode->pMergeCondition);
nodesDestroyNode(pPhyNode->pOnConditions);

View File

@ -55,7 +55,11 @@ typedef enum EDatabaseOptionType {
DB_OPTION_VGROUPS,
DB_OPTION_SINGLE_STABLE,
DB_OPTION_RETENTIONS,
DB_OPTION_SCHEMALESS
DB_OPTION_SCHEMALESS,
DB_OPTION_WAL_RETENTION_PERIOD,
DB_OPTION_WAL_RETENTION_SIZE,
DB_OPTION_WAL_ROLL_PERIOD,
DB_OPTION_WAL_SEGMENT_SIZE
} EDatabaseOptionType;
typedef enum ETableOptionType {
@ -90,7 +94,7 @@ SNode* createValueNode(SAstCreateContext* pCxt, int32_t dataType, const SToken*
SNode* createDurationValueNode(SAstCreateContext* pCxt, const SToken* pLiteral);
SNode* createDefaultDatabaseCondValue(SAstCreateContext* pCxt);
SNode* createPlaceholderValueNode(SAstCreateContext* pCxt, const SToken* pLiteral);
SNode* setProjectionAlias(SAstCreateContext* pCxt, SNode* pNode, const SToken* pAlias);
SNode* setProjectionAlias(SAstCreateContext* pCxt, SNode* pNode, SToken* pAlias);
SNode* createLogicConditionNode(SAstCreateContext* pCxt, ELogicConditionType type, SNode* pParam1, SNode* pParam2);
SNode* createOperatorNode(SAstCreateContext* pCxt, EOperatorType type, SNode* pLeft, SNode* pRight);
SNode* createBetweenAnd(SAstCreateContext* pCxt, SNode* pExpr, SNode* pLeft, SNode* pRight);

View File

@ -191,6 +191,20 @@ db_options(A) ::= db_options(B) VGROUPS NK_INTEGER(C).
db_options(A) ::= db_options(B) SINGLE_STABLE NK_INTEGER(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_SINGLE_STABLE, &C); }
db_options(A) ::= db_options(B) RETENTIONS retention_list(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_RETENTIONS, C); }
db_options(A) ::= db_options(B) SCHEMALESS NK_INTEGER(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_SCHEMALESS, &C); }
db_options(A) ::= db_options(B) WAL_RETENTION_PERIOD NK_INTEGER(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_WAL_RETENTION_PERIOD, &C); }
db_options(A) ::= db_options(B) WAL_RETENTION_PERIOD NK_MINUS(D) NK_INTEGER(C). {
SToken t = D;
t.n = (C.z + C.n) - D.z;
A = setDatabaseOption(pCxt, B, DB_OPTION_WAL_RETENTION_PERIOD, &t);
}
db_options(A) ::= db_options(B) WAL_RETENTION_SIZE NK_INTEGER(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_WAL_RETENTION_SIZE, &C); }
db_options(A) ::= db_options(B) WAL_RETENTION_SIZE NK_MINUS(D) NK_INTEGER(C). {
SToken t = D;
t.n = (C.z + C.n) - D.z;
A = setDatabaseOption(pCxt, B, DB_OPTION_WAL_RETENTION_SIZE, &t);
}
db_options(A) ::= db_options(B) WAL_ROLL_PERIOD NK_INTEGER(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_WAL_ROLL_PERIOD, &C); }
db_options(A) ::= db_options(B) WAL_SEGMENT_SIZE NK_INTEGER(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_WAL_SEGMENT_SIZE, &C); }
alter_db_options(A) ::= alter_db_option(B). { A = createAlterDatabaseOptions(pCxt); A = setAlterDatabaseOption(pCxt, A, &B); }
alter_db_options(A) ::= alter_db_options(B) alter_db_option(C). { A = setAlterDatabaseOption(pCxt, B, &C); }
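The extra NK_MINUS rules exist because the lexer emits '-' and the integer as two separate tokens; the action fuses them into one span so the option value keeps its sign. A standalone model of that pointer arithmetic (an SToken is assumed to carry a start pointer z and a length n, which is what the action relies on):

```c
#include <stdint.h>
#include <stdio.h>

typedef struct { const char *z; uint32_t n; } Tok;

int main(void) {
  const char *sql = "WAL_RETENTION_PERIOD -1";
  Tok minus = { sql + 21, 1 }; /* NK_MINUS token "-" */
  Tok num   = { sql + 22, 1 }; /* NK_INTEGER token "1" */
  Tok t = minus;
  t.n = (uint32_t)((num.z + num.n) - minus.z); /* span now covers "-1" */
  printf("%.*s\n", (int)t.n, t.z);             /* prints -1 */
  return 0;
}
```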

View File

@ -527,6 +527,7 @@ SNode* createTempTableNode(SAstCreateContext* pCxt, SNode* pSubquery, const STok
}
if (QUERY_NODE_SELECT_STMT == nodeType(pSubquery)) {
strcpy(((SSelectStmt*)pSubquery)->stmtName, tempTable->table.tableAlias);
((SSelectStmt*)pSubquery)->isSubquery = true;
} else if (QUERY_NODE_SET_OPERATOR == nodeType(pSubquery)) {
strcpy(((SSetOperator*)pSubquery)->stmtName, tempTable->table.tableAlias);
}
@ -637,8 +638,9 @@ SNode* createInterpTimeRange(SAstCreateContext* pCxt, SNode* pStart, SNode* pEnd
return createBetweenAnd(pCxt, createPrimaryKeyCol(pCxt), pStart, pEnd);
}
SNode* setProjectionAlias(SAstCreateContext* pCxt, SNode* pNode, const SToken* pAlias) {
SNode* setProjectionAlias(SAstCreateContext* pCxt, SNode* pNode, SToken* pAlias) {
CHECK_PARSER_STATUS(pCxt);
trimEscape(pAlias);
int32_t len = TMIN(sizeof(((SExprNode*)pNode)->aliasName) - 1, pAlias->n);
strncpy(((SExprNode*)pNode)->aliasName, pAlias->z, len);
((SExprNode*)pNode)->aliasName[len] = '\0';
@ -892,6 +894,18 @@ SNode* setDatabaseOption(SAstCreateContext* pCxt, SNode* pOptions, EDatabaseOpti
case DB_OPTION_RETENTIONS:
((SDatabaseOptions*)pOptions)->pRetentions = pVal;
break;
case DB_OPTION_WAL_RETENTION_PERIOD:
((SDatabaseOptions*)pOptions)->walRetentionPeriod = taosStr2Int32(((SToken*)pVal)->z, NULL, 10);
break;
case DB_OPTION_WAL_RETENTION_SIZE:
((SDatabaseOptions*)pOptions)->walRetentionSize = taosStr2Int32(((SToken*)pVal)->z, NULL, 10);
break;
case DB_OPTION_WAL_ROLL_PERIOD:
((SDatabaseOptions*)pOptions)->walRollPeriod = taosStr2Int32(((SToken*)pVal)->z, NULL, 10);
break;
case DB_OPTION_WAL_SEGMENT_SIZE:
((SDatabaseOptions*)pOptions)->walSegmentSize = taosStr2Int32(((SToken*)pVal)->z, NULL, 10);
break;
default:
break;
}
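Because each merged token begins at the '-', the plain base-10 parse yields the signed value directly, and no terminator is needed since parsing stops at the first non-numeric character. A small stand-in for taosStr2Int32:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
  /* token text as it sits inside the SQL buffer; parsing stops at ' ' */
  const char *z = "-1 WAL_RETENTION_SIZE -1";
  int32_t v = (int32_t)strtol(z, NULL, 10);
  printf("%d\n", v); /* -1 */
  return 0;
}
```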

View File

@ -739,12 +739,13 @@ static int32_t parseBoundColumns(SInsertParseContext* pCxt, SParsedDataColInfo*
return TSDB_CODE_SUCCESS;
}
static void buildCreateTbReq(SVCreateTbReq* pTbReq, const char* tname, STag* pTag, int64_t suid, const char* sname, SArray* tagName, uint8_t tagNum) {
static void buildCreateTbReq(SVCreateTbReq* pTbReq, const char* tname, STag* pTag, int64_t suid, const char* sname,
SArray* tagName, uint8_t tagNum) {
pTbReq->type = TD_CHILD_TABLE;
pTbReq->name = strdup(tname);
pTbReq->ctb.suid = suid;
pTbReq->ctb.tagNum = tagNum;
if(sname) pTbReq->ctb.name = strdup(sname);
if (sname) pTbReq->ctb.name = strdup(sname);
pTbReq->ctb.pTag = (uint8_t*)pTag;
pTbReq->ctb.tagName = taosArrayDup(tagName);
pTbReq->commentLen = -1;
@ -969,7 +970,7 @@ static int32_t parseTagsClause(SInsertParseContext* pCxt, SSchema* pSchema, uint
}
SSchema* pTagSchema = &pSchema[pCxt->tags.boundColumns[i]];
char tmpTokenBuf[TSDB_MAX_BYTES_PER_ROW] = {0}; // todo this can be optimize with parse column
char tmpTokenBuf[TSDB_MAX_BYTES_PER_ROW] = {0}; // todo this can be optimize with parse column
code = checkAndTrimValue(&sToken, tmpTokenBuf, &pCxt->msg);
if (code != TSDB_CODE_SUCCESS) {
goto end;
@ -1012,7 +1013,8 @@ static int32_t parseTagsClause(SInsertParseContext* pCxt, SSchema* pSchema, uint
goto end;
}
buildCreateTbReq(&pCxt->createTblReq, tName, pTag, pCxt->pTableMeta->suid, pCxt->sTableName, tagName, pCxt->pTableMeta->tableInfo.numOfTags);
buildCreateTbReq(&pCxt->createTblReq, tName, pTag, pCxt->pTableMeta->suid, pCxt->sTableName, tagName,
pCxt->pTableMeta->tableInfo.numOfTags);
end:
for (int i = 0; i < taosArrayGetSize(pTagVals); ++i) {
@ -1650,7 +1652,6 @@ static int32_t skipUsingClause(SInsertParseSyntaxCxt* pCxt) {
static int32_t collectTableMetaKey(SInsertParseSyntaxCxt* pCxt, SToken* pTbToken) {
SName name;
CHECK_CODE(createSName(&name, pTbToken, pCxt->pComCxt->acctId, pCxt->pComCxt->db, &pCxt->msg));
CHECK_CODE(reserveDbCfgInCache(pCxt->pComCxt->acctId, name.dbname, pCxt->pMetaCache));
CHECK_CODE(reserveUserAuthInCacheExt(pCxt->pComCxt->pUser, &name, AUTH_TYPE_WRITE, pCxt->pMetaCache));
CHECK_CODE(reserveTableMetaInCacheExt(&name, pCxt->pMetaCache));
CHECK_CODE(reserveTableVgroupInCacheExt(&name, pCxt->pMetaCache));
@ -2332,7 +2333,8 @@ int32_t smlBindData(void* handle, SArray* tags, SArray* colsSchema, SArray* cols
return ret;
}
buildCreateTbReq(&smlHandle->tableExecHandle.createTblReq, tableName, pTag, pTableMeta->suid, NULL, tagName, pTableMeta->tableInfo.numOfTags);
buildCreateTbReq(&smlHandle->tableExecHandle.createTblReq, tableName, pTag, pTableMeta->suid, NULL, tagName,
pTableMeta->tableInfo.numOfTags);
taosArrayDestroy(tagName);
smlHandle->tableExecHandle.createTblReq.ctb.name = taosMemoryMalloc(sTableNameLen + 1);

View File

@ -234,6 +234,10 @@ static SKeyword keywordTable[] = {
{"VGROUPS", TK_VGROUPS},
{"VNODES", TK_VNODES},
{"WAL", TK_WAL},
{"WAL_RETENTION_PERIOD", TK_WAL_RETENTION_PERIOD},
{"WAL_RETENTION_SIZE", TK_WAL_RETENTION_SIZE},
{"WAL_ROLL_PERIOD", TK_WAL_ROLL_PERIOD},
{"WAL_SEGMENT_SIZE", TK_WAL_SEGMENT_SIZE},
{"WATERMARK", TK_WATERMARK},
{"WHERE", TK_WHERE},
{"WINDOW_CLOSE", TK_WINDOW_CLOSE},

View File

@ -2984,6 +2984,10 @@ static int32_t buildCreateDbReq(STranslateContext* pCxt, SCreateDatabaseStmt* pS
pReq->cacheLast = pStmt->pOptions->cacheModel;
pReq->cacheLastSize = pStmt->pOptions->cacheLastSize;
pReq->schemaless = pStmt->pOptions->schemaless;
pReq->walRetentionPeriod = pStmt->pOptions->walRetentionPeriod;
pReq->walRetentionSize = pStmt->pOptions->walRetentionSize;
pReq->walRollPeriod = pStmt->pOptions->walRollPeriod;
pReq->walSegmentSize = pStmt->pOptions->walSegmentSize;
pReq->ignoreExist = pStmt->ignoreExists;
return buildCreateDbRetentions(pStmt->pOptions->pRetentions, pReq);
}
@ -3252,6 +3256,21 @@ static int32_t checkDatabaseOptions(STranslateContext* pCxt, const char* pDbName
if (TSDB_CODE_SUCCESS == code) {
code = checkDbEnumOption(pCxt, "schemaless", pOptions->schemaless, TSDB_DB_SCHEMALESS_ON, TSDB_DB_SCHEMALESS_OFF);
}
if (TSDB_CODE_SUCCESS == code) {
code = checkDbRangeOption(pCxt, "walRetentionPeriod", pOptions->walRetentionPeriod,
TSDB_DB_MIN_WAL_RETENTION_PERIOD, INT32_MAX);
}
if (TSDB_CODE_SUCCESS == code) {
code = checkDbRangeOption(pCxt, "walRetentionSize", pOptions->walRetentionSize, TSDB_DB_MIN_WAL_RETENTION_SIZE,
INT32_MAX);
}
if (TSDB_CODE_SUCCESS == code) {
code = checkDbRangeOption(pCxt, "walRollPeriod", pOptions->walRollPeriod, TSDB_DB_MIN_WAL_ROLL_PERIOD, INT32_MAX);
}
if (TSDB_CODE_SUCCESS == code) {
code =
checkDbRangeOption(pCxt, "walSegmentSize", pOptions->walSegmentSize, TSDB_DB_MIN_WAL_SEGMENT_SIZE, INT32_MAX);
}
if (TSDB_CODE_SUCCESS == code) {
code = checkOptionsDependency(pCxt, pDbName, pOptions);
}
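All four WAL options funnel through the same bounded check. A trimmed model of just the comparison (the real checkDbRangeOption also formats the option name into the error report; the TSDB_DB_MIN_* minimums evidently admit -1, since the parser test below configures WAL_RETENTION_PERIOD -1):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool dbOptionInRange(int64_t val, int64_t minVal, int64_t maxVal) {
  return val >= minVal && val <= maxVal;
}

int main(void) {
  /* -1 must pass, as the tests below set WAL_RETENTION_PERIOD -1 */
  printf("%d\n", dbOptionInRange(-1, -1, INT32_MAX)); /* 1 */
  return 0;
}
```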

View File

@ -92,7 +92,7 @@ static char* getSyntaxErrFormat(int32_t errCode) {
case TSDB_CODE_PAR_INTER_SLIDING_TOO_BIG:
return "sliding value no larger than the interval value";
case TSDB_CODE_PAR_INTER_SLIDING_TOO_SMALL:
return "sliding value can not less than 1% of interval value";
return "sliding value can not less than 1%% of interval value";
case TSDB_CODE_PAR_ONLY_ONE_JSON_TAG:
return "Only one tag if there is a json tag";
case TSDB_CODE_PAR_INCORRECT_NUM_OF_COL:
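This one-character fix matters because these strings are evidently consumed as printf-style formats downstream (that is what the %% escape implies): in the unescaped variant, "% o" would be read as a space-flagged octal conversion with no matching argument, which is undefined behavior. A minimal reproduction of the corrected form:

```c
#include <stdio.h>

int main(void) {
  char buf[80];
  /* Correct: %% collapses to a literal '%' when used as a format. */
  snprintf(buf, sizeof(buf),
           "sliding value can not less than 1%% of interval value");
  puts(buf);
  /* The buggy "1% of ..." variant would treat "% o" as an octal
   * conversion with no argument supplied -- undefined behavior. */
  return 0;
}
```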

File diff suppressed because it is too large

View File

@ -77,6 +77,10 @@ TEST_F(ParserInitialCTest, createBnode) {
* | WAL value
* | VGROUPS value
* | SINGLE_STABLE {0 | 1}
* | WAL_RETENTION_PERIOD value
* | WAL_ROLL_PERIOD value
* | WAL_RETENTION_SIZE value
* | WAL_SEGMENT_SIZE value
* }
*/
TEST_F(ParserInitialCTest, createDatabase) {
@ -149,6 +153,10 @@ TEST_F(ParserInitialCTest, createDatabase) {
++expect.numOfRetensions;
};
auto setDbSchemalessFunc = [&](int8_t schemaless) { expect.schemaless = schemaless; };
auto setDbWalRetentionPeriod = [&](int32_t walRetentionPeriod) { expect.walRetentionPeriod = walRetentionPeriod; };
auto setDbWalRetentionSize = [&](int32_t walRetentionSize) { expect.walRetentionSize = walRetentionSize; };
auto setDbWalRollPeriod = [&](int32_t walRollPeriod) { expect.walRollPeriod = walRollPeriod; };
auto setDbWalSegmentSize = [&](int32_t walSegmentSize) { expect.walSegmentSize = walSegmentSize; };
setCheckDdlFunc([&](const SQuery* pQuery, ParserStage stage) {
ASSERT_EQ(nodeType(pQuery->pRoot), QUERY_NODE_CREATE_DATABASE_STMT);
@ -175,6 +183,10 @@ TEST_F(ParserInitialCTest, createDatabase) {
ASSERT_EQ(req.strict, expect.strict);
ASSERT_EQ(req.cacheLast, expect.cacheLast);
ASSERT_EQ(req.cacheLastSize, expect.cacheLastSize);
ASSERT_EQ(req.walRetentionPeriod, expect.walRetentionPeriod);
ASSERT_EQ(req.walRetentionSize, expect.walRetentionSize);
ASSERT_EQ(req.walRollPeriod, expect.walRollPeriod);
ASSERT_EQ(req.walSegmentSize, expect.walSegmentSize);
// ASSERT_EQ(req.schemaless, expect.schemaless);
ASSERT_EQ(req.ignoreExist, expect.ignoreExist);
ASSERT_EQ(req.numOfRetensions, expect.numOfRetensions);
@ -219,6 +231,10 @@ TEST_F(ParserInitialCTest, createDatabase) {
setDbVgroupsFunc(100);
setDbSingleStableFunc(1);
setDbSchemalessFunc(1);
setDbWalRetentionPeriod(-1);
setDbWalRetentionSize(-1);
setDbWalRollPeriod(10);
setDbWalSegmentSize(20);
run("CREATE DATABASE IF NOT EXISTS wxy_db "
"BUFFER 64 "
"CACHEMODEL 'last_value' "
@ -238,7 +254,11 @@ TEST_F(ParserInitialCTest, createDatabase) {
"WAL 2 "
"VGROUPS 100 "
"SINGLE_STABLE 1 "
"SCHEMALESS 1");
"SCHEMALESS 1 "
"WAL_RETENTION_PERIOD -1 "
"WAL_RETENTION_SIZE -1 "
"WAL_ROLL_PERIOD 10 "
"WAL_SEGMENT_SIZE 20");
clearCreateDbReq();
setCreateDbReqFunc("wxy_db", 1);

View File

@ -144,9 +144,9 @@ TEST_F(ParserSelectTest, IndefiniteRowsFunc) {
TEST_F(ParserSelectTest, IndefiniteRowsFuncSemanticCheck) {
useDb("root", "test");
run("SELECT DIFF(c1), c2 FROM t1", TSDB_CODE_PAR_NOT_SINGLE_GROUP);
run("SELECT DIFF(c1), c2 FROM t1");
run("SELECT DIFF(c1), tbname FROM t1", TSDB_CODE_PAR_NOT_SINGLE_GROUP);
run("SELECT DIFF(c1), tbname FROM t1");
run("SELECT DIFF(c1), count(*) FROM t1", TSDB_CODE_PAR_NOT_ALLOWED_FUNC);

View File

@ -339,6 +339,7 @@ static int32_t createJoinLogicNode(SLogicPlanContext* pCxt, SSelectStmt* pSelect
pJoin->joinType = pJoinTable->joinType;
pJoin->isSingleTableJoin = pJoinTable->table.singleTable;
pJoin->inputTsOrder = ORDER_ASC;
pJoin->node.groupAction = GROUP_ACTION_CLEAR;
pJoin->node.requireDataOrder = DATA_ORDER_LEVEL_GLOBAL;
@ -625,14 +626,14 @@ static int32_t createInterpFuncLogicNode(SLogicPlanContext* pCxt, SSelectStmt* p
static int32_t createWindowLogicNodeFinalize(SLogicPlanContext* pCxt, SSelectStmt* pSelect, SWindowLogicNode* pWindow,
SLogicNode** pLogicNode) {
int32_t code = nodesCollectFuncs(pSelect, SQL_CLAUSE_WINDOW, fmIsWindowClauseFunc, &pWindow->pFuncs);
if (pCxt->pPlanCxt->streamQuery) {
pWindow->triggerType = pCxt->pPlanCxt->triggerType;
pWindow->watermark = pCxt->pPlanCxt->watermark;
pWindow->igExpired = pCxt->pPlanCxt->igExpired;
}
pWindow->inputTsOrder = ORDER_ASC;
int32_t code = nodesCollectFuncs(pSelect, SQL_CLAUSE_WINDOW, fmIsWindowClauseFunc, &pWindow->pFuncs);
if (TSDB_CODE_SUCCESS == code) {
code = rewriteExprsForSelect(pWindow->pFuncs, pSelect, SQL_CLAUSE_WINDOW);
}
@ -861,7 +862,8 @@ static int32_t createProjectLogicNode(SLogicPlanContext* pCxt, SSelectStmt* pSel
TSWAP(pProject->node.pLimit, pSelect->pLimit);
TSWAP(pProject->node.pSlimit, pSelect->pSlimit);
pProject->node.groupAction = GROUP_ACTION_CLEAR;
pProject->node.groupAction =
(!pSelect->isSubquery && pCxt->pPlanCxt->streamQuery) ? GROUP_ACTION_KEEP : GROUP_ACTION_CLEAR;
pProject->node.requireDataOrder = DATA_ORDER_LEVEL_NONE;
pProject->node.resultDataOrder = DATA_ORDER_LEVEL_NONE;

View File

@ -993,25 +993,28 @@ static bool sortPriKeyOptMayBeOptimized(SLogicNode* pNode) {
}
static int32_t sortPriKeyOptGetScanNodesImpl(SLogicNode* pNode, bool* pNotOptimize, SNodeList** pScanNodes) {
int32_t code = TSDB_CODE_SUCCESS;
switch (nodeType(pNode)) {
case QUERY_NODE_LOGIC_PLAN_SCAN:
if (TSDB_SUPER_TABLE != ((SScanLogicNode*)pNode)->tableType) {
return nodesListMakeAppend(pScanNodes, (SNode*)pNode);
case QUERY_NODE_LOGIC_PLAN_SCAN: {
SScanLogicNode* pScan = (SScanLogicNode*)pNode;
if (NULL != pScan->pGroupTags) {
*pNotOptimize = true;
return TSDB_CODE_SUCCESS;
}
break;
case QUERY_NODE_LOGIC_PLAN_JOIN:
code =
return nodesListMakeAppend(pScanNodes, (SNode*)pNode);
}
case QUERY_NODE_LOGIC_PLAN_JOIN: {
int32_t code =
sortPriKeyOptGetScanNodesImpl((SLogicNode*)nodesListGetNode(pNode->pChildren, 0), pNotOptimize, pScanNodes);
if (TSDB_CODE_SUCCESS == code) {
code =
sortPriKeyOptGetScanNodesImpl((SLogicNode*)nodesListGetNode(pNode->pChildren, 1), pNotOptimize, pScanNodes);
}
return code;
}
case QUERY_NODE_LOGIC_PLAN_AGG:
case QUERY_NODE_LOGIC_PLAN_PARTITION:
*pNotOptimize = true;
return code;
return TSDB_CODE_SUCCESS;
default:
break;
}
@ -1037,17 +1040,33 @@ static EOrder sortPriKeyOptGetPriKeyOrder(SSortLogicNode* pSort) {
return ((SOrderByExprNode*)nodesListGetNode(pSort->pSortKeys, 0))->order;
}
static void sortPriKeyOptSetParentOrder(SLogicNode* pNode, EOrder order) {
if (NULL == pNode) {
return;
}
if (QUERY_NODE_LOGIC_PLAN_WINDOW == nodeType(pNode)) {
((SWindowLogicNode*)pNode)->inputTsOrder = order;
} else if (QUERY_NODE_LOGIC_PLAN_JOIN == nodeType(pNode)) {
((SJoinLogicNode*)pNode)->inputTsOrder = order;
}
sortPriKeyOptSetParentOrder(pNode->pParent, order);
}
static int32_t sortPriKeyOptApply(SOptimizeContext* pCxt, SLogicSubplan* pLogicSubplan, SSortLogicNode* pSort,
SNodeList* pScanNodes) {
EOrder order = sortPriKeyOptGetPriKeyOrder(pSort);
if (ORDER_DESC == order) {
SNode* pScanNode = NULL;
FOREACH(pScanNode, pScanNodes) {
SScanLogicNode* pScan = (SScanLogicNode*)pScanNode;
if (pScan->scanSeq[0] > 0) {
TSWAP(pScan->scanSeq[0], pScan->scanSeq[1]);
}
SNode* pScanNode = NULL;
FOREACH(pScanNode, pScanNodes) {
SScanLogicNode* pScan = (SScanLogicNode*)pScanNode;
if (ORDER_DESC == order && pScan->scanSeq[0] > 0) {
TSWAP(pScan->scanSeq[0], pScan->scanSeq[1]);
}
if (TSDB_SUPER_TABLE == pScan->tableType) {
pScan->scanType = SCAN_TYPE_TABLE_MERGE;
pScan->node.resultDataOrder = DATA_ORDER_LEVEL_GLOBAL;
pScan->node.requireDataOrder = DATA_ORDER_LEVEL_GLOBAL;
}
sortPriKeyOptSetParentOrder(pScan->node.pParent, order);
}
SLogicNode* pChild = (SLogicNode*)nodesListGetNode(pSort->node.pChildren, 0);
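sortPriKeyOptSetParentOrder, added above, climbs the pParent chain so every window and join between the scan and the plan root agrees on the timestamp order the optimizer just chose. A self-contained model of that walk, with simplified stand-in types:

```c
#include <stddef.h>

typedef enum { N_SCAN, N_WINDOW, N_JOIN, N_PROJECT } Kind;
typedef enum { ORD_ASC, ORD_DESC } Ord;

typedef struct Node {
  Kind kind;
  Ord inputTsOrder;     /* meaningful for N_WINDOW / N_JOIN only */
  struct Node *pParent; /* NULL at the plan root */
} Node;

/* Stamp the chosen order on every window/join up to the root. */
static void setParentOrder(Node *pNode, Ord order) {
  if (pNode == NULL) return;
  if (pNode->kind == N_WINDOW || pNode->kind == N_JOIN) {
    pNode->inputTsOrder = order;
  }
  setParentOrder(pNode->pParent, order);
}
```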
@ -1613,10 +1632,10 @@ static void alignProjectionWithTarget(SLogicNode* pNode) {
}
SProjectLogicNode* pProjectNode = (SProjectLogicNode*)pNode;
SNode* pProjection = NULL;
SNode* pProjection = NULL;
FOREACH(pProjection, pProjectNode->pProjections) {
SNode* pTarget = NULL;
bool keep = false;
bool keep = false;
FOREACH(pTarget, pNode->pTargets) {
if (0 == strcmp(((SColumnNode*)pProjection)->node.aliasName, ((SColumnNode*)pTarget)->colName)) {
keep = true;
@ -2214,7 +2233,7 @@ static bool tagScanMayBeOptimized(SLogicNode* pNode) {
!planOptNodeListHasTbname(pAgg->pGroupKeys)) {
return false;
}
SNode* pGroupKey = NULL;
FOREACH(pGroupKey, pAgg->pGroupKeys) {
SNode* pGroup = NULL;

View File

@ -415,7 +415,6 @@ static int32_t createScanPhysiNodeFinalize(SPhysiPlanContext* pCxt, SSubplan* pS
SScanPhysiNode* pScanPhysiNode, SPhysiNode** pPhyNode) {
int32_t code = createScanCols(pCxt, pScanPhysiNode, pScanLogicNode->pScanCols);
if (TSDB_CODE_SUCCESS == code) {
// Data block describe also needs to be set without scanning column, such as SELECT COUNT(*) FROM t
code = addDataBlockSlots(pCxt, pScanPhysiNode->pScanCols, pScanPhysiNode->node.pOutputDataBlockDesc);
}
@ -622,8 +621,8 @@ static int32_t createScanPhysiNode(SPhysiPlanContext* pCxt, SSubplan* pSubplan,
static int32_t createJoinPhysiNode(SPhysiPlanContext* pCxt, SNodeList* pChildren, SJoinLogicNode* pJoinLogicNode,
SPhysiNode** pPhyNode) {
SJoinPhysiNode* pJoin =
(SJoinPhysiNode*)makePhysiNode(pCxt, (SLogicNode*)pJoinLogicNode, QUERY_NODE_PHYSICAL_PLAN_MERGE_JOIN);
SSortMergeJoinPhysiNode* pJoin =
(SSortMergeJoinPhysiNode*)makePhysiNode(pCxt, (SLogicNode*)pJoinLogicNode, QUERY_NODE_PHYSICAL_PLAN_MERGE_JOIN);
if (NULL == pJoin) {
return TSDB_CODE_OUT_OF_MEMORY;
}
@ -975,6 +974,9 @@ static int32_t createInterpFuncPhysiNode(SPhysiPlanContext* pCxt, SNodeList* pCh
}
static bool projectCanMergeDataBlock(SProjectLogicNode* pProject) {
if (GROUP_ACTION_KEEP == pProject->node.groupAction) {
return false;
}
if (DATA_ORDER_LEVEL_NONE == pProject->node.resultDataOrder) {
return true;
}

View File

@ -469,7 +469,7 @@ static int32_t stbSplCreateExchangeNode(SSplitContext* pCxt, SLogicNode* pParent
return code;
}
static int32_t stbSplCreateMergeKeysByPrimaryKey(SNode* pPrimaryKey, SNodeList** pMergeKeys) {
static int32_t stbSplCreateMergeKeysByPrimaryKey(SNode* pPrimaryKey, EOrder order, SNodeList** pMergeKeys) {
SOrderByExprNode* pMergeKey = (SOrderByExprNode*)nodesMakeNode(QUERY_NODE_ORDER_BY_EXPR);
if (NULL == pMergeKey) {
return TSDB_CODE_OUT_OF_MEMORY;
@ -479,7 +479,7 @@ static int32_t stbSplCreateMergeKeysByPrimaryKey(SNode* pPrimaryKey, SNodeList**
nodesDestroyNode((SNode*)pMergeKey);
return TSDB_CODE_OUT_OF_MEMORY;
}
pMergeKey->order = ORDER_ASC;
pMergeKey->order = order;
pMergeKey->nullOrder = NULL_ORDER_FIRST;
return nodesListMakeStrictAppend(pMergeKeys, (SNode*)pMergeKey);
}
@ -491,7 +491,8 @@ static int32_t stbSplSplitIntervalForBatch(SSplitContext* pCxt, SStableSplitInfo
((SWindowLogicNode*)pPartWindow)->windowAlgo = INTERVAL_ALGO_HASH;
((SWindowLogicNode*)pInfo->pSplitNode)->windowAlgo = INTERVAL_ALGO_MERGE;
SNodeList* pMergeKeys = NULL;
code = stbSplCreateMergeKeysByPrimaryKey(((SWindowLogicNode*)pInfo->pSplitNode)->pTspk, &pMergeKeys);
code = stbSplCreateMergeKeysByPrimaryKey(((SWindowLogicNode*)pInfo->pSplitNode)->pTspk,
((SWindowLogicNode*)pInfo->pSplitNode)->inputTsOrder, &pMergeKeys);
if (TSDB_CODE_SUCCESS == code) {
code = stbSplCreateMergeNode(pCxt, NULL, pInfo->pSplitNode, pMergeKeys, pPartWindow, true);
}
@ -579,7 +580,8 @@ static int32_t stbSplSplitSessionOrStateForBatch(SSplitContext* pCxt, SStableSpl
SLogicNode* pChild = (SLogicNode*)nodesListGetNode(pWindow->pChildren, 0);
SNodeList* pMergeKeys = NULL;
int32_t code = stbSplCreateMergeKeysByPrimaryKey(((SWindowLogicNode*)pWindow)->pTspk, &pMergeKeys);
int32_t code = stbSplCreateMergeKeysByPrimaryKey(((SWindowLogicNode*)pWindow)->pTspk,
((SWindowLogicNode*)pWindow)->inputTsOrder, &pMergeKeys);
if (TSDB_CODE_SUCCESS == code) {
code = stbSplCreateMergeNode(pCxt, pInfo->pSubplan, pChild, pMergeKeys, (SLogicNode*)pChild, true);
@ -913,27 +915,70 @@ static int32_t stbSplSplitScanNodeWithPartTags(SSplitContext* pCxt, SStableSplit
}
static SNode* stbSplFindPrimaryKeyFromScan(SScanLogicNode* pScan) {
bool find = false;
SNode* pCol = NULL;
FOREACH(pCol, pScan->pScanCols) {
if (PRIMARYKEY_TIMESTAMP_COL_ID == ((SColumnNode*)pCol)->colId) {
find = true;
break;
}
}
if (!find) {
return NULL;
}
SNode* pTarget = NULL;
FOREACH(pTarget, pScan->node.pTargets) {
if (nodesEqualNode(pTarget, pCol)) {
return pCol;
}
}
return NULL;
nodesListStrictAppend(pScan->node.pTargets, nodesCloneNode(pCol));
return pCol;
}
static int32_t stbSplCreateMergeScanNode(SScanLogicNode* pScan, SLogicNode** pOutputMergeScan,
SNodeList** pOutputMergeKeys) {
SNodeList* pChildren = pScan->node.pChildren;
pScan->node.pChildren = NULL;
int32_t code = TSDB_CODE_SUCCESS;
SScanLogicNode* pMergeScan = (SScanLogicNode*)nodesCloneNode((SNode*)pScan);
if (NULL == pMergeScan) {
code = TSDB_CODE_OUT_OF_MEMORY;
}
SNodeList* pMergeKeys = NULL;
if (TSDB_CODE_SUCCESS == code) {
pMergeScan->scanType = SCAN_TYPE_TABLE_MERGE;
pMergeScan->node.pChildren = pChildren;
splSetParent((SLogicNode*)pMergeScan);
code = stbSplCreateMergeKeysByPrimaryKey(stbSplFindPrimaryKeyFromScan(pMergeScan),
pMergeScan->scanSeq[0] > 0 ? ORDER_ASC : ORDER_DESC, &pMergeKeys);
}
if (TSDB_CODE_SUCCESS == code) {
*pOutputMergeScan = (SLogicNode*)pMergeScan;
*pOutputMergeKeys = pMergeKeys;
} else {
nodesDestroyNode((SNode*)pMergeScan);
nodesDestroyList(pMergeKeys);
}
return code;
}
static int32_t stbSplSplitMergeScanNode(SSplitContext* pCxt, SLogicSubplan* pSubplan, SScanLogicNode* pScan,
bool groupSort) {
SNodeList* pMergeKeys = NULL;
int32_t code = stbSplCreateMergeKeysByPrimaryKey(stbSplFindPrimaryKeyFromScan(pScan), &pMergeKeys);
SLogicNode* pMergeScan = NULL;
SNodeList* pMergeKeys = NULL;
int32_t code = stbSplCreateMergeScanNode(pScan, &pMergeScan, &pMergeKeys);
if (TSDB_CODE_SUCCESS == code) {
code = stbSplCreateMergeNode(pCxt, pSubplan, (SLogicNode*)pScan, pMergeKeys, (SLogicNode*)pScan, groupSort);
code = stbSplCreateMergeNode(pCxt, pSubplan, (SLogicNode*)pScan, pMergeKeys, pMergeScan, groupSort);
}
if (TSDB_CODE_SUCCESS == code) {
code = nodesListMakeStrictAppend(&pSubplan->pChildren,
(SNode*)splCreateScanSubplan(pCxt, (SLogicNode*)pScan, SPLIT_FLAG_STABLE_SPLIT));
(SNode*)splCreateScanSubplan(pCxt, pMergeScan, SPLIT_FLAG_STABLE_SPLIT));
}
pScan->scanType = SCAN_TYPE_TABLE_MERGE;
++(pCxt->groupId);
return code;
}
@ -978,14 +1023,14 @@ static int32_t stbSplSplitJoinNode(SSplitContext* pCxt, SStableSplitInfo* pInfo)
}
static int32_t stbSplCreateMergeKeysForPartitionNode(SLogicNode* pPart, SNodeList** pMergeKeys) {
SNode* pPrimaryKey =
nodesCloneNode(stbSplFindPrimaryKeyFromScan((SScanLogicNode*)nodesListGetNode(pPart->pChildren, 0)));
SScanLogicNode* pScan = (SScanLogicNode*)nodesListGetNode(pPart->pChildren, 0);
SNode* pPrimaryKey = nodesCloneNode(stbSplFindPrimaryKeyFromScan(pScan));
if (NULL == pPrimaryKey) {
return TSDB_CODE_OUT_OF_MEMORY;
}
int32_t code = nodesListAppend(pPart->pTargets, pPrimaryKey);
if (TSDB_CODE_SUCCESS == code) {
code = stbSplCreateMergeKeysByPrimaryKey(pPrimaryKey, pMergeKeys);
code = stbSplCreateMergeKeysByPrimaryKey(pPrimaryKey, pScan->scanSeq[0] > 0 ? ORDER_ASC : ORDER_DESC, pMergeKeys);
}
return code;
}

View File

@ -124,7 +124,8 @@ int32_t replaceLogicNode(SLogicSubplan* pSubplan, SLogicNode* pOld, SLogicNode*
}
static int32_t adjustScanDataRequirement(SScanLogicNode* pScan, EDataOrderLevel requirement) {
if (SCAN_TYPE_TABLE != pScan->scanType && SCAN_TYPE_TABLE_MERGE != pScan->scanType) {
if ((SCAN_TYPE_TABLE != pScan->scanType && SCAN_TYPE_TABLE_MERGE != pScan->scanType) ||
DATA_ORDER_LEVEL_GLOBAL == pScan->node.requireDataOrder) {
return TSDB_CODE_SUCCESS;
}
// The lowest sort level of scan output data is DATA_ORDER_LEVEL_IN_BLOCK

View File

@ -24,9 +24,10 @@ TEST_F(PlanBasicTest, selectClause) {
useDb("root", "test");
run("SELECT * FROM t1");
run("SELECT 1 FROM t1");
run("SELECT * FROM st1");
run("SELECT 1 FROM st1");
run("SELECT MAX(c1) c2, c2 FROM t1");
run("SELECT MAX(c1) c2, c2 FROM st1");
}
TEST_F(PlanBasicTest, whereClause) {

View File

@ -53,6 +53,8 @@ TEST_F(PlanOptimizeTest, sortPrimaryKey) {
run("SELECT c1 FROM t1 ORDER BY ts");
run("SELECT c1 FROM st1 ORDER BY ts");
run("SELECT c1 FROM t1 ORDER BY ts DESC");
run("SELECT COUNT(*) FROM t1 INTERVAL(10S) ORDER BY _WSTART DESC");

View File

@ -283,6 +283,8 @@ int32_t qwGetDeleteResFromSink(QW_FPARAMS_DEF, SQWTaskCtx *ctx, SDeleteRes *pRes
pRes->skey = pDelRes->skey;
pRes->ekey = pDelRes->ekey;
pRes->affectedRows = pDelRes->affectedRows;
taosMemoryFree(output.pData);
return TSDB_CODE_SUCCESS;
}
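The added taosMemoryFree(output.pData) closes a leak: the sink hands back an allocated buffer, the scalar fields are copied out, and the buffer was previously never released. The shape of the fix, with plain malloc/free standing in for the taosMemory wrappers:

```c
#include <stdlib.h>

typedef struct { long skey, ekey, affectedRows; } DelRes;

/* The producer allocated *fetched; copy what we need, then release it. */
static void copyDeleteRes(DelRes *dst, DelRes *fetched) {
  dst->skey = fetched->skey;
  dst->ekey = fetched->ekey;
  dst->affectedRows = fetched->affectedRows;
  free(fetched); /* previously missing: every delete result leaked here */
}
```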

View File

@ -26,7 +26,7 @@ static int32_t streamTaskExecImpl(SStreamTask* pTask, void* data, SArray* pRes)
} else if (pItem->type == STREAM_INPUT__DATA_SUBMIT) {
ASSERT(pTask->isDataScan);
SStreamDataSubmit* pSubmit = (SStreamDataSubmit*)data;
qDebug("task %d %p set submit input %p %p %d", pTask->taskId, pTask, pSubmit, pSubmit->data, *pSubmit->dataRef);
qDebug("task %d %p set submit input %p %p %d 1", pTask->taskId, pTask, pSubmit, pSubmit->data, *pSubmit->dataRef);
qSetStreamInput(exec, pSubmit->data, STREAM_INPUT__DATA_SUBMIT, false);
} else if (pItem->type == STREAM_INPUT__DATA_BLOCK || pItem->type == STREAM_INPUT__DATA_RETRIEVE) {
SStreamDataBlock* pBlock = (SStreamDataBlock*)data;
@ -72,6 +72,8 @@ static int32_t streamTaskExecImpl(SStreamTask* pTask, void* data, SArray* pRes)
continue;
}
qDebug("task %d(child %d) executed and get block");
SSDataBlock block = {0};
assignOneDataBlock(&block, output);
block.info.childId = pTask->selfChildId;
@ -188,7 +190,7 @@ static SArray* streamExecForQall(SStreamTask* pTask, SArray* pRes) {
if (pTask->execType == TASK_EXEC__NONE) {
ASSERT(((SStreamQueueItem*)data)->type == STREAM_INPUT__DATA_BLOCK);
streamTaskOutput(pTask, data);
return pRes;
continue;
}
qDebug("stream task %d exec begin, msg batch: %d", pTask->taskId, cnt);

View File

@ -238,6 +238,7 @@ int32_t syncNodeGetPreIndexTerm(SSyncNode* pSyncNode, SyncIndex index, SyncInd
bool syncNodeIsOptimizedOneReplica(SSyncNode* ths, SRpcMsg* pMsg);
int32_t syncNodeCommit(SSyncNode* ths, SyncIndex beginIndex, SyncIndex endIndex, uint64_t flag);
int32_t syncNodePreCommit(SSyncNode* ths, SSyncRaftEntry* pEntry, int32_t code);
int32_t syncNodeUpdateNewConfigIndex(SSyncNode* ths, SSyncCfg* pNewCfg);

View File

@ -26,6 +26,7 @@ extern "C" {
#include "syncInt.h"
#include "syncMessage.h"
#include "taosdef.h"
#include "tref.h"
#include "tskiplist.h"
typedef struct SSyncRaftEntry {
@ -89,6 +90,7 @@ typedef struct SRaftEntryCache {
SSkipList* pSkipList;
int32_t maxCount;
int32_t currentCount;
int32_t refMgr;
TdThreadMutex mutex;
SSyncNode* pSyncNode;
} SRaftEntryCache;

View File

@ -244,22 +244,7 @@ int32_t syncNodeOnAppendEntriesCb(SSyncNode* ths, SyncAppendEntries* pMsg) {
ths->pLogStore->appendEntry(ths->pLogStore, pAppendEntry);
// pre commit
SRpcMsg rpcMsg;
syncEntry2OriginalRpc(pAppendEntry, &rpcMsg);
if (ths->pFsm != NULL) {
// if (ths->pFsm->FpPreCommitCb != NULL && pAppendEntry->originalRpcType != TDMT_SYNC_NOOP) {
if (ths->pFsm->FpPreCommitCb != NULL && syncUtilUserPreCommit(pAppendEntry->originalRpcType)) {
SFsmCbMeta cbMeta = {0};
cbMeta.index = pAppendEntry->index;
cbMeta.lastConfigIndex = syncNodeGetSnapshotConfigIndex(ths, cbMeta.index);
cbMeta.isWeak = pAppendEntry->isWeak;
cbMeta.code = 2;
cbMeta.state = ths->state;
cbMeta.seqNum = pAppendEntry->seqNum;
ths->pFsm->FpPreCommitCb(ths->pFsm, &rpcMsg, cbMeta);
}
}
rpcFreeCont(rpcMsg.pCont);
syncNodePreCommit(ths, pAppendEntry, 0);
}
// free memory
@ -280,22 +265,7 @@ int32_t syncNodeOnAppendEntriesCb(SSyncNode* ths, SyncAppendEntries* pMsg) {
ths->pLogStore->appendEntry(ths->pLogStore, pAppendEntry);
// pre commit
SRpcMsg rpcMsg;
syncEntry2OriginalRpc(pAppendEntry, &rpcMsg);
if (ths->pFsm != NULL) {
// if (ths->pFsm->FpPreCommitCb != NULL && pAppendEntry->originalRpcType != TDMT_SYNC_NOOP) {
if (ths->pFsm->FpPreCommitCb != NULL && syncUtilUserPreCommit(pAppendEntry->originalRpcType)) {
SFsmCbMeta cbMeta = {0};
cbMeta.index = pAppendEntry->index;
cbMeta.lastConfigIndex = syncNodeGetSnapshotConfigIndex(ths, cbMeta.index);
cbMeta.isWeak = pAppendEntry->isWeak;
cbMeta.code = 3;
cbMeta.state = ths->state;
cbMeta.seqNum = pAppendEntry->seqNum;
ths->pFsm->FpPreCommitCb(ths->pFsm, &rpcMsg, cbMeta);
}
}
rpcFreeCont(rpcMsg.pCont);
syncNodePreCommit(ths, pAppendEntry, 0);
// free memory
syncEntryDestory(pAppendEntry);
@ -440,7 +410,7 @@ static int32_t syncNodeDoMakeLogSame(SSyncNode* ths, SyncIndex FromIndex) {
return code;
}
static int32_t syncNodePreCommit(SSyncNode* ths, SSyncRaftEntry* pEntry) {
int32_t syncNodePreCommit(SSyncNode* ths, SSyncRaftEntry* pEntry, int32_t code) {
SRpcMsg rpcMsg;
syncEntry2OriginalRpc(pEntry, &rpcMsg);
@ -456,7 +426,7 @@ static int32_t syncNodePreCommit(SSyncNode* ths, SSyncRaftEntry* pEntry) {
cbMeta.index = pEntry->index;
cbMeta.lastConfigIndex = syncNodeGetSnapshotConfigIndex(ths, cbMeta.index);
cbMeta.isWeak = pEntry->isWeak;
cbMeta.code = 2;
cbMeta.code = code;
cbMeta.state = ths->state;
cbMeta.seqNum = pEntry->seqNum;
ths->pFsm->FpPreCommitCb(ths->pFsm, &rpcMsg, cbMeta);
@ -594,7 +564,7 @@ int32_t syncNodeOnAppendEntriesSnapshot2Cb(SSyncNode* ths, SyncAppendEntriesBatc
return -1;
}
code = syncNodePreCommit(ths, pAppendEntry);
code = syncNodePreCommit(ths, pAppendEntry, 0);
ASSERT(code == 0);
// syncEntryDestory(pAppendEntry);
@ -715,7 +685,7 @@ int32_t syncNodeOnAppendEntriesSnapshot2Cb(SSyncNode* ths, SyncAppendEntriesBatc
return -1;
}
code = syncNodePreCommit(ths, pAppendEntry);
code = syncNodePreCommit(ths, pAppendEntry, 0);
ASSERT(code == 0);
// syncEntryDestory(pAppendEntry);
@ -919,7 +889,7 @@ int32_t syncNodeOnAppendEntriesSnapshotCb(SSyncNode* ths, SyncAppendEntries* pMs
}
// pre commit
code = syncNodePreCommit(ths, pAppendEntry);
code = syncNodePreCommit(ths, pAppendEntry, 0);
ASSERT(code == 0);
// update match index
@ -1032,7 +1002,7 @@ int32_t syncNodeOnAppendEntriesSnapshotCb(SSyncNode* ths, SyncAppendEntries* pMs
}
// pre commit
code = syncNodePreCommit(ths, pAppendEntry);
code = syncNodePreCommit(ths, pAppendEntry, 0);
ASSERT(code == 0);
syncEntryDestory(pAppendEntry);

View File

@ -67,11 +67,6 @@ void syncMaybeAdvanceCommitIndex(SSyncNode* pSyncNode) {
for (SyncIndex index = syncNodeGetLastIndex(pSyncNode); index > pSyncNode->commitIndex; --index) {
bool agree = syncAgree(pSyncNode, index);
if (gRaftDetailLog) {
sTrace("syncMaybeAdvanceCommitIndex syncAgree:%d, index:%" PRId64 ", pSyncNode->commitIndex:%" PRId64, agree,
index, pSyncNode->commitIndex);
}
if (agree) {
// term
SSyncRaftEntry* pEntry = pSyncNode->pLogStore->getEntry(pSyncNode->pLogStore, index);
@ -82,20 +77,15 @@ void syncMaybeAdvanceCommitIndex(SSyncNode* pSyncNode) {
// update commit index
newCommitIndex = index;
if (gRaftDetailLog) {
sTrace("syncMaybeAdvanceCommitIndex maybe to update, newCommitIndex:%" PRId64
" commit, pSyncNode->commitIndex:%" PRId64,
newCommitIndex, pSyncNode->commitIndex);
}
syncEntryDestory(pEntry);
break;
} else {
if (gRaftDetailLog) {
sTrace("syncMaybeAdvanceCommitIndex can not commit due to term not equal, pEntry->term:%" PRIu64
", pSyncNode->pRaftStore->currentTerm:%" PRIu64,
pEntry->term, pSyncNode->pRaftStore->currentTerm);
}
do {
char logBuf[128];
snprintf(logBuf, sizeof(logBuf), "can not commit due to term not equal, index:%ld, term:%lu", pEntry->index,
pEntry->term);
syncNodeEventLog(pSyncNode, logBuf);
} while (0);
}
syncEntryDestory(pEntry);
@ -107,10 +97,6 @@ void syncMaybeAdvanceCommitIndex(SSyncNode* pSyncNode) {
SyncIndex beginIndex = pSyncNode->commitIndex + 1;
SyncIndex endIndex = newCommitIndex;
if (gRaftDetailLog) {
sTrace("syncMaybeAdvanceCommitIndex sync commit %" PRId64, newCommitIndex);
}
// update commit index
pSyncNode->commitIndex = newCommitIndex;
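The replacement logging wraps itself in do { ... } while (0), which scopes logBuf to the block and makes the multi-statement log behave as a single statement, so it composes safely with unbraced if/else. A generic illustration of the idiom:

```c
#include <stdio.h>

#define LOG_FMT(fmt, ...)                         \
  do {                                            \
    char buf[128];                                \
    snprintf(buf, sizeof(buf), fmt, __VA_ARGS__); \
    puts(buf);                                    \
  } while (0)

int main(void) {
  if (1)
    LOG_FMT("can not commit due to term not equal, index:%d, term:%d", 7, 3);
  else
    puts("unreachable");
  return 0;
}
```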

View File

@ -242,9 +242,9 @@ static int32_t syncIOStopInternal(SSyncIO *io) {
}
static void *syncIOConsumerFunc(void *param) {
SSyncIO *io = param;
SSyncIO * io = param;
STaosQall *qall = taosAllocateQall();
SRpcMsg *pRpcMsg, rpcMsg;
SRpcMsg * pRpcMsg, rpcMsg;
SQueueInfo qinfo = {0};
while (1) {

View File

@ -125,7 +125,7 @@ cJSON *syncIndexMgr2Json(SSyncIndexMgr *pSyncIndexMgr) {
char *syncIndexMgr2Str(SSyncIndexMgr *pSyncIndexMgr) {
cJSON *pJson = syncIndexMgr2Json(pSyncIndexMgr);
char *serialized = cJSON_Print(pJson);
char * serialized = cJSON_Print(pJson);
cJSON_Delete(pJson);
return serialized;
}

View File

@ -2504,22 +2504,7 @@ int32_t syncNodeOnClientRequestCb(SSyncNode* ths, SyncClientRequest* pMsg, SyncI
}
// pre commit
SRpcMsg rpcMsg;
syncEntry2OriginalRpc(pEntry, &rpcMsg);
if (ths->pFsm != NULL) {
if (ths->pFsm->FpPreCommitCb != NULL && syncUtilUserPreCommit(pEntry->originalRpcType)) {
SFsmCbMeta cbMeta = {0};
cbMeta.index = pEntry->index;
cbMeta.lastConfigIndex = syncNodeGetSnapshotConfigIndex(ths, cbMeta.index);
cbMeta.isWeak = pEntry->isWeak;
cbMeta.code = 0;
cbMeta.state = ths->state;
cbMeta.seqNum = pEntry->seqNum;
ths->pFsm->FpPreCommitCb(ths->pFsm, &rpcMsg, cbMeta);
}
}
rpcFreeCont(rpcMsg.pCont);
syncNodePreCommit(ths, pEntry, 0);
// if only myself, maybe commit right now
if (ths->replicaNum == 1) {
@ -2528,22 +2513,7 @@ int32_t syncNodeOnClientRequestCb(SSyncNode* ths, SyncClientRequest* pMsg, SyncI
} else {
// pre commit
SRpcMsg rpcMsg;
syncEntry2OriginalRpc(pEntry, &rpcMsg);
if (ths->pFsm != NULL) {
if (ths->pFsm->FpPreCommitCb != NULL && syncUtilUserPreCommit(pEntry->originalRpcType)) {
SFsmCbMeta cbMeta = {0};
cbMeta.index = pEntry->index;
cbMeta.lastConfigIndex = syncNodeGetSnapshotConfigIndex(ths, cbMeta.index);
cbMeta.isWeak = pEntry->isWeak;
cbMeta.code = 1;
cbMeta.state = ths->state;
cbMeta.seqNum = pEntry->seqNum;
ths->pFsm->FpPreCommitCb(ths->pFsm, &rpcMsg, cbMeta);
}
}
rpcFreeCont(rpcMsg.pCont);
syncNodePreCommit(ths, pEntry, 0);
}
if (pRetIndex != NULL) {

View File

@ -101,7 +101,7 @@ cJSON *syncCfg2Json(SSyncCfg *pSyncCfg) {
char *syncCfg2Str(SSyncCfg *pSyncCfg) {
cJSON *pJson = syncCfg2Json(pSyncCfg);
char *serialized = cJSON_Print(pJson);
char * serialized = cJSON_Print(pJson);
cJSON_Delete(pJson);
return serialized;
}
@ -109,7 +109,7 @@ char *syncCfg2Str(SSyncCfg *pSyncCfg) {
char *syncCfg2SimpleStr(SSyncCfg *pSyncCfg) {
if (pSyncCfg != NULL) {
int32_t len = 512;
char *s = taosMemoryMalloc(len);
char * s = taosMemoryMalloc(len);
memset(s, 0, len);
snprintf(s, len, "{r-num:%d, my:%d, ", pSyncCfg->replicaNum, pSyncCfg->myIndex);
@ -206,7 +206,7 @@ cJSON *raftCfg2Json(SRaftCfg *pRaftCfg) {
char *raftCfg2Str(SRaftCfg *pRaftCfg) {
cJSON *pJson = raftCfg2Json(pRaftCfg);
char *serialized = cJSON_Print(pJson);
char * serialized = cJSON_Print(pJson);
cJSON_Delete(pJson);
return serialized;
}
@ -285,7 +285,7 @@ int32_t raftCfgFromJson(const cJSON *pRoot, SRaftCfg *pRaftCfg) {
(pRaftCfg->configIndexArr)[i] = atoll(pIndex->valuestring);
}
cJSON *pJsonSyncCfg = cJSON_GetObjectItem(pJson, "SSyncCfg");
cJSON * pJsonSyncCfg = cJSON_GetObjectItem(pJson, "SSyncCfg");
int32_t code = syncCfgFromJson(pJsonSyncCfg, &(pRaftCfg->cfg));
ASSERT(code == 0);

View File

@ -23,6 +23,7 @@ SSyncRaftEntry* syncEntryBuild(uint32_t dataLen) {
memset(pEntry, 0, bytes);
pEntry->bytes = bytes;
pEntry->dataLen = dataLen;
pEntry->rid = -1;
return pEntry;
}
@ -451,6 +452,11 @@ static char* keyFn(const void* pData) {
static int cmpFn(const void* p1, const void* p2) { return memcmp(p1, p2, sizeof(SyncIndex)); }
static void freeRaftEntry(void* param) {
SSyncRaftEntry* pEntry = (SSyncRaftEntry*)param;
syncEntryDestory(pEntry);
}
SRaftEntryCache* raftEntryCacheCreate(SSyncNode* pSyncNode, int32_t maxCount) {
SRaftEntryCache* pCache = taosMemoryMalloc(sizeof(SRaftEntryCache));
if (pCache == NULL) {
@ -466,6 +472,7 @@ SRaftEntryCache* raftEntryCacheCreate(SSyncNode* pSyncNode, int32_t maxCount) {
}
taosThreadMutexInit(&(pCache->mutex), NULL);
pCache->refMgr = taosOpenRef(10, freeRaftEntry);
pCache->maxCount = maxCount;
pCache->currentCount = 0;
pCache->pSyncNode = pSyncNode;
@ -477,6 +484,10 @@ void raftEntryCacheDestroy(SRaftEntryCache* pCache) {
if (pCache != NULL) {
taosThreadMutexLock(&(pCache->mutex));
tSkipListDestroy(pCache->pSkipList);
if (pCache->refMgr != -1) {
taosCloseRef(pCache->refMgr);
pCache->refMgr = -1;
}
taosThreadMutexUnlock(&(pCache->mutex));
taosThreadMutexDestroy(&(pCache->mutex));
taosMemoryFree(pCache);
@ -498,6 +509,9 @@ int32_t raftEntryCachePutEntry(struct SRaftEntryCache* pCache, SSyncRaftEntry* p
ASSERT(pSkipListNode != NULL);
++(pCache->currentCount);
pEntry->rid = taosAddRef(pCache->refMgr, pEntry);
ASSERT(pEntry->rid >= 0);
do {
char eventLog[128];
snprintf(eventLog, sizeof(eventLog), "raft cache add, type:%s,%d, type2:%s,%d, index:%" PRId64 ", bytes:%d",
@ -520,6 +534,7 @@ int32_t raftEntryCacheGetEntry(struct SRaftEntryCache* pCache, SyncIndex index,
if (code == 1) {
*ppEntry = taosMemoryMalloc(pEntry->bytes);
memcpy(*ppEntry, pEntry, pEntry->bytes);
(*ppEntry)->rid = -1;
} else {
*ppEntry = NULL;
}
@ -541,6 +556,7 @@ int32_t raftEntryCacheGetEntryP(struct SRaftEntryCache* pCache, SyncIndex index,
SSkipListNode** ppNode = (SSkipListNode**)taosArrayGet(entryPArray, 0);
ASSERT(*ppNode != NULL);
*ppEntry = (SSyncRaftEntry*)SL_GET_NODE_DATA(*ppNode);
taosAcquireRef(pCache->refMgr, (*ppEntry)->rid);
code = 1;
} else if (arraySize == 0) {
@ -600,7 +616,9 @@ int32_t raftEntryCacheClear(struct SRaftEntryCache* pCache, int32_t count) {
taosArrayPush(delNodeArray, &pNode);
++returnCnt;
SSyncRaftEntry* pEntry = (SSyncRaftEntry*)SL_GET_NODE_DATA(pNode);
syncEntryDestory(pEntry);
// syncEntryDestory(pEntry);
taosRemoveRef(pCache->refMgr, pEntry->rid);
}
tSkipListDestroyIter(pIter);
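Taken together, these cache hunks move entry lifetime onto the reference manager from tref.h: taosOpenRef registers the destructor, taosAddRef hands out the rid stored on each entry, readers pin with taosAcquireRef/taosReleaseRef, and eviction calls taosRemoveRef so an entry is destroyed only after the last reader lets go. A condensed sketch of that lifecycle using only calls that appear in the hunks (error handling omitted; the allocation helpers are assumed available as elsewhere in the module):

```c
#include "tref.h"

static void freeEntryCb(void *param) {
  taosMemoryFree(param); /* invoked once the last reference is gone */
}

void refLifecycleDemo(void) {
  int32_t mgr = taosOpenRef(10, freeEntryCb); /* destructor registered */
  void   *pEntry = taosMemoryMalloc(64);
  int64_t rid = taosAddRef(mgr, pEntry);      /* owner reference, rid >= 0 */
  void   *held = taosAcquireRef(mgr, rid);    /* a reader pins the entry */
  (void)held;                                 /* ... read the entry ... */
  taosRemoveRef(mgr, rid);                    /* owner drops it; still pinned */
  taosReleaseRef(mgr, rid);                   /* last pin gone: freeEntryCb runs */
  taosCloseRef(mgr);
}
```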

View File

@ -216,7 +216,7 @@ cJSON *raftStore2Json(SRaftStore *pRaftStore) {
char *raftStore2Str(SRaftStore *pRaftStore) {
cJSON *pJson = raftStore2Json(pRaftStore);
char *serialized = cJSON_Print(pJson);
char * serialized = cJSON_Print(pJson);
cJSON_Delete(pJson);
return serialized;
}

View File

@ -129,7 +129,7 @@ void syncRespCleanByTTL(SSyncRespMgr *pObj, int64_t ttl) {
while (pStub) {
size_t len;
void *key = taosHashGetKey(pStub, &len);
void * key = taosHashGetKey(pStub, &len);
uint64_t *pSeqNum = (uint64_t *)key;
sum++;

View File

@ -374,14 +374,14 @@ cJSON *snapshotSender2Json(SSyncSnapshotSender *pSender) {
char *snapshotSender2Str(SSyncSnapshotSender *pSender) {
cJSON *pJson = snapshotSender2Json(pSender);
char *serialized = cJSON_Print(pJson);
char * serialized = cJSON_Print(pJson);
cJSON_Delete(pJson);
return serialized;
}
char *snapshotSender2SimpleStr(SSyncSnapshotSender *pSender, char *event) {
int32_t len = 256;
char *s = taosMemoryMalloc(len);
char * s = taosMemoryMalloc(len);
SRaftId destId = pSender->pSyncNode->replicasId[pSender->replicaIndex];
char host[64];
@ -653,7 +653,7 @@ cJSON *snapshotReceiver2Json(SSyncSnapshotReceiver *pReceiver) {
cJSON_AddStringToObject(pFromId, "addr", u64buf);
{
uint64_t u64 = pReceiver->fromId.addr;
cJSON *pTmp = pFromId;
cJSON * pTmp = pFromId;
char host[128] = {0};
uint16_t port;
syncUtilU642Addr(u64, host, sizeof(host), &port);
@ -686,14 +686,14 @@ cJSON *snapshotReceiver2Json(SSyncSnapshotReceiver *pReceiver) {
char *snapshotReceiver2Str(SSyncSnapshotReceiver *pReceiver) {
cJSON *pJson = snapshotReceiver2Json(pReceiver);
char *serialized = cJSON_Print(pJson);
char * serialized = cJSON_Print(pJson);
cJSON_Delete(pJson);
return serialized;
}
char *snapshotReceiver2SimpleStr(SSyncSnapshotReceiver *pReceiver, char *event) {
int32_t len = 256;
char *s = taosMemoryMalloc(len);
char * s = taosMemoryMalloc(len);
SRaftId fromId = pReceiver->fromId;
char host[128];

View File

@ -125,7 +125,7 @@ int32_t SnapshotStartWrite(struct SSyncFSM* pFsm, void* pParam, void** ppWriter)
return 0;
}
int32_t SnapshotStopWrite(struct SSyncFSM* pFsm, void* pWriter, bool isApply, SSnapshot *pSnapshot) {
int32_t SnapshotStopWrite(struct SSyncFSM* pFsm, void* pWriter, bool isApply, SSnapshot* pSnapshot) {
char logBuf[256] = {0};
snprintf(logBuf, sizeof(logBuf), "==callback== ==SnapshotStopWrite== pFsm:%p, pWriter:%p, isApply:%d", pFsm, pWriter,
isApply);

View File

@ -5,8 +5,8 @@
#include "syncRaftLog.h"
#include "syncRaftStore.h"
#include "syncUtil.h"
#include "tskiplist.h"
#include "tref.h"
#include "tskiplist.h"
void logTest() {
sTrace("--- sync log test: trace");
@ -51,7 +51,7 @@ SRaftEntryCache* createCache(int maxCount) {
}
void test1() {
int32_t code = 0;
int32_t code = 0;
SRaftEntryCache* pCache = createCache(5);
for (int i = 0; i < 10; ++i) {
SSyncRaftEntry* pEntry = createEntry(i);
@ -68,7 +68,7 @@ void test1() {
}
void test2() {
int32_t code = 0;
int32_t code = 0;
SRaftEntryCache* pCache = createCache(5);
for (int i = 0; i < 10; ++i) {
SSyncRaftEntry* pEntry = createEntry(i);
@ -77,7 +77,7 @@ void test2() {
}
raftEntryCacheLog2((char*)"==test1 write 5 entries==", pCache);
SyncIndex index = 2;
SyncIndex index = 2;
SSyncRaftEntry* pEntry = NULL;
code = raftEntryCacheGetEntryP(pCache, index, &pEntry);
@ -107,7 +107,7 @@ void test2() {
}
void test3() {
int32_t code = 0;
int32_t code = 0;
SRaftEntryCache* pCache = createCache(20);
for (int i = 0; i <= 4; ++i) {
SSyncRaftEntry* pEntry = createEntry(i);
@ -122,8 +122,6 @@ void test3() {
raftEntryCacheLog2((char*)"==test3 write 10 entries==", pCache);
}
static void freeObj(void* param) {
SSyncRaftEntry* pEntry = (SSyncRaftEntry*)param;
syncEntryLog2((char*)"freeObj: ", pEntry);
@ -138,19 +136,41 @@ void test4() {
int64_t rid = taosAddRef(testRefId, pEntry);
sTrace("rid: %ld", rid);
do {
SSyncRaftEntry* pAcquireEntry = (SSyncRaftEntry*)taosAcquireRef(testRefId, rid);
syncEntryLog2((char*)"acquire: ", pAcquireEntry);
taosAcquireRef(testRefId, rid);
taosAcquireRef(testRefId, rid);
taosAcquireRef(testRefId, rid);
taosReleaseRef(testRefId, rid);
//taosReleaseRef(testRefId, rid);
// taosReleaseRef(testRefId, rid);
// taosReleaseRef(testRefId, rid);
} while (0);
taosRemoveRef(testRefId, rid);
for (int i = 0; i < 10; ++i) {
sTrace("taosReleaseRef, %d", i);
taosReleaseRef(testRefId, rid);
}
}
void test5() {
int32_t testRefId = taosOpenRef(5, freeObj);
for (int i = 0; i < 100; i++) {
SSyncRaftEntry* pEntry = createEntry(i);
ASSERT(pEntry != NULL);
int64_t rid = taosAddRef(testRefId, pEntry);
sTrace("rid: %ld", rid);
}
for (int64_t rid = 2; rid < 101; rid++) {
SSyncRaftEntry* pAcquireEntry = (SSyncRaftEntry*)taosAcquireRef(testRefId, rid);
syncEntryLog2((char*)"taosAcquireRef: ", pAcquireEntry);
}
}
int main(int argc, char** argv) {
@ -158,11 +178,13 @@ int main(int argc, char** argv) {
tsAsyncLog = 0;
sDebugFlag = DEBUG_TRACE + DEBUG_SCREEN + DEBUG_FILE + DEBUG_DEBUG;
test1();
test2();
test3();
//test4();
/*
test1();
test2();
test3();
*/
test4();
// test5();
return 0;
}

View File

@ -30,7 +30,7 @@ int32_t SnapshotStopRead(struct SSyncFSM* pFsm, void* pReader) { return 0; }
int32_t SnapshotDoRead(struct SSyncFSM* pFsm, void* pReader, void** ppBuf, int32_t* len) { return 0; }
int32_t SnapshotStartWrite(struct SSyncFSM* pFsm, void* pParam, void** ppWriter) { return 0; }
int32_t SnapshotStopWrite(struct SSyncFSM* pFsm, void* pWriter, bool isApply, SSnapshot *pSnapshot) { return 0; }
int32_t SnapshotStopWrite(struct SSyncFSM* pFsm, void* pWriter, bool isApply, SSnapshot* pSnapshot) { return 0; }
int32_t SnapshotDoWrite(struct SSyncFSM* pFsm, void* pWriter, void* pBuf, int32_t len) { return 0; }
SSyncSnapshotReceiver* createReceiver() {

View File

@ -126,7 +126,7 @@ int32_t SnapshotStartWrite(struct SSyncFSM* pFsm, void* pParam, void** ppWriter)
return 0;
}
int32_t SnapshotStopWrite(struct SSyncFSM* pFsm, void* pWriter, bool isApply, SSnapshot *pSnapshot) {
int32_t SnapshotStopWrite(struct SSyncFSM* pFsm, void* pWriter, bool isApply, SSnapshot* pSnapshot) {
if (isApply) {
gSnapshotLastApplyIndex = gFinishLastApplyIndex;
gSnapshotLastApplyTerm = gFinishLastApplyTerm;

View File

@ -93,7 +93,7 @@ SWal *walOpen(const char *path, SWalCfg *pCfg) {
}
// init ref
pWal->pRefHash = taosHashInit(64, taosGetDefaultHashFunction(TSDB_DATA_TYPE_UBIGINT), true, HASH_ENTRY_LOCK);
pWal->pRefHash = taosHashInit(64, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, HASH_ENTRY_LOCK);
if (pWal->pRefHash == NULL) {
taosMemoryFree(pWal);
return NULL;
@ -101,8 +101,8 @@ SWal *walOpen(const char *path, SWalCfg *pCfg) {
// open meta
walResetVer(&pWal->vers);
pWal->pWriteLogTFile = NULL;
pWal->pWriteIdxTFile = NULL;
pWal->pLogFile = NULL;
pWal->pIdxFile = NULL;
pWal->writeCur = -1;
pWal->fileInfoSet = taosArrayInit(8, sizeof(SWalFileInfo));
if (pWal->fileInfoSet == NULL) {
@ -179,10 +179,10 @@ int32_t walAlter(SWal *pWal, SWalCfg *pCfg) {
void walClose(SWal *pWal) {
taosThreadMutexLock(&pWal->mutex);
taosCloseFile(&pWal->pWriteLogTFile);
pWal->pWriteLogTFile = NULL;
taosCloseFile(&pWal->pWriteIdxTFile);
pWal->pWriteIdxTFile = NULL;
taosCloseFile(&pWal->pLogFile);
pWal->pLogFile = NULL;
taosCloseFile(&pWal->pIdxFile);
pWal->pIdxFile = NULL;
walSaveMeta(pWal);
taosArrayDestroy(pWal->fileInfoSet);
pWal->fileInfoSet = NULL;
@ -223,7 +223,7 @@ static void walFsyncAll() {
if (walNeedFsync(pWal)) {
wTrace("vgId:%d, do fsync, level:%d seq:%d rseq:%d", pWal->cfg.vgId, pWal->cfg.level, pWal->fsyncSeq,
atomic_load_32(&tsWal.seq));
int32_t code = taosFsyncFile(pWal->pWriteLogTFile);
int32_t code = taosFsyncFile(pWal->pLogFile);
if (code != 0) {
wError("vgId:%d, file:%" PRId64 ".log, failed to fsync since %s", pWal->cfg.vgId, walGetLastFileFirstVer(pWal),
strerror(code));

Some files were not shown because too many files have changed in this diff