Merge branch '3.0' into feature/3_liaohj

commit fceee02622

@ -19,3 +19,6 @@
[submodule "tools/taosadapter"]
path = tools/taosadapter
url = https://github.com/taosdata/taosadapter.git
[submodule "tools/taosws-rs"]
path = tools/taosws-rs
url = https://github.com/taosdata/taosws-rs.git

209 README-CN.md
@ -1,38 +1,59 @@

<p>
|
||||
<p align="center">
|
||||
<a href="https://tdengine.com" target="_blank">
|
||||
<img
|
||||
src="docs/assets/tdengine.svg"
|
||||
alt="TDengine"
|
||||
width="500"
|
||||
/>
|
||||
</a>
|
||||
</p>
|
||||
<p>
|
||||
|
||||
[](https://travis-ci.org/taosdata/TDengine)
|
||||
[](https://ci.appveyor.com/project/sangshuduo/tdengine-2n8ge/branch/master)
|
||||
[](https://coveralls.io/github/taosdata/TDengine?branch=develop)
|
||||
[](https://bestpractices.coreinfrastructure.org/projects/4201)
|
||||
[](https://snapcraft.io/tdengine)
|
||||
|
||||
[](https://www.taosdata.com)
|
||||
|
||||
简体中文 | [English](./README.md)
|
||||
简体中文 | [English](README.md) | 很多职位正在热招中,请看[这里](https://www.taosdata.com/cn/careers/)
|
||||
|
||||
# TDengine 简介
|
||||
|
||||
TDengine是涛思数据专为物联网、车联网、工业互联网、IT运维等设计和优化的大数据平台。除核心的快10倍以上的时序数据库功能外,还提供缓存、数据订阅、流式计算等功能,最大程度减少研发和运维的复杂度,且核心代码,包括集群功能全部开源(开源协议,AGPL v3.0)。
|
||||
TDengine 是一款高性能、分布式、支持 SQL 的时序数据库(Time-Series Database)。而且除时序数据库功能外,它还提供缓存、数据订阅、流式计算等功能,最大程度减少研发和运维的复杂度,且核心代码,包括集群功能全部开源(开源协议,AGPL v3.0)。与其他时序数据数据库相比,TDengine 有以下特点:
|
||||
|
||||
- 10 倍以上性能提升。定义了创新的数据存储结构,单核每秒就能处理至少2万次请求,插入数百万个数据点,读出一千万以上数据点,比现有通用数据库快了十倍以上。
|
||||
- 硬件或云服务成本降至1/5。由于超强性能,计算资源不到通用大数据方案的1/5;通过列式存储和先进的压缩算法,存储空间不到通用数据库的1/10。
|
||||
- 全栈时序数据处理引擎。将数据库、消息队列、缓存、流式计算等功能融合一起,应用无需再集成Kafka/Redis/HBase/Spark等软件,大幅降低应用开发和维护成本。
|
||||
- 强大的分析功能。无论是十年前还是一秒钟前的数据,指定时间范围即可查询。数据可在时间轴上或多个设备上进行聚合。即席查询可通过Shell/Python/R/Matlab随时进行。
|
||||
- 与第三方工具无缝连接。不用一行代码,即可与Telegraf, Grafana, EMQ X, Prometheus, Matlab, R集成。后续还将支持MQTT, OPC, Hadoop,Spark等, BI工具也将无缝连接。
|
||||
- 零运维成本、零学习成本。安装、集群一秒搞定,无需分库分表,实时备份。标准SQL,支持JDBC,RESTful,支持Python/Java/C/C++/Go/Node.JS, 与MySQL相似,零学习成本。
|
||||
- **高性能**:通过创新的存储引擎设计,无论是数据写入还是查询,TDengine 的性能比通用数据库快 10 倍以上,也远超其他时序数据库,而且存储空间也大为节省。
|
||||
|
||||
- **分布式**:通过原生分布式的设计,TDengine 提供了水平扩展的能力,只需要增加节点就能获得更强的数据处理能力,同时通过多副本机制保证了系统的高可用。
|
||||
|
||||
- **支持 SQL**:TDengine 采用 SQL 作为数据查询语言,减少学习和迁移成本,同时提供 SQL 扩展来处理时序数据特有的分析,而且支持方便灵活的 schemaless 数据写入。
|
||||
|
||||
- **All in One**:将数据库、消息队列、缓存、流式计算等功能融合一起,应用无需再集成 Kafka/Redis/HBase/Spark 等软件,大幅降低应用开发和维护成本。
|
||||
|
||||
- **零管理**:安装、集群几秒搞定,无任何依赖,不用分库分表,系统运行状态监测能与 Grafana 或其他运维工具无缝集成。
|
||||
|
||||
- **零学习成本**:采用 SQL 查询语言,支持 Python、Java、C/C++、Go、Rust、Node.js 等多种编程语言,与 MySQL 相似,零学习成本。
|
||||
|
||||
- **无缝集成**:不用一行代码,即可与 Telegraf、Grafana、EMQX、Prometheus、StatsD、collectd、Matlab、R 等第三方工具无缝集成。
|
||||
|
||||
- **互动 Console**: 通过命令行 console,不用编程,执行 SQL 语句就能做即席查询、各种数据库的操作、管理以及集群的维护.
|
||||
|
||||
TDengine 可以广泛应用于物联网、工业互联网、车联网、IT 运维、能源、金融等领域,让大量设备、数据采集器每天产生的高达 TB 甚至 PB 级的数据能得到高效实时的处理,对业务的运行状态进行实时的监测、预警,从大数据中挖掘出商业价值。
|
||||
|
||||
# 文档
|
||||
|
||||
TDengine是一个高效的存储、查询、分析时序大数据的平台,专为物联网、车联网、工业互联网、运维监测等优化而设计。您可以像使用关系型数据库MySQL一样来使用它,但建议您在使用前仔细阅读一遍下面的文档,特别是 [数据模型](https://www.taosdata.com/cn/documentation/architecture) 与 [数据建模](https://www.taosdata.com/cn/documentation/model)。除本文档之外,欢迎 [下载产品白皮书](https://www.taosdata.com/downloads/TDengine%20White%20Paper.pdf)。
|
||||
TDengine 采用传统的关系数据库模型,您可以像使用关系型数据库 MySQL 一样来使用它。但由于引入了超级表,一个采集点一张表的概念,建议您在使用前仔细阅读一遍下面的文档,特别是 [数据模型](https://www.taosdata.com/cn/documentation/architecture) 与 [数据建模](https://www.taosdata.com/cn/documentation/model)。除本文档之外,欢迎 [下载产品白皮书](https://www.taosdata.com/downloads/TDengine%20White%20Paper.pdf)。
|
||||
|
||||
# 构建
|
||||
|
||||
TDengine目前2.0版服务器仅能在Linux系统上安装和运行,后续会支持Windows、macOS等系统。客户端可以在Windows或Linux上安装和运行。任何OS的应用也可以选择RESTful接口连接服务器taosd。CPU支持X64/ARM64/MIPS64/Alpha64,后续会支持ARM32、RISC-V等CPU架构。用户可根据需求选择通过[源码](https://www.taosdata.com/cn/getting-started/#通过源码安装)或者[安装包](https://www.taosdata.com/cn/getting-started/#通过安装包安装)来安装。本快速指南仅适用于通过源码安装。
|
||||
TDengine 目前 2.0 版服务器仅能在 Linux 系统上安装和运行,后续会支持 Windows、macOS 等系统。客户端可以在 Windows 或 Linux 上安装和运行。任何 OS 的应用也可以选择 RESTful 接口连接服务器 taosd。CPU 支持 X64/ARM64/MIPS64/Alpha64,后续会支持 ARM32、RISC-V 等 CPU 架构。用户可根据需求选择通过[源码](https://www.taosdata.com/cn/getting-started/#通过源码安装)或者[安装包](https://www.taosdata.com/cn/getting-started/#通过安装包安装)来安装。本快速指南仅适用于通过源码安装。
|
||||
|
||||
## 安装工具
|
||||
|
||||
### Ubuntu 16.04 及以上版本 & Debian:
|
||||
|
||||
```bash
|
||||
sudo apt-get install -y gcc cmake build-essential git
|
||||
sudo apt-get install -y gcc cmake build-essential git libssl-dev
|
||||
```
|
||||
|
||||
### Ubuntu 14.04:
|
||||
|
@ -56,10 +77,22 @@ sudo apt-get install -y openjdk-8-jdk
|
|||
sudo apt-get install -y maven
|
||||
```
|
||||
|
||||
#### 为 taos-tools 安装编译需要的软件
|
||||
|
||||
taosTools 是用于 TDengine 的辅助工具软件集合。目前它包含 taosBenchmark(曾命名为 taosdemo)和 taosdump 两个软件。
|
||||
|
||||
默认 TDengine 编译不包含 taosTools。您可以在编译 TDengine 时使用`cmake .. -DBUILD_TOOLS=true` 来同时编译 taosTools。
|
||||
|
||||
为了在 Ubuntu/Debian 系统上编译 [taos-tools](https://github.com/taosdata/taos-tools) 需要安装如下软件:
|
||||
|
||||
```bash
|
||||
sudo apt install build-essential libjansson-dev libsnappy-dev liblzma-dev libz-dev pkg-config
|
||||
```
|
||||
|
||||
### CentOS 7:
|
||||
|
||||
```bash
|
||||
sudo yum install -y gcc gcc-c++ make cmake git
|
||||
sudo yum install -y gcc gcc-c++ make cmake git openssl-devel
|
||||
```
|
||||
|
||||
安装 OpenJDK 8:
|
||||
|
@ -74,10 +107,10 @@ sudo yum install -y java-1.8.0-openjdk
|
|||
sudo yum install -y maven
|
||||
```
|
||||
|
||||
### CentOS 8 & Fedora:
|
||||
### CentOS 8 & Fedora
|
||||
|
||||
```bash
|
||||
sudo dnf install -y gcc gcc-c++ make cmake epel-release git
|
||||
sudo dnf install -y gcc gcc-c++ make cmake epel-release git openssl-devel
|
||||
```
|
||||
|
||||
安装 OpenJDK 8:
|
||||
|
@ -92,6 +125,33 @@ sudo dnf install -y java-1.8.0-openjdk
|
|||
sudo dnf install -y maven
|
||||
```
|
||||
|
||||
#### 在 CentOS 上构建 taosTools 安装依赖软件
|
||||
|
||||
为了在 CentOS 上构建 [taosTools](https://github.com/taosdata/taos-tools) 需要安装如下依赖软件
|
||||
|
||||
```bash
|
||||
sudo yum install zlib-devel xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libstdc++-static openssl-devel
|
||||
```
|
||||
|
||||
注意:由于 snappy 缺乏 pkg-config 支持
|
||||
(参考 [链接](https://github.com/google/snappy/pull/86)),会导致
|
||||
cmake 提示无法发现 libsnappy,实际上工作正常。
|
||||
|
||||
### 设置 golang 开发环境
|
||||
|
||||
TDengine 包含数个使用 Go 语言开发的组件,请参考 golang.org 官方文档设置 go 开发环境。
|
||||
|
||||
请使用 1.14 及以上版本。对于中国用户,我们建议使用代理来加速软件包下载。
|
||||
|
||||
```
|
||||
go env -w GO111MODULE=on
|
||||
go env -w GOPROXY=https://goproxy.cn,direct
|
||||
```
|
||||
|
||||
### 设置 rust 开发环境
|
||||
|
||||
TDengine 包含数个使用 Rust 语言开发的组件. 请参考 rust-lang.org 官方文档设置 rust 开发环境。
|
||||
|
||||
## 获取源码
|
||||
|
||||
首先,你需要从 GitHub 克隆源码:
|
||||
|
@ -107,22 +167,41 @@ Go 连接器和 Grafana 插件在其他独立仓库,如果安装它们的话
|
|||
git submodule update --init --recursive
|
||||
```
|
||||
|
||||
如果使用 https 协议下载比较慢,可以通过修改 ~/.gitconfig 文件添加以下两行设置使用 ssh 协议下载。需要首先上传 ssh 密钥到 GitHub,详细方法请参考 GitHub 官方文档。
|
||||
|
||||
```
|
||||
[url "git@github.com:"]
|
||||
insteadOf = https://github.com/
|
||||
```
|
||||
|
||||
## 构建 TDengine
|
||||
|
||||
### Linux 系统
|
||||
|
||||
可以运行代码仓库中的 `build.sh` 脚本编译出 TDengine 和 taosTools(包含 taosBenchmark 和 taosdump)。
|
||||
|
||||
```bash
|
||||
mkdir debug && cd debug
|
||||
cmake .. && cmake --build .
|
||||
./build.sh
|
||||
```
|
||||
|
||||
您可以选择使用 Jemalloc 作为内存分配器,替代默认的 glibc:
|
||||
这个脚本等价于执行如下命令:
|
||||
|
||||
```bash
|
||||
git submodule update --init --recursive
|
||||
mkdir debug
|
||||
cd debug
|
||||
cmake .. -DBUILD_TOOLS=true
|
||||
make
|
||||
```
|
||||
|
||||
您也可以选择使用 jemalloc 作为内存分配器,替代默认的 glibc:
|
||||
|
||||
```bash
|
||||
apt install autoconf
|
||||
cmake .. -DJEMALLOC_ENABLED=true
|
||||
```
|
||||
|
||||
在X86-64、X86、arm64、arm32 和 mips64 平台上,TDengine 生成脚本可以自动检测机器架构。也可以手动配置 CPUTYPE 参数来指定 CPU 类型,如 aarch64 或 aarch32 等。
|
||||
在 X86-64、X86、arm64、arm32 和 mips64 平台上,TDengine 生成脚本可以自动检测机器架构。也可以手动配置 CPUTYPE 参数来指定 CPU 类型,如 aarch64 或 aarch32 等。
|
||||
|
||||
aarch64:
|
||||
|
||||
|
@ -157,7 +236,7 @@ nmake
|
|||
|
||||
如果你使用的是 Visual Studio 2019 或 2017 版本:
|
||||
|
||||
打开cmd.exe,执行 vcvarsall.bat 时,为 64 位操作系统指定“x64”,为 32 位操作系统指定“x86”。
|
||||
打开 cmd.exe,执行 vcvarsall.bat 时,为 64 位操作系统指定“x64”,为 32 位操作系统指定“x86”。
|
||||
|
||||
```bash
|
||||
mkdir debug && cd debug
|
||||
|
@ -174,9 +253,7 @@ cmake .. -G "NMake Makefiles"
|
|||
nmake
|
||||
```
|
||||
|
||||
如果你使用的是 Visual Studio 2022 版本, 脚本 `vcvarsall.bat` 的默认安装路径是 `C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvarsall.bat`。
|
||||
|
||||
### Mac OS X 系统
|
||||
### macOS 系统
|
||||
|
||||
安装 Xcode 命令行工具和 cmake. 在 Catalina 和 Big Sur 操作系统上,需要安装 XCode 11.4+ 版本。
|
||||
|
||||
|
@ -187,13 +264,17 @@ cmake .. && cmake --build .
|
|||
|
||||
# 安装
|
||||
|
||||
生成完成后,安装 TDengine(下文给出的指令以 Linux 为例,如果是在 Windows 下,那么对应的指令会是 `nmake install`):
|
||||
## Linux 系统
|
||||
|
||||
生成完成后,安装 TDengine:
|
||||
|
||||
```bash
|
||||
sudo make install
|
||||
```
|
||||
|
||||
用户可以在[文件目录结构](https://www.taosdata.com/cn/documentation/administrator#directories)中了解更多在操作系统中生成的目录或文件。
|
||||
从 2.0 版本开始, 从源代码安装也会为 TDengine 配置服务管理。
|
||||
用户也可以选择[从安装包中安装](https://www.taosdata.com/en/getting-started/#Install-from-Package)。
|
||||
|
||||
安装成功后,在终端中启动 TDengine 服务:
|
||||
|
||||
|
@ -209,6 +290,40 @@ taos
|
|||
|
||||
如果 TDengine Shell 连接服务成功,将会打印出欢迎消息和版本信息。如果失败,则会打印出错误消息。
|
||||
|
||||
## Windows 系统
|
||||
|
||||
生成完成后,安装 TDengine:
|
||||
|
||||
```cmd
|
||||
nmake install
|
||||
```
|
||||
|
||||
## macOS 系统
|
||||
|
||||
生成完成后,安装 TDengine:
|
||||
|
||||
```bash
|
||||
sudo make install
|
||||
```
|
||||
|
||||
安装成功后,如果想以服务形式启动,先配置 `.plist` 文件,在终端中执行:
|
||||
|
||||
```bash
|
||||
sudo cp ../packaging/macOS/com.taosdata.tdengine.plist /Library/LaunchDaemons
|
||||
```
|
||||
|
||||
在终端中启动 TDengine 服务:
|
||||
|
||||
```bash
|
||||
sudo launchctl load /Library/LaunchDaemons/com.taosdata.tdengine.plist
|
||||
```
|
||||
|
||||
在终端中停止 TDengine 服务:
|
||||
|
||||
```bash
|
||||
sudo launchctl unload /Library/LaunchDaemons/com.taosdata.tdengine.plist
|
||||
```
|
||||
|
||||
## 快速运行
|
||||
|
||||
如果不希望以服务方式运行 TDengine,也可以在终端中直接运行它。也即在生成完成后,执行以下命令(在 Windows 下,生成的可执行文件会带有 .exe 后缀,例如会名为 taosd.exe ):
|
||||
|
@ -227,15 +342,15 @@ taos
|
|||
|
||||
# 体验 TDengine
|
||||
|
||||
在TDengine终端中,用户可以通过SQL命令来创建/删除数据库、表等,并进行插入查询操作。
|
||||
在 TDengine 终端中,用户可以通过 SQL 命令来创建/删除数据库、表等,并进行插入查询操作。
|
||||
|
||||
```bash
|
||||
create database demo;
|
||||
use demo;
|
||||
create table t (ts timestamp, speed int);
|
||||
insert into t values ('2019-07-15 00:00:00', 10);
|
||||
insert into t values ('2019-07-15 01:00:00', 20);
|
||||
select * from t;
|
||||
```sql
|
||||
CREATE DATABASE demo;
|
||||
USE demo;
|
||||
CREATE TABLE t (ts TIMESTAMP, speed INT);
|
||||
INSERT INTO t VALUES('2019-07-15 00:00:00', 10);
|
||||
INSERT INTO t VALUES('2019-07-15 01:00:00', 20);
|
||||
SELECT * FROM t;
|
||||
ts | speed |
|
||||
===================================
|
||||
19-07-15 00:00:00.000| 10|
|
||||
|
@ -247,33 +362,35 @@ Query OK, 2 row(s) in set (0.001700s)
|
|||
|
||||
## 官方连接器
|
||||
|
||||
TDengine 提供了丰富的应用程序开发接口,其中包括C/C++、Java、Python、Go、Node.js、C# 、RESTful 等,便于用户快速开发应用:
|
||||
TDengine 提供了丰富的应用程序开发接口,其中包括 C/C++、Java、Python、Go、Node.js、C# 、RESTful 等,便于用户快速开发应用:
|
||||
|
||||
- Java
|
||||
- [Java](https://www.taosdata.com/cn/documentation/connector/java)
|
||||
|
||||
- C/C++
|
||||
- [C/C++](https://www.taosdata.com/cn/documentation/connector#c-cpp)
|
||||
|
||||
- Python
|
||||
- [Python](https://www.taosdata.com/cn/documentation/connector#python)
|
||||
|
||||
- Go
|
||||
- [Go](https://www.taosdata.com/cn/documentation/connector#go)
|
||||
|
||||
- RESTful API
|
||||
- [RESTful API](https://www.taosdata.com/cn/documentation/connector#restful)
|
||||
|
||||
- Node.js
|
||||
- [Node.js](https://www.taosdata.com/cn/documentation/connector#nodejs)
|
||||
|
||||
- [Rust](https://www.taosdata.com/cn/documentation/connector/rust)
|
||||
|
||||
## 第三方连接器
|
||||
|
||||
TDengine 社区生态中也有一些非常友好的第三方连接器,可以通过以下链接访问它们的源码。
|
||||
|
||||
- [Rust Connector](https://github.com/taosdata/TDengine/tree/master/tests/examples/rust)
|
||||
- [Rust Bindings](https://github.com/songtianyi/tdengine-rust-bindings/tree/master/examples)
|
||||
- [.Net Core Connector](https://github.com/maikebing/Maikebing.EntityFrameworkCore.Taos)
|
||||
- [Lua Connector](https://github.com/taosdata/TDengine/tree/develop/tests/examples/lua)
|
||||
- [Lua Connector](https://github.com/taosdata/TDengine/tree/develop/examples/lua)
|
||||
|
||||
# 运行和添加测试例
|
||||
|
||||
TDengine 的测试框架和所有测试例全部开源。
|
||||
|
||||
点击 [这里](tests/How-To-Run-Test-And-How-To-Add-New-Test-Case.md),了解如何运行测试例和添加新的测试例。
|
||||
点击 [这里](https://github.com/taosdata/TDengine/blob/develop/tests/How-To-Run-Test-And-How-To-Add-New-Test-Case.md),了解如何运行测试例和添加新的测试例。
|
||||
|
||||
# 成为社区贡献者
|
||||
|
||||
|
@ -281,8 +398,8 @@ TDengine 的测试框架和所有测试例全部开源。
|
|||
|
||||
# 加入技术交流群
|
||||
|
||||
TDengine 官方社群「物联网大数据群」对外开放,欢迎您加入讨论。搜索微信号 "tdengine",加小T为好友,即可入群。
|
||||
TDengine 官方社群「物联网大数据群」对外开放,欢迎您加入讨论。搜索微信号 "tdengine",加小 T 为好友,即可入群。
|
||||
|
||||
# [谁在使用TDengine](https://github.com/taosdata/TDengine/issues/2432)
|
||||
# [谁在使用 TDengine](https://github.com/taosdata/TDengine/issues/2432)
|
||||
|
||||
欢迎所有 TDengine 用户及贡献者在 [这里](https://github.com/taosdata/TDengine/issues/2432) 分享您在当前工作中开发/使用 TDengine 的故事。
|
||||
欢迎所有 TDengine 用户及贡献者在 [这里](https://github.com/taosdata/TDengine/issues/2432) 分享您在当前工作中开发/使用 TDengine 的故事。
|
||||
|

273 README.md
@ -1,116 +1,218 @@

<p>
|
||||
<p align="center">
|
||||
<a href="https://tdengine.com" target="_blank">
|
||||
<img
|
||||
src="docs/assets/tdengine.svg"
|
||||
alt="TDengine"
|
||||
width="500"
|
||||
/>
|
||||
</a>
|
||||
</p>
|
||||
<p>
|
||||
|
||||
[](https://cloud.drone.io/taosdata/TDengine)
|
||||
[](https://ci.appveyor.com/project/sangshuduo/tdengine-2n8ge/branch/master)
|
||||
[](https://coveralls.io/github/taosdata/TDengine?branch=develop)
|
||||
[](https://bestpractices.coreinfrastructure.org/projects/4201)
|
||||
[](https://snapcraft.io/tdengine)
|
||||
|
||||
[](https://www.taosdata.com)
|
||||
|
||||
English | [简体中文](./README-CN.md)
|
||||
English | [简体中文](README-CN.md) | We are hiring, check [here](https://tdengine.com/careers)
|
||||
|
||||
# What is TDengine?
|
||||
|
||||
TDengine is an open-sourced big data platform under [GNU AGPL v3.0](http://www.gnu.org/licenses/agpl-3.0.html), designed and optimized for the Internet of Things (IoT), Connected Cars, Industrial IoT, and IT Infrastructure and Application Monitoring. Besides the 10x faster time-series database, it provides caching, stream computing, message queuing and other functionalities to reduce the complexity and cost of development and operation.
|
||||
TDengine is a high-performance, scalable time-series database with SQL support. Its code, including the cluster feature, is open source under [GNU AGPL v3.0](http://www.gnu.org/licenses/agpl-3.0.html). Besides the database, it provides caching, stream processing, data subscription and other functionalities to reduce the complexity and cost of development and operation. TDengine differentiates itself from other TSDBs with the following advantages.
|
||||
|
||||
- **10x Faster on Insert/Query Speeds**: Through the innovative design on storage, on a single-core machine, over 20K requests can be processed, millions of data points can be ingested, and over 10 million data points can be retrieved in a second. It is 10 times faster than other databases.
|
||||
- **High Performance**: TDengine outperforms other time series databases in data ingestion and querying while significantly reducing storage cost and compute costs, with an innovatively designed and purpose-built storage engine.
|
||||
|
||||
- **1/5 Hardware/Cloud Service Costs**: Compared with typical big data solutions, less than 1/5 of computing resources are required. Via column-based storage and tuned compression algorithms for different data types, less than 1/10 of storage space is needed.
|
||||
- **Scalable**: TDengine provides out-of-the-box scalability and high availability through its native distributed design. Nodes can be added through simple configuration to achieve greater data processing power. In addition, this feature is open source.
|
||||
|
||||
- **Full Stack for Time-Series Data**: By integrating a database with message queuing, caching, and stream computing features together, it is no longer necessary to integrate Kafka/Redis/HBase/Spark or other software. It makes the system architecture much simpler and more robust.
|
||||
- **SQL Support**: TDengine uses SQL as the query language, thereby reducing learning and migration costs, while adding SQL extensions to handle time-series data better, and supporting convenient and flexible schemaless data ingestion.
|
||||
|
||||
- **Powerful Data Analysis**: Whether it is 10 years or one minute ago, data can be queried just by specifying the time range. Data can be aggregated over time, multiple time streams or both. Ad Hoc queries or analyses can be executed via TDengine shell, Python, R or Matlab.
|
||||
- **All in One**: TDengine has built-in caching, stream processing and data subscription functions, so it is no longer necessary to integrate Kafka/Redis/HBase/Spark or other software in some scenarios. This makes the system architecture much simpler and easier to maintain.
|
||||
|
||||
- **Seamless Integration with Other Tools**: Telegraf, Grafana, Matlab, R, and other tools can be integrated with TDengine without a line of code. MQTT, OPC, Hadoop, Spark, and many others will be integrated soon.
|
||||
- **Seamless Integration**: Without a single line of code, TDengine provides seamless integration with third-party tools such as Telegraf, Grafana, EMQX, Prometheus, StatsD, collectd, etc. More integrations will follow.
|
||||
|
||||
- **Zero Management, No Learning Curve**: It takes only seconds to download, install, and run it successfully; there are no other dependencies. Automatic partitioning on tables or DBs. Standard SQL is used, with C/C++, Python, JDBC, Go and RESTful connectors.
|
||||
- **Zero Management**: Installation and cluster setup can be done in seconds. Data partitioning and sharding are executed automatically. TDengine’s running status can be monitored via Grafana or other DevOps tools.
|
||||
|
||||
- **Zero Learning Cost**: With SQL as the query language, support for ubiquitous tools like Python, Java, C/C++, Go, Rust, Node.js connectors, there is zero learning cost.
|
||||
|
||||
- **Interactive Console**: TDengine provides convenient console access to the database to run ad hoc queries, maintain the database, or manage the cluster without any programming.
|
||||
|
||||
TDengine can be widely applied to Internet of Things (IoT), Connected Vehicles, Industrial IoT, DevOps, energy, finance and many other scenarios.
|
||||
|
||||
# Documentation
|
||||
|
||||
For user manual, system design and architecture, engineering blogs, refer to [TDengine Documentation](https://www.taosdata.com/en/documentation/)(中文版请点击[这里](https://www.taosdata.com/cn/documentation20/))
|
||||
for details. The documentation from our website can also be downloaded locally from *documentation/tdenginedocs-en* or *documentation/tdenginedocs-cn*.
|
||||
for details. The documentation from our website can also be downloaded locally from _documentation/tdenginedocs-en_ or _documentation/tdenginedocs-cn_.
|
||||
|
||||
# Building
|
||||
At the moment, TDengine only supports building and running on Linux systems. You can choose to [install from packages](https://www.taosdata.com/en/getting-started/#Install-from-Package) or from the source code. This quick guide is for installation from the source only.
|
||||
|
||||
To build TDengine, use [CMake](https://cmake.org/) 2.8.12.x or higher versions in the project directory.
|
||||
At the moment, TDengine server only supports running on Linux systems. You can choose to [install from packages](https://www.taosdata.com/en/getting-started/#Install-from-Package) or build it from the source code. This quick guide is for installation from the source only.
|
||||
|
||||
## Install tools
|
||||
To build TDengine, use [CMake](https://cmake.org/) 3.0.2 or higher versions in the project directory.
|
||||
|
||||
## Install build dependencies
|
||||
|
||||
### Ubuntu 16.04 and above or Debian
|
||||
|
||||
### Ubuntu 16.04 and above & Debian:
|
||||
```bash
|
||||
sudo apt-get install -y gcc cmake build-essential git
|
||||
sudo apt-get install -y gcc cmake build-essential git libssl-dev
|
||||
```
|
||||
|
||||
### Ubuntu 14.04:
|
||||
### Ubuntu 14.04
|
||||
|
||||
```bash
|
||||
sudo apt-get install -y gcc cmake3 build-essential git binutils-2.26
|
||||
export PATH=/usr/lib/binutils-2.26/bin:$PATH
|
||||
```
|
||||
|
||||
To compile and package the JDBC driver source code, you should have a Java jdk-8 or higher and Apache Maven 2.7 or higher installed.
|
||||
To compile and package the JDBC driver source code, you should have a Java jdk-8 or higher and Apache Maven 2.7 or higher installed.
|
||||
|
||||
To install openjdk-8:
|
||||
|
||||
```bash
|
||||
sudo apt-get install -y openjdk-8-jdk
|
||||
```
|
||||
|
||||
To install Apache Maven:
|
||||
|
||||
```bash
|
||||
sudo apt-get install -y maven
|
||||
sudo apt-get install -y maven
|
||||
```
|
||||
|
||||
### Centos 7:
|
||||
#### Install build dependencies for taosTools
|
||||
|
||||
We provide a few useful tools such as taosBenchmark (formerly named taosdemo) and taosdump. They used to be part of TDengine. Since TDengine 2.4.0.0, taosBenchmark and taosdump are no longer released together with TDengine.
|
||||
By default, the TDengine build does not include taosTools. You can use `cmake .. -DBUILD_TOOLS=true` to have them compiled together with TDengine.
|
||||
|
||||
To build the [taosTools](https://github.com/taosdata/taos-tools) on Ubuntu/Debian, the following packages need to be installed.
|
||||
|
||||
```bash
|
||||
sudo yum install -y gcc gcc-c++ make cmake git
|
||||
sudo apt install build-essential libjansson-dev libsnappy-dev liblzma-dev libz-dev pkg-config
|
||||
```
|
||||
|
||||
### CentOS 7
|
||||
|
||||
```bash
|
||||
sudo yum install epel-release
|
||||
sudo yum update
|
||||
sudo yum install -y gcc gcc-c++ make cmake3 git openssl-devel
|
||||
sudo ln -sf /usr/bin/cmake3 /usr/bin/cmake
|
||||
```
|
||||
|
||||
To install openjdk-8:
|
||||
|
||||
```bash
|
||||
sudo yum install -y java-1.8.0-openjdk
|
||||
```
|
||||
|
||||
To install Apache Maven:
|
||||
|
||||
```bash
|
||||
sudo yum install -y maven
|
||||
```
|
||||
|
||||
### Centos 8 & Fedora:
|
||||
### CentOS 8 & Fedora
|
||||
|
||||
```bash
|
||||
sudo dnf install -y gcc gcc-c++ make cmake epel-release git
|
||||
sudo dnf install -y gcc gcc-c++ make cmake epel-release git openssl-devel
|
||||
```
|
||||
|
||||
To install openjdk-8:
|
||||
|
||||
```bash
|
||||
sudo dnf install -y java-1.8.0-openjdk
|
||||
```
|
||||
|
||||
To install Apache Maven:
|
||||
|
||||
```bash
|
||||
sudo dnf install -y maven
|
||||
```
|
||||
|
||||
#### Install build dependencies for taosTools on CentOS
|
||||
|
||||
To build the [taosTools](https://github.com/taosdata/taos-tools) on CentOS, the following packages need to be installed.
|
||||
|
||||
```bash
|
||||
sudo yum install zlib-devel xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libstdc++-static openssl-devel
|
||||
```
|
||||
|
||||
Note: Since snappy lacks pkg-config support (refer to [link](https://github.com/google/snappy/pull/86)), cmake may report that libsnappy cannot be found, but snappy actually works well.
|
||||
|
||||
### Setup golang environment
|
||||
|
||||
TDengine includes a few components developed in Go. Please refer to the official golang.org documentation to set up the Go development environment.
|
||||
|
||||
Please use version 1.14 or above. For users in China, we recommend using a proxy to accelerate package downloads.
|
||||
|
||||
```
|
||||
go env -w GO111MODULE=on
|
||||
go env -w GOPROXY=https://goproxy.cn,direct
|
||||
```
|
||||
|
||||
### Setup rust environment
|
||||
|
||||
TDengine includes a few components developed in Rust. Please refer to the official rust-lang.org documentation to set up the Rust development environment.
|
||||
|
||||
## Get the source codes
|
||||
|
||||
First of all, you can clone the source code from GitHub:
|
||||
|
||||
```bash
|
||||
git clone https://github.com/taosdata/TDengine.git
|
||||
cd TDengine
|
||||
```
|
||||
|
||||
The connectors for go & grafana have been moved to separated repositories,
|
||||
The connectors for Go & Grafana and some tools have been moved to separate repositories,
|
||||
so you should run this command in the TDengine directory to install them:
|
||||
|
||||
```bash
|
||||
git submodule update --init --recursive
|
||||
```
|
||||
|
||||
You can modify the file ~/.gitconfig to use the ssh protocol instead of https for better download speed. You need to upload your ssh public key to GitHub first. Please refer to the official GitHub documentation for details.
|
||||
|
||||
```
|
||||
[url "git@github.com:"]
|
||||
insteadOf = https://github.com/
|
||||
```
|
||||
|
||||
## Build TDengine
|
||||
|
||||
### On Linux platform
|
||||
|
||||
You can run the bash script `build.sh` to build both TDengine and taosTools including taosBenchmark and taosdump as below:
|
||||
|
||||
```bash
|
||||
mkdir debug && cd debug
|
||||
cmake .. && cmake --build .
|
||||
./build.sh
|
||||
```
|
||||
|
||||
It is equivalent to executing the following commands:
|
||||
|
||||
```bash
|
||||
git submodule update --init --recursive
|
||||
mkdir debug
|
||||
cd debug
|
||||
cmake .. -DBUILD_TOOLS=true
|
||||
make
|
||||
```
|
||||
|
||||
Note: TDengine 2.3.x.0 and later use a component named taosAdapter as the http daemon by default, replacing the http daemon embedded in earlier versions of TDengine. taosAdapter is written in Go. If you pull the latest TDengine source code into an existing codebase, please execute `git submodule update --init --recursive` to fetch the taosAdapter source code. Go 1.14 or above is required to compile taosAdapter. If you run into problems with `go mod`, especially from within China, you can use a proxy to solve them.
|
||||
|
||||
```
|
||||
go env -w GO111MODULE=on
|
||||
go env -w GOPROXY=https://goproxy.cn,direct
|
||||
```
|
||||
|
||||
The embedded http daemon is still built from the TDengine source code by default. Alternatively, you can use the following command to build taosAdapter instead:
|
||||
|
||||
```
|
||||
cmake .. -DBUILD_HTTP=false
|
||||
```
|
||||
|
||||
You can use Jemalloc as memory allocator instead of glibc:
|
||||
|
||||
```
|
||||
apt install autoconf
|
||||
cmake .. -DJEMALLOC_ENABLED=true
|
||||
|
@ -120,24 +222,28 @@ TDengine build script can detect the host machine's architecture on X86-64, X86,
|
|||
You can also specify the CPUTYPE option, such as aarch64 or aarch32, if the detection result is not correct:
|
||||
|
||||
aarch64:
|
||||
|
||||
```bash
|
||||
cmake .. -DCPUTYPE=aarch64 && cmake --build .
|
||||
```
|
||||
|
||||
aarch32:
|
||||
|
||||
```bash
|
||||
cmake .. -DCPUTYPE=aarch32 && cmake --build .
|
||||
```
|
||||
|
||||
mips64:
|
||||
|
||||
```bash
|
||||
cmake .. -DCPUTYPE=mips64 && cmake --build .
|
||||
```
|
||||
|
||||
### On Windows platform
|
||||
|
||||
If you use Visual Studio 2013, please open a command window by executing "cmd.exe".
|
||||
If you use the Visual Studio 2013, please open a command window by executing "cmd.exe".
|
||||
Please specify "amd64" for 64 bits Windows or specify "x86" is for 32 bits Windows when you execute vcvarsall.bat.
|
||||
|
||||
```cmd
|
||||
mkdir debug && cd debug
|
||||
"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat" < amd64 | x86 >
|
||||
|
@ -145,7 +251,7 @@ cmake .. -G "NMake Makefiles"
|
|||
nmake
|
||||
```
|
||||
|
||||
If you use Visual Studio 2019 or 2017:
|
||||
If you use the Visual Studio 2019 or 2017:
|
||||
|
||||
please open a command window by executing "cmd.exe".
|
||||
Please specify "x64" for 64 bits Windows or specify "x86" is for 32 bits Windows when you execute vcvarsall.bat.
|
||||
|
@ -158,15 +264,14 @@ nmake
|
|||
```
|
||||
|
||||
Or, you can simply open a command window by clicking Windows Start -> "Visual Studio < 2019 | 2017 >" folder -> "x64 Native Tools Command Prompt for VS < 2019 | 2017 >" or "x86 Native Tools Command Prompt for VS < 2019 | 2017 >" depending on your Windows architecture, then execute the following commands:
|
||||
|
||||
```cmd
|
||||
mkdir debug && cd debug
|
||||
cmake .. -G "NMake Makefiles"
|
||||
nmake
|
||||
```
|
||||
|
||||
If you use Visual Studio 2022, the only change is the default path of `vcvarsall.bat`, which is `C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvarsall.bat`.
|
||||
|
||||
### On Mac OS X platform
|
||||
### On macOS platform
|
||||
|
||||
Please install XCode command line tools and cmake. Verified with XCode 11.4+ on Catalina and Big Sur.
|
||||
|
||||
|
@ -177,7 +282,10 @@ cmake .. && cmake --build .
|
|||
|
||||
# Installing
|
||||
|
||||
After building successfully, TDengine can be installed by: (On Windows platform, the following command should be `nmake install`)
|
||||
## On Linux platform
|
||||
|
||||
After building successfully, TDengine can be installed by
|
||||
|
||||
```bash
|
||||
sudo make install
|
||||
```
|
||||
|
@ -186,68 +294,129 @@ Users can find more information about directories installed on the system in the
|
|||
Users can also choose to [install from packages](https://www.taosdata.com/en/getting-started/#Install-from-Package) for it.
|
||||
|
||||
To start the service after installation, in a terminal, use:
|
||||
|
||||
```bash
|
||||
sudo systemctl start taosd
|
||||
```
|
||||
|
||||
Then users can use the [TDengine shell](https://www.taosdata.com/en/getting-started/#TDengine-Shell) to connect the TDengine server. In a terminal, use:
|
||||
|
||||
```bash
|
||||
taos
|
||||
```
|
||||
|
||||
If TDengine shell connects the server successfully, welcome messages and version info are printed. Otherwise, an error message is shown.
|
||||
|
||||
### Install TDengine by apt-get
|
||||
|
||||
If you use a Debian or Ubuntu system, you can use the `apt-get` command to install TDengine from the official repository. Use the following commands to set it up:
|
||||
|
||||
```
|
||||
wget -qO - http://repos.taosdata.com/tdengine.key | sudo apt-key add -
|
||||
echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-stable stable main" | sudo tee /etc/apt/sources.list.d/tdengine-stable.list
|
||||
[Optional] echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-beta beta main" | sudo tee /etc/apt/sources.list.d/tdengine-beta.list
|
||||
sudo apt-get update
|
||||
apt-cache policy tdengine
|
||||
sudo apt-get install tdengine
|
||||
```
|
||||
|
||||
## On Windows platform
|
||||
|
||||
After building successfully, TDengine can be installed by:
|
||||
|
||||
```cmd
|
||||
nmake install
|
||||
```
|
||||
|
||||
## On macOS platform
|
||||
|
||||
After building successfully, TDengine can be installed by:
|
||||
|
||||
```bash
|
||||
sudo make install
|
||||
```
|
||||
|
||||
To start the service after installation, config `.plist` file first, in a terminal, use:
|
||||
|
||||
```bash
|
||||
sudo cp ../packaging/macOS/com.taosdata.tdengine.plist /Library/LaunchDaemons
|
||||
```
|
||||
|
||||
To start the service, in a terminal, use:
|
||||
|
||||
```bash
|
||||
sudo launchctl load /Library/LaunchDaemons/com.taosdata.tdengine.plist
|
||||
```
|
||||
|
||||
To stop the service, in a terminal, use:
|
||||
|
||||
```bash
|
||||
sudo launchctl unload /Library/LaunchDaemons/com.taosdata.tdengine.plist
|
||||
```
|
||||
|
||||
## Quick Run
|
||||
|
||||
If you don't want to run TDengine as a service, you can run it in the current shell. For example, to quickly start a TDengine server after building, run the command below in a terminal (Linux is used as an example; on Windows the command will be `taosd.exe`):
|
||||
|
||||
```bash
|
||||
./build/bin/taosd -c test/cfg
|
||||
```
|
||||
|
||||
In another terminal, use the TDengine shell to connect the server:
|
||||
|
||||
```bash
|
||||
./build/bin/taos -c test/cfg
|
||||
```
|
||||
|
||||
option "-c test/cfg" specifies the system configuration file directory.
|
||||
option "-c test/cfg" specifies the system configuration file directory.
|
||||
|
||||
# Try TDengine
|
||||
|
||||
It is easy to run SQL commands from the TDengine shell, just as in other SQL databases.
|
||||
|
||||
```sql
|
||||
create database db;
|
||||
use db;
|
||||
create table t (ts timestamp, a int);
|
||||
insert into t values ('2019-07-15 00:00:00', 1);
|
||||
insert into t values ('2019-07-15 01:00:00', 2);
|
||||
select * from t;
|
||||
drop database db;
|
||||
CREATE DATABASE demo;
|
||||
USE demo;
|
||||
CREATE TABLE t (ts TIMESTAMP, speed INT);
|
||||
INSERT INTO t VALUES('2019-07-15 00:00:00', 10);
|
||||
INSERT INTO t VALUES('2019-07-15 01:00:00', 20);
|
||||
SELECT * FROM t;
|
||||
ts | speed |
|
||||
===================================
|
||||
19-07-15 00:00:00.000| 10|
|
||||
19-07-15 01:00:00.000| 20|
|
||||
Query OK, 2 row(s) in set (0.001700s)
|
||||
```
|
||||
|
||||
# Developing with TDengine
|
||||
### Official Connectors
|
||||
|
||||
## Official Connectors
|
||||
|
||||
TDengine provides a rich set of developer tools for building applications on TDengine. Follow the links below to find your desired connectors and relevant documentation.
|
||||
|
||||
- [Java](https://www.taosdata.com/en/documentation/connector/#Java-Connector)
|
||||
- [C/C++](https://www.taosdata.com/en/documentation/connector/#C/C++-Connector)
|
||||
- [Python](https://www.taosdata.com/en/documentation/connector/#Python-Connector)
|
||||
- [Go](https://www.taosdata.com/en/documentation/connector/#Go-Connector)
|
||||
- [RESTful API](https://www.taosdata.com/en/documentation/connector/#RESTful-Connector)
|
||||
- [Node.js](https://www.taosdata.com/en/documentation/connector/#Node.js-Connector)
|
||||
- [Java](https://www.taosdata.com/en/documentation/connector/java)
|
||||
- [C/C++](https://www.taosdata.com/en/documentation/connector#c-cpp)
|
||||
- [Python](https://www.taosdata.com/en/documentation/connector#python)
|
||||
- [Go](https://www.taosdata.com/en/documentation/connector#go)
|
||||
- [RESTful API](https://www.taosdata.com/en/documentation/connector#restful)
|
||||
- [Node.js](https://www.taosdata.com/en/documentation/connector#nodejs)
|
||||
- [Rust](https://www.taosdata.com/en/documentation/connector/rust)
|
||||
|
||||
### Third Party Connectors
|
||||
## Third Party Connectors
|
||||
|
||||
The TDengine community has also kindly built some of their own connectors! Follow the links below to find the source code for them.
|
||||
|
||||
- [Rust Connector](https://github.com/taosdata/TDengine/tree/master/tests/examples/rust)
|
||||
- [Rust Bindings](https://github.com/songtianyi/tdengine-rust-bindings/tree/master/examples)
|
||||
- [.Net Core Connector](https://github.com/maikebing/Maikebing.EntityFrameworkCore.Taos)
|
||||
- [Lua Connector](https://github.com/taosdata/TDengine/tree/develop/tests/examples/lua)
|
||||
|
||||
# How to run the test cases and how to add a new test case?
|
||||
TDengine's test framework and all test cases are fully open source.
|
||||
Please refer to [this document](tests/How-To-Run-Test-And-How-To-Add-New-Test-Case.md) for how to run test and develop new test case.
|
||||
# How to run the test cases and how to add a new test case
|
||||
|
||||
TDengine's test framework and all test cases are fully open source.
|
||||
Please refer to [this document](https://github.com/taosdata/TDengine/blob/develop/tests/How-To-Run-Test-And-How-To-Add-New-Test-Case.md) for how to run tests and develop new test cases.
|
||||
|
||||
# TDengine Roadmap
|
||||
|
||||
- Support event-driven stream computing
|
||||
- Support user defined functions
|
||||
- Support MQTT connection
|
||||
|
|
|
@ -18,6 +18,14 @@ if (NOT DEFINED TD_GRANT)
|
|||
SET(TD_GRANT FALSE)
|
||||
endif()
|
||||
|
||||
IF ("${WEBSOCKET}" MATCHES "true")
|
||||
SET(TD_WEBSOCKET TRUE)
|
||||
MESSAGE("Enable websocket")
|
||||
ADD_DEFINITIONS(-DWEBSOCKET)
|
||||
ELSE ()
|
||||
SET(TD_WEBSOCKET FALSE)
|
||||
ENDIF ()
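As a hedged illustration of this new option, the sketch below shows how C code might gate behavior on the `WEBSOCKET` compile definition injected by `ADD_DEFINITIONS(-DWEBSOCKET)` when the build is configured with `-DWEBSOCKET=true`; the program itself is hypothetical and not TDengine code.

```c
#include <stdio.h>

/* Hypothetical example: behavior compiled in only when the build passes
 * -DWEBSOCKET, as the cmake block above does for -DWEBSOCKET=true. */
int main(void) {
#ifdef WEBSOCKET
    printf("websocket support compiled in\n");
#else
    printf("websocket support disabled\n");
#endif
    return 0;
}
```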
|
||||
|
||||
IF ("${BUILD_HTTP}" STREQUAL "")
|
||||
IF (TD_LINUX)
|
||||
IF (TD_ARM_32)
|
||||
|
@ -97,10 +105,10 @@ ELSE ()
|
|||
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${GCC_COVERAGE_COMPILE_FLAGS} ${GCC_COVERAGE_LINK_FLAGS}")
|
||||
ENDIF ()
|
||||
|
||||
IF (${SANITIZER} MATCHES "true")
|
||||
IF (${BUILD_SANITIZER})
|
||||
SET(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Werror -Werror=return-type -fPIC -gdwarf-2 -fsanitize=address -fsanitize=undefined -fsanitize-recover=all -fsanitize=float-divide-by-zero -fsanitize=float-cast-overflow -fno-sanitize=shift-base -fno-sanitize=alignment -g3 -Wformat=0")
|
||||
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror -Wno-literal-suffix -Werror=return-type -fPIC -gdwarf-2 -fsanitize=address -fsanitize=undefined -fsanitize-recover=all -fsanitize=float-divide-by-zero -fsanitize=float-cast-overflow -fno-sanitize=shift-base -fno-sanitize=alignment -g3 -Wformat=0")
|
||||
MESSAGE(STATUS "Will compile with Address Sanitizer!")
|
||||
MESSAGE(STATUS "Will compile with Address Sanitizer!")
|
||||
ELSE ()
|
||||
SET(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Werror -Werror=return-type -fPIC -gdwarf-2 -g3 -Wformat=0")
|
||||
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror -Wno-literal-suffix -Werror=return-type -fPIC -gdwarf-2 -g3 -Wformat=0")
|
||||
|
|
|
@ -77,6 +77,12 @@ ELSE ()
|
|||
ENDIF ()
|
||||
ENDIF ()
|
||||
|
||||
option(
|
||||
BUILD_SANITIZER
|
||||
"If build addr2line"
|
||||
OFF
|
||||
)
|
||||
|
||||
option(
|
||||
BUILD_ADDR2LINE
|
||||
"If build addr2line"
|
||||
|
|
|
@ -379,11 +379,11 @@ We still use the hypothetical environment from Chapter 4. There are three measur
|
|||
|
||||
### Storage resource estimation
|
||||
|
||||
Assuming that the number of sensor devices that generate data and need to be stored is `n`, the frequency of data generation is `t` per second, and the length of each record is `L` bytes, the scale of data generated per day is `n * t * L` bytes. Assuming the compression ratio is `C`, the daily data size is `(n * t * L)/C` bytes. The storage resources are estimated to accommodate the data scale for 1.5 years. In the production environment, the compression ratio C of TDengine is generally between 5 and 7.
|
||||
Assuming that the number of sensor devices that generate data and need to be stored is `n`, the frequency of data generation is `t` per second, and the length of each record is `L` bytes, the scale of data generated per day is `86400 * n * t * L` bytes. Assuming the compression ratio is `C`, the daily data size is `(86400 * n * t * L)/C` bytes. The storage resources are estimated to accommodate the data scale for 1.5 years. In the production environment, the compression ratio C of TDengine is generally between 5 and 7.
|
||||
With an additional 20% redundancy, you can calculate the required storage resources:
|
||||
|
||||
```matlab
|
||||
(n * t * L) * (365 * 1.5) * (1+20%)/C
|
||||
(86400 * n * t * L) * (365 * 1.5) * (1+20%)/C
|
||||
```
|
||||
Substituting in the above formula, the raw data generated every year is 11.8TB without considering the label information. Note that tag information is associated with each timeline in TDengine, not every record. The amount of data to be recorded is somewhat reduced relative to the generated data, and label data can be ignored as a whole. Assuming a compression ratio of 5, the size of the retained data ends up being 2.56 TB.
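To make the arithmetic concrete, here is a minimal C sketch of the corrected formula; the input values (10,000 devices, 1 record per second, 128 bytes per record, compression ratio 5) are assumptions for illustration, not the environment from Chapter 4, so the output differs from the 11.8 TB figure above.

```c
#include <stdio.h>

/* Sketch of the storage estimate: 86400 * n * t * L bytes of raw data per day,
 * kept for 1.5 years with 20% redundancy and a compression ratio of C. */
int main(void) {
    double n = 10000.0; /* number of devices (assumption)                 */
    double t = 1.0;     /* records per second per device (assumption)     */
    double L = 128.0;   /* bytes per record (assumption)                  */
    double C = 5.0;     /* compression ratio, typically 5-7 per the text  */

    double raw_per_day = 86400.0 * n * t * L;                   /* bytes/day */
    double retained    = raw_per_day * (365.0 * 1.5) * 1.2 / C; /* bytes     */

    printf("raw data per day: %.2f GB\n", raw_per_day / 1e9);
    printf("storage required: %.2f TB\n", retained / 1e12);
    return 0;
}
```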
|
||||
|
||||
|
|
|
@ -367,10 +367,10 @@ WHERE ts>=1510560000 AND ts<=1515000009
|
|||
|
||||
### 存储资源估算
|
||||
|
||||
假设产生数据并需要存储的传感器设备数量为 `n`,数据生成的频率为`t`条/秒,每条记录的长度为 `L` bytes,则每天产生的数据规模为 `n×t×L` bytes。假设压缩比为 C,则每日产生数据规模为 `(n×t×L)/C` bytes。存储资源预估为能够容纳 1.5 年的数据规模,生产环境下 TDengine 的压缩比 C 一般在 5 ~ 7 之间,同时为最后结果增加 20% 的冗余,可计算得到需要存储资源:
|
||||
假设产生数据并需要存储的传感器设备数量为 `n`,数据生成的频率为`t`条/秒,每条记录的长度为 `L` bytes,则每天产生的数据规模为 `86400×n×t×L` bytes。假设压缩比为 C,则每日产生数据规模为 `(86400×n×t×L)/C` bytes。存储资源预估为能够容纳 1.5 年的数据规模,生产环境下 TDengine 的压缩比 C 一般在 5 ~ 7 之间,同时为最后结果增加 20% 的冗余,可计算得到需要存储资源:
|
||||
|
||||
```matlab
|
||||
(n×t×L)×(365×1.5)×(1+20%)/C
|
||||
(86400×n×t×L)×(365×1.5)×(1+20%)/C
|
||||
```
|
||||
|
||||
结合以上的计算公式,将参数带入计算公式,在不考虑标签信息的情况下,每年产生的原始数据规模是 11.8TB。需要注意的是,由于标签信息在 TDengine 中关联到每个时间线,并不是每条记录。所以需要记录的数据量规模相对于产生的数据有一定的降低,而这部分标签数据整体上可以忽略不记。假设压缩比为 5,则保留的数据规模最终为 2.56 TB。
|
||||
|
|
|
@ -55,7 +55,8 @@ enum {
|
|||
enum {
|
||||
STREAM_INPUT__DATA_SUBMIT = 1,
|
||||
STREAM_INPUT__DATA_BLOCK,
|
||||
STREAM_INPUT__DATA_SCAN,
|
||||
STREAM_INPUT__TABLE_SCAN,
|
||||
STREAM_INPUT__TQ_SCAN,
|
||||
STREAM_INPUT__DATA_RETRIEVE,
|
||||
STREAM_INPUT__TRIGGER,
|
||||
STREAM_INPUT__CHECKPOINT,
|
||||
|
@ -122,7 +123,8 @@ enum {
|
|||
};
|
||||
|
||||
typedef struct {
|
||||
int8_t fetchType;
|
||||
int8_t fetchType;
|
||||
STqOffsetVal offset;
|
||||
union {
|
||||
SSDataBlock data;
|
||||
void* meta;
|
||||
|
|
|
@ -231,7 +231,7 @@ SSDataBlock* createDataBlock();
|
|||
int32_t blockDataAppendColInfo(SSDataBlock* pBlock, SColumnInfoData* pColInfoData);
|
||||
|
||||
SColumnInfoData createColumnInfoData(int16_t type, int32_t bytes, int16_t colId);
|
||||
SColumnInfoData* bdGetColumnInfoData(SSDataBlock* pBlock, int32_t index);
|
||||
SColumnInfoData* bdGetColumnInfoData(const SSDataBlock* pBlock, int32_t index);
|
||||
|
||||
void blockEncode(const SSDataBlock* pBlock, char* data, int32_t* dataLen, int32_t numOfCols, int8_t needCompress);
|
||||
const char* blockDecode(SSDataBlock* pBlock, int32_t numOfCols, int32_t numOfRows, const char* pData);
|
||||
|
|
|
@ -57,8 +57,8 @@ extern int32_t tMsgDict[];
|
|||
#define TMSG_SEG_SEQ(TYPE) ((TYPE)&0xff)
|
||||
#define TMSG_INFO(TYPE) \
|
||||
((TYPE) >= 0 && \
|
||||
((TYPE) < TDMT_DND_MAX_MSG | (TYPE) < TDMT_MND_MAX_MSG | (TYPE) < TDMT_VND_MAX_MSG | (TYPE) < TDMT_SCH_MAX_MSG | \
|
||||
(TYPE) < TDMT_STREAM_MAX_MSG | (TYPE) < TDMT_MON_MAX_MSG | (TYPE) < TDMT_SYNC_MAX_MSG)) \
|
||||
((TYPE) < TDMT_DND_MAX_MSG || (TYPE) < TDMT_MND_MAX_MSG || (TYPE) < TDMT_VND_MAX_MSG || (TYPE) < TDMT_SCH_MAX_MSG || \
|
||||
(TYPE) < TDMT_STREAM_MAX_MSG || (TYPE) < TDMT_MON_MAX_MSG || (TYPE) < TDMT_SYNC_MAX_MSG)) \
|
||||
? tMsgInfo[tMsgDict[TMSG_SEG_CODE(TYPE)] + TMSG_SEG_SEQ(TYPE)] \
|
||||
: 0
|
||||
#define TMSG_INDEX(TYPE) (tMsgDict[TMSG_SEG_CODE(TYPE)] + TMSG_SEG_SEQ(TYPE))
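For context on this change (the condition's bitwise `|` operators are replaced with logical `||`), here is a small standalone C sketch of the general difference between the two operators; it is illustrative only and does not use TDengine headers.

```c
#include <stdio.h>

/* Illustrative only: '||' short-circuits and always yields 0 or 1, while '|'
 * evaluates both operands and combines them bit by bit, which matters when
 * the operands are not already 0 or 1. */
int main(void) {
    int a = 2, b = 4;
    printf("a | b  = %d\n", a | b);   /* 6: bitwise OR of the raw values     */
    printf("a || b = %d\n", a || b);  /* 1: logical OR of their truth values */
    /* Comparison results are always 0 or 1, so both forms agree in TMSG_INFO;
     * the switch to '||' expresses the boolean intent and short-circuits.   */
    return 0;
}
```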
|
||||
|
@ -665,6 +665,7 @@ typedef struct {
|
|||
char tbFName[TSDB_TABLE_FNAME_LEN];
|
||||
int32_t sversion;
|
||||
int32_t tversion;
|
||||
int64_t affectedRows;
|
||||
} SQueryTableRsp;
|
||||
|
||||
int32_t tSerializeSQueryTableRsp(void* buf, int32_t bufLen, SQueryTableRsp* pRsp);
|
||||
|
@ -1510,6 +1511,7 @@ typedef struct SSubQueryMsg {
|
|||
int32_t execId;
|
||||
int8_t taskType;
|
||||
int8_t explain;
|
||||
int8_t needFetch;
|
||||
uint32_t sqlLen; // the query sql,
|
||||
uint32_t phyLen;
|
||||
char msg[];
|
||||
|
|
|
@ -79,195 +79,196 @@
|
|||
#define TK_EXISTS 61
|
||||
#define TK_BUFFER 62
|
||||
#define TK_CACHELAST 63
|
||||
#define TK_COMP 64
|
||||
#define TK_DURATION 65
|
||||
#define TK_NK_VARIABLE 66
|
||||
#define TK_FSYNC 67
|
||||
#define TK_MAXROWS 68
|
||||
#define TK_MINROWS 69
|
||||
#define TK_KEEP 70
|
||||
#define TK_PAGES 71
|
||||
#define TK_PAGESIZE 72
|
||||
#define TK_PRECISION 73
|
||||
#define TK_REPLICA 74
|
||||
#define TK_STRICT 75
|
||||
#define TK_WAL 76
|
||||
#define TK_VGROUPS 77
|
||||
#define TK_SINGLE_STABLE 78
|
||||
#define TK_RETENTIONS 79
|
||||
#define TK_SCHEMALESS 80
|
||||
#define TK_NK_COLON 81
|
||||
#define TK_TABLE 82
|
||||
#define TK_NK_LP 83
|
||||
#define TK_NK_RP 84
|
||||
#define TK_STABLE 85
|
||||
#define TK_ADD 86
|
||||
#define TK_COLUMN 87
|
||||
#define TK_MODIFY 88
|
||||
#define TK_RENAME 89
|
||||
#define TK_TAG 90
|
||||
#define TK_SET 91
|
||||
#define TK_NK_EQ 92
|
||||
#define TK_USING 93
|
||||
#define TK_TAGS 94
|
||||
#define TK_COMMENT 95
|
||||
#define TK_BOOL 96
|
||||
#define TK_TINYINT 97
|
||||
#define TK_SMALLINT 98
|
||||
#define TK_INT 99
|
||||
#define TK_INTEGER 100
|
||||
#define TK_BIGINT 101
|
||||
#define TK_FLOAT 102
|
||||
#define TK_DOUBLE 103
|
||||
#define TK_BINARY 104
|
||||
#define TK_TIMESTAMP 105
|
||||
#define TK_NCHAR 106
|
||||
#define TK_UNSIGNED 107
|
||||
#define TK_JSON 108
|
||||
#define TK_VARCHAR 109
|
||||
#define TK_MEDIUMBLOB 110
|
||||
#define TK_BLOB 111
|
||||
#define TK_VARBINARY 112
|
||||
#define TK_DECIMAL 113
|
||||
#define TK_MAX_DELAY 114
|
||||
#define TK_WATERMARK 115
|
||||
#define TK_ROLLUP 116
|
||||
#define TK_TTL 117
|
||||
#define TK_SMA 118
|
||||
#define TK_FIRST 119
|
||||
#define TK_LAST 120
|
||||
#define TK_SHOW 121
|
||||
#define TK_DATABASES 122
|
||||
#define TK_TABLES 123
|
||||
#define TK_STABLES 124
|
||||
#define TK_MNODES 125
|
||||
#define TK_MODULES 126
|
||||
#define TK_QNODES 127
|
||||
#define TK_FUNCTIONS 128
|
||||
#define TK_INDEXES 129
|
||||
#define TK_ACCOUNTS 130
|
||||
#define TK_APPS 131
|
||||
#define TK_CONNECTIONS 132
|
||||
#define TK_LICENCE 133
|
||||
#define TK_GRANTS 134
|
||||
#define TK_QUERIES 135
|
||||
#define TK_SCORES 136
|
||||
#define TK_TOPICS 137
|
||||
#define TK_VARIABLES 138
|
||||
#define TK_BNODES 139
|
||||
#define TK_SNODES 140
|
||||
#define TK_CLUSTER 141
|
||||
#define TK_TRANSACTIONS 142
|
||||
#define TK_DISTRIBUTED 143
|
||||
#define TK_CONSUMERS 144
|
||||
#define TK_SUBSCRIPTIONS 145
|
||||
#define TK_LIKE 146
|
||||
#define TK_INDEX 147
|
||||
#define TK_FUNCTION 148
|
||||
#define TK_INTERVAL 149
|
||||
#define TK_TOPIC 150
|
||||
#define TK_AS 151
|
||||
#define TK_WITH 152
|
||||
#define TK_META 153
|
||||
#define TK_CONSUMER 154
|
||||
#define TK_GROUP 155
|
||||
#define TK_DESC 156
|
||||
#define TK_DESCRIBE 157
|
||||
#define TK_RESET 158
|
||||
#define TK_QUERY 159
|
||||
#define TK_CACHE 160
|
||||
#define TK_EXPLAIN 161
|
||||
#define TK_ANALYZE 162
|
||||
#define TK_VERBOSE 163
|
||||
#define TK_NK_BOOL 164
|
||||
#define TK_RATIO 165
|
||||
#define TK_NK_FLOAT 166
|
||||
#define TK_COMPACT 167
|
||||
#define TK_VNODES 168
|
||||
#define TK_IN 169
|
||||
#define TK_OUTPUTTYPE 170
|
||||
#define TK_AGGREGATE 171
|
||||
#define TK_BUFSIZE 172
|
||||
#define TK_STREAM 173
|
||||
#define TK_INTO 174
|
||||
#define TK_TRIGGER 175
|
||||
#define TK_AT_ONCE 176
|
||||
#define TK_WINDOW_CLOSE 177
|
||||
#define TK_IGNORE 178
|
||||
#define TK_EXPIRED 179
|
||||
#define TK_KILL 180
|
||||
#define TK_CONNECTION 181
|
||||
#define TK_TRANSACTION 182
|
||||
#define TK_BALANCE 183
|
||||
#define TK_VGROUP 184
|
||||
#define TK_MERGE 185
|
||||
#define TK_REDISTRIBUTE 186
|
||||
#define TK_SPLIT 187
|
||||
#define TK_SYNCDB 188
|
||||
#define TK_DELETE 189
|
||||
#define TK_INSERT 190
|
||||
#define TK_NULL 191
|
||||
#define TK_NK_QUESTION 192
|
||||
#define TK_NK_ARROW 193
|
||||
#define TK_ROWTS 194
|
||||
#define TK_TBNAME 195
|
||||
#define TK_QSTARTTS 196
|
||||
#define TK_QENDTS 197
|
||||
#define TK_WSTARTTS 198
|
||||
#define TK_WENDTS 199
|
||||
#define TK_WDURATION 200
|
||||
#define TK_CAST 201
|
||||
#define TK_NOW 202
|
||||
#define TK_TODAY 203
|
||||
#define TK_TIMEZONE 204
|
||||
#define TK_CLIENT_VERSION 205
|
||||
#define TK_SERVER_VERSION 206
|
||||
#define TK_SERVER_STATUS 207
|
||||
#define TK_CURRENT_USER 208
|
||||
#define TK_COUNT 209
|
||||
#define TK_LAST_ROW 210
|
||||
#define TK_BETWEEN 211
|
||||
#define TK_IS 212
|
||||
#define TK_NK_LT 213
|
||||
#define TK_NK_GT 214
|
||||
#define TK_NK_LE 215
|
||||
#define TK_NK_GE 216
|
||||
#define TK_NK_NE 217
|
||||
#define TK_MATCH 218
|
||||
#define TK_NMATCH 219
|
||||
#define TK_CONTAINS 220
|
||||
#define TK_JOIN 221
|
||||
#define TK_INNER 222
|
||||
#define TK_SELECT 223
|
||||
#define TK_DISTINCT 224
|
||||
#define TK_WHERE 225
|
||||
#define TK_PARTITION 226
|
||||
#define TK_BY 227
|
||||
#define TK_SESSION 228
|
||||
#define TK_STATE_WINDOW 229
|
||||
#define TK_SLIDING 230
|
||||
#define TK_FILL 231
|
||||
#define TK_VALUE 232
|
||||
#define TK_NONE 233
|
||||
#define TK_PREV 234
|
||||
#define TK_LINEAR 235
|
||||
#define TK_NEXT 236
|
||||
#define TK_HAVING 237
|
||||
#define TK_RANGE 238
|
||||
#define TK_EVERY 239
|
||||
#define TK_ORDER 240
|
||||
#define TK_SLIMIT 241
|
||||
#define TK_SOFFSET 242
|
||||
#define TK_LIMIT 243
|
||||
#define TK_OFFSET 244
|
||||
#define TK_ASC 245
|
||||
#define TK_NULLS 246
|
||||
#define TK_ID 247
|
||||
#define TK_NK_BITNOT 248
|
||||
#define TK_VALUES 249
|
||||
#define TK_IMPORT 250
|
||||
#define TK_NK_SEMI 251
|
||||
#define TK_FILE 252
|
||||
#define TK_CACHELASTSIZE 64
|
||||
#define TK_COMP 65
|
||||
#define TK_DURATION 66
|
||||
#define TK_NK_VARIABLE 67
|
||||
#define TK_FSYNC 68
|
||||
#define TK_MAXROWS 69
|
||||
#define TK_MINROWS 70
|
||||
#define TK_KEEP 71
|
||||
#define TK_PAGES 72
|
||||
#define TK_PAGESIZE 73
|
||||
#define TK_PRECISION 74
|
||||
#define TK_REPLICA 75
|
||||
#define TK_STRICT 76
|
||||
#define TK_WAL 77
|
||||
#define TK_VGROUPS 78
|
||||
#define TK_SINGLE_STABLE 79
|
||||
#define TK_RETENTIONS 80
|
||||
#define TK_SCHEMALESS 81
|
||||
#define TK_NK_COLON 82
|
||||
#define TK_TABLE 83
|
||||
#define TK_NK_LP 84
|
||||
#define TK_NK_RP 85
|
||||
#define TK_STABLE 86
|
||||
#define TK_ADD 87
|
||||
#define TK_COLUMN 88
|
||||
#define TK_MODIFY 89
|
||||
#define TK_RENAME 90
|
||||
#define TK_TAG 91
|
||||
#define TK_SET 92
|
||||
#define TK_NK_EQ 93
|
||||
#define TK_USING 94
|
||||
#define TK_TAGS 95
|
||||
#define TK_COMMENT 96
|
||||
#define TK_BOOL 97
|
||||
#define TK_TINYINT 98
|
||||
#define TK_SMALLINT 99
|
||||
#define TK_INT 100
|
||||
#define TK_INTEGER 101
|
||||
#define TK_BIGINT 102
|
||||
#define TK_FLOAT 103
|
||||
#define TK_DOUBLE 104
|
||||
#define TK_BINARY 105
|
||||
#define TK_TIMESTAMP 106
|
||||
#define TK_NCHAR 107
|
||||
#define TK_UNSIGNED 108
|
||||
#define TK_JSON 109
|
||||
#define TK_VARCHAR 110
|
||||
#define TK_MEDIUMBLOB 111
|
||||
#define TK_BLOB 112
|
||||
#define TK_VARBINARY 113
|
||||
#define TK_DECIMAL 114
|
||||
#define TK_MAX_DELAY 115
|
||||
#define TK_WATERMARK 116
|
||||
#define TK_ROLLUP 117
|
||||
#define TK_TTL 118
|
||||
#define TK_SMA 119
|
||||
#define TK_FIRST 120
|
||||
#define TK_LAST 121
|
||||
#define TK_SHOW 122
|
||||
#define TK_DATABASES 123
|
||||
#define TK_TABLES 124
|
||||
#define TK_STABLES 125
|
||||
#define TK_MNODES 126
|
||||
#define TK_MODULES 127
|
||||
#define TK_QNODES 128
|
||||
#define TK_FUNCTIONS 129
|
||||
#define TK_INDEXES 130
|
||||
#define TK_ACCOUNTS 131
|
||||
#define TK_APPS 132
|
||||
#define TK_CONNECTIONS 133
|
||||
#define TK_LICENCE 134
|
||||
#define TK_GRANTS 135
|
||||
#define TK_QUERIES 136
|
||||
#define TK_SCORES 137
|
||||
#define TK_TOPICS 138
|
||||
#define TK_VARIABLES 139
|
||||
#define TK_BNODES 140
|
||||
#define TK_SNODES 141
|
||||
#define TK_CLUSTER 142
|
||||
#define TK_TRANSACTIONS 143
|
||||
#define TK_DISTRIBUTED 144
|
||||
#define TK_CONSUMERS 145
|
||||
#define TK_SUBSCRIPTIONS 146
|
||||
#define TK_LIKE 147
|
||||
#define TK_INDEX 148
|
||||
#define TK_FUNCTION 149
|
||||
#define TK_INTERVAL 150
|
||||
#define TK_TOPIC 151
|
||||
#define TK_AS 152
|
||||
#define TK_WITH 153
|
||||
#define TK_META 154
|
||||
#define TK_CONSUMER 155
|
||||
#define TK_GROUP 156
|
||||
#define TK_DESC 157
|
||||
#define TK_DESCRIBE 158
|
||||
#define TK_RESET 159
|
||||
#define TK_QUERY 160
|
||||
#define TK_CACHE 161
|
||||
#define TK_EXPLAIN 162
|
||||
#define TK_ANALYZE 163
|
||||
#define TK_VERBOSE 164
|
||||
#define TK_NK_BOOL 165
|
||||
#define TK_RATIO 166
|
||||
#define TK_NK_FLOAT 167
|
||||
#define TK_COMPACT 168
|
||||
#define TK_VNODES 169
|
||||
#define TK_IN 170
|
||||
#define TK_OUTPUTTYPE 171
|
||||
#define TK_AGGREGATE 172
|
||||
#define TK_BUFSIZE 173
|
||||
#define TK_STREAM 174
|
||||
#define TK_INTO 175
|
||||
#define TK_TRIGGER 176
|
||||
#define TK_AT_ONCE 177
|
||||
#define TK_WINDOW_CLOSE 178
|
||||
#define TK_IGNORE 179
|
||||
#define TK_EXPIRED 180
|
||||
#define TK_KILL 181
|
||||
#define TK_CONNECTION 182
|
||||
#define TK_TRANSACTION 183
|
||||
#define TK_BALANCE 184
|
||||
#define TK_VGROUP 185
|
||||
#define TK_MERGE 186
|
||||
#define TK_REDISTRIBUTE 187
|
||||
#define TK_SPLIT 188
|
||||
#define TK_SYNCDB 189
|
||||
#define TK_DELETE 190
|
||||
#define TK_INSERT 191
|
||||
#define TK_NULL 192
|
||||
#define TK_NK_QUESTION 193
|
||||
#define TK_NK_ARROW 194
|
||||
#define TK_ROWTS 195
|
||||
#define TK_TBNAME 196
|
||||
#define TK_QSTARTTS 197
|
||||
#define TK_QENDTS 198
|
||||
#define TK_WSTARTTS 199
|
||||
#define TK_WENDTS 200
|
||||
#define TK_WDURATION 201
|
||||
#define TK_CAST 202
|
||||
#define TK_NOW 203
|
||||
#define TK_TODAY 204
|
||||
#define TK_TIMEZONE 205
|
||||
#define TK_CLIENT_VERSION 206
|
||||
#define TK_SERVER_VERSION 207
|
||||
#define TK_SERVER_STATUS 208
|
||||
#define TK_CURRENT_USER 209
|
||||
#define TK_COUNT 210
|
||||
#define TK_LAST_ROW 211
|
||||
#define TK_BETWEEN 212
|
||||
#define TK_IS 213
|
||||
#define TK_NK_LT 214
|
||||
#define TK_NK_GT 215
|
||||
#define TK_NK_LE 216
|
||||
#define TK_NK_GE 217
|
||||
#define TK_NK_NE 218
|
||||
#define TK_MATCH 219
|
||||
#define TK_NMATCH 220
|
||||
#define TK_CONTAINS 221
|
||||
#define TK_JOIN 222
|
||||
#define TK_INNER 223
|
||||
#define TK_SELECT 224
|
||||
#define TK_DISTINCT 225
|
||||
#define TK_WHERE 226
|
||||
#define TK_PARTITION 227
|
||||
#define TK_BY 228
|
||||
#define TK_SESSION 229
|
||||
#define TK_STATE_WINDOW 230
|
||||
#define TK_SLIDING 231
|
||||
#define TK_FILL 232
|
||||
#define TK_VALUE 233
|
||||
#define TK_NONE 234
|
||||
#define TK_PREV 235
|
||||
#define TK_LINEAR 236
|
||||
#define TK_NEXT 237
|
||||
#define TK_HAVING 238
|
||||
#define TK_RANGE 239
|
||||
#define TK_EVERY 240
|
||||
#define TK_ORDER 241
|
||||
#define TK_SLIMIT 242
|
||||
#define TK_SOFFSET 243
|
||||
#define TK_LIMIT 244
|
||||
#define TK_OFFSET 245
|
||||
#define TK_ASC 246
|
||||
#define TK_NULLS 247
|
||||
#define TK_ID 248
|
||||
#define TK_NK_BITNOT 249
|
||||
#define TK_VALUES 250
|
||||
#define TK_IMPORT 251
|
||||
#define TK_NK_SEMI 252
|
||||
#define TK_FILE 253
|
||||
|
||||
#define TK_NK_SPACE 300
|
||||
#define TK_NK_COMMENT 301
|
||||
|
|
|
@ -45,6 +45,10 @@ typedef struct SDeleterParam {
|
|||
SArray* pUidList;
|
||||
} SDeleterParam;
|
||||
|
||||
typedef struct SInserterParam {
|
||||
SReadHandle* readHandle;
|
||||
} SInserterParam;
|
||||
|
||||
typedef struct SDataSinkStat {
|
||||
uint64_t cachedSize;
|
||||
} SDataSinkStat;
|
||||
|
@ -96,7 +100,7 @@ void dsEndPut(DataSinkHandle handle, uint64_t useconds);
|
|||
* @param handle
|
||||
* @param pLen data length
|
||||
*/
|
||||
void dsGetDataLength(DataSinkHandle handle, int32_t* pLen, bool* pQueryEnd);
|
||||
void dsGetDataLength(DataSinkHandle handle, int64_t* pLen, bool* pQueryEnd);
|
||||
|
||||
/**
|
||||
* Get data, the caller needs to allocate data memory.
|
||||
|
|
|
@ -157,7 +157,7 @@ int64_t qGetQueriedTableUid(qTaskInfo_t tinfo);
|
|||
*/
|
||||
int32_t qGetQualifiedTableIdList(void* pTableList, const char* tagCond, int32_t tagCondLen, SArray* pTableIdList);
|
||||
|
||||
void qProcessFetchRsp(void* parent, struct SRpcMsg* pMsg, struct SEpSet* pEpSet);
|
||||
void qProcessRspMsg(void* parent, struct SRpcMsg* pMsg, struct SEpSet* pEpSet);
|
||||
|
||||
int32_t qGetExplainExecInfo(qTaskInfo_t tinfo, int32_t* resNum, SExplainExecInfo** pRes);
|
||||
|
||||
|
@ -174,7 +174,13 @@ int32_t qDeserializeTaskStatus(qTaskInfo_t tinfo, const char* pInput, int32_t le
|
|||
*/
|
||||
int32_t qGetStreamScanStatus(qTaskInfo_t tinfo, uint64_t* uid, int64_t* ts);
|
||||
|
||||
int32_t qStreamPrepareScan(qTaskInfo_t tinfo, uint64_t uid, int64_t ts);
|
||||
int32_t qStreamPrepareTsdbScan(qTaskInfo_t tinfo, uint64_t uid, int64_t ts);
|
||||
|
||||
int32_t qStreamPrepareScan1(qTaskInfo_t tinfo, const STqOffsetVal* pOffset);
|
||||
|
||||
int32_t qStreamExtractOffset(qTaskInfo_t tinfo, STqOffsetVal* pOffset);
|
||||
|
||||
void* qStreamExtractMetaMsg(qTaskInfo_t tinfo);
|
||||
|
||||
void* qExtractReaderFromStreamScanner(void* scanner);
|
||||
int32_t qExtractStreamScanner(qTaskInfo_t tinfo, void** scanner);
|
||||
|
|
|
@ -51,7 +51,8 @@ extern "C" {
|
|||
typedef struct SDatabaseOptions {
|
||||
ENodeType type;
|
||||
int32_t buffer;
|
||||
int8_t cachelast;
|
||||
int8_t cacheLast;
|
||||
int32_t cacheLastSize;
|
||||
int8_t compressionLevel;
|
||||
int32_t daysPerFile;
|
||||
SValueNode* pDaysPerFile;
|
||||
|
|
|
@@ -67,7 +67,7 @@ int32_t qWorkerProcessCQueryMsg(void *node, void *qWorkerMgmt, SRpcMsg *pMsg, in

int32_t qWorkerProcessFetchMsg(void *node, void *qWorkerMgmt, SRpcMsg *pMsg, int64_t ts);

int32_t qWorkerProcessFetchRsp(void *node, void *qWorkerMgmt, SRpcMsg *pMsg, int64_t ts);
int32_t qWorkerProcessRspMsg(void *node, void *qWorkerMgmt, SRpcMsg *pMsg, int64_t ts);

int32_t qWorkerProcessCancelMsg(void *node, void *qWorkerMgmt, SRpcMsg *pMsg, int64_t ts);
@@ -163,9 +163,6 @@ typedef struct SSyncLogStore {
  // return commit index of log
  SyncIndex (*getCommitIndex)(struct SSyncLogStore* pLogStore);

  // refactor, log[0 .. n] ==> log[m .. n]
  // int32_t (*syncLogSetBeginIndex)(struct SSyncLogStore* pLogStore, SyncIndex beginIndex);

  SyncIndex (*syncLogBeginIndex)(struct SSyncLogStore* pLogStore);
  SyncIndex (*syncLogEndIndex)(struct SSyncLogStore* pLogStore);
  bool (*syncLogIsEmpty)(struct SSyncLogStore* pLogStore);
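The SSyncLogStore hunk above uses the struct-of-function-pointers style that the sync module relies on for its log-store interface. A minimal sketch of the same pattern in plain C (all names here are hypothetical, not the repo's API):

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical interface mirroring the struct-of-function-pointers style. */
typedef struct LogStore {
  void   *data;                                   /* implementation state */
  int64_t (*beginIndex)(struct LogStore *pStore); /* first retained index */
  int64_t (*endIndex)(struct LogStore *pStore);   /* last written index   */
  int     (*isEmpty)(struct LogStore *pStore);
} LogStore;

/* A trivial in-memory implementation used only for illustration. */
typedef struct { int64_t begin, end; } MemLog;

static int64_t memBegin(LogStore *s) { return ((MemLog *)s->data)->begin; }
static int64_t memEnd(LogStore *s)   { return ((MemLog *)s->data)->end; }
static int     memIsEmpty(LogStore *s) { return memEnd(s) < memBegin(s); }

int main(void) {
  MemLog   log = {.begin = 5, .end = 42};
  LogStore store = {.data = &log, .beginIndex = memBegin, .endIndex = memEnd, .isEmpty = memIsEmpty};
  /* Callers only see the vtable, so the backing store can be swapped freely. */
  printf("empty=%d range=[%lld, %lld]\n", store.isEmpty(&store),
         (long long)store.beginIndex(&store), (long long)store.endIndex(&store));
  return 0;
}
```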
@@ -194,6 +194,7 @@ int32_t walRestoreFromSnapshot(SWal *, int64_t ver);
SWalReader *walOpenReader(SWal *, SWalFilterCond *pCond);
void walCloseReader(SWalReader *pRead);
int32_t walReadVer(SWalReader *pRead, int64_t ver);
int32_t walReadSeekVer(SWalReader *pRead, int64_t ver);
int32_t walNextValidMsg(SWalReader *pRead);

// only for tq usage
@@ -477,22 +477,6 @@ static FORCE_INLINE void* tDecoderMalloc(SDecoder* pCoder, int32_t size) {
    return n;                                      \
  } while (0)

#define tGetV(p, v)                                \
  do {                                             \
    int32_t n = 0;                                 \
    if (v) *v = 0;                                 \
    for (;;) {                                     \
      if (p[n] <= 0x7f) {                          \
        if (v) (*v) |= (p[n] << (7 * n));          \
        n++;                                       \
        break;                                     \
      }                                            \
      if (v) (*v) |= ((p[n] & 0x7f) << (7 * n));   \
      n++;                                         \
    }                                              \
    return n;                                      \
  } while (0)

// PUT
static FORCE_INLINE int32_t tPutU8(uint8_t* p, uint8_t v) {
  if (p) ((uint8_t*)p)[0] = v;
@@ -607,7 +591,22 @@ static FORCE_INLINE int32_t tGetI64(uint8_t* p, int64_t* v) {
  return sizeof(int64_t);
}

static FORCE_INLINE int32_t tGetU16v(uint8_t* p, uint16_t* v) { tGetV(p, v); }
static FORCE_INLINE int32_t tGetU16v(uint8_t* p, uint16_t* v) {
  int32_t n = 0;

  if (v) *v = 0;
  for (;;) {
    if (p[n] <= 0x7f) {
      if (v) (*v) |= (((uint16_t)p[n]) << (7 * n));
      n++;
      break;
    }
    if (v) (*v) |= (((uint16_t)(p[n] & 0x7f)) << (7 * n));
    n++;
  }

  return n;
}

static FORCE_INLINE int32_t tGetI16v(uint8_t* p, int16_t* v) {
  int32_t n;
@@ -619,7 +618,22 @@ static FORCE_INLINE int32_t tGetI16v(uint8_t* p, int16_t* v) {
  return n;
}

static FORCE_INLINE int32_t tGetU32v(uint8_t* p, uint32_t* v) { tGetV(p, v); }
static FORCE_INLINE int32_t tGetU32v(uint8_t* p, uint32_t* v) {
  int32_t n = 0;

  if (v) *v = 0;
  for (;;) {
    if (p[n] <= 0x7f) {
      if (v) (*v) |= (((uint32_t)p[n]) << (7 * n));
      n++;
      break;
    }
    if (v) (*v) |= (((uint32_t)(p[n] & 0x7f)) << (7 * n));
    n++;
  }

  return n;
}

static FORCE_INLINE int32_t tGetI32v(uint8_t* p, int32_t* v) {
  int32_t n;
@@ -631,7 +645,22 @@ static FORCE_INLINE int32_t tGetI32v(uint8_t* p, int32_t* v) {
  return n;
}

static FORCE_INLINE int32_t tGetU64v(uint8_t* p, uint64_t* v) { tGetV(p, v); }
static FORCE_INLINE int32_t tGetU64v(uint8_t* p, uint64_t* v) {
  int32_t n = 0;

  if (v) *v = 0;
  for (;;) {
    if (p[n] <= 0x7f) {
      if (v) (*v) |= (((uint64_t)p[n]) << (7 * n));
      n++;
      break;
    }
    if (v) (*v) |= (((uint64_t)(p[n] & 0x7f)) << (7 * n));
    n++;
  }

  return n;
}

static FORCE_INLINE int32_t tGetI64v(uint8_t* p, int64_t* v) {
  int32_t n;
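The tGetU16v/tGetU32v/tGetU64v routines above all decode the same base-128 varint layout (7 payload bits per byte, high bit set on every byte except the last); the commit simply expands the shared tGetV macro into per-width functions so each one masks and shifts at its own type. A self-contained round-trip sketch of that encoding, independent of the TDengine helpers (names are illustrative only):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Encode v as a base-128 varint: low 7 bits per byte, MSB = "more bytes follow". */
static int varint_put_u64(uint8_t *p, uint64_t v) {
  int n = 0;
  while (v >= 0x80) {
    p[n++] = (uint8_t)(v | 0x80);  /* emit 7 bits, set the continuation bit */
    v >>= 7;
  }
  p[n++] = (uint8_t)v;             /* last byte has the high bit clear */
  return n;
}

/* Decode the same layout; mirrors the loop structure of the tGet*v functions. */
static int varint_get_u64(const uint8_t *p, uint64_t *v) {
  int n = 0;
  *v = 0;
  for (;;) {
    if (p[n] <= 0x7f) {                     /* final byte */
      *v |= ((uint64_t)p[n]) << (7 * n);
      n++;
      break;
    }
    *v |= ((uint64_t)(p[n] & 0x7f)) << (7 * n);
    n++;
  }
  return n;
}

int main(void) {
  uint8_t  buf[10];
  uint64_t out = 0;
  int      len = varint_put_u64(buf, 300);   /* 300 -> 0xAC 0x02 */
  assert(varint_get_u64(buf, &out) == len && out == 300);
  printf("encoded 300 in %d bytes, decoded %llu\n", len, (unsigned long long)out);
  return 0;
}
```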
@@ -60,7 +60,7 @@ extern int32_t tsdbDebugFlag;
extern int32_t tqDebugFlag;
extern int32_t fsDebugFlag;
extern int32_t metaDebugFlag;
extern int32_t fnDebugFlag;
extern int32_t udfDebugFlag;
extern int32_t smaDebugFlag;
extern int32_t idxDebugFlag;
@ -37,4 +37,5 @@ if [ -f "${install_main_dir}/taosadapter.service" ]; then
|
|||
fi
|
||||
|
||||
# there can not libtaos.so*, otherwise ln -s error
|
||||
${csudo}rm -f ${install_main_dir}/driver/libtaos* || :
|
||||
${csudo}rm -f ${install_main_dir}/driver/libtaos.* || :
|
||||
${csudo}rm -f ${install_main_dir}/driver/libtaosws.* || :
|
||||
|
|
|
@ -29,8 +29,12 @@ else
|
|||
${csudo}rm -f ${bin_link_dir}/taosdemo || :
|
||||
${csudo}rm -f ${cfg_link_dir}/* || :
|
||||
${csudo}rm -f ${inc_link_dir}/taos.h || :
|
||||
${csudo}rm -f ${inc_link_dir}/taosdef.h || :
|
||||
${csudo}rm -f ${inc_link_dir}/taoserror.h || :
|
||||
${csudo}rm -f ${inc_link_dir}/taosudf.h || :
|
||||
${csudo}rm -f ${inc_link_dir}/taosws.h || :
|
||||
${csudo}rm -f ${lib_link_dir}/libtaos.* || :
|
||||
${csudo}rm -f ${lib_link_dir}/libtaosws.* || :
|
||||
|
||||
${csudo}rm -f ${log_link_dir} || :
|
||||
${csudo}rm -f ${data_link_dir} || :
|
||||
|
|
|
@ -30,6 +30,7 @@ mkdir -p ${pkg_dir}
|
|||
cd ${pkg_dir}
|
||||
|
||||
libfile="libtaos.so.${tdengine_ver}"
|
||||
wslibfile="libtaosws.so"
|
||||
|
||||
# create install dir
|
||||
install_home_path="/usr/local/taos"
|
||||
|
@ -67,10 +68,12 @@ fi
|
|||
|
||||
cp ${compile_dir}/build/bin/taos ${pkg_dir}${install_home_path}/bin
|
||||
cp ${compile_dir}/build/lib/${libfile} ${pkg_dir}${install_home_path}/driver
|
||||
cp ${compile_dir}/build/lib/${wslibfile} ${pkg_dir}${install_home_path}/driver ||:
|
||||
cp ${compile_dir}/../include/client/taos.h ${pkg_dir}${install_home_path}/include
|
||||
cp ${compile_dir}/../include/common/taosdef.h ${pkg_dir}${install_home_path}/include
|
||||
cp ${compile_dir}/../include/util/taoserror.h ${pkg_dir}${install_home_path}/include
|
||||
cp ${compile_dir}/../include/libs/function/taosudf.h ${pkg_dir}${install_home_path}/include
|
||||
cp ${compile_dir}/../src/inc/taosws.h ${pkg_dir}${install_home_path}/include ||:
|
||||
cp -r ${top_dir}/examples/* ${pkg_dir}${install_home_path}/examples
|
||||
#cp -r ${top_dir}/src/connector/python ${pkg_dir}${install_home_path}/connector
|
||||
#cp -r ${top_dir}/src/connector/go ${pkg_dir}${install_home_path}/connector
|
||||
|
|
|
@ -42,6 +42,7 @@ echo version: %{_version}
|
|||
echo buildroot: %{buildroot}
|
||||
|
||||
libfile="libtaos.so.%{_version}"
|
||||
wslibfile="libtaosws.so"
|
||||
|
||||
# create install path, and cp file
|
||||
mkdir -p %{buildroot}%{homepath}/bin
|
||||
|
@ -74,10 +75,12 @@ if [ -f %{_compiledir}/build/bin/taosadapter ]; then
|
|||
cp %{_compiledir}/build/bin/taosadapter %{buildroot}%{homepath}/bin ||:
|
||||
fi
|
||||
cp %{_compiledir}/build/lib/${libfile} %{buildroot}%{homepath}/driver
|
||||
cp %{_compiledir}/build/lib/${wslibfile} %{buildroot}%{homepath}/driver ||:
|
||||
cp %{_compiledir}/../include/client/taos.h %{buildroot}%{homepath}/include
|
||||
cp %{_compiledir}/../include/common/taosdef.h %{buildroot}%{homepath}/include
|
||||
cp %{_compiledir}/../include/util/taoserror.h %{buildroot}%{homepath}/include
|
||||
cp %{_compiledir}/../include/libs/function/taosudf.h %{buildroot}%{homepath}/include
|
||||
cp %{_compiledir}/../src/inc/taosws.h %{buildroot}%{homepath}/include ||:
|
||||
#cp -r %{_compiledir}/../src/connector/python %{buildroot}%{homepath}/connector
|
||||
#cp -r %{_compiledir}/../src/connector/go %{buildroot}%{homepath}/connector
|
||||
#cp -r %{_compiledir}/../src/connector/nodejs %{buildroot}%{homepath}/connector
|
||||
|
|
|
@ -227,9 +227,13 @@ function install_lib() {
|
|||
${csudo}ln -s ${install_main_dir}/driver/libtaos.* ${lib_link_dir}/libtaos.so.1
|
||||
${csudo}ln -s ${lib_link_dir}/libtaos.so.1 ${lib_link_dir}/libtaos.so
|
||||
|
||||
${csudo}ln -s ${lib_link_dir}/libtaosws.so ${lib_link_dir}/libtaosws.so || :
|
||||
|
||||
if [[ -d ${lib64_link_dir} && ! -e ${lib64_link_dir}/libtaos.so ]]; then
|
||||
${csudo}ln -s ${install_main_dir}/driver/libtaos.* ${lib64_link_dir}/libtaos.so.1 || :
|
||||
${csudo}ln -s ${lib64_link_dir}/libtaos.so.1 ${lib64_link_dir}/libtaos.so || :
|
||||
|
||||
${csudo}ln -s ${lib64_link_dir}/libtaosws.so ${lib64_link_dir}/libtaosws.so || :
|
||||
fi
|
||||
|
||||
${csudo}ldconfig
|
||||
|
@ -313,11 +317,16 @@ function install_jemalloc() {
|
|||
|
||||
function install_header() {
|
||||
${csudo}rm -f ${inc_link_dir}/taos.h ${inc_link_dir}/taosdef.h ${inc_link_dir}/taoserror.h ${inc_link_dir}/taosudf.h || :
|
||||
|
||||
${csudo}rm -f ${inc_link_dir}/taosws.h || :
|
||||
|
||||
${csudo}cp -f ${script_dir}/inc/* ${install_main_dir}/include && ${csudo}chmod 644 ${install_main_dir}/include/*
|
||||
${csudo}ln -s ${install_main_dir}/include/taos.h ${inc_link_dir}/taos.h
|
||||
${csudo}ln -s ${install_main_dir}/include/taosdef.h ${inc_link_dir}/taosdef.h
|
||||
${csudo}ln -s ${install_main_dir}/include/taoserror.h ${inc_link_dir}/taoserror.h
|
||||
${csudo}ln -s ${install_main_dir}/include/taosudf.h ${inc_link_dir}/taosudf.h
|
||||
|
||||
${csudo}ln -s ${install_main_dir}/include/taosws.h ${inc_link_dir}/taosws.h || :
|
||||
}
|
||||
|
||||
function add_newHostname_to_hosts() {
|
||||
|
|
|
@ -294,21 +294,29 @@ function install_avro() {
|
|||
function install_lib() {
|
||||
# Remove links
|
||||
${csudo}rm -f ${lib_link_dir}/libtaos.* || :
|
||||
${csudo}rm -f ${lib_link_dir}/libtaosws.* || :
|
||||
if [ "$osType" != "Darwin" ]; then
|
||||
${csudo}rm -f ${lib64_link_dir}/libtaos.* || :
|
||||
${csudo}rm -f ${lib64_link_dir}/libtaosws.* || :
|
||||
fi
|
||||
|
||||
if [ "$osType" != "Darwin" ]; then
|
||||
${csudo}cp ${binary_dir}/build/lib/libtaos.so.${verNumber} \
|
||||
${install_main_dir}/driver &&
|
||||
${csudo}chmod 777 ${install_main_dir}/driver/*
|
||||
${csudo}chmod 777 ${install_main_dir}/driver/libtaos.so.${verNumber}
|
||||
|
||||
${csudo}cp ${binary_dir}/build/lib/libtaosws.so \
|
||||
${install_main_dir}/driver &&
|
||||
${csudo}chmod 777 ${install_main_dir}/driver/libtaosws.so
|
||||
|
||||
${csudo}ln -sf ${install_main_dir}/driver/libtaos.* ${lib_link_dir}/libtaos.so.1
|
||||
${csudo}ln -sf ${lib_link_dir}/libtaos.so.1 ${lib_link_dir}/libtaos.so
|
||||
${csudo}ln -sf ${install_main_dir}/driver/libtaosws.so ${lib_link_dir}/libtaosws.so || :
|
||||
|
||||
if [ -d "${lib64_link_dir}" ]; then
|
||||
${csudo}ln -sf ${install_main_dir}/driver/libtaos.* ${lib64_link_dir}/libtaos.so.1
|
||||
${csudo}ln -sf ${lib64_link_dir}/libtaos.so.1 ${lib64_link_dir}/libtaos.so
|
||||
${csudo}ln -sf ${lib64_link_dir}/libtaosws.so ${lib64_link_dir}/libtaosws.so || :
|
||||
fi
|
||||
else
|
||||
${csudo}cp -Rf ${binary_dir}/build/lib/libtaos.${verNumber}.dylib \
|
||||
|
@ -337,8 +345,8 @@ function install_lib() {
|
|||
fi
|
||||
|
||||
install_jemalloc
|
||||
install_avro lib
|
||||
install_avro lib64
|
||||
#install_avro lib
|
||||
#install_avro lib64
|
||||
|
||||
if [ "$osType" != "Darwin" ]; then
|
||||
${csudo}ldconfig
|
||||
|
@ -350,11 +358,19 @@ function install_header() {
|
|||
if [ "$osType" != "Darwin" ]; then
|
||||
${csudo}rm -f ${inc_link_dir}/taos.h ${inc_link_dir}/taosdef.h ${inc_link_dir}/taoserror.h ${inc_link_dir}/taosudf.h || :
|
||||
${csudo}cp -f ${source_dir}/include/client/taos.h ${source_dir}/include/common/taosdef.h ${source_dir}/include/util/taoserror.h ${source_dir}/include/libs/function/taosudf.h \
|
||||
${csudo}rm -f ${inc_link_dir}/taosws.h || :
|
||||
|
||||
${csudo}cp -f ${source_dir}/src/inc/taos.h ${source_dir}/src/inc/taosdef.h ${source_dir}/src/inc/taoserror.h \
|
||||
${install_main_dir}/include && ${csudo}chmod 644 ${install_main_dir}/include/*
|
||||
|
||||
${csudo}cp -f ${binary_dir}/build/include/taosws.h ${install_main_dir}/include && ${csudo}chmod 644 ${install_main_dir}/include/taosws.h
|
||||
|
||||
${csudo}ln -s ${install_main_dir}/include/taos.h ${inc_link_dir}/taos.h
|
||||
${csudo}ln -s ${install_main_dir}/include/taosdef.h ${inc_link_dir}/taosdef.h
|
||||
${csudo}ln -s ${install_main_dir}/include/taoserror.h ${inc_link_dir}/taoserror.h
|
||||
${csudo}ln -s ${install_main_dir}/include/taosudf.h ${inc_link_dir}/taosudf.h
|
||||
|
||||
${csudo}ln -s ${install_main_dir}/include/taosws.h ${inc_link_dir}/taosws.h || :
|
||||
else
|
||||
${csudo}cp -f ${source_dir}/include/client/taos.h ${source_dir}/include/common/taosdef.h ${source_dir}/include/util/taoserror.h ${source_dir}/include/libs/function/taosudf.h \
|
||||
${install_main_dir}/include ||
|
||||
|
|
|
@ -92,8 +92,11 @@ else
|
|||
fi
|
||||
|
||||
lib_files="${build_dir}/lib/libtaos.so.${version}"
|
||||
wslib_files="${build_dir}/lib/libtaosws.so."
|
||||
header_files="${code_dir}/include/client/taos.h ${code_dir}/include/common/taosdef.h ${code_dir}/include/util/taoserror.h ${code_dir}/include/libs/function/taosudf.h"
|
||||
|
||||
wsheader_files="${code_dir}/inc/taosws.h"
|
||||
|
||||
if [ "$dbName" != "taos" ]; then
|
||||
cfg_dir="${top_dir}/../enterprise/packaging/cfg"
|
||||
else
|
||||
|
@ -109,6 +112,9 @@ init_file_rpm=${script_dir}/../rpm/taosd
|
|||
# make directories.
|
||||
mkdir -p ${install_dir}
|
||||
mkdir -p ${install_dir}/inc && cp ${header_files} ${install_dir}/inc
|
||||
|
||||
${wsheader_files} ${install_dir}/inc || :
|
||||
|
||||
mkdir -p ${install_dir}/cfg && cp ${cfg_dir}/${configFile} ${install_dir}/cfg/${configFile}
|
||||
|
||||
if [ -f "${compile_dir}/test/cfg/taosadapter.toml" ]; then
|
||||
|
@ -283,6 +289,7 @@ fi
|
|||
|
||||
# Copy driver
|
||||
mkdir -p ${install_dir}/driver && cp ${lib_files} ${install_dir}/driver && echo "${versionComp}" >${install_dir}/driver/vercomp.txt
|
||||
cp ${wslib_files} ${install_dir}/driver || :
|
||||
|
||||
# Copy connector
|
||||
if [ "$verMode" == "cluster" ]; then
|
||||
|
|
|
@ -102,7 +102,10 @@ function clean_local_bin() {
|
|||
function clean_lib() {
|
||||
# Remove link
|
||||
${csudo}rm -f ${lib_link_dir}/libtaos.* || :
|
||||
${csudo}rm -f ${lib_link_dir}/libtaosws.* || :
|
||||
|
||||
${csudo}rm -f ${lib64_link_dir}/libtaos.* || :
|
||||
${csudo}rm -f ${lib64_link_dir}/libtaosws.* || :
|
||||
#${csudo}rm -rf ${v15_java_app_dir} || :
|
||||
}
|
||||
|
||||
|
@ -111,6 +114,8 @@ function clean_header() {
|
|||
${csudo}rm -f ${inc_link_dir}/taos.h || :
|
||||
${csudo}rm -f ${inc_link_dir}/taosdef.h || :
|
||||
${csudo}rm -f ${inc_link_dir}/taoserror.h || :
|
||||
|
||||
${csudo}rm -f ${inc_link_dir}/taosws.h || :
|
||||
}
|
||||
|
||||
function clean_config() {
|
||||
|
|
|
@@ -324,9 +324,7 @@ JNIEXPORT jint JNICALL Java_com_taosdata_jdbc_tmq_TMQConnector_fetchRawBlockImp(
  (*env)->CallVoidMethod(env, rowobj, g_blockdataSetNumOfRowsFp, (jint)numOfRows);
  (*env)->CallVoidMethod(env, rowobj, g_blockdataSetNumOfColsFp, (jint)numOfFields);

  char *chars = (char *)data;
  int32_t len = chars[0] + (chars[1] << 8) + (chars[2] << 16) + (chars[3] << 24);
  (*env)->CallVoidMethod(env, rowobj, g_blockdataSetByteArrayFp, len, jniFromNCharToByteArray(env, (char *)data, len));

  int32_t len = *(int32_t *)data;
  (*env)->CallVoidMethod(env, rowobj, g_blockdataSetByteArrayFp, jniFromNCharToByteArray(env, (char *)data, len));
  return JNI_SUCCESS;
}

@@ -592,8 +592,7 @@ JNIEXPORT jint JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_fetchBlockImp(JNI
  (*env)->CallVoidMethod(env, rowobj, g_blockdataSetNumOfRowsFp, (jint)numOfRows);
  (*env)->CallVoidMethod(env, rowobj, g_blockdataSetNumOfColsFp, (jint)numOfFields);

  char *chars = (char *)data;
  int32_t len = chars[0] + (chars[1] << 8) + (chars[2] << 16) + (chars[3] << 24);
  int32_t len = *(int32_t *)data;
  (*env)->CallVoidMethod(env, rowobj, g_blockdataSetByteArrayFp, jniFromNCharToByteArray(env, (char *)data, len));

  return JNI_SUCCESS;
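Both JNI hunks replace the manual byte-by-byte assembly of the block's length prefix with a direct *(int32_t *)data load. A hedged aside (not TDengine code): the two forms agree only on little-endian hosts, and a memcpy gives the same single load on common platforms while also avoiding any unaligned-access concern:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Read a little-endian int32 length prefix from an arbitrary byte pointer. */
static int32_t read_len_prefix(const char *data) {
  int32_t len;
  memcpy(&len, data, sizeof(len));  /* safe even if data is not 4-byte aligned */
  return len;
}

int main(void) {
  /* 300 encoded little-endian; this example assumes a little-endian host,
     exactly as the cast in the patched code does. */
  char buf[4] = {0x2c, 0x01, 0x00, 0x00};
  assert(read_len_prefix(buf) == 300);
  /* The removed code built the same value by hand:
     buf[0] + (buf[1] << 8) + (buf[2] << 16) + (buf[3] << 24). */
  return 0;
}
```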
@ -1,14 +0,0 @@
|
|||
/*
|
||||
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
|
||||
*
|
||||
* This program is free software: you can use, redistribute, and/or modify
|
||||
* it under the terms of the GNU Affero General Public License, version 3
|
||||
* or later ("AGPL"), as published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful, but WITHOUT
|
||||
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
|
||||
* FITNESS FOR A PARTICULAR PURPOSE.
|
||||
*
|
||||
* You should have received a copy of the GNU Affero General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
|
@@ -228,7 +228,7 @@ int32_t colDataMergeCol(SColumnInfoData* pColumnInfoData, uint32_t numOfRow1, ui
  uint32_t finalNumOfRows = numOfRow1 + numOfRow2;
  if (IS_VAR_DATA_TYPE(pColumnInfoData->info.type)) {
    // Handle the bitmap
    if (finalNumOfRows > *capacity) {
    if (finalNumOfRows > *capacity || numOfRow1 == 0) {
      char* p = taosMemoryRealloc(pColumnInfoData->varmeta.offset, sizeof(int32_t) * (numOfRow1 + numOfRow2));
      if (p == NULL) {
        return TSDB_CODE_OUT_OF_MEMORY;

@@ -262,7 +262,7 @@ int32_t colDataMergeCol(SColumnInfoData* pColumnInfoData, uint32_t numOfRow1, ui
    memcpy(pColumnInfoData->pData + oldLen, pSource->pData, len);
    pColumnInfoData->varmeta.length = len + oldLen;
  } else {
    if (finalNumOfRows > *capacity) {
    if (finalNumOfRows > *capacity || numOfRow1 == 0) {
      ASSERT(finalNumOfRows * pColumnInfoData->info.bytes);
      char* tmp = taosMemoryRealloc(pColumnInfoData->pData, finalNumOfRows * pColumnInfoData->info.bytes);
      if (tmp == NULL) {
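The changed condition, finalNumOfRows > *capacity || numOfRow1 == 0, makes colDataMergeCol (re)allocate not only when the destination has outgrown its capacity but also whenever it is still empty. A generic ensure-capacity helper showing the same grow-on-demand shape (illustrative only, not the repo's helper):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Grow buf to hold at least `needed` bytes; also allocate when it is still empty. */
static int ensure_capacity(char **buf, size_t *capacity, size_t needed, size_t used) {
  if (needed > *capacity || used == 0) {      /* same condition shape as the hunk */
    size_t newCap = needed;                   /* a real pool would round up       */
    char  *tmp = realloc(*buf, newCap);
    if (tmp == NULL) return -1;               /* leave *buf untouched on failure  */
    *buf = tmp;
    *capacity = newCap;
  }
  return 0;
}

int main(void) {
  char  *buf = NULL;
  size_t cap = 0;
  if (ensure_capacity(&buf, &cap, 16, 0) != 0) return 1;
  memcpy(buf, "merged column", 14);
  printf("cap=%zu data=%s\n", cap, buf);
  free(buf);
  return 0;
}
```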
@@ -1356,7 +1356,7 @@ SColumnInfoData createColumnInfoData(int16_t type, int32_t bytes, int16_t colId)
  return col;
}

SColumnInfoData* bdGetColumnInfoData(SSDataBlock* pBlock, int32_t index) {
SColumnInfoData* bdGetColumnInfoData(const SSDataBlock* pBlock, int32_t index) {
  ASSERT(pBlock != NULL);
  if (index >= taosArrayGetSize(pBlock->pDataBlock)) {
    return NULL;

@@ -2119,3 +2119,4 @@ const char* blockDecode(SSDataBlock* pBlock, int32_t numOfCols, int32_t numOfRow
  ASSERT(pStart - pData == dataLen);
  return pStart;
}
@ -314,7 +314,7 @@ static int32_t taosAddServerLogCfg(SConfig *pCfg) {
|
|||
if (cfgAddInt32(pCfg, "tsdbDebugFlag", tsdbDebugFlag, 0, 255, 0) != 0) return -1;
|
||||
if (cfgAddInt32(pCfg, "tqDebugFlag", tqDebugFlag, 0, 255, 0) != 0) return -1;
|
||||
if (cfgAddInt32(pCfg, "fsDebugFlag", fsDebugFlag, 0, 255, 0) != 0) return -1;
|
||||
if (cfgAddInt32(pCfg, "fnDebugFlag", fnDebugFlag, 0, 255, 0) != 0) return -1;
|
||||
if (cfgAddInt32(pCfg, "udfDebugFlag", udfDebugFlag, 0, 255, 0) != 0) return -1;
|
||||
if (cfgAddInt32(pCfg, "smaDebugFlag", smaDebugFlag, 0, 255, 0) != 0) return -1;
|
||||
if (cfgAddInt32(pCfg, "idxDebugFlag", idxDebugFlag, 0, 255, 0) != 0) return -1;
|
||||
return 0;
|
||||
|
@ -504,7 +504,7 @@ static void taosSetServerLogCfg(SConfig *pCfg) {
|
|||
tsdbDebugFlag = cfgGetItem(pCfg, "tsdbDebugFlag")->i32;
|
||||
tqDebugFlag = cfgGetItem(pCfg, "tqDebugFlag")->i32;
|
||||
fsDebugFlag = cfgGetItem(pCfg, "fsDebugFlag")->i32;
|
||||
fnDebugFlag = cfgGetItem(pCfg, "fnDebugFlag")->i32;
|
||||
udfDebugFlag = cfgGetItem(pCfg, "udfDebugFlag")->i32;
|
||||
smaDebugFlag = cfgGetItem(pCfg, "smaDebugFlag")->i32;
|
||||
idxDebugFlag = cfgGetItem(pCfg, "idxDebugFlag")->i32;
|
||||
}
|
||||
|
@ -715,8 +715,6 @@ int32_t taosSetCfg(SConfig *pCfg, char* name) {
|
|||
cfgSetItem(pCfg, "firstEp", tsFirst, pFirstEpItem->stype);
|
||||
} else if (strcasecmp("fsDebugFlag", name) == 0) {
|
||||
fsDebugFlag = cfgGetItem(pCfg, "fsDebugFlag")->i32;
|
||||
} else if (strcasecmp("fnDebugFlag", name) == 0) {
|
||||
fnDebugFlag = cfgGetItem(pCfg, "fnDebugFlag")->i32;
|
||||
}
|
||||
break;
|
||||
}
|
||||
|
@ -817,6 +815,8 @@ int32_t taosSetCfg(SConfig *pCfg, char* name) {
|
|||
case 'u': {
|
||||
if (strcasecmp("multiProcess", name) == 0) {
|
||||
tsMultiProcess = cfgGetItem(pCfg, "multiProcess")->bval;
|
||||
} else if (strcasecmp("udfDebugFlag", name) == 0) {
|
||||
udfDebugFlag = cfgGetItem(pCfg, "udfDebugFlag")->i32;
|
||||
}
|
||||
break;
|
||||
}
|
||||
|
|
|
@ -1,18 +0,0 @@
|
|||
/*
|
||||
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
|
||||
*
|
||||
* This program is free software: you can use, redistribute, and/or modify
|
||||
* it under the terms of the GNU Affero General Public License, version 3
|
||||
* or later ("AGPL"), as published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful, but WITHOUT
|
||||
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
|
||||
* FITNESS FOR A PARTICULAR PURPOSE.
|
||||
*
|
||||
* You should have received a copy of the GNU Affero General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
|
||||
#define _DEFAULT_SOURCE
|
||||
#define TSDB_SQL_C
|
||||
#include "tmsgtype.h"
|
|
@ -216,7 +216,7 @@ int main(int argc, char const *argv[]) {
|
|||
return -1;
|
||||
}
|
||||
|
||||
dInfo("start to open dnode");
|
||||
dInfo("start to init service");
|
||||
dmSetSignalHandle();
|
||||
int32_t code = dmRun();
|
||||
dInfo("shutting down the service");
|
||||
|
|
|
@ -384,6 +384,8 @@ SArray *vmGetMsgHandles() {
|
|||
if (dmSetMgmtHandle(pArray, TDMT_SYNC_APPEND_ENTRIES, vmPutMsgToSyncQueue, 0) == NULL) goto _OVER;
|
||||
if (dmSetMgmtHandle(pArray, TDMT_SYNC_APPEND_ENTRIES_BATCH, vmPutMsgToSyncQueue, 0) == NULL) goto _OVER;
|
||||
if (dmSetMgmtHandle(pArray, TDMT_SYNC_APPEND_ENTRIES_REPLY, vmPutMsgToSyncQueue, 0) == NULL) goto _OVER;
|
||||
if (dmSetMgmtHandle(pArray, TDMT_SYNC_SNAPSHOT_SEND, vmPutMsgToSyncQueue, 0) == NULL) goto _OVER;
|
||||
if (dmSetMgmtHandle(pArray, TDMT_SYNC_SNAPSHOT_RSP, vmPutMsgToSyncQueue, 0) == NULL) goto _OVER;
|
||||
if (dmSetMgmtHandle(pArray, TDMT_SYNC_SET_VNODE_STANDBY, vmPutMsgToSyncQueue, 0) == NULL) goto _OVER;
|
||||
|
||||
code = 0;
|
||||
|
|
|
@ -173,7 +173,7 @@ static int32_t vmOpenVnodes(SVnodeMgmt *pMgmt) {
|
|||
pThread->pCfgs[pThread->vnodeNum++] = pCfgs[v];
|
||||
}
|
||||
|
||||
dInfo("start %d threads to open %d vnodes", threadNum, numOfVnodes);
|
||||
dInfo("open %d vnodes with %d threads", numOfVnodes, threadNum);
|
||||
|
||||
for (int32_t t = 0; t < threadNum; ++t) {
|
||||
SVnodeThread *pThread = &threads[t];
|
||||
|
@ -204,7 +204,7 @@ static int32_t vmOpenVnodes(SVnodeMgmt *pMgmt) {
|
|||
dError("there are total vnodes:%d, opened:%d", pMgmt->state.totalVnodes, pMgmt->state.openVnodes);
|
||||
return -1;
|
||||
} else {
|
||||
dInfo("total vnodes:%d open successfully", pMgmt->state.totalVnodes);
|
||||
dInfo("successfully opened %d vnodes", pMgmt->state.totalVnodes);
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
|
|
|
@ -128,7 +128,7 @@ static void dmClearVars(SDnode *pDnode) {
|
|||
}
|
||||
|
||||
int32_t dmInitDnode(SDnode *pDnode, EDndNodeType rtype) {
|
||||
dInfo("start to create dnode");
|
||||
dDebug("start to create dnode");
|
||||
int32_t code = -1;
|
||||
char path[PATH_MAX + 100] = {0};
|
||||
|
||||
|
|
|
@ -89,7 +89,8 @@ static void dmProcessRpcMsg(SDnode *pDnode, SRpcMsg *pRpc, SEpSet *pEpSet) {
|
|||
case TDMT_DND_SYSTABLE_RETRIEVE_RSP:
|
||||
case TDMT_SCH_FETCH_RSP:
|
||||
case TDMT_SCH_MERGE_FETCH_RSP:
|
||||
qWorkerProcessFetchRsp(NULL, NULL, pRpc, 0);
|
||||
case TDMT_VND_SUBMIT_RSP:
|
||||
qWorkerProcessRspMsg(NULL, NULL, pRpc, 0);
|
||||
return;
|
||||
case TDMT_MND_STATUS_RSP:
|
||||
if (pEpSet != NULL) {
|
||||
|
|
|
@@ -546,7 +546,11 @@ static int32_t mndProcessRebalanceReq(SRpcMsg *pMsg) {
    char cgroup[TSDB_CGROUP_LEN];
    mndSplitSubscribeKey(pRebInfo->key, topic, cgroup, true);
    SMqTopicObj *pTopic = mndAcquireTopic(pMnode, topic);
    ASSERT(pTopic);
    /*ASSERT(pTopic);*/
    if (pTopic == NULL) {
      mError("rebalance %s failed since topic %s was dropped, abort", pRebInfo->key, topic);
      continue;
    }
    taosRLockLatch(&pTopic->lock);

    rebOutput.pSub = mndCreateSub(pMnode, pTopic, pRebInfo->key);
@ -188,7 +188,7 @@ int32_t mndInitSync(SMnode *pMnode) {
|
|||
SNodeInfo *pNode = &pCfg->nodeInfo[0];
|
||||
tstrncpy(pNode->nodeFqdn, pMgmt->replica.fqdn, sizeof(pNode->nodeFqdn));
|
||||
pNode->nodePort = pMgmt->replica.port;
|
||||
mInfo("fqdn:%s port:%u", pNode->nodeFqdn, pNode->nodePort);
|
||||
mInfo("mnode ep:%s:%u", pNode->nodeFqdn, pNode->nodePort);
|
||||
}
|
||||
|
||||
tsem_init(&pMgmt->syncSem, 0, 0);
|
||||
|
|
|
@ -89,9 +89,6 @@ int32_t qndProcessQueryMsg(SQnode *pQnode, int64_t ts, SRpcMsg *pMsg) {
|
|||
case TDMT_SCH_MERGE_FETCH:
|
||||
code = qWorkerProcessFetchMsg(pQnode, pQnode->pQuery, pMsg, ts);
|
||||
break;
|
||||
case TDMT_SCH_FETCH_RSP:
|
||||
code = qWorkerProcessFetchRsp(pQnode, pQnode->pQuery, pMsg, ts);
|
||||
break;
|
||||
case TDMT_SCH_CANCEL_TASK:
|
||||
code = qWorkerProcessCancelMsg(pQnode, pQnode->pQuery, pMsg, ts);
|
||||
break;
|
||||
|
|
|
@ -9,7 +9,6 @@ target_sources(
|
|||
"src/vnd/vnodeCfg.c"
|
||||
"src/vnd/vnodeCommit.c"
|
||||
"src/vnd/vnodeQuery.c"
|
||||
"src/vnd/vnodeStateMgr.c"
|
||||
"src/vnd/vnodeModule.c"
|
||||
"src/vnd/vnodeSvr.c"
|
||||
"src/vnd/vnodeSync.c"
|
||||
|
@ -32,7 +31,6 @@ target_sources(
|
|||
"src/sma/smaUtil.c"
|
||||
"src/sma/smaOpen.c"
|
||||
"src/sma/smaCommit.c"
|
||||
"src/sma/smaSnapshot.c"
|
||||
"src/sma/smaRollup.c"
|
||||
"src/sma/smaTimeRange.c"
|
||||
|
||||
|
@ -43,7 +41,6 @@ target_sources(
|
|||
"src/tsdb/tsdbOpen.c"
|
||||
"src/tsdb/tsdbMemTable.c"
|
||||
"src/tsdb/tsdbRead.c"
|
||||
"src/tsdb/tsdbReadImpl.c"
|
||||
"src/tsdb/tsdbCache.c"
|
||||
"src/tsdb/tsdbWrite.c"
|
||||
"src/tsdb/tsdbReaderWriter.c"
|
||||
|
|
|
@ -143,6 +143,7 @@ int32_t tsdbLastRowReaderOpen(void *pVnode, int32_t type, SArray *pTableIdList,
|
|||
void **pReader);
|
||||
int32_t tsdbRetrieveLastRow(void *pReader, SSDataBlock *pResBlock, const int32_t *slotIds);
|
||||
int32_t tsdbLastrowReaderClose(void *pReader);
|
||||
int32_t tsdbGetTableSchema(SVnode* pVnode, int64_t uid, STSchema** pSchema, int64_t* suid);
|
||||
|
||||
// tq
|
||||
|
||||
|
@ -173,6 +174,9 @@ int32_t tqReaderSetTbUidList(STqReader *pReader, const SArray *tbUidList);
|
|||
int32_t tqReaderAddTbUidList(STqReader *pReader, const SArray *tbUidList);
|
||||
int32_t tqReaderRemoveTbUidList(STqReader *pReader, const SArray *tbUidList);
|
||||
|
||||
int32_t tqSeekVer(STqReader *pReader, int64_t ver);
|
||||
int32_t tqNextBlock(STqReader *pReader, SFetchRet *ret);
|
||||
|
||||
int32_t tqReaderSetDataMsg(STqReader *pReader, SSubmitReq *pMsg, int64_t ver);
|
||||
bool tqNextDataBlock(STqReader *pReader);
|
||||
bool tqNextDataBlockFilterOut(STqReader *pReader, SHashObj *filterOutUids);
|
||||
|
|
|
@ -129,6 +129,7 @@ typedef struct {
|
|||
static STqMgmt tqMgmt = {0};
|
||||
|
||||
// tqRead
|
||||
int64_t tqScanLog(STQ* pTq, const STqExecHandle* pExec, SMqDataRsp* pRsp, STqOffsetVal* offset);
|
||||
int64_t tqFetchLog(STQ* pTq, STqHandle* pHandle, int64_t* fetchOffset, SWalCkHead** pHeadWithCkSum);
|
||||
|
||||
// tqExec
|
||||
|
|
|
@ -245,6 +245,8 @@ int32_t tsdbCacheRelease(SLRUCache *pCache, LRUHandle *h);
|
|||
|
||||
int32_t tsdbCacheDelete(SLRUCache *pCache, tb_uid_t uid, TSKEY eKey);
|
||||
|
||||
int32_t tsdbCacheLastArray2Row(SArray *pLastArray, STSRow **ppRow, STSchema *pSchema);
|
||||
|
||||
// structs =======================
|
||||
typedef struct {
|
||||
int minFid;
|
||||
|
|
|
@ -277,8 +277,9 @@ static int32_t tdSetRSmaInfoItemParams(SSma *pSma, SRSmaParam *param, SRSmaInfo
|
|||
pItem->maxDelay = TSDB_MAX_ROLLUP_MAX_DELAY;
|
||||
}
|
||||
pItem->level = (idx == 0 ? TSDB_RETENTION_L1 : TSDB_RETENTION_L2);
|
||||
smaInfo("vgId:%d table:%" PRIi64 " level:%" PRIi8 " maxdelay:%" PRIi64 " watermark:%" PRIi64 ", finally maxdelay:%"PRIi32, SMA_VID(pSma),
|
||||
pRSmaInfo->suid, idx + 1, param->maxdelay[idx], param->watermark[idx], pItem->maxDelay);
|
||||
smaInfo("vgId:%d table:%" PRIi64 " level:%" PRIi8 " maxdelay:%" PRIi64 " watermark:%" PRIi64
|
||||
", finally maxdelay:%" PRIi32,
|
||||
SMA_VID(pSma), pRSmaInfo->suid, idx + 1, param->maxdelay[idx], param->watermark[idx], pItem->maxDelay);
|
||||
}
|
||||
return TSDB_CODE_SUCCESS;
|
||||
_err:
|
||||
|
@ -572,11 +573,15 @@ static int32_t tdFetchAndSubmitRSmaResult(SRSmaInfoItem *pItem, int8_t blkType)
|
|||
SSubmitReq *pReq = NULL;
|
||||
// TODO: the schema update should be handled
|
||||
if (buildSubmitReqFromDataBlock(&pReq, pResult, pRSmaInfo->pTSchema, SMA_VID(pSma), pRSmaInfo->suid) < 0) {
|
||||
smaError("vgId:%d, build submit req for rsma table %" PRIi64 "l evel %" PRIi8 " failed since %s", SMA_VID(pSma),
|
||||
pRSmaInfo->suid, pItem->level, terrstr());
|
||||
goto _err;
|
||||
}
|
||||
|
||||
if (pReq && tdProcessSubmitReq(sinkTsdb, atomic_add_fetch_64(&pRSmaInfo->pStat->submitVer, 1), pReq) < 0) {
|
||||
taosMemoryFreeClear(pReq);
|
||||
smaError("vgId:%d, process submit req for rsma table %" PRIi64 " level %" PRIi8 " failed since %s", SMA_VID(pSma),
|
||||
pRSmaInfo->suid, pItem->level, terrstr());
|
||||
goto _err;
|
||||
}
|
||||
|
||||
|
|
|
@ -1,16 +0,0 @@
|
|||
/*
|
||||
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
|
||||
*
|
||||
* This program is free software: you can use, redistribute, and/or modify
|
||||
* it under the terms of the GNU Affero General Public License, version 3
|
||||
* or later ("AGPL"), as published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful, but WITHOUT
|
||||
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
|
||||
* FITNESS FOR A PARTICULAR PURPOSE.
|
||||
*
|
||||
* You should have received a copy of the GNU Affero General Public License
|
||||
* along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
|
||||
#include "sma.h"
|
|
@ -244,11 +244,6 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg, int32_t workerId) {
|
|||
STqOffsetVal fetchOffsetNew;
|
||||
|
||||
// 1.find handle
|
||||
char buf[80];
|
||||
tFormatOffset(buf, 80, &reqOffset);
|
||||
tqDebug("tmq poll: consumer %ld (epoch %d) recv poll req in vg %d, req offset %s", consumerId, pReq->epoch,
|
||||
TD_VID(pTq->pVnode), buf);
|
||||
|
||||
STqHandle* pHandle = taosHashGet(pTq->handles, pReq->subKey, strlen(pReq->subKey));
|
||||
/*ASSERT(pHandle);*/
|
||||
if (pHandle == NULL) {
|
||||
|
@ -270,6 +265,11 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg, int32_t workerId) {
|
|||
consumerEpoch = atomic_val_compare_exchange_32(&pHandle->epoch, consumerEpoch, reqEpoch);
|
||||
}
|
||||
|
||||
char buf[80];
|
||||
tFormatOffset(buf, 80, &reqOffset);
|
||||
tqDebug("tmq poll: consumer %ld (epoch %d), subkey %s, recv poll req in vg %d, req offset %s", consumerId,
|
||||
pReq->epoch, pHandle->subKey, TD_VID(pTq->pVnode), buf);
|
||||
|
||||
// 2.reset offset if needed
|
||||
if (reqOffset.type > 0) {
|
||||
fetchOffsetNew = reqOffset;
|
||||
|
@ -279,7 +279,7 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg, int32_t workerId) {
|
|||
fetchOffsetNew = pOffset->val;
|
||||
char formatBuf[80];
|
||||
tFormatOffset(formatBuf, 80, &fetchOffsetNew);
|
||||
tqDebug("tmq poll: consumer %ld, offset reset to %s", consumerId, formatBuf);
|
||||
tqDebug("tmq poll: consumer %ld, subkey %s, offset reset to %s", consumerId, pHandle->subKey, formatBuf);
|
||||
} else {
|
||||
if (reqOffset.type == TMQ_OFFSET__RESET_EARLIEAST) {
|
||||
if (pReq->useSnapshot && pHandle->execHandle.subType == TOPIC_SUB_TYPE__COLUMN) {
|
||||
|
@@ -294,9 +294,29 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg, int32_t workerId) {
      }
    } else if (reqOffset.type == TMQ_OFFSET__RESET_LATEST) {
      tqOffsetResetToLog(&fetchOffsetNew, walGetLastVer(pTq->pVnode->pWal));
      tqDebug("tmq poll: consumer %ld, subkey %s, offset reset to %ld", consumerId, pHandle->subKey,
              fetchOffsetNew.version);
      SMqDataRsp dataRsp = {0};
      tqInitDataRsp(&dataRsp, pReq, pHandle->execHandle.subType);
      dataRsp.rspOffset = fetchOffsetNew;
      code = 0;
      if (tqSendDataRsp(pTq, pMsg, pReq, &dataRsp) < 0) {
        code = -1;
      }
      taosArrayDestroy(dataRsp.blockDataLen);
      taosArrayDestroyP(dataRsp.blockData, (FDelete)taosMemoryFree);

      if (dataRsp.withSchema) {
        taosArrayDestroyP(dataRsp.blockSchema, (FDelete)tDeleteSSchemaWrapper);
      }

      if (dataRsp.withTbName) {
        taosArrayDestroyP(dataRsp.blockTbName, (FDelete)taosMemoryFree);
      }
      return code;
    } else if (reqOffset.type == TMQ_OFFSET__RESET_NONE) {
      tqError("tmq poll: no offset committed for consumer %ld in vg %d, subkey %s, reset none failed", consumerId,
              TD_VID(pTq->pVnode), pReq->subKey);
      tqError("tmq poll: subkey %s, no offset committed for consumer %ld in vg %d, subkey %s, reset none failed",
              pHandle->subKey, consumerId, TD_VID(pTq->pVnode), pReq->subKey);
      terrno = TSDB_CODE_TQ_NO_COMMITTED_OFFSET;
      return -1;
    }
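When a consumer polls without a committed offset, the hunk above resolves its reset policy in place: "earliest" starts from the retained data, "latest" now answers immediately with an empty response whose offset is the last WAL version, and "none" is reported as an error. A compact sketch of that decision (hypothetical types, not the tq structures):

```c
#include <stdint.h>
#include <stdio.h>

typedef enum { RESET_EARLIEST, RESET_LATEST, RESET_NONE } ResetPolicy;

/* Resolve the version a consumer should start from when it has no committed offset.
 * Returns 0 on success, -1 when the policy forbids an implicit start ("none"). */
static int resolve_start(ResetPolicy policy, int64_t firstVer, int64_t lastVer, int64_t *startVer) {
  switch (policy) {
    case RESET_EARLIEST:
      *startVer = firstVer;          /* replay everything still retained */
      return 0;
    case RESET_LATEST:
      *startVer = lastVer;           /* reply with an empty batch carrying this offset */
      return 0;
    case RESET_NONE:
    default:
      return -1;                     /* caller reports "no committed offset" */
  }
}

int main(void) {
  int64_t ver = 0;
  if (resolve_start(RESET_LATEST, 100, 2048, &ver) == 0) {
    printf("consumer resumes after version %lld\n", (long long)ver);
  }
  return 0;
}
```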
@ -307,7 +327,24 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg, int32_t workerId) {
|
|||
SMqDataRsp dataRsp = {0};
|
||||
tqInitDataRsp(&dataRsp, pReq, pHandle->execHandle.subType);
|
||||
|
||||
if (fetchOffsetNew.type == TMQ_OFFSET__LOG) {
|
||||
if (pHandle->execHandle.subType == TOPIC_SUB_TYPE__COLUMN && fetchOffsetNew.type == TMQ_OFFSET__LOG) {
|
||||
fetchOffsetNew.version++;
|
||||
if (tqScanLog(pTq, &pHandle->execHandle, &dataRsp, &fetchOffsetNew) < 0) {
|
||||
ASSERT(0);
|
||||
code = -1;
|
||||
goto OVER;
|
||||
}
|
||||
if (dataRsp.blockNum == 0) {
|
||||
// TODO add to async task
|
||||
/*dataRsp.rspOffset.version--;*/
|
||||
}
|
||||
if (tqSendDataRsp(pTq, pMsg, pReq, &dataRsp) < 0) {
|
||||
code = -1;
|
||||
}
|
||||
goto OVER;
|
||||
}
|
||||
|
||||
if (pHandle->execHandle.subType != TOPIC_SUB_TYPE__COLUMN && fetchOffsetNew.type == TMQ_OFFSET__LOG) {
|
||||
int64_t fetchVer = fetchOffsetNew.version + 1;
|
||||
SWalCkHead* pCkHead = taosMemoryMalloc(sizeof(SWalCkHead) + 2048);
|
||||
if (pCkHead == NULL) {
|
||||
|
@ -319,8 +356,10 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg, int32_t workerId) {
|
|||
while (1) {
|
||||
consumerEpoch = atomic_load_32(&pHandle->epoch);
|
||||
if (consumerEpoch > reqEpoch) {
|
||||
tqWarn("tmq poll: consumer %ld (epoch %d) vg %d offset %ld, found new consumer epoch %d, discard req epoch %d",
|
||||
consumerId, pReq->epoch, TD_VID(pTq->pVnode), fetchVer, consumerEpoch, reqEpoch);
|
||||
tqWarn(
|
||||
"tmq poll: consumer %ld (epoch %d), subkey %s, vg %d offset %ld, found new consumer epoch %d, discard req "
|
||||
"epoch %d",
|
||||
consumerId, pReq->epoch, pHandle->subKey, TD_VID(pTq->pVnode), fetchVer, consumerEpoch, reqEpoch);
|
||||
break;
|
||||
}
|
||||
|
||||
|
|
|
@@ -46,7 +46,7 @@ static int32_t tqAddBlockSchemaToRsp(const STqExecHandle* pExec, int32_t workerI
  return 0;
}

static int32_t tqAddTbNameToRsp(const STQ* pTq, int64_t uid, SMqDataRsp* pRsp, int32_t workerId) {
static int32_t tqAddTbNameToRsp(const STQ* pTq, int64_t uid, SMqDataRsp* pRsp) {
  SMetaReader mr = {0};
  metaReaderInit(&mr, pTq->pVnode->pMeta, 0);
  if (metaGetTableEntryByUid(&mr, uid) < 0) {

@@ -59,6 +59,53 @@ static int32_t tqAddTbNameToRsp(const STQ* pTq, int64_t uid, SMqDataRsp* pRsp, i
  return 0;
}

int64_t tqScanLog(STQ* pTq, const STqExecHandle* pExec, SMqDataRsp* pRsp, STqOffsetVal* pOffset) {
  qTaskInfo_t task = pExec->execCol.task[0];

  if (qStreamPrepareScan1(task, pOffset) < 0) {
    pRsp->rspOffset = *pOffset;
    pRsp->rspOffset.version--;
    return 0;
  }

  while (1) {
    SSDataBlock* pDataBlock = NULL;
    uint64_t ts = 0;
    if (qExecTask(task, &pDataBlock, &ts) < 0) {
      ASSERT(0);
    }

    if (pDataBlock != NULL) {
      tqAddBlockDataToRsp(pDataBlock, pRsp);
      if (pRsp->withTbName) {
        int64_t uid = pExec->pExecReader[0]->msgIter.uid;
        tqAddTbNameToRsp(pTq, uid, pRsp);
      }
      pRsp->blockNum++;
      continue;
    }

    void* meta = qStreamExtractMetaMsg(task);
    if (meta != NULL) {
      // tq add meta to rsp
    }

    if (qStreamExtractOffset(task, &pRsp->rspOffset) < 0) {
      ASSERT(0);
    }

    if (pRsp->rspOffset.type == TMQ_OFFSET__LOG) {
      ASSERT(pRsp->rspOffset.version + 1 >= pRsp->reqOffset.version);
    }

    ASSERT(pRsp->rspOffset.type != 0);

    break;
  }

  return 0;
}

int32_t tqScanSnapshot(STQ* pTq, const STqExecHandle* pExec, SMqDataRsp* pRsp, STqOffsetVal offset, int32_t workerId) {
  ASSERT(pExec->subType == TOPIC_SUB_TYPE__COLUMN);
  qTaskInfo_t task = pExec->execCol.task[workerId];
@ -67,7 +114,7 @@ int32_t tqScanSnapshot(STQ* pTq, const STqExecHandle* pExec, SMqDataRsp* pRsp, S
|
|||
/*ASSERT(0);*/
|
||||
/*}*/
|
||||
|
||||
if (qStreamPrepareScan(task, offset.uid, offset.ts) < 0) {
|
||||
if (qStreamPrepareTsdbScan(task, offset.uid, offset.ts) < 0) {
|
||||
ASSERT(0);
|
||||
}
|
||||
|
||||
|
@ -93,7 +140,7 @@ int32_t tqScanSnapshot(STQ* pTq, const STqExecHandle* pExec, SMqDataRsp* pRsp, S
|
|||
if (qGetStreamScanStatus(task, &uid, &ts) < 0) {
|
||||
ASSERT(0);
|
||||
}
|
||||
tqAddTbNameToRsp(pTq, uid, pRsp, workerId);
|
||||
tqAddTbNameToRsp(pTq, uid, pRsp);
|
||||
#endif
|
||||
}
|
||||
pRsp->blockNum++;
|
||||
|
@ -129,7 +176,7 @@ int32_t tqLogScanExec(STQ* pTq, STqExecHandle* pExec, SSubmitReq* pReq, SMqDataR
|
|||
tqAddBlockDataToRsp(pDataBlock, pRsp);
|
||||
if (pRsp->withTbName) {
|
||||
int64_t uid = pExec->pExecReader[workerId]->msgIter.uid;
|
||||
tqAddTbNameToRsp(pTq, uid, pRsp, workerId);
|
||||
tqAddTbNameToRsp(pTq, uid, pRsp);
|
||||
}
|
||||
pRsp->blockNum++;
|
||||
}
|
||||
|
@ -146,7 +193,7 @@ int32_t tqLogScanExec(STQ* pTq, STqExecHandle* pExec, SSubmitReq* pReq, SMqDataR
|
|||
tqAddBlockDataToRsp(&block, pRsp);
|
||||
if (pRsp->withTbName) {
|
||||
int64_t uid = pExec->pExecReader[workerId]->msgIter.uid;
|
||||
tqAddTbNameToRsp(pTq, uid, pRsp, workerId);
|
||||
tqAddTbNameToRsp(pTq, uid, pRsp);
|
||||
}
|
||||
tqAddBlockSchemaToRsp(pExec, workerId, pRsp);
|
||||
pRsp->blockNum++;
|
||||
|
@ -164,7 +211,7 @@ int32_t tqLogScanExec(STQ* pTq, STqExecHandle* pExec, SSubmitReq* pReq, SMqDataR
|
|||
tqAddBlockDataToRsp(&block, pRsp);
|
||||
if (pRsp->withTbName) {
|
||||
int64_t uid = pExec->pExecReader[workerId]->msgIter.uid;
|
||||
tqAddTbNameToRsp(pTq, uid, pRsp, workerId);
|
||||
tqAddTbNameToRsp(pTq, uid, pRsp);
|
||||
}
|
||||
tqAddBlockSchemaToRsp(pExec, workerId, pRsp);
|
||||
pRsp->blockNum++;
|
||||
|
|
|
@ -15,11 +15,6 @@
|
|||
|
||||
#include "tq.h"
|
||||
|
||||
int64_t tqScanLog(STQ* pTq, const STqExecHandle* pExec, SMqDataRsp* pRsp, STqOffsetVal offset) {
|
||||
/*if ()*/
|
||||
return 0;
|
||||
}
|
||||
|
||||
int64_t tqFetchLog(STQ* pTq, STqHandle* pHandle, int64_t* fetchOffset, SWalCkHead** ppCkHead) {
|
||||
int32_t code = 0;
|
||||
taosThreadMutexLock(&pHandle->pWalReader->mutex);
|
||||
|
@ -84,8 +79,10 @@ STqReader* tqOpenReader(SVnode* pVnode) {
|
|||
return NULL;
|
||||
}
|
||||
|
||||
// TODO open
|
||||
/*pReader->pWalReader = walOpenReader(pVnode->pWal, NULL);*/
|
||||
pReader->pWalReader = walOpenReader(pVnode->pWal, NULL);
|
||||
if (pReader->pWalReader == NULL) {
|
||||
return NULL;
|
||||
}
|
||||
|
||||
pReader->pVnodeMeta = pVnode->pMeta;
|
||||
pReader->pMsg = NULL;
|
||||
|
@ -106,12 +103,19 @@ void tqCloseReader(STqReader* pReader) {
|
|||
taosMemoryFree(pReader);
|
||||
}
|
||||
|
||||
int32_t tqSeekVer(STqReader* pReader, int64_t ver) {
|
||||
//
|
||||
return walReadSeekVer(pReader->pWalReader, ver);
|
||||
}
|
||||
|
||||
int32_t tqNextBlock(STqReader* pReader, SFetchRet* ret) {
|
||||
bool fromProcessedMsg = pReader->pMsg != NULL;
|
||||
|
||||
while (1) {
|
||||
if (!fromProcessedMsg) {
|
||||
if (walNextValidMsg(pReader->pWalReader) < 0) {
|
||||
ret->offset.type = TMQ_OFFSET__LOG;
|
||||
ret->offset.version = pReader->ver;
|
||||
ret->fetchType = FETCH_TYPE__NONE;
|
||||
return -1;
|
||||
}
|
||||
|
@ -130,19 +134,25 @@ int32_t tqNextBlock(STqReader* pReader, SFetchRet* ret) {
|
|||
memset(&ret->data, 0, sizeof(SSDataBlock));
|
||||
int32_t code = tqRetrieveDataBlock(&ret->data, pReader);
|
||||
if (code != 0 || ret->data.info.rows == 0) {
|
||||
ASSERT(0);
|
||||
continue;
|
||||
#if 0
|
||||
if (fromProcessedMsg) {
|
||||
ret->fetchType = FETCH_TYPE__NONE;
|
||||
return 0;
|
||||
} else {
|
||||
break;
|
||||
}
|
||||
#endif
|
||||
}
|
||||
|
||||
ret->fetchType = FETCH_TYPE__DATA;
|
||||
return 0;
|
||||
}
|
||||
|
||||
if (fromProcessedMsg) {
|
||||
ret->offset.type = TMQ_OFFSET__LOG;
|
||||
ret->offset.version = pReader->ver;
|
||||
ASSERT(pReader->ver != -1);
|
||||
ret->fetchType = FETCH_TYPE__NONE;
|
||||
return 0;
|
||||
}
|
||||
|
|
|
@ -59,6 +59,8 @@ static void getTableCacheKey(tb_uid_t uid, int cacheType, char *key, int *len) {
|
|||
|
||||
static void deleteTableCacheLastrow(const void *key, size_t keyLen, void *value) { taosMemoryFree(value); }
|
||||
|
||||
static void deleteTableCacheLast(const void *key, size_t keyLen, void *value) { taosArrayDestroy(value); }
|
||||
|
||||
static int32_t tsdbCacheDeleteLastrow(SLRUCache *pCache, tb_uid_t uid, TSKEY eKey) {
|
||||
int32_t code = 0;
|
||||
char key[32] = {0};
|
||||
|
@ -153,7 +155,7 @@ int32_t tsdbCacheInsertLastrow(SLRUCache *pCache, STsdb *pTsdb, tb_uid_t uid, ST
|
|||
/* tsdbCacheInsertLastrow(pCache, uid, row, dup); */
|
||||
}
|
||||
}
|
||||
} else {
|
||||
} /*else {
|
||||
if (dup) {
|
||||
cacheRow = tdRowDup(row);
|
||||
} else {
|
||||
|
@ -166,7 +168,7 @@ int32_t tsdbCacheInsertLastrow(SLRUCache *pCache, STsdb *pTsdb, tb_uid_t uid, ST
|
|||
if (status != TAOS_LRU_STATUS_OK) {
|
||||
code = -1;
|
||||
}
|
||||
}
|
||||
}*/
|
||||
|
||||
return code;
|
||||
}
|
||||
|
@ -761,7 +763,6 @@ static int32_t mergeLastRow(tb_uid_t uid, STsdb *pTsdb, bool *dup, STSRow **ppRo
|
|||
for (int i = 0; i < nMax; ++i) {
|
||||
TSDBKEY maxKey = TSDBROW_KEY(max[i]);
|
||||
|
||||
// bool deleted = false;
|
||||
bool deleted = tsdbKeyDeleted(&maxKey, pSkyline, &iSkyline);
|
||||
if (!deleted) {
|
||||
// iMerge[nMerge] = i;
|
||||
|
@ -818,12 +819,22 @@ _err:
|
|||
return code;
|
||||
}
|
||||
|
||||
static int32_t mergeLast(tb_uid_t uid, STsdb *pTsdb, STSRow **ppRow) {
|
||||
int32_t code = 0;
|
||||
typedef struct {
|
||||
TSKEY ts;
|
||||
SColVal colVal;
|
||||
} SLastCol;
|
||||
|
||||
// static int32_t mergeLast(tb_uid_t uid, STsdb *pTsdb, STSRow **ppRow) {
|
||||
static int32_t mergeLast(tb_uid_t uid, STsdb *pTsdb, SArray **ppLastArray) {
|
||||
int32_t code = 0;
|
||||
SArray *pSkyline = NULL;
|
||||
STSRow *pRow = NULL;
|
||||
STSRow **ppRow = &pRow;
|
||||
|
||||
STSchema *pTSchema = metaGetTbTSchema(pTsdb->pVnode->pMeta, uid, -1);
|
||||
int16_t nCol = pTSchema->numOfCols;
|
||||
SArray *pColArray = taosArrayInit(nCol, sizeof(SColVal));
|
||||
// SArray *pColArray = taosArrayInit(nCol, sizeof(SColVal));
|
||||
SArray *pColArray = taosArrayInit(nCol, sizeof(SLastCol));
|
||||
|
||||
tb_uid_t suid = getTableSuidByUid(uid, pTsdb);
|
||||
|
||||
|
@ -837,9 +848,9 @@ static int32_t mergeLast(tb_uid_t uid, STsdb *pTsdb, STSRow **ppRow) {
|
|||
tsdbGetTbDataFromMemTable(pTsdb->imem, suid, uid, &pIMem);
|
||||
}
|
||||
|
||||
*ppRow = NULL;
|
||||
*ppLastArray = NULL;
|
||||
|
||||
SArray *pSkyline = taosArrayInit(32, sizeof(TSDBKEY));
|
||||
pSkyline = taosArrayInit(32, sizeof(TSDBKEY));
|
||||
|
||||
SDelIdx delIdx;
|
||||
|
||||
|
@ -943,7 +954,6 @@ static int32_t mergeLast(tb_uid_t uid, STsdb *pTsdb, STSRow **ppRow) {
|
|||
for (int i = 0; i < nMax; ++i) {
|
||||
TSDBKEY maxKey = TSDBROW_KEY(max[i]);
|
||||
|
||||
// bool deleted = false;
|
||||
bool deleted = tsdbKeyDeleted(&maxKey, pSkyline, &iSkyline);
|
||||
if (!deleted) {
|
||||
iMerge[nMerge] = iMax[i];
|
||||
|
@ -970,8 +980,9 @@ static int32_t mergeLast(tb_uid_t uid, STsdb *pTsdb, STSRow **ppRow) {
|
|||
tRowMergerClear(&merger);
|
||||
}
|
||||
} else {
|
||||
*ppRow = NULL;
|
||||
return code;
|
||||
/* *ppRow = NULL; */
|
||||
/* return code; */
|
||||
continue;
|
||||
}
|
||||
|
||||
if (iCol == 0) {
|
||||
|
@ -980,7 +991,8 @@ static int32_t mergeLast(tb_uid_t uid, STsdb *pTsdb, STSRow **ppRow) {
|
|||
|
||||
*pColVal = COL_VAL_VALUE(pTColumn->colId, pTColumn->type, (SValue){.ts = maxKey});
|
||||
|
||||
if (taosArrayPush(pColArray, pColVal) == NULL) {
|
||||
// if (taosArrayPush(pColArray, pColVal) == NULL) {
|
||||
if (taosArrayPush(pColArray, &(SLastCol){.ts = maxKey, .colVal = *pColVal}) == NULL) {
|
||||
code = TSDB_CODE_OUT_OF_MEMORY;
|
||||
goto _err;
|
||||
}
|
||||
|
@ -991,7 +1003,8 @@ static int32_t mergeLast(tb_uid_t uid, STsdb *pTsdb, STSRow **ppRow) {
|
|||
for (int16_t i = iCol; i < nCol; ++i) {
|
||||
// tsdbRowGetColVal(*ppRow, pTSchema, i, pColVal);
|
||||
tTSRowGetVal(*ppRow, pTSchema, i, pColVal);
|
||||
if (taosArrayPush(pColArray, pColVal) == NULL) {
|
||||
// if (taosArrayPush(pColArray, pColVal) == NULL) {
|
||||
if (taosArrayPush(pColArray, &(SLastCol){.ts = maxKey, .colVal = *pColVal}) == NULL) {
|
||||
code = TSDB_CODE_OUT_OF_MEMORY;
|
||||
goto _err;
|
||||
}
|
||||
|
@ -1012,11 +1025,11 @@ static int32_t mergeLast(tb_uid_t uid, STsdb *pTsdb, STSRow **ppRow) {
|
|||
--nilColCount;
|
||||
}
|
||||
}
|
||||
/*
|
||||
|
||||
if (*ppRow) {
|
||||
taosMemoryFreeClear(*ppRow);
|
||||
}
|
||||
*/
|
||||
|
||||
continue;
|
||||
}
|
||||
|
||||
|
@ -1024,12 +1037,16 @@ static int32_t mergeLast(tb_uid_t uid, STsdb *pTsdb, STSRow **ppRow) {
|
|||
for (int16_t i = iCol; i < nCol; ++i) {
|
||||
SColVal colVal = {0};
|
||||
tTSRowGetVal(*ppRow, pTSchema, i, &colVal);
|
||||
TSKEY rowTs = (*ppRow)->ts;
|
||||
|
||||
SColVal *tColVal = (SColVal *)taosArrayGet(pColArray, i);
|
||||
// SColVal *tColVal = (SColVal *)taosArrayGet(pColArray, i);
|
||||
SLastCol *tTsVal = (SLastCol *)taosArrayGet(pColArray, i);
|
||||
SColVal *tColVal = &tTsVal->colVal;
|
||||
|
||||
if (!colVal.isNone && !colVal.isNull) {
|
||||
if (tColVal->isNull || tColVal->isNone) {
|
||||
taosArraySet(pColArray, i, &colVal);
|
||||
// taosArraySet(pColArray, i, &colVal);
|
||||
taosArraySet(pColArray, i, &(SLastCol){.ts = rowTs, .colVal = colVal});
|
||||
--nilColCount;
|
||||
}
|
||||
} else {
|
||||
|
@ -1054,16 +1071,45 @@ static int32_t mergeLast(tb_uid_t uid, STsdb *pTsdb, STSRow **ppRow) {
|
|||
} while (nilColCount > 0);
|
||||
|
||||
// if () new ts row from pColArray if non empty
|
||||
if (taosArrayGetSize(pColArray) == nCol) {
|
||||
code = tdSTSRowNew(pColArray, pTSchema, ppRow);
|
||||
if (code) goto _err;
|
||||
/* if (taosArrayGetSize(pColArray) == nCol) { */
|
||||
/* code = tdSTSRowNew(pColArray, pTSchema, ppRow); */
|
||||
/* if (code) goto _err; */
|
||||
/* } */
|
||||
/* taosArrayDestroy(pColArray); */
|
||||
if (taosArrayGetSize(pColArray) <= 0) {
|
||||
*ppLastArray = NULL;
|
||||
taosArrayDestroy(pColArray);
|
||||
} else {
|
||||
*ppLastArray = pColArray;
|
||||
}
|
||||
if (*ppRow) {
|
||||
taosMemoryFreeClear(*ppRow);
|
||||
}
|
||||
|
||||
for (int i = 0; i < 3; ++i) {
|
||||
if (input[i].nextRowClearFn) {
|
||||
input[i].nextRowClearFn(input[i].iter);
|
||||
}
|
||||
}
|
||||
if (pSkyline) {
|
||||
taosArrayDestroy(pSkyline);
|
||||
}
|
||||
taosArrayDestroy(pColArray);
|
||||
taosMemoryFreeClear(pTSchema);
|
||||
|
||||
return code;
|
||||
_err:
|
||||
taosArrayDestroy(pColArray);
|
||||
if (*ppRow) {
|
||||
taosMemoryFreeClear(*ppRow);
|
||||
}
|
||||
for (int i = 0; i < 3; ++i) {
|
||||
if (input[i].nextRowClearFn) {
|
||||
input[i].nextRowClearFn(input[i].iter);
|
||||
}
|
||||
}
|
||||
if (pSkyline) {
|
||||
taosArrayDestroy(pSkyline);
|
||||
}
|
||||
taosMemoryFreeClear(pTSchema);
|
||||
tsdbError("vgId:%d merge last_row failed since %s", TD_VID(pTsdb->pVnode), tstrerror(code));
|
||||
return code;
|
||||
|
@ -1081,7 +1127,7 @@ int32_t tsdbCacheGetLastrowH(SLRUCache *pCache, tb_uid_t uid, STsdb *pTsdb, LRUH
|
|||
//*ppRow = (STSRow *)taosLRUCacheValue(pCache, h);
|
||||
} else {
|
||||
STSRow *pRow = NULL;
|
||||
bool dup = false;
|
||||
bool dup = false; // which is always false for now
|
||||
code = mergeLastRow(uid, pTsdb, &dup, &pRow);
|
||||
// if table's empty or error, return code of -1
|
||||
if (code < 0 || pRow == NULL) {
|
||||
|
@ -1093,7 +1139,14 @@ int32_t tsdbCacheGetLastrowH(SLRUCache *pCache, tb_uid_t uid, STsdb *pTsdb, LRUH
|
|||
return 0;
|
||||
}
|
||||
|
||||
tsdbCacheInsertLastrow(pCache, pTsdb, uid, pRow, dup);
|
||||
_taos_lru_deleter_t deleter = deleteTableCacheLastrow;
|
||||
LRUStatus status =
|
||||
taosLRUCacheInsert(pCache, key, keyLen, pRow, TD_ROW_LEN(pRow), deleter, NULL, TAOS_LRU_PRIORITY_LOW);
|
||||
if (status != TAOS_LRU_STATUS_OK) {
|
||||
code = -1;
|
||||
}
|
||||
|
||||
// tsdbCacheInsertLastrow(pCache, pTsdb, uid, pRow, dup);
|
||||
h = taosLRUCacheLookup(pCache, key, keyLen);
|
||||
//*ppRow = (STSRow *)taosLRUCacheValue(pCache, h);
|
||||
}
|
||||
|
@ -1103,6 +1156,30 @@ int32_t tsdbCacheGetLastrowH(SLRUCache *pCache, tb_uid_t uid, STsdb *pTsdb, LRUH
|
|||
return code;
|
||||
}
|
||||
|
||||
int32_t tsdbCacheLastArray2Row(SArray *pLastArray, STSRow **ppRow, STSchema *pTSchema) {
|
||||
int32_t code = 0;
|
||||
int16_t nCol = taosArrayGetSize(pLastArray);
|
||||
SArray *pColArray = taosArrayInit(nCol, sizeof(SColVal));
|
||||
|
||||
for (int16_t iCol = 0; iCol < nCol; ++iCol) {
|
||||
SLastCol *tTsVal = (SLastCol *)taosArrayGet(pLastArray, iCol);
|
||||
SColVal *tColVal = &tTsVal->colVal;
|
||||
taosArrayPush(pColArray, tColVal);
|
||||
}
|
||||
|
||||
code = tdSTSRowNew(pColArray, pTSchema, ppRow);
|
||||
if (code) goto _err;
|
||||
|
||||
taosArrayDestroy(pColArray);
|
||||
|
||||
return code;
|
||||
|
||||
_err:
|
||||
taosArrayDestroy(pColArray);
|
||||
|
||||
return code;
|
||||
}
|
||||
|
||||
int32_t tsdbCacheGetLastH(SLRUCache *pCache, tb_uid_t uid, STsdb *pTsdb, LRUHandle **handle) {
|
||||
int32_t code = 0;
|
||||
char key[32] = {0};
|
||||
|
@ -1115,21 +1192,24 @@ int32_t tsdbCacheGetLastH(SLRUCache *pCache, tb_uid_t uid, STsdb *pTsdb, LRUHand
|
|||
//*ppRow = (STSRow *)taosLRUCacheValue(pCache, h);
|
||||
|
||||
} else {
|
||||
STSRow *pRow = NULL;
|
||||
code = mergeLast(uid, pTsdb, &pRow);
|
||||
// STSRow *pRow = NULL;
|
||||
// code = mergeLast(uid, pTsdb, &pRow);
|
||||
SArray *pLastArray = NULL;
|
||||
code = mergeLast(uid, pTsdb, &pLastArray);
|
||||
// if table's empty or error, return code of -1
|
||||
if (code < 0 || pRow == NULL) {
|
||||
// if (code < 0 || pRow == NULL) {
|
||||
if (code < 0 || pLastArray == NULL) {
|
||||
*handle = NULL;
|
||||
return 0;
|
||||
}
|
||||
|
||||
_taos_lru_deleter_t deleter = deleteTableCacheLastrow;
|
||||
_taos_lru_deleter_t deleter = deleteTableCacheLast;
|
||||
LRUStatus status =
|
||||
taosLRUCacheInsert(pCache, key, keyLen, pRow, TD_ROW_LEN(pRow), deleter, NULL, TAOS_LRU_PRIORITY_LOW);
|
||||
taosLRUCacheInsert(pCache, key, keyLen, pLastArray, pLastArray->capacity, deleter, NULL, TAOS_LRU_PRIORITY_LOW);
|
||||
if (status != TAOS_LRU_STATUS_OK) {
|
||||
code = -1;
|
||||
}
|
||||
/* tsdbCacheInsertLast(pCache, uid, pRow); */
|
||||
|
||||
h = taosLRUCacheLookup(pCache, key, keyLen);
|
||||
//*ppRow = (STSRow *)taosLRUCacheValue(pCache, h);
|
||||
}
|
||||
|
@ -1162,9 +1242,23 @@ int32_t tsdbCacheDelete(SLRUCache *pCache, tb_uid_t uid, TSKEY eKey) {
|
|||
getTableCacheKey(uid, 1, key, &keyLen);
|
||||
h = taosLRUCacheLookup(pCache, key, keyLen);
|
||||
if (h) {
|
||||
// clear last cache anyway, no matter where eKey ends.
|
||||
taosLRUCacheRelease(pCache, h, true);
|
||||
SArray *pLast = (SArray *)taosLRUCacheValue(pCache, h);
|
||||
bool invalidate = false;
|
||||
int16_t nCol = taosArrayGetSize(pLast);
|
||||
|
||||
for (int16_t iCol = 0; iCol < nCol; ++iCol) {
|
||||
SLastCol *tTsVal = (SLastCol *)taosArrayGet(pLast, iCol);
|
||||
if (eKey >= tTsVal->ts) {
|
||||
invalidate = true;
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if (invalidate) {
|
||||
taosLRUCacheRelease(pCache, h, true);
|
||||
} else {
|
||||
taosLRUCacheRelease(pCache, h, false);
|
||||
}
|
||||
// void taosLRUCacheErase(SLRUCache * cache, const void *key, size_t keyLen);
|
||||
}
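With the last-value cache now holding an array of SLastCol entries ({ts, colVal}) rather than a single row, tsdbCacheDelete only has to invalidate the entry when the deleted range can reach the timestamp of some cached column value, which is what the eKey >= ts loop above checks. A small illustrative sketch of the same test (hypothetical types, not the tsdb structures):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* One cached "last" value per column: the value and the timestamp it came from. */
typedef struct {
  int64_t ts;
  double  val;
} LastCol;

/* A delete that ends at eKey can only affect cached values taken at ts <= eKey. */
static bool must_invalidate(const LastCol *cols, int nCols, int64_t eKey) {
  for (int i = 0; i < nCols; ++i) {
    if (eKey >= cols[i].ts) return true;
  }
  return false;
}

int main(void) {
  LastCol cached[3] = {{.ts = 1700000000, .val = 21.5},
                       {.ts = 1700000300, .val = 22.0},
                       {.ts = 1700000600, .val = 22.4}};
  printf("delete up to 1699999999 -> invalidate=%d\n", must_invalidate(cached, 3, 1699999999));
  printf("delete up to 1700000300 -> invalidate=%d\n", must_invalidate(cached, 3, 1700000300));
  return 0;
}
```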
|
||||
|
||||
|
|
|
@ -128,6 +128,8 @@ int32_t tsdbRetrieveLastRow(void* pReader, SSDataBlock* pResBlock, const int32_t
|
|||
}
|
||||
|
||||
pRow = (STSRow*)taosLRUCacheValue(lruCache, h);
|
||||
// SArray* pLast = (SArray*)taosLRUCacheValue(lruCache, h);
|
||||
// tsdbCacheLastArray2Row(pLast, &pRow, pr->pSchema);
|
||||
if (pRow->ts > lastKey) {
|
||||
// Set result row into the same rowIndex repeatly, so we need to check if the internal result row has already
|
||||
// appended or not.
|
||||
|
@ -140,6 +142,7 @@ int32_t tsdbRetrieveLastRow(void* pReader, SSDataBlock* pResBlock, const int32_t
|
|||
lastKey = pRow->ts;
|
||||
}
|
||||
|
||||
// taosMemoryFree(pRow);
|
||||
tsdbCacheRelease(lruCache, h);
|
||||
}
|
||||
} else if (pr->type == LASTROW_RETRIEVE_TYPE_ALL) {
|
||||
|
@ -158,8 +161,12 @@ int32_t tsdbRetrieveLastRow(void* pReader, SSDataBlock* pResBlock, const int32_t
|
|||
}
|
||||
|
||||
pRow = (STSRow*)taosLRUCacheValue(lruCache, h);
|
||||
// SArray* pLast = (SArray*)taosLRUCacheValue(lruCache, h);
|
||||
// tsdbCacheLastArray2Row(pLast, &pRow, pr->pSchema);
|
||||
|
||||
saveOneRow(pRow, pResBlock, pr, slotIds);
|
||||
|
||||
// taosMemoryFree(pRow);
|
||||
tsdbCacheRelease(lruCache, h);
|
||||
|
||||
pr->tableIndex += 1;
|
||||
|
|
|
@ -602,7 +602,7 @@ static int32_t tsdbGetOvlpNRow(STbDataIter *pIter, SBlock *pBlock) {
|
|||
|
||||
iter.pRow = NULL;
|
||||
while (true) {
|
||||
pRow = tsdbTbDataIterGet(pIter);
|
||||
pRow = tsdbTbDataIterGet(&iter);
|
||||
|
||||
if (pRow == NULL) break;
|
||||
key = TSDBROW_KEY(pRow);
|
||||
|
@ -610,7 +610,7 @@ static int32_t tsdbGetOvlpNRow(STbDataIter *pIter, SBlock *pBlock) {
|
|||
c = tBlockCmprFn(&(SBlock){.maxKey = key, .minKey = key}, pBlock);
|
||||
if (c == 0) {
|
||||
nRow++;
|
||||
tsdbTbDataIterNext(pIter);
|
||||
tsdbTbDataIterNext(&iter);
|
||||
} else if (c > 0) {
|
||||
break;
|
||||
} else {
|
||||
|
@ -635,7 +635,7 @@ static int32_t tsdbMergeAsSubBlock(SCommitter *pCommitter, STbDataIter *pIter, S
|
|||
code = tsdbCommitterUpdateRowSchema(pCommitter, pBlockIdx->suid, pBlockIdx->uid, TSDBROW_SVERSION(pRow));
|
||||
if (code) goto _err;
|
||||
while (true) {
|
||||
if (pRow) break;
|
||||
if (pRow == NULL) break;
|
||||
code = tBlockDataAppendRow(pBlockData, pRow, pCommitter->skmRow.pTSchema);
|
||||
if (code) goto _err;
|
||||
|
||||
|
|
|
@ -1677,6 +1677,7 @@ static int32_t doMergeThreeLevelRows(STsdbReader* pReader, STableBlockScanInfo*
|
|||
}
|
||||
|
||||
ASSERT(0);
|
||||
return -1;
|
||||
}
|
||||
|
||||
static bool isValidFileBlockRow(SBlockData* pBlockData, SFileBlockDumpInfo* pDumpInfo,
|
||||
|
@ -3104,3 +3105,39 @@ int64_t tsdbGetNumOfRowsInMemTable(STsdbReader* pReader) {
|
|||
|
||||
return rows;
|
||||
}
|
||||
|
||||
int32_t tsdbGetTableSchema(SVnode* pVnode, int64_t uid, STSchema** pSchema, int64_t *suid) {
|
||||
int32_t sversion = 1;
|
||||
|
||||
SMetaReader mr = {0};
|
||||
metaReaderInit(&mr, pVnode->pMeta, 0);
|
||||
int32_t code = metaGetTableEntryByUid(&mr, uid);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
terrno = TSDB_CODE_TDB_INVALID_TABLE_ID;
|
||||
metaReaderClear(&mr);
|
||||
return terrno;
|
||||
}
|
||||
|
||||
*suid = 0;
|
||||
|
||||
if (mr.me.type == TSDB_CHILD_TABLE) {
|
||||
*suid = mr.me.ctbEntry.suid;
|
||||
code = metaGetTableEntryByUid(&mr, *suid);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
terrno = TSDB_CODE_TDB_INVALID_TABLE_ID;
|
||||
metaReaderClear(&mr);
|
||||
return terrno;
|
||||
}
|
||||
sversion = mr.me.stbEntry.schemaRow.version;
|
||||
} else {
|
||||
ASSERT(mr.me.type == TSDB_NORMAL_TABLE);
|
||||
sversion = mr.me.ntbEntry.schemaRow.version;
|
||||
}
|
||||
|
||||
metaReaderClear(&mr);
|
||||
*pSchema = metaGetTbTSchema(pVnode->pMeta, uid, sversion);
|
||||
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
|
||||
|
|
|
@ -1,16 +0,0 @@
/*
 * Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
 *
 * This program is free software: you can use, redistribute, and/or modify
 * it under the terms of the GNU Affero General Public License, version 3
 * or later ("AGPL"), as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.
 *
 * You should have received a copy of the GNU Affero General Public License
 * along with this program. If not, see <http://www.gnu.org/licenses/>.
 */

#include "tsdb.h"
@ -424,10 +424,12 @@ int32_t tsdbReadDelIdx(SDelFReader *pReader, SArray *aDelIdx, uint8_t **ppBuf) {
|
|||
|
||||
ASSERT(n == size - sizeof(TSCKSUM));
|
||||
|
||||
tFree(pBuf);
|
||||
return code;
|
||||
|
||||
_err:
|
||||
tsdbError("vgId:%d read del idx failed since %s", TD_VID(pReader->pTsdb->pVnode), tstrerror(code));
|
||||
tFree(pBuf);
|
||||
return code;
|
||||
}
|
||||
|
||||
|
@ -969,6 +971,8 @@ int32_t tsdbReadColData(SDataFReader *pReader, SBlockIdx *pBlockIdx, SBlock *pBl
|
|||
SBlockData *pBlockData1 = &(SBlockData){0};
|
||||
SBlockData *pBlockData2 = &(SBlockData){0};
|
||||
|
||||
tBlockDataInit(pBlockData1);
|
||||
tBlockDataInit(pBlockData2);
|
||||
for (int32_t iSubBlock = 1; iSubBlock < pBlock->nSubBlock; iSubBlock++) {
|
||||
code = tsdbReadSubColData(pReader, pBlockIdx, pBlock, iSubBlock, aColId, nCol, pBlockData1, ppBuf1, ppBuf2);
|
||||
if (code) goto _err;
|
||||
|
@ -1106,6 +1110,8 @@ int32_t tsdbReadBlockData(SDataFReader *pReader, SBlockIdx *pBlockIdx, SBlock *p
|
|||
SBlockData *pBlockData1 = &(SBlockData){0};
|
||||
SBlockData *pBlockData2 = &(SBlockData){0};
|
||||
|
||||
tBlockDataInit(pBlockData1);
|
||||
tBlockDataInit(pBlockData2);
|
||||
for (iSubBlock = 1; iSubBlock < pBlock->nSubBlock; iSubBlock++) {
|
||||
code = tsdbReadSubBlockData(pReader, pBlockIdx, pBlock, iSubBlock, pBlockData1, ppBuf1, ppBuf2);
|
||||
if (code) {
@ -821,16 +821,20 @@ int32_t tColDataCopy(SColData *pColDataSrc, SColData *pColDataDest) {
|
|||
int32_t code = 0;
|
||||
int32_t size;
|
||||
|
||||
ASSERT(pColDataSrc->nVal > 0);
|
||||
|
||||
pColDataDest->cid = pColDataSrc->cid;
|
||||
pColDataDest->type = pColDataSrc->type;
|
||||
pColDataDest->smaOn = pColDataSrc->smaOn;
|
||||
pColDataDest->nVal = pColDataSrc->nVal;
|
||||
pColDataDest->flag = pColDataSrc->flag;
|
||||
|
||||
size = BIT2_SIZE(pColDataSrc->nVal);
|
||||
code = tRealloc(&pColDataDest->pBitMap, size);
|
||||
if (code) goto _exit;
|
||||
memcpy(pColDataDest->pBitMap, pColDataSrc->pBitMap, size);
|
||||
if (pColDataSrc->flag != HAS_NONE && pColDataSrc->flag != HAS_NULL && pColDataSrc->flag != HAS_VALUE) {
|
||||
size = BIT2_SIZE(pColDataSrc->nVal);
|
||||
code = tRealloc(&pColDataDest->pBitMap, size);
|
||||
if (code) goto _exit;
|
||||
memcpy(pColDataDest->pBitMap, pColDataSrc->pBitMap, size);
|
||||
}
|
||||
|
||||
if (IS_VAR_DATA_TYPE(pColDataDest->type)) {
|
||||
size = sizeof(int32_t) * pColDataSrc->nVal;
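tColDataCopy now copies the 2-bit value bitmap only when the column flag mixes NONE/NULL/VALUE states; a pure flag needs no bitmap. A small sketch of the sizing step, under the assumption that BIT2_SIZE returns the number of bytes needed for nVal two-bit entries (four entries per byte, rounded up):

// sketch only, mirrors the conditional branch above
int32_t nVal  = pColDataSrc->nVal;
int32_t bytes = BIT2_SIZE(nVal);                 // bitmap bytes for nVal two-bit entries (assumed semantics)
code = tRealloc(&pColDataDest->pBitMap, bytes);  // grow the destination bitmap before the memcpy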
@ -1012,7 +1016,7 @@ int32_t tBlockDataAppendRow(SBlockData *pBlockData, TSDBROW *pRow, STSchema *pTS
|
|||
SColData *pColData;
|
||||
SColVal *pColVal;
|
||||
|
||||
ASSERT(nColData > 0);
|
||||
if (nColData == 0) goto _exit;
|
||||
|
||||
tRowIterInit(pIter, pRow, pTSchema);
|
||||
pColData = tBlockDataGetColDataByIdx(pBlockData, iColData);
|
||||
|
@ -1042,6 +1046,7 @@ int32_t tBlockDataAppendRow(SBlockData *pBlockData, TSDBROW *pRow, STSchema *pTS
|
|||
}
|
||||
}
|
||||
|
||||
_exit:
|
||||
pBlockData->nRow++;
|
||||
return code;
@ -1,14 +0,0 @@
/*
 * Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
 *
 * This program is free software: you can use, redistribute, and/or modify
 * it under the terms of the GNU Affero General Public License, version 3
 * or later ("AGPL"), as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.
 *
 * You should have received a copy of the GNU Affero General Public License
 * along with this program. If not, see <http://www.gnu.org/licenses/>.
 */
@ -296,7 +296,7 @@ int32_t vnodeProcessFetchMsg(SVnode *pVnode, SRpcMsg *pMsg, SQueueInfo *pInfo) {
|
|||
case TDMT_SCH_MERGE_FETCH:
|
||||
return qWorkerProcessFetchMsg(pVnode, pVnode->pQuery, pMsg, 0);
|
||||
case TDMT_SCH_FETCH_RSP:
|
||||
return qWorkerProcessFetchRsp(pVnode, pVnode->pQuery, pMsg, 0);
|
||||
return qWorkerProcessRspMsg(pVnode, pVnode->pQuery, pMsg, 0);
|
||||
case TDMT_SCH_CANCEL_TASK:
|
||||
return qWorkerProcessCancelMsg(pVnode, pVnode->pQuery, pMsg, 0);
|
||||
case TDMT_SCH_DROP_TASK:
|
||||
|
|
|
@ -338,12 +338,12 @@ int32_t vnodeProcessSyncMsg(SVnode *pVnode, SRpcMsg *pMsg, SRpcMsg **pRsp) {
|
|||
} else if (pMsg->msgType == TDMT_SYNC_REQUEST_VOTE) {
|
||||
SyncRequestVote *pSyncMsg = syncRequestVoteFromRpcMsg2(pMsg);
|
||||
ASSERT(pSyncMsg != NULL);
|
||||
code = syncNodeOnRequestVoteCb(pSyncNode, pSyncMsg);
|
||||
code = syncNodeOnRequestVoteSnapshotCb(pSyncNode, pSyncMsg);
|
||||
syncRequestVoteDestroy(pSyncMsg);
|
||||
} else if (pMsg->msgType == TDMT_SYNC_REQUEST_VOTE_REPLY) {
|
||||
SyncRequestVoteReply *pSyncMsg = syncRequestVoteReplyFromRpcMsg2(pMsg);
|
||||
ASSERT(pSyncMsg != NULL);
|
||||
code = syncNodeOnRequestVoteReplyCb(pSyncNode, pSyncMsg);
|
||||
code = syncNodeOnRequestVoteReplySnapshotCb(pSyncNode, pSyncMsg);
|
||||
syncRequestVoteReplyDestroy(pSyncMsg);
|
||||
} else if (pMsg->msgType == TDMT_SYNC_APPEND_ENTRIES_BATCH) {
|
||||
SyncAppendEntriesBatch *pSyncMsg = syncAppendEntriesBatchFromRpcMsg2(pMsg);
|
||||
|
@ -355,6 +355,14 @@ int32_t vnodeProcessSyncMsg(SVnode *pVnode, SRpcMsg *pMsg, SRpcMsg **pRsp) {
|
|||
ASSERT(pSyncMsg != NULL);
|
||||
code = syncNodeOnAppendEntriesReplySnapshot2Cb(pSyncNode, pSyncMsg);
|
||||
syncAppendEntriesReplyDestroy(pSyncMsg);
|
||||
} else if (pMsg->msgType == TDMT_SYNC_SNAPSHOT_SEND) {
|
||||
SyncSnapshotSend *pSyncMsg = syncSnapshotSendFromRpcMsg2(pMsg);
|
||||
code = syncNodeOnSnapshotSendCb(pSyncNode, pSyncMsg);
|
||||
syncSnapshotSendDestroy(pSyncMsg);
|
||||
} else if (pMsg->msgType == TDMT_SYNC_SNAPSHOT_RSP) {
|
||||
SyncSnapshotRsp *pSyncMsg = syncSnapshotRspFromRpcMsg2(pMsg);
|
||||
code = syncNodeOnSnapshotRspCb(pSyncNode, pSyncMsg);
|
||||
syncSnapshotRspDestroy(pSyncMsg);
|
||||
} else if (pMsg->msgType == TDMT_SYNC_SET_VNODE_STANDBY) {
|
||||
code = vnodeSetStandBy(pVnode);
|
||||
if (code != 0 && terrno != 0) code = terrno;
@ -34,7 +34,7 @@ typedef struct SDataSinkManager {
|
|||
|
||||
typedef int32_t (*FPutDataBlock)(struct SDataSinkHandle* pHandle, const SInputData* pInput, bool* pContinue);
|
||||
typedef void (*FEndPut)(struct SDataSinkHandle* pHandle, uint64_t useconds);
|
||||
typedef void (*FGetDataLength)(struct SDataSinkHandle* pHandle, int32_t* pLen, bool* pQueryEnd);
|
||||
typedef void (*FGetDataLength)(struct SDataSinkHandle* pHandle, int64_t* pLen, bool* pQueryEnd);
|
||||
typedef int32_t (*FGetDataBlock)(struct SDataSinkHandle* pHandle, SOutputData* pOutput);
|
||||
typedef int32_t (*FDestroyDataSinker)(struct SDataSinkHandle* pHandle);
|
||||
typedef int32_t (*FGetCacheSize)(struct SDataSinkHandle* pHandle, uint64_t* size);
|
||||
|
@ -50,6 +50,7 @@ typedef struct SDataSinkHandle {
|
|||
|
||||
int32_t createDataDispatcher(SDataSinkManager* pManager, const SDataSinkNode* pDataSink, DataSinkHandle* pHandle);
|
||||
int32_t createDataDeleter(SDataSinkManager* pManager, const SDataSinkNode* pDataSink, DataSinkHandle* pHandle, void *pParam);
|
||||
int32_t createDataInserter(SDataSinkManager* pManager, const SDataSinkNode* pDataSink, DataSinkHandle* pHandle, void *pParam);
|
||||
|
||||
#ifdef __cplusplus
|
||||
}
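FGetDataLength now reports the buffered length as int64_t, so every sink implementation has to widen its callback accordingly. A minimal sketch of a conforming callback (illustrative only, not one of the sinks in this patch):

static void getDataLength(struct SDataSinkHandle* pHandle, int64_t* pLen, bool* pQueryEnd) {
  *pLen = 0;           // bytes (or, for the inserter below, affected rows) ready to be fetched
  *pQueryEnd = false;  // true once endPut() has been called and the buffered data is drained
}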
@ -51,13 +51,12 @@ typedef int32_t (*__block_search_fn_t)(char* data, int32_t num, int64_t key, int
|
|||
|
||||
#define NEEDTO_COMPRESS_QUERY(size) ((size) > tsCompressColData ? 1 : 0)
|
||||
|
||||
#define START_TS_COLUMN_INDEX 0
|
||||
#define END_TS_COLUMN_INDEX 1
|
||||
#define UID_COLUMN_INDEX 2
|
||||
#define GROUPID_COLUMN_INDEX UID_COLUMN_INDEX
|
||||
#define START_TS_COLUMN_INDEX 0
|
||||
#define END_TS_COLUMN_INDEX 1
|
||||
#define UID_COLUMN_INDEX 2
|
||||
#define GROUPID_COLUMN_INDEX UID_COLUMN_INDEX
|
||||
#define DELETE_GROUPID_COLUMN_INDEX 2
|
||||
|
||||
|
||||
enum {
|
||||
// when this task starts to execute, this status will set
|
||||
TASK_NOT_COMPLETED = 0x1u,
|
||||
|
@ -81,8 +80,8 @@ typedef struct SResultInfo { // TODO refactor
|
|||
} SResultInfo;
|
||||
|
||||
typedef struct STableQueryInfo {
|
||||
TSKEY lastKey; // last check ts, todo remove it later
|
||||
SResultRowPosition pos; // current active time window
|
||||
TSKEY lastKey; // last check ts, todo remove it later
|
||||
SResultRowPosition pos; // current active time window
|
||||
} STableQueryInfo;
|
||||
|
||||
typedef struct SLimit {
|
||||
|
@ -105,7 +104,7 @@ typedef struct STaskCostInfo {
|
|||
uint64_t loadDataTime;
|
||||
|
||||
SFileBlockLoadRecorder* pRecoder;
|
||||
uint64_t elapsedTime;
|
||||
uint64_t elapsedTime;
|
||||
|
||||
uint64_t firstStageMergeTime;
|
||||
uint64_t winInfoSize;
|
||||
|
@ -118,8 +117,8 @@ typedef struct STaskCostInfo {
|
|||
} STaskCostInfo;
|
||||
|
||||
typedef struct SOperatorCostInfo {
|
||||
double openCost;
|
||||
double totalCost;
|
||||
double openCost;
|
||||
double totalCost;
|
||||
} SOperatorCostInfo;
|
||||
|
||||
struct SOperatorInfo;
|
||||
|
@ -139,24 +138,35 @@ typedef struct STaskIdInfo {
|
|||
char* str;
|
||||
} STaskIdInfo;
|
||||
|
||||
typedef struct {
|
||||
STqOffsetVal prepareStatus; // for tmq
|
||||
STqOffsetVal lastStatus; // for tmq
|
||||
void* metaBlk; // for tmq fetching meta
|
||||
SSDataBlock* pullOverBlk; // for streaming
|
||||
SWalFilterCond cond;
|
||||
} SStreamTaskInfo;
|
||||
|
||||
typedef struct SExecTaskInfo {
|
||||
STaskIdInfo id;
|
||||
uint32_t status;
|
||||
STimeWindow window;
|
||||
STaskCostInfo cost;
|
||||
int64_t owner; // if it is in execution
|
||||
int32_t code;
|
||||
STaskIdInfo id;
|
||||
uint32_t status;
|
||||
STimeWindow window;
|
||||
STaskCostInfo cost;
|
||||
int64_t owner; // if it is in execution
|
||||
int32_t code;
|
||||
|
||||
SStreamTaskInfo streamInfo;
|
||||
|
||||
struct {
|
||||
char *tablename;
|
||||
char *dbname;
|
||||
int32_t tversion;
|
||||
SSchemaWrapper*sw;
|
||||
char* tablename;
|
||||
char* dbname;
|
||||
int32_t tversion;
|
||||
SSchemaWrapper* sw;
|
||||
} schemaVer;
|
||||
|
||||
STableListInfo tableqinfoList; // this is a table list
|
||||
const char* sql; // query sql string
|
||||
jmp_buf env; // jump to this position when error happens.
|
||||
EOPTR_EXEC_MODEL execModel; // operator execution model [batch model|stream model]
|
||||
STableListInfo tableqinfoList; // this is a table list
|
||||
const char* sql; // query sql string
|
||||
jmp_buf env; // jump to this position when error happens.
|
||||
EOPTR_EXEC_MODEL execModel; // operator execution model [batch model|stream model]
|
||||
struct SOperatorInfo* pRoot;
|
||||
} SExecTaskInfo;
|
||||
|
||||
|
@ -168,36 +178,36 @@ enum {
|
|||
};
|
||||
|
||||
typedef struct SOperatorFpSet {
|
||||
__optr_open_fn_t _openFn; // DO NOT invoke this function directly
|
||||
__optr_fn_t getNextFn;
|
||||
__optr_fn_t getStreamResFn; // execute the aggregate in the stream model, todo remove it
|
||||
__optr_fn_t cleanupFn; // call this function to release the allocated resources ASAP
|
||||
__optr_close_fn_t closeFn;
|
||||
__optr_encode_fn_t encodeResultRow;
|
||||
__optr_decode_fn_t decodeResultRow;
|
||||
__optr_explain_fn_t getExplainFn;
|
||||
__optr_open_fn_t _openFn; // DO NOT invoke this function directly
|
||||
__optr_fn_t getNextFn;
|
||||
__optr_fn_t getStreamResFn; // execute the aggregate in the stream model, todo remove it
|
||||
__optr_fn_t cleanupFn; // call this function to release the allocated resources ASAP
|
||||
__optr_close_fn_t closeFn;
|
||||
__optr_encode_fn_t encodeResultRow;
|
||||
__optr_decode_fn_t decodeResultRow;
|
||||
__optr_explain_fn_t getExplainFn;
|
||||
} SOperatorFpSet;
|
||||
|
||||
typedef struct SExprSupp {
|
||||
SExprInfo* pExprInfo;
|
||||
int32_t numOfExprs; // the number of scalar expression in group operator
|
||||
int32_t numOfExprs; // the number of scalar expression in group operator
|
||||
SqlFunctionCtx* pCtx;
|
||||
int32_t* rowEntryInfoOffset; // offset value for each row result cell info
|
||||
} SExprSupp;
|
||||
|
||||
typedef struct SOperatorInfo {
|
||||
uint8_t operatorType;
|
||||
bool blocking; // block operator or not
|
||||
uint8_t status; // denote if current operator is completed
|
||||
char* name; // name, for debug purpose
|
||||
void* info; // extension attribution
|
||||
SExprSupp exprSupp;
|
||||
SExecTaskInfo* pTaskInfo;
|
||||
SOperatorCostInfo cost;
|
||||
SResultInfo resultInfo;
|
||||
struct SOperatorInfo** pDownstream; // downstream pointer list
int32_t numOfDownstream; // number of downstream. The value is always ONE except for join operator
|
||||
SOperatorFpSet fpSet;
|
||||
uint8_t operatorType;
|
||||
bool blocking; // block operator or not
|
||||
uint8_t status; // denote if current operator is completed
|
||||
char* name; // name, for debug purpose
|
||||
void* info; // extension attribution
|
||||
SExprSupp exprSupp;
|
||||
SExecTaskInfo* pTaskInfo;
|
||||
SOperatorCostInfo cost;
|
||||
SResultInfo resultInfo;
|
||||
struct SOperatorInfo** pDownstream; // downstream pointer list
int32_t numOfDownstream; // number of downstream. The value is always ONE except for join operator
|
||||
SOperatorFpSet fpSet;
|
||||
} SOperatorInfo;
|
||||
|
||||
typedef enum {
|
||||
|
@ -210,12 +220,12 @@ typedef enum {
|
|||
#define COL_MATCH_FROM_SLOT_ID 0x2
|
||||
|
||||
typedef struct SSourceDataInfo {
|
||||
int32_t index;
|
||||
SRetrieveTableRsp* pRsp;
|
||||
uint64_t totalRows;
|
||||
int32_t code;
|
||||
EX_SOURCE_STATUS status;
|
||||
const char* taskId;
|
||||
int32_t index;
|
||||
SRetrieveTableRsp* pRsp;
|
||||
uint64_t totalRows;
|
||||
int32_t code;
|
||||
EX_SOURCE_STATUS status;
|
||||
const char* taskId;
|
||||
} SSourceDataInfo;
|
||||
|
||||
typedef struct SLoadRemoteDataInfo {
|
||||
|
@ -325,10 +335,10 @@ typedef enum EStreamScanMode {
|
|||
} EStreamScanMode;
|
||||
|
||||
typedef struct SCatchSupporter {
|
||||
SHashObj* pWindowHashTable; // quick locate the window object for each window
|
||||
SDiskbasedBuf* pDataBuf; // buffer based on blocked-wised disk file
|
||||
int32_t keySize;
|
||||
int64_t* pKeyBuf;
|
||||
SHashObj* pWindowHashTable; // quick locate the window object for each window
|
||||
SDiskbasedBuf* pDataBuf; // buffer based on blocked-wised disk file
|
||||
int32_t keySize;
|
||||
int64_t* pKeyBuf;
|
||||
} SCatchSupporter;
|
||||
|
||||
typedef struct SStreamAggSupporter {
|
||||
|
@ -344,48 +354,48 @@ typedef struct SStreamAggSupporter {
|
|||
|
||||
typedef struct SessionWindowSupporter {
|
||||
SStreamAggSupporter* pStreamAggSup;
|
||||
int64_t gap;
|
||||
uint8_t parentType;
|
||||
int64_t gap;
|
||||
uint8_t parentType;
|
||||
} SessionWindowSupporter;
|
||||
|
||||
typedef struct SStreamScanInfo {
|
||||
uint64_t tableUid; // queried super table uid
|
||||
SExprInfo* pPseudoExpr;
|
||||
int32_t numOfPseudoExpr;
|
||||
int32_t primaryTsIndex; // primary time stamp slot id
|
||||
SReadHandle readHandle;
|
||||
SInterval interval; // if the upstream is an interval operator, the interval info is also kept here.
|
||||
SArray* pColMatchInfo; //
|
||||
SNode* pCondition;
|
||||
uint64_t tableUid; // queried super table uid
|
||||
SExprInfo* pPseudoExpr;
|
||||
int32_t numOfPseudoExpr;
|
||||
int32_t primaryTsIndex; // primary time stamp slot id
|
||||
SReadHandle readHandle;
|
||||
SInterval interval; // if the upstream is an interval operator, the interval info is also kept here.
|
||||
SArray* pColMatchInfo; //
|
||||
SNode* pCondition;
|
||||
|
||||
SArray* pBlockLists; // multiple SSDatablock.
|
||||
SSDataBlock* pRes; // result SSDataBlock
|
||||
SSDataBlock* pUpdateRes; // update SSDataBlock
|
||||
int32_t updateResIndex;
|
||||
int32_t blockType; // current block type
|
||||
int32_t validBlockIndex; // has the current data been returned?
|
||||
uint64_t numOfExec; // execution times
|
||||
STqReader* tqReader;
|
||||
SArray* pBlockLists; // multiple SSDatablock.
|
||||
SSDataBlock* pRes; // result SSDataBlock
|
||||
SSDataBlock* pUpdateRes; // update SSDataBlock
|
||||
int32_t updateResIndex;
|
||||
int32_t blockType; // current block type
|
||||
int32_t validBlockIndex; // has the current data been returned?
|
||||
uint64_t numOfExec; // execution times
|
||||
STqReader* tqReader;
|
||||
|
||||
int32_t tsArrayIndex;
|
||||
SArray* tsArray;
|
||||
uint64_t groupId;
|
||||
SUpdateInfo* pUpdateInfo;
|
||||
int32_t tsArrayIndex;
|
||||
SArray* tsArray;
|
||||
uint64_t groupId;
|
||||
SUpdateInfo* pUpdateInfo;
|
||||
|
||||
EStreamScanMode scanMode;
|
||||
SOperatorInfo* pStreamScanOp;
|
||||
SOperatorInfo* pTableScanOp;
|
||||
SArray* childIds;
|
||||
EStreamScanMode scanMode;
|
||||
SOperatorInfo* pStreamScanOp;
|
||||
SOperatorInfo* pTableScanOp;
|
||||
SArray* childIds;
|
||||
SessionWindowSupporter sessionSup;
|
||||
bool assignBlockUid; // assign block uid to groupId, temporarily used for generating rollup SMA.
|
||||
int32_t scanWinIndex; // for state operator
|
||||
int32_t pullDataResIndex;
|
||||
SSDataBlock* pPullDataRes; // pull data SSDataBlock
|
||||
SSDataBlock* pDeleteDataRes; // delete data SSDataBlock
|
||||
int32_t deleteDataIndex;
|
||||
bool assignBlockUid; // assign block uid to groupId, temporarily used for generating rollup SMA.
|
||||
int32_t scanWinIndex; // for state operator
|
||||
int32_t pullDataResIndex;
|
||||
SSDataBlock* pPullDataRes; // pull data SSDataBlock
|
||||
SSDataBlock* pDeleteDataRes; // delete data SSDataBlock
|
||||
int32_t deleteDataIndex;
|
||||
|
||||
// status for tmq
|
||||
//SSchemaWrapper schema;
|
||||
// SSchemaWrapper schema;
|
||||
STqOffset offset;
|
||||
|
||||
} SStreamScanInfo;
|
||||
|
@ -595,7 +605,7 @@ typedef struct SSessionAggOperatorInfo {
|
|||
int64_t gap; // session window gap
|
||||
int32_t tsSlotId; // primary timestamp slot id
|
||||
STimeWindowAggSupp twAggSup;
|
||||
SNode *pCondition;
|
||||
const SNode* pCondition;
|
||||
} SSessionAggOperatorInfo;
|
||||
|
||||
typedef struct SResultWindowInfo {
|
||||
|
@ -657,7 +667,7 @@ typedef struct SStateWindowOperatorInfo {
|
|||
int32_t tsSlotId; // primary timestamp column slot id
|
||||
STimeWindowAggSupp twAggSup;
|
||||
// bool reptScan;
|
||||
const SNode *pCondition;
|
||||
const SNode* pCondition;
|
||||
} SStateWindowOperatorInfo;
|
||||
|
||||
typedef struct SStreamStateAggOperatorInfo {
|
||||
|
@ -806,7 +816,7 @@ SOperatorInfo* createMergeIntervalOperatorInfo(SOperatorInfo* downstream, SExprI
|
|||
|
||||
SOperatorInfo* createMergeAlignedIntervalOperatorInfo(SOperatorInfo* downstream, SExprInfo* pExprInfo, int32_t numOfCols,
|
||||
SSDataBlock* pResBlock, SInterval* pInterval, int32_t primaryTsSlotId,
|
||||
SExecTaskInfo* pTaskInfo);
|
||||
SNode* pCondition, SExecTaskInfo* pTaskInfo);
|
||||
|
||||
SOperatorInfo* createStreamFinalIntervalOperatorInfo(SOperatorInfo* downstream,
|
||||
SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo, int32_t numOfChild);
|
||||
|
@ -885,7 +895,7 @@ int32_t decodeOperator(SOperatorInfo* ops, const char* data, int32_t length);
|
|||
void setTaskStatus(SExecTaskInfo* pTaskInfo, int8_t status);
|
||||
int32_t createExecTaskInfoImpl(SSubplan* pPlan, SExecTaskInfo** pTaskInfo, SReadHandle* pHandle, uint64_t taskId,
|
||||
const char* sql, EOPTR_EXEC_MODEL model);
|
||||
int32_t createDataSinkParam(SDataSinkNode *pNode, void **pParam, qTaskInfo_t* pTaskInfo);
|
||||
int32_t createDataSinkParam(SDataSinkNode *pNode, void **pParam, qTaskInfo_t* pTaskInfo, SReadHandle* readHandle);
|
||||
int32_t getOperatorExplainExecInfo(SOperatorInfo* operatorInfo, SExplainExecInfo** pRes, int32_t* capacity,
|
||||
int32_t* resNum);
@ -154,7 +154,7 @@ static void endPut(struct SDataSinkHandle* pHandle, uint64_t useconds) {
|
|||
taosThreadMutexUnlock(&pDeleter->mutex);
|
||||
}
|
||||
|
||||
static void getDataLength(SDataSinkHandle* pHandle, int32_t* pLen, bool* pQueryEnd) {
|
||||
static void getDataLength(SDataSinkHandle* pHandle, int64_t* pLen, bool* pQueryEnd) {
|
||||
SDataDeleterHandle* pDeleter = (SDataDeleterHandle*)pHandle;
|
||||
if (taosQueueEmpty(pDeleter->pDataBlocks)) {
|
||||
*pQueryEnd = pDeleter->queryEnd;
|
||||
|
@ -168,7 +168,7 @@ static void getDataLength(SDataSinkHandle* pHandle, int32_t* pLen, bool* pQueryE
|
|||
taosFreeQitem(pBuf);
|
||||
*pLen = ((SDataCacheEntry*)(pDeleter->nextOutput.pData))->dataLen;
|
||||
*pQueryEnd = pDeleter->queryEnd;
|
||||
qDebug("got data len %d, row num %d in sink", *pLen, ((SDataCacheEntry*)(pDeleter->nextOutput.pData))->numOfRows);
|
||||
qDebug("got data len %" PRId64 ", row num %d in sink", *pLen, ((SDataCacheEntry*)(pDeleter->nextOutput.pData))->numOfRows);
|
||||
}
|
||||
|
||||
static int32_t getDataBlock(SDataSinkHandle* pHandle, SOutputData* pOutput) {
|
||||
|
|
|
@ -156,7 +156,7 @@ static void endPut(struct SDataSinkHandle* pHandle, uint64_t useconds) {
|
|||
taosThreadMutexUnlock(&pDispatcher->mutex);
|
||||
}
|
||||
|
||||
static void getDataLength(SDataSinkHandle* pHandle, int32_t* pLen, bool* pQueryEnd) {
|
||||
static void getDataLength(SDataSinkHandle* pHandle, int64_t* pLen, bool* pQueryEnd) {
|
||||
SDataDispatchHandle* pDispatcher = (SDataDispatchHandle*)pHandle;
|
||||
if (taosQueueEmpty(pDispatcher->pDataBlocks)) {
|
||||
*pQueryEnd = pDispatcher->queryEnd;
|
||||
|
@ -170,7 +170,7 @@ static void getDataLength(SDataSinkHandle* pHandle, int32_t* pLen, bool* pQueryE
|
|||
taosFreeQitem(pBuf);
|
||||
*pLen = ((SDataCacheEntry*)(pDispatcher->nextOutput.pData))->dataLen;
|
||||
*pQueryEnd = pDispatcher->queryEnd;
|
||||
qDebug("got data len %d, row num %d in sink", *pLen, ((SDataCacheEntry*)(pDispatcher->nextOutput.pData))->numOfRows);
|
||||
qDebug("got data len %" PRId64 ", row num %d in sink", *pLen, ((SDataCacheEntry*)(pDispatcher->nextOutput.pData))->numOfRows);
|
||||
}
|
||||
|
||||
static int32_t getDataBlock(SDataSinkHandle* pHandle, SOutputData* pOutput) {
@ -24,195 +24,266 @@
|
|||
|
||||
extern SDataSinkStat gDataSinkStat;
|
||||
|
||||
typedef struct SDataInserterBuf {
|
||||
int32_t useSize;
|
||||
int32_t allocSize;
|
||||
char* pData;
|
||||
} SDataInserterBuf;
|
||||
|
||||
typedef struct SDataCacheEntry {
|
||||
int32_t dataLen;
|
||||
int32_t numOfRows;
|
||||
int32_t numOfCols;
|
||||
int8_t compressed;
|
||||
char data[];
|
||||
} SDataCacheEntry;
|
||||
typedef struct SSubmitRes {
|
||||
int64_t affectedRows;
|
||||
int32_t code;
|
||||
SSubmitRsp *pRsp;
|
||||
} SSubmitRes;
|
||||
|
||||
typedef struct SDataInserterHandle {
|
||||
SDataSinkHandle sink;
|
||||
SDataSinkManager* pManager;
|
||||
SDataBlockDescNode* pSchema;
|
||||
SDataDeleterNode* pDeleter;
|
||||
SDeleterParam* pParam;
|
||||
STaosQueue* pDataBlocks;
|
||||
SDataInserterBuf nextOutput;
|
||||
STSchema* pSchema;
|
||||
SQueryInserterNode* pNode;
|
||||
SSubmitRes submitRes;
|
||||
SInserterParam* pParam;
|
||||
SArray* pDataBlocks;
|
||||
SHashObj* pCols;
|
||||
int32_t status;
|
||||
bool queryEnd;
|
||||
uint64_t useconds;
|
||||
uint64_t cachedSize;
|
||||
TdThreadMutex mutex;
|
||||
tsem_t ready;
|
||||
} SDataInserterHandle;
|
||||
|
||||
static bool needCompress(const SSDataBlock* pData, int32_t numOfCols) {
|
||||
if (tsCompressColData < 0 || 0 == pData->info.rows) {
|
||||
return false;
|
||||
}
|
||||
typedef struct SSubmitRspParam {
|
||||
SDataInserterHandle* pInserter;
|
||||
} SSubmitRspParam;
|
||||
|
||||
for (int32_t col = 0; col < numOfCols; ++col) {
|
||||
SColumnInfoData* pColRes = taosArrayGet(pData->pDataBlock, col);
|
||||
int32_t colSize = pColRes->info.bytes * pData->info.rows;
|
||||
if (NEEDTO_COMPRESS_QUERY(colSize)) {
|
||||
return true;
|
||||
}
|
||||
}
|
||||
int32_t inserterCallback(void* param, SDataBuf* pMsg, int32_t code) {
|
||||
SSubmitRspParam* pParam = (SSubmitRspParam*)param;
|
||||
SDataInserterHandle* pInserter = pParam->pInserter;
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
static void toDataCacheEntry(SDataInserterHandle* pHandle, const SInputData* pInput, SDataInserterBuf* pBuf) {
|
||||
int32_t numOfCols = LIST_LENGTH(pHandle->pSchema->pSlots);
|
||||
|
||||
SDataCacheEntry* pEntry = (SDataCacheEntry*)pBuf->pData;
|
||||
pEntry->compressed = 0;
|
||||
pEntry->numOfRows = pInput->pData->info.rows;
|
||||
pEntry->numOfCols = taosArrayGetSize(pInput->pData->pDataBlock);
|
||||
pEntry->dataLen = sizeof(SDeleterRes);
|
||||
|
||||
ASSERT(1 == pEntry->numOfRows);
|
||||
ASSERT(1 == pEntry->numOfCols);
|
||||
|
||||
pBuf->useSize = sizeof(SDataCacheEntry);
|
||||
|
||||
SColumnInfoData* pColRes = (SColumnInfoData*)taosArrayGet(pInput->pData->pDataBlock, 0);
|
||||
|
||||
SDeleterRes* pRes = (SDeleterRes*)pEntry->data;
|
||||
pRes->suid = pHandle->pParam->suid;
|
||||
pRes->uidList = pHandle->pParam->pUidList;
|
||||
pRes->skey = pHandle->pDeleter->deleteTimeRange.skey;
|
||||
pRes->ekey = pHandle->pDeleter->deleteTimeRange.ekey;
|
||||
pRes->affectedRows = *(int64_t*)pColRes->pData;
|
||||
|
||||
pBuf->useSize += pEntry->dataLen;
|
||||
pInserter->submitRes.code = code;
|
||||
|
||||
atomic_add_fetch_64(&pHandle->cachedSize, pEntry->dataLen);
|
||||
atomic_add_fetch_64(&gDataSinkStat.cachedSize, pEntry->dataLen);
|
||||
}
|
||||
if (code == TSDB_CODE_SUCCESS) {
|
||||
pInserter->submitRes.pRsp = taosMemoryCalloc(1, sizeof(SSubmitRsp));
|
||||
SDecoder coder = {0};
|
||||
tDecoderInit(&coder, pMsg->pData, pMsg->len);
|
||||
code = tDecodeSSubmitRsp(&coder, pInserter->submitRes.pRsp);
|
||||
if (code) {
|
||||
tFreeSSubmitRsp(pInserter->submitRes.pRsp);
|
||||
pInserter->submitRes.code = code;
|
||||
goto _return;
|
||||
}
|
||||
|
||||
if (pInserter->submitRes.pRsp->nBlocks > 0) {
|
||||
for (int32_t i = 0; i < pInserter->submitRes.pRsp->nBlocks; ++i) {
|
||||
SSubmitBlkRsp *blk = pInserter->submitRes.pRsp->pBlocks + i;
|
||||
if (TSDB_CODE_SUCCESS != blk->code) {
|
||||
code = blk->code;
|
||||
tFreeSSubmitRsp(pInserter->submitRes.pRsp);
|
||||
pInserter->submitRes.code = code;
|
||||
goto _return;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
pInserter->submitRes.affectedRows += pInserter->submitRes.pRsp->affectedRows;
|
||||
qDebug("submit rsp received, affectedRows:%d, total:%" PRId64, pInserter->submitRes.pRsp->affectedRows, pInserter->submitRes.affectedRows);
|
||||
|
||||
static bool allocBuf(SDataInserterHandle* pDeleter, const SInputData* pInput, SDataInserterBuf* pBuf) {
|
||||
uint32_t capacity = pDeleter->pManager->cfg.maxDataBlockNumPerQuery;
|
||||
if (taosQueueItemSize(pDeleter->pDataBlocks) > capacity) {
|
||||
qError("SinkNode queue is full, no capacity, max:%d, current:%d, no capacity", capacity,
|
||||
taosQueueItemSize(pDeleter->pDataBlocks));
|
||||
return false;
|
||||
tFreeSSubmitRsp(pInserter->submitRes.pRsp);
|
||||
}
|
||||
|
||||
pBuf->allocSize = sizeof(SDataCacheEntry) + sizeof(SDeleterRes);
|
||||
_return:
|
||||
|
||||
pBuf->pData = taosMemoryMalloc(pBuf->allocSize);
|
||||
if (pBuf->pData == NULL) {
|
||||
qError("SinkNode failed to malloc memory, size:%d, code:%d", pBuf->allocSize, TAOS_SYSTEM_ERROR(errno));
|
||||
tsem_post(&pInserter->ready);
|
||||
|
||||
taosMemoryFree(param);
|
||||
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
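inserterCallback runs on the rpc completion path, records the decoded result in submitRes, and wakes the submitting thread through the handle's semaphore; putDataBlock below blocks on the same semaphore right after sending. A generic sketch of that request-and-await handshake, with illustrative names rather than the literal code of this file:

tsem_t ready;
tsem_init(&ready, 0, 0);
// ... issue the async request; its completion callback ends with tsem_post(&ready) ...
tsem_wait(&ready);    // the submitting thread parks here until the response has been decoded
tsem_destroy(&ready);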
static int32_t sendSubmitRequest(SDataInserterHandle* pInserter, SSubmitReq* pMsg, void* pTransporter, SEpSet* pEpset) {
|
||||
// send the submit request to the vnode asynchronously
|
||||
SMsgSendInfo* pMsgSendInfo = taosMemoryCalloc(1, sizeof(SMsgSendInfo));
|
||||
if (NULL == pMsgSendInfo) {
|
||||
taosMemoryFreeClear(pMsg);
|
||||
terrno = TSDB_CODE_QRY_OUT_OF_MEMORY;
|
||||
return terrno;
|
||||
}
|
||||
|
||||
return NULL != pBuf->pData;
|
||||
SSubmitRspParam* pParam = taosMemoryCalloc(1, sizeof(SSubmitRspParam));
|
||||
pParam->pInserter = pInserter;
|
||||
|
||||
pMsgSendInfo->param = pParam;
|
||||
pMsgSendInfo->msgInfo.pData = pMsg;
|
||||
pMsgSendInfo->msgInfo.len = ntohl(pMsg->length);
|
||||
pMsgSendInfo->msgType = TDMT_VND_SUBMIT;
|
||||
pMsgSendInfo->fp = inserterCallback;
|
||||
|
||||
int64_t transporterId = 0;
|
||||
return asyncSendMsgToServer(pTransporter, pEpset, &transporterId, pMsgSendInfo);
|
||||
}
|
||||
|
||||
static int32_t updateStatus(SDataInserterHandle* pDeleter) {
|
||||
taosThreadMutexLock(&pDeleter->mutex);
|
||||
int32_t blockNums = taosQueueItemSize(pDeleter->pDataBlocks);
|
||||
int32_t status =
|
||||
(0 == blockNums ? DS_BUF_EMPTY
|
||||
: (blockNums < pDeleter->pManager->cfg.maxDataBlockNumPerQuery ? DS_BUF_LOW : DS_BUF_FULL));
|
||||
pDeleter->status = status;
|
||||
taosThreadMutexUnlock(&pDeleter->mutex);
|
||||
return status;
|
||||
|
||||
int32_t dataBlockToSubmit(SDataInserterHandle* pInserter, SSubmitReq** pReq) {
|
||||
const SArray* pBlocks = pInserter->pDataBlocks;
|
||||
const STSchema* pTSchema = pInserter->pSchema;
|
||||
int64_t uid = pInserter->pNode->tableId;
|
||||
int64_t suid = pInserter->pNode->stableId;
|
||||
int32_t vgId = pInserter->pNode->vgId;
|
||||
bool fullCol = (pInserter->pNode->pCols->length == pTSchema->numOfCols);
|
||||
|
||||
SSubmitReq* ret = NULL;
|
||||
int32_t sz = taosArrayGetSize(pBlocks);
|
||||
|
||||
// cal size
|
||||
int32_t cap = sizeof(SSubmitReq);
|
||||
for (int32_t i = 0; i < sz; i++) {
|
||||
SSDataBlock* pDataBlock = taosArrayGetP(pBlocks, i);
|
||||
int32_t rows = pDataBlock->info.rows;
|
||||
// TODO min
|
||||
int32_t rowSize = pDataBlock->info.rowSize;
|
||||
int32_t maxLen = TD_ROW_MAX_BYTES_FROM_SCHEMA(pTSchema);
|
||||
|
||||
cap += sizeof(SSubmitBlk) + rows * maxLen;
|
||||
}
|
||||
|
||||
// assign data
|
||||
// TODO
|
||||
ret = taosMemoryCalloc(1, cap);
|
||||
ret->header.vgId = htonl(vgId);
|
||||
ret->version = htonl(pTSchema->version);
|
||||
ret->length = sizeof(SSubmitReq);
|
||||
ret->numOfBlocks = htonl(sz);
|
||||
|
||||
SSubmitBlk* blkHead = POINTER_SHIFT(ret, sizeof(SSubmitReq));
|
||||
for (int32_t i = 0; i < sz; i++) {
|
||||
SSDataBlock* pDataBlock = taosArrayGetP(pBlocks, i);
|
||||
|
||||
blkHead->sversion = htonl(pTSchema->version);
|
||||
// TODO
|
||||
blkHead->suid = htobe64(suid);
|
||||
blkHead->uid = htobe64(uid);
|
||||
blkHead->schemaLen = htonl(0);
|
||||
|
||||
int32_t rows = 0;
|
||||
int32_t dataLen = 0;
|
||||
STSRow* rowData = POINTER_SHIFT(blkHead, sizeof(SSubmitBlk));
|
||||
int64_t lastTs = TSKEY_MIN;
|
||||
bool ignoreRow = false;
|
||||
for (int32_t j = 0; j < pDataBlock->info.rows; j++) {
|
||||
SRowBuilder rb = {0};
|
||||
tdSRowInit(&rb, pTSchema->version);
|
||||
tdSRowSetTpInfo(&rb, pTSchema->numOfCols, pTSchema->flen);
|
||||
tdSRowResetBuf(&rb, rowData);
|
||||
|
||||
ignoreRow = false;
|
||||
for (int32_t k = 0; k < pTSchema->numOfCols; k++) {
|
||||
const STColumn* pColumn = &pTSchema->columns[k];
|
||||
SColumnInfoData* pColData = NULL;
|
||||
int16_t colIdx = k;
|
||||
if (!fullCol) {
|
||||
int16_t *slotId = taosHashGet(pInserter->pCols, &pColumn->colId, sizeof(pColumn->colId));
|
||||
if (NULL == slotId) {
|
||||
continue;
|
||||
}
|
||||
|
||||
colIdx = *slotId;
|
||||
}
|
||||
|
||||
pColData = taosArrayGet(pDataBlock->pDataBlock, colIdx);
|
||||
if (pColData->info.type != pColumn->type) {
|
||||
qError("col type mis-match, schema type:%d, type in block:%d", pColumn->type, pColData->info.type);
|
||||
terrno = TSDB_CODE_APP_ERROR;
|
||||
return TSDB_CODE_APP_ERROR;
|
||||
}
|
||||
|
||||
if (colDataIsNull_s(pColData, j)) {
|
||||
if (0 == k && TSDB_DATA_TYPE_TIMESTAMP == pColumn->type) {
|
||||
ignoreRow = true;
|
||||
break;
|
||||
}
|
||||
|
||||
tdAppendColValToRow(&rb, pColumn->colId, pColumn->type, TD_VTYPE_NULL, NULL, false, pColumn->offset, k);
|
||||
} else {
|
||||
void* data = colDataGetData(pColData, j);
|
||||
if (0 == k && TSDB_DATA_TYPE_TIMESTAMP == pColumn->type) {
|
||||
if (*(int64_t*)data == lastTs) {
|
||||
ignoreRow = true;
|
||||
break;
|
||||
} else {
|
||||
lastTs = *(int64_t*)data;
|
||||
}
|
||||
}
|
||||
tdAppendColValToRow(&rb, pColumn->colId, pColumn->type, TD_VTYPE_NORM, data, true, pColumn->offset, k);
|
||||
}
|
||||
}
|
||||
|
||||
if (ignoreRow) {
|
||||
continue;
|
||||
}
|
||||
|
||||
rows++;
|
||||
int32_t rowLen = TD_ROW_LEN(rowData);
|
||||
rowData = POINTER_SHIFT(rowData, rowLen);
|
||||
dataLen += rowLen;
|
||||
}
|
||||
|
||||
blkHead->dataLen = htonl(dataLen);
|
||||
blkHead->numOfRows = htons(rows);
|
||||
|
||||
ret->length += sizeof(SSubmitBlk) + dataLen;
|
||||
blkHead = POINTER_SHIFT(blkHead, sizeof(SSubmitBlk) + dataLen);
|
||||
}
|
||||
|
||||
ret->length = htonl(ret->length);
|
||||
|
||||
*pReq = ret;
|
||||
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
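dataBlockToSubmit sizes the request with a worst-case per-row length before encoding and then patches the real lengths afterwards. A hypothetical helper that isolates that first sizing pass; the names mirror the loop above but the function itself is only a sketch, not part of the patch:

static int32_t estimateSubmitReqCap(const SArray* pBlocks, const STSchema* pTSchema) {
  int32_t cap = sizeof(SSubmitReq);
  int32_t sz  = taosArrayGetSize(pBlocks);
  for (int32_t i = 0; i < sz; i++) {
    SSDataBlock* pDataBlock = taosArrayGetP(pBlocks, i);
    // worst case: every row expands to the schema's maximum encoded length
    cap += sizeof(SSubmitBlk) + pDataBlock->info.rows * TD_ROW_MAX_BYTES_FROM_SCHEMA(pTSchema);
  }
  return cap;
}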
|
||||
|
||||
static int32_t getStatus(SDataInserterHandle* pDeleter) {
|
||||
taosThreadMutexLock(&pDeleter->mutex);
|
||||
int32_t status = pDeleter->status;
|
||||
taosThreadMutexUnlock(&pDeleter->mutex);
|
||||
return status;
|
||||
}
|
||||
|
||||
static int32_t putDataBlock(SDataSinkHandle* pHandle, const SInputData* pInput, bool* pContinue) {
|
||||
SDataInserterHandle* pDeleter = (SDataInserterHandle*)pHandle;
|
||||
SDataInserterBuf* pBuf = taosAllocateQitem(sizeof(SDataInserterBuf), DEF_QITEM);
|
||||
if (NULL == pBuf || !allocBuf(pDeleter, pInput, pBuf)) {
|
||||
return TSDB_CODE_QRY_OUT_OF_MEMORY;
|
||||
SDataInserterHandle* pInserter = (SDataInserterHandle*)pHandle;
|
||||
taosArrayPush(pInserter->pDataBlocks, &pInput->pData);
|
||||
SSubmitReq* pMsg = NULL;
|
||||
int32_t code = dataBlockToSubmit(pInserter, &pMsg);
|
||||
if (code) {
|
||||
return code;
|
||||
}
|
||||
toDataCacheEntry(pDeleter, pInput, pBuf);
|
||||
taosWriteQitem(pDeleter->pDataBlocks, pBuf);
|
||||
*pContinue = (DS_BUF_LOW == updateStatus(pDeleter) ? true : false);
|
||||
|
||||
code = sendSubmitRequest(pInserter, pMsg, pInserter->pParam->readHandle->pMsgCb->clientRpc, &pInserter->pNode->epSet);
|
||||
if (code) {
|
||||
return code;
|
||||
}
|
||||
|
||||
tsem_wait(&pInserter->ready);
|
||||
|
||||
if (pInserter->submitRes.code) {
|
||||
return pInserter->submitRes.code;
|
||||
}
|
||||
|
||||
*pContinue = true;
|
||||
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
static void endPut(struct SDataSinkHandle* pHandle, uint64_t useconds) {
|
||||
SDataInserterHandle* pDeleter = (SDataInserterHandle*)pHandle;
|
||||
taosThreadMutexLock(&pDeleter->mutex);
|
||||
pDeleter->queryEnd = true;
|
||||
pDeleter->useconds = useconds;
|
||||
taosThreadMutexUnlock(&pDeleter->mutex);
|
||||
SDataInserterHandle* pInserter = (SDataInserterHandle*)pHandle;
|
||||
taosThreadMutexLock(&pInserter->mutex);
|
||||
pInserter->queryEnd = true;
|
||||
pInserter->useconds = useconds;
|
||||
taosThreadMutexUnlock(&pInserter->mutex);
|
||||
}
|
||||
|
||||
static void getDataLength(SDataSinkHandle* pHandle, int32_t* pLen, bool* pQueryEnd) {
|
||||
SDataInserterHandle* pDeleter = (SDataInserterHandle*)pHandle;
|
||||
if (taosQueueEmpty(pDeleter->pDataBlocks)) {
|
||||
*pQueryEnd = pDeleter->queryEnd;
|
||||
*pLen = 0;
|
||||
return;
|
||||
}
|
||||
|
||||
SDataInserterBuf* pBuf = NULL;
|
||||
taosReadQitem(pDeleter->pDataBlocks, (void**)&pBuf);
|
||||
memcpy(&pDeleter->nextOutput, pBuf, sizeof(SDataInserterBuf));
|
||||
taosFreeQitem(pBuf);
|
||||
*pLen = ((SDataCacheEntry*)(pDeleter->nextOutput.pData))->dataLen;
|
||||
*pQueryEnd = pDeleter->queryEnd;
|
||||
qDebug("got data len %d, row num %d in sink", *pLen, ((SDataCacheEntry*)(pDeleter->nextOutput.pData))->numOfRows);
|
||||
static void getDataLength(SDataSinkHandle* pHandle, int64_t* pLen, bool* pQueryEnd) {
|
||||
SDataInserterHandle* pDispatcher = (SDataInserterHandle*)pHandle;
|
||||
*pLen = pDispatcher->submitRes.affectedRows;
|
||||
qDebug("got total affectedRows %" PRId64 , *pLen);
|
||||
}
|
||||
|
||||
static int32_t getDataBlock(SDataSinkHandle* pHandle, SOutputData* pOutput) {
|
||||
SDataInserterHandle* pDeleter = (SDataInserterHandle*)pHandle;
|
||||
if (NULL == pDeleter->nextOutput.pData) {
|
||||
assert(pDeleter->queryEnd);
|
||||
pOutput->useconds = pDeleter->useconds;
|
||||
pOutput->precision = pDeleter->pSchema->precision;
|
||||
pOutput->bufStatus = DS_BUF_EMPTY;
|
||||
pOutput->queryEnd = pDeleter->queryEnd;
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
SDataCacheEntry* pEntry = (SDataCacheEntry*)(pDeleter->nextOutput.pData);
|
||||
memcpy(pOutput->pData, pEntry->data, pEntry->dataLen);
|
||||
pOutput->numOfRows = pEntry->numOfRows;
|
||||
pOutput->numOfCols = pEntry->numOfCols;
|
||||
pOutput->compressed = pEntry->compressed;
|
||||
|
||||
atomic_sub_fetch_64(&pDeleter->cachedSize, pEntry->dataLen);
|
||||
atomic_sub_fetch_64(&gDataSinkStat.cachedSize, pEntry->dataLen);
|
||||
|
||||
taosMemoryFreeClear(pDeleter->nextOutput.pData); // todo persistent
|
||||
pOutput->bufStatus = updateStatus(pDeleter);
|
||||
taosThreadMutexLock(&pDeleter->mutex);
|
||||
pOutput->queryEnd = pDeleter->queryEnd;
|
||||
pOutput->useconds = pDeleter->useconds;
|
||||
pOutput->precision = pDeleter->pSchema->precision;
|
||||
taosThreadMutexUnlock(&pDeleter->mutex);
|
||||
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
static int32_t destroyDataSinker(SDataSinkHandle* pHandle) {
|
||||
SDataInserterHandle* pDeleter = (SDataInserterHandle*)pHandle;
|
||||
atomic_sub_fetch_64(&gDataSinkStat.cachedSize, pDeleter->cachedSize);
|
||||
taosMemoryFreeClear(pDeleter->nextOutput.pData);
|
||||
while (!taosQueueEmpty(pDeleter->pDataBlocks)) {
|
||||
SDataInserterBuf* pBuf = NULL;
|
||||
taosReadQitem(pDeleter->pDataBlocks, (void**)&pBuf);
|
||||
taosMemoryFreeClear(pBuf->pData);
|
||||
taosFreeQitem(pBuf);
|
||||
}
|
||||
taosCloseQueue(pDeleter->pDataBlocks);
|
||||
taosThreadMutexDestroy(&pDeleter->mutex);
|
||||
SDataInserterHandle* pInserter = (SDataInserterHandle*)pHandle;
|
||||
atomic_sub_fetch_64(&gDataSinkStat.cachedSize, pInserter->cachedSize);
|
||||
taosArrayDestroy(pInserter->pDataBlocks);
|
||||
taosMemoryFree(pInserter->pSchema);
|
||||
taosThreadMutexDestroy(&pInserter->mutex);
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
|
@ -230,25 +301,46 @@ int32_t createDataInserter(SDataSinkManager* pManager, const SDataSinkNode* pDat
|
|||
return TSDB_CODE_QRY_OUT_OF_MEMORY;
|
||||
}
|
||||
|
||||
SDataDeleterNode* pDeleterNode = (SDataDeleterNode *)pDataSink;
|
||||
SQueryInserterNode* pInserterNode = (SQueryInserterNode *)pDataSink;
|
||||
inserter->sink.fPut = putDataBlock;
|
||||
inserter->sink.fEndPut = endPut;
|
||||
inserter->sink.fGetLen = getDataLength;
|
||||
inserter->sink.fGetData = getDataBlock;
|
||||
inserter->sink.fGetData = NULL;
|
||||
inserter->sink.fDestroy = destroyDataSinker;
|
||||
inserter->sink.fGetCacheSize = getCacheSize;
|
||||
inserter->pManager = pManager;
|
||||
inserter->pDeleter = pDeleterNode;
|
||||
inserter->pSchema = pDataSink->pInputDataBlockDesc;
|
||||
inserter->pNode = pInserterNode;
|
||||
inserter->pParam = pParam;
|
||||
inserter->status = DS_BUF_EMPTY;
|
||||
inserter->queryEnd = false;
|
||||
inserter->pDataBlocks = taosOpenQueue();
|
||||
|
||||
int64_t suid = 0;
|
||||
int32_t code = tsdbGetTableSchema(inserter->pParam->readHandle->vnode, pInserterNode->tableId, &inserter->pSchema, &suid);
|
||||
if (code) {
|
||||
return code;
|
||||
}
|
||||
|
||||
if (pInserterNode->stableId != suid) {
|
||||
terrno = TSDB_CODE_TDB_INVALID_TABLE_ID;
|
||||
return terrno;
|
||||
}
|
||||
|
||||
inserter->pDataBlocks = taosArrayInit(1, POINTER_BYTES);
|
||||
taosThreadMutexInit(&inserter->mutex, NULL);
|
||||
if (NULL == inserter->pDataBlocks) {
|
||||
terrno = TSDB_CODE_QRY_OUT_OF_MEMORY;
|
||||
return TSDB_CODE_QRY_OUT_OF_MEMORY;
|
||||
}
|
||||
|
||||
inserter->pCols = taosHashInit(pInserterNode->pCols->length, taosGetDefaultHashFunction(TSDB_DATA_TYPE_SMALLINT), false, HASH_NO_LOCK);
|
||||
SNode* pNode = NULL;
|
||||
FOREACH(pNode, pInserterNode->pCols) {
|
||||
SColumnNode* pCol = (SColumnNode*)pNode;
|
||||
taosHashPut(inserter->pCols, &pCol->colId, sizeof(pCol->colId), &pCol->slotId, sizeof(pCol->slotId));
|
||||
}
|
||||
|
||||
tsem_init(&inserter->ready, 0, 0);
|
||||
|
||||
*pHandle = inserter;
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
@ -40,6 +40,8 @@ int32_t dsCreateDataSinker(const SDataSinkNode *pDataSink, DataSinkHandle* pHand
|
|||
return createDataDispatcher(&gDataSinkManager, pDataSink, pHandle);
|
||||
case QUERY_NODE_PHYSICAL_PLAN_DELETE:
|
||||
return createDataDeleter(&gDataSinkManager, pDataSink, pHandle, pParam);
|
||||
case QUERY_NODE_PHYSICAL_PLAN_QUERY_INSERT:
|
||||
return createDataInserter(&gDataSinkManager, pDataSink, pHandle, pParam);
|
||||
}
|
||||
return TSDB_CODE_FAILED;
|
||||
}
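dsCreateDataSinker now routes QUERY_NODE_PHYSICAL_PLAN_QUERY_INSERT plans to the new inserter. A hedged sketch of the setup path on the caller side, using the variables from qCreateExecTask further below and assuming the sink-specific parameter produced by createDataSinkParam is passed through as the third argument:

void*          pSinkParam = NULL;
DataSinkHandle handle     = NULL;
int32_t        code       = createDataSinkParam(pSubplan->pDataSink, &pSinkParam, pTaskInfo, readHandle);
if (code == TSDB_CODE_SUCCESS) {
  code = dsCreateDataSinker(pSubplan->pDataSink, &handle, pSinkParam);  // dispatches to createDataInserter here
}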
|
||||
|
@ -54,7 +56,7 @@ void dsEndPut(DataSinkHandle handle, uint64_t useconds) {
|
|||
return pHandleImpl->fEndPut(pHandleImpl, useconds);
|
||||
}
|
||||
|
||||
void dsGetDataLength(DataSinkHandle handle, int32_t* pLen, bool* pQueryEnd) {
|
||||
void dsGetDataLength(DataSinkHandle handle, int64_t* pLen, bool* pQueryEnd) {
|
||||
SDataSinkHandle* pHandleImpl = (SDataSinkHandle*)handle;
|
||||
pHandleImpl->fGetLen(pHandleImpl, pLen, pQueryEnd);
|
||||
}
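Callers of dsGetDataLength have to switch to the 64-bit length as well. A minimal sketch of an updated call site, where handle is assumed to be a DataSinkHandle created by dsCreateDataSinker:

int64_t len      = 0;
bool    queryEnd = false;
dsGetDataLength(handle, &len, &queryEnd);  // len may now exceed INT32_MAX, e.g. total affected rows from the inserter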
|
||||
|
|
|
@ -60,9 +60,9 @@ static int32_t doSetStreamBlock(SOperatorInfo* pOperator, void* input, size_t nu
|
|||
taosArrayAddAll(p->pDataBlock, pDataBlock->pDataBlock);
|
||||
taosArrayPush(pInfo->pBlockLists, &p);
|
||||
}
|
||||
} else if (type == STREAM_INPUT__DATA_SCAN) {
|
||||
} else if (type == STREAM_INPUT__TABLE_SCAN) {
|
||||
// do nothing
|
||||
ASSERT(pInfo->blockType == STREAM_INPUT__DATA_SCAN);
|
||||
ASSERT(pInfo->blockType == STREAM_INPUT__TABLE_SCAN);
|
||||
} else {
|
||||
ASSERT(0);
|
||||
}
|
||||
|
@ -76,7 +76,7 @@ int32_t qStreamScanSnapshot(qTaskInfo_t tinfo) {
|
|||
return TSDB_CODE_QRY_APP_ERROR;
|
||||
}
|
||||
SExecTaskInfo* pTaskInfo = (SExecTaskInfo*)tinfo;
|
||||
return doSetStreamBlock(pTaskInfo->pRoot, NULL, 0, STREAM_INPUT__DATA_SCAN, 0, NULL);
|
||||
return doSetStreamBlock(pTaskInfo->pRoot, NULL, 0, STREAM_INPUT__TABLE_SCAN, 0, NULL);
|
||||
}
|
||||
|
||||
int32_t qSetStreamInput(qTaskInfo_t tinfo, const void* input, int32_t type, bool assignUid) {
|
||||
|
|
|
@ -52,7 +52,7 @@ int32_t qCreateExecTask(SReadHandle* readHandle, int32_t vgId, uint64_t taskId,
|
|||
|
||||
if (handle) {
|
||||
void* pSinkParam = NULL;
|
||||
code = createDataSinkParam(pSubplan->pDataSink, &pSinkParam, pTaskInfo);
|
||||
code = createDataSinkParam(pSubplan->pDataSink, &pSinkParam, pTaskInfo, readHandle);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
goto _error;
|
||||
}
|
||||
|
@ -267,7 +267,46 @@ const STqOffset* qExtractStatusFromStreamScanner(void* scanner) {
|
|||
return &pInfo->offset;
|
||||
}
|
||||
|
||||
int32_t qStreamPrepareScan(qTaskInfo_t tinfo, uint64_t uid, int64_t ts) {
|
||||
void* qStreamExtractMetaMsg(qTaskInfo_t tinfo) {
|
||||
SExecTaskInfo* pTaskInfo = (SExecTaskInfo*)tinfo;
|
||||
ASSERT(pTaskInfo->execModel == OPTR_EXEC_MODEL_STREAM);
|
||||
return pTaskInfo->streamInfo.metaBlk;
|
||||
}
|
||||
|
||||
int32_t qStreamExtractOffset(qTaskInfo_t tinfo, STqOffsetVal* pOffset) {
|
||||
SExecTaskInfo* pTaskInfo = (SExecTaskInfo*)tinfo;
|
||||
ASSERT(pTaskInfo->execModel == OPTR_EXEC_MODEL_STREAM);
|
||||
memcpy(pOffset, &pTaskInfo->streamInfo.lastStatus, sizeof(STqOffsetVal));
|
||||
return 0;
|
||||
}
|
||||
|
||||
int32_t qStreamPrepareScan1(qTaskInfo_t tinfo, const STqOffsetVal* pOffset) {
|
||||
SExecTaskInfo* pTaskInfo = (SExecTaskInfo*)tinfo;
|
||||
SOperatorInfo* pOperator = pTaskInfo->pRoot;
|
||||
ASSERT(pTaskInfo->execModel == OPTR_EXEC_MODEL_STREAM);
|
||||
pTaskInfo->streamInfo.prepareStatus = *pOffset;
|
||||
// TODO: optimize
|
||||
/*if (pTaskInfo->streamInfo.lastStatus.type != pOffset->type ||*/
|
||||
/*pTaskInfo->streamInfo.prepareStatus.version != pTaskInfo->streamInfo.lastStatus.version) {*/
|
||||
while (1) {
|
||||
uint8_t type = pOperator->operatorType;
|
||||
pOperator->status = OP_OPENED;
|
||||
if (type == QUERY_NODE_PHYSICAL_PLAN_STREAM_SCAN) {
|
||||
SStreamScanInfo* pInfo = pOperator->info;
|
||||
if (tqSeekVer(pInfo->tqReader, pOffset->version) < 0) {
|
||||
return -1;
|
||||
}
|
||||
return 0;
|
||||
} else {
|
||||
ASSERT(pOperator->numOfDownstream == 1);
|
||||
pOperator = pOperator->pDownstream[0];
|
||||
}
|
||||
}
|
||||
/*}*/
|
||||
return 0;
|
||||
}
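qStreamPrepareScan1 rewinds the stream scanner to a requested WAL offset and qStreamExtractOffset reports how far the task actually progressed. A hedged sketch of how a tmq-style caller might drive the pair; task and lastCommittedVer are placeholders, not names from this patch:

qTaskInfo_t task = NULL;           // stream task handle created elsewhere
int64_t     lastCommittedVer = 0;  // placeholder: last offset the consumer has committed
STqOffsetVal req = {0};
req.type    = TMQ_OFFSET__LOG;
req.version = lastCommittedVer + 1;          // resume right after the committed offset
if (qStreamPrepareScan1(task, &req) == 0) {
  // ... execute the task and hand the result blocks to the consumer ...
  STqOffsetVal progressed = {0};
  qStreamExtractOffset(task, &progressed);   // offset actually reached, reported back with the data
}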
|
||||
|
||||
int32_t qStreamPrepareTsdbScan(qTaskInfo_t tinfo, uint64_t uid, int64_t ts) {
|
||||
SExecTaskInfo* pTaskInfo = (SExecTaskInfo*)tinfo;
|
||||
|
||||
if (uid == 0) {
@ -1994,7 +1994,7 @@ static void destroySendMsgInfo(SMsgSendInfo* pMsgBody) {
|
|||
taosMemoryFreeClear(pMsgBody);
|
||||
}
|
||||
|
||||
void qProcessFetchRsp(void* parent, SRpcMsg* pMsg, SEpSet* pEpSet) {
|
||||
void qProcessRspMsg(void* parent, SRpcMsg* pMsg, SEpSet* pEpSet) {
|
||||
SMsgSendInfo* pSendInfo = (SMsgSendInfo*)pMsg->info.ahandle;
|
||||
assert(pMsg->info.ahandle != NULL);
|
||||
|
||||
|
@ -2463,6 +2463,8 @@ static void destroySortedMergeOperatorInfo(void* param, int32_t numOfOutput) {
|
|||
|
||||
blockDataDestroy(pInfo->binfo.pRes);
|
||||
cleanupAggSup(&pInfo->aggSup);
|
||||
|
||||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
static bool needToMerge(SSDataBlock* pBlock, SArray* groupInfo, char** buf, int32_t rowIndex) {
|
||||
|
@ -2850,7 +2852,7 @@ int32_t doPrepareScan(SOperatorInfo* pOperator, uint64_t uid, int64_t ts) {
|
|||
|
||||
if (type == QUERY_NODE_PHYSICAL_PLAN_STREAM_SCAN) {
|
||||
SStreamScanInfo* pScanInfo = pOperator->info;
|
||||
pScanInfo->blockType = STREAM_INPUT__DATA_SCAN;
|
||||
pScanInfo->blockType = STREAM_INPUT__TABLE_SCAN;
|
||||
|
||||
pScanInfo->pTableScanOp->status = OP_OPENED;
|
||||
|
||||
|
@ -3285,7 +3287,10 @@ static SSDataBlock* doProjectOperation(SOperatorInfo* pOperator) {
|
|||
// The downstream exec may change the value of the newgroup, so use a local variable instead.
|
||||
SSDataBlock* pBlock = downstream->fpSet.getNextFn(downstream);
|
||||
if (pBlock == NULL) {
|
||||
// TODO optimize
|
||||
/*if (pTaskInfo->execModel != OPTR_EXEC_MODEL_STREAM) {*/
|
||||
doSetOperatorCompleted(pOperator);
|
||||
/*}*/
|
||||
break;
|
||||
}
|
||||
if (pBlock->info.type == STREAM_RETRIEVE) {
|
||||
|
@ -3504,7 +3509,6 @@ static void destroyOperatorInfo(SOperatorInfo* pOperator) {
|
|||
}
|
||||
|
||||
taosMemoryFreeClear(pOperator->exprSupp.pExprInfo);
|
||||
taosMemoryFreeClear(pOperator->info);
|
||||
taosMemoryFreeClear(pOperator);
|
||||
}
|
||||
|
||||
|
@ -3674,11 +3678,15 @@ void cleanupBasicInfo(SOptrBasicInfo* pInfo) {
|
|||
void destroyBasicOperatorInfo(void* param, int32_t numOfOutput) {
|
||||
SOptrBasicInfo* pInfo = (SOptrBasicInfo*)param;
|
||||
cleanupBasicInfo(pInfo);
|
||||
|
||||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
void destroyAggOperatorInfo(void* param, int32_t numOfOutput) {
|
||||
SAggOperatorInfo* pInfo = (SAggOperatorInfo*)param;
|
||||
cleanupBasicInfo(&pInfo->binfo);
|
||||
cleanupBasicInfo(&pInfo->binfo);
|
||||
|
||||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
void destroySFillOperatorInfo(void* param, int32_t numOfOutput) {
|
||||
|
@ -3686,6 +3694,8 @@ void destroySFillOperatorInfo(void* param, int32_t numOfOutput) {
|
|||
pInfo->pFillInfo = taosDestroyFillInfo(pInfo->pFillInfo);
|
||||
pInfo->pRes = blockDataDestroy(pInfo->pRes);
|
||||
taosMemoryFreeClear(pInfo->p);
|
||||
|
||||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
static void destroyProjectOperatorInfo(void* param, int32_t numOfOutput) {
|
||||
|
@ -3696,6 +3706,8 @@ static void destroyProjectOperatorInfo(void* param, int32_t numOfOutput) {
|
|||
cleanupBasicInfo(&pInfo->binfo);
|
||||
cleanupAggSup(&pInfo->aggSup);
|
||||
taosArrayDestroy(pInfo->pPseudoColInfo);
|
||||
|
||||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
void cleanupExprSupp(SExprSupp* pSupp) {
|
||||
|
@ -3712,6 +3724,8 @@ static void destroyIndefinitOperatorInfo(void* param, int32_t numOfOutput) {
|
|||
taosArrayDestroy(pInfo->pPseudoColInfo);
|
||||
cleanupAggSup(&pInfo->aggSup);
|
||||
cleanupExprSupp(&pInfo->scalarSup);
|
||||
|
||||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
void destroyExchangeOperatorInfo(void* param, int32_t numOfOutput) {
|
||||
|
@ -3729,6 +3743,8 @@ void doDestroyExchangeOperatorInfo(void* param) {
|
|||
}
|
||||
|
||||
tsem_destroy(&pExInfo->ready);
|
||||
|
||||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
static SArray* setRowTsColumnOutputInfo(SqlFunctionCtx* pCtx, int32_t numOfCols) {
|
||||
|
@ -4449,7 +4465,7 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo
|
|||
.precision = ((SColumnNode*)pIntervalPhyNode->window.pTspk)->node.resType.precision};
|
||||
|
||||
int32_t tsSlotId = ((SColumnNode*)pIntervalPhyNode->window.pTspk)->slotId;
|
||||
pOptr = createMergeAlignedIntervalOperatorInfo(ops[0], pExprInfo, num, pResBlock, &interval, tsSlotId, pTaskInfo);
|
||||
pOptr = createMergeAlignedIntervalOperatorInfo(ops[0], pExprInfo, num, pResBlock, &interval, tsSlotId, pPhyNode->pConditions, pTaskInfo);
|
||||
} else if (QUERY_NODE_PHYSICAL_PLAN_MERGE_INTERVAL == type) {
|
||||
SMergeIntervalPhysiNode* pIntervalPhyNode = (SMergeIntervalPhysiNode*)pPhyNode;
|
||||
|
||||
|
@ -4772,10 +4788,20 @@ int32_t decodeOperator(SOperatorInfo* ops, const char* result, int32_t length) {
|
|||
return TDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
int32_t createDataSinkParam(SDataSinkNode* pNode, void** pParam, qTaskInfo_t* pTaskInfo) {
|
||||
int32_t createDataSinkParam(SDataSinkNode* pNode, void** pParam, qTaskInfo_t* pTaskInfo, SReadHandle* readHandle) {
|
||||
SExecTaskInfo* pTask = *(SExecTaskInfo**)pTaskInfo;
|
||||
|
||||
switch (pNode->type) {
|
||||
case QUERY_NODE_PHYSICAL_PLAN_QUERY_INSERT: {
|
||||
SInserterParam* pInserterParam = taosMemoryCalloc(1, sizeof(SInserterParam));
|
||||
if (NULL == pInserterParam) {
|
||||
return TSDB_CODE_OUT_OF_MEMORY;
|
||||
}
|
||||
pInserterParam->readHandle = readHandle;
|
||||
|
||||
*pParam = pInserterParam;
|
||||
break;
|
||||
}
|
||||
case QUERY_NODE_PHYSICAL_PLAN_DELETE: {
|
||||
SDeleterParam* pDeleterParam = taosMemoryCalloc(1, sizeof(SDeleterParam));
|
||||
if (NULL == pDeleterParam) {
@ -38,6 +38,8 @@ static void destroyGroupOperatorInfo(void* param, int32_t numOfOutput) {
|
|||
taosArrayDestroy(pInfo->pGroupCols);
|
||||
taosArrayDestroy(pInfo->pGroupColVals);
|
||||
cleanupExprSupp(&pInfo->scalarSup);
|
||||
|
||||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
static int32_t initGroupOptrInfo(SArray** pGroupColVals, int32_t* keyLen, char** keyBuf, const SArray* pGroupColList) {
|
||||
|
@ -724,6 +726,8 @@ static void destroyPartitionOperatorInfo(void* param, int32_t numOfOutput) {
|
|||
taosMemoryFree(pInfo->columnOffset);
|
||||
|
||||
cleanupExprSupp(&pInfo->scalarSup);
|
||||
|
||||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
SOperatorInfo* createPartitionOperatorInfo(SOperatorInfo* downstream, SPartitionPhysiNode* pPartNode, SExecTaskInfo* pTaskInfo) {
|
||||
|
@ -806,4 +810,4 @@ int32_t setGroupResultOutputBuf(SOperatorInfo* pOperator, SOptrBasicInfo* binfo,
|
|||
|
||||
setResultRowInitCtx(pResultRow, pCtx, numOfCols, pOperator->exprSupp.rowEntryInfoOffset);
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
}
|
||||
|
|
|
@ -104,6 +104,8 @@ void setJoinColumnInfo(SColumnInfo* pColumn, const SColumnNode* pColumnNode) {
|
|||
void destroyMergeJoinOperator(void* param, int32_t numOfOutput) {
|
||||
SJoinOperatorInfo* pJoinOperator = (SJoinOperatorInfo*)param;
|
||||
nodesDestroyNode(pJoinOperator->pCondAfterMerge);
|
||||
|
||||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
static void doMergeJoinImpl(struct SOperatorInfo* pOperator, SSDataBlock* pRes) {
@ -210,6 +210,7 @@ static int32_t loadDataBlock(SOperatorInfo* pOperator, STableScanInfo* pTableSca
|
|||
|
||||
bool allColumnsHaveAgg = true;
|
||||
SColumnDataAgg** pColAgg = NULL;
|
||||
|
||||
int32_t code = tsdbRetrieveDatablockSMA(pTableScanInfo->dataReader, &pColAgg, &allColumnsHaveAgg);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
longjmp(pTaskInfo->env, code);
|
||||
|
@ -595,6 +596,8 @@ static void destroyTableScanOperatorInfo(void* param, int32_t numOfOutput) {
|
|||
if (pTableScanInfo->pColMatchInfo != NULL) {
|
||||
taosArrayDestroy(pTableScanInfo->pColMatchInfo);
|
||||
}
|
||||
|
||||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
SOperatorInfo* createTableScanOperatorInfo(STableScanPhysiNode* pTableScanNode, SReadHandle* readHandle,
|
||||
|
@ -743,6 +746,8 @@ static SSDataBlock* doBlockInfoScan(SOperatorInfo* pOperator) {
|
|||
static void destroyBlockDistScanOperatorInfo(void* param, int32_t numOfOutput) {
|
||||
SBlockDistInfo* pDistInfo = (SBlockDistInfo*)param;
|
||||
blockDataDestroy(pDistInfo->pResBlock);
|
||||
|
||||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
SOperatorInfo* createDataBlockInfoScanOperator(void* dataReader, SReadHandle* readHandle, uint64_t uid,
|
||||
|
@ -1126,15 +1131,121 @@ static void setBlockGroupId(SOperatorInfo* pOperator, SSDataBlock* pBlock, int32
|
|||
uidCol[i] = getGroupId(pOperator, uidCol[i]);
|
||||
}
|
||||
}
|
||||
static int32_t setBlockIntoRes(SStreamScanInfo* pInfo, const SSDataBlock* pBlock) {
|
||||
SDataBlockInfo* pBlockInfo = &pInfo->pRes->info;
|
||||
SOperatorInfo* pOperator = pInfo->pStreamScanOp;
|
||||
SExecTaskInfo* pTaskInfo = pInfo->pStreamScanOp->pTaskInfo;
|
||||
|
||||
pInfo->pRes->info.rows = pBlock->info.rows;
|
||||
pInfo->pRes->info.uid = pBlock->info.uid;
|
||||
pInfo->pRes->info.type = STREAM_NORMAL;
|
||||
pInfo->pRes->info.capacity = pBlock->info.rows;
|
||||
|
||||
// for generating rollup SMA result, each time is an independent time series.
|
||||
// TODO temporarily used, when the statement of "partition by tbname" is ready, remove this
|
||||
if (pInfo->assignBlockUid) {
|
||||
pInfo->pRes->info.groupId = pBlock->info.uid;
|
||||
}
|
||||
|
||||
uint64_t* groupIdPre = taosHashGet(pOperator->pTaskInfo->tableqinfoList.map, &pBlock->info.uid, sizeof(int64_t));
|
||||
if (groupIdPre) {
|
||||
pInfo->pRes->info.groupId = *groupIdPre;
|
||||
} else {
|
||||
pInfo->pRes->info.groupId = 0;
|
||||
}
|
||||
|
||||
// todo extract method
|
||||
for (int32_t i = 0; i < taosArrayGetSize(pInfo->pColMatchInfo); ++i) {
|
||||
SColMatchInfo* pColMatchInfo = taosArrayGet(pInfo->pColMatchInfo, i);
|
||||
if (!pColMatchInfo->output) {
|
||||
continue;
|
||||
}
|
||||
|
||||
bool colExists = false;
|
||||
for (int32_t j = 0; j < blockDataGetNumOfCols(pBlock); ++j) {
|
||||
SColumnInfoData* pResCol = bdGetColumnInfoData(pBlock, j);
|
||||
if (pResCol->info.colId == pColMatchInfo->colId) {
|
||||
taosArraySet(pInfo->pRes->pDataBlock, pColMatchInfo->targetSlotId, pResCol);
|
||||
colExists = true;
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
// the required column does not exist in the submit block, so set it to all-null values
|
||||
if (!colExists) {
|
||||
SColumnInfoData* pDst = taosArrayGet(pInfo->pRes->pDataBlock, pColMatchInfo->targetSlotId);
|
||||
colDataAppendNNULL(pDst, 0, pBlockInfo->rows);
|
||||
}
|
||||
}
|
||||
|
||||
taosArrayDestroy(pBlock->pDataBlock);
|
||||
|
||||
ASSERT(pInfo->pRes->pDataBlock != NULL);
|
||||
#if 0
|
||||
if (pInfo->pRes->pDataBlock == NULL) {
|
||||
// TODO add log
|
||||
updateInfoDestoryColseWinSBF(pInfo->pUpdateInfo);
|
||||
pOperator->status = OP_EXEC_DONE;
|
||||
pTaskInfo->code = terrno;
|
||||
return -1;
|
||||
}
|
||||
#endif
|
||||
|
||||
// currently only the tbname pseudo column
|
||||
if (pInfo->numOfPseudoExpr > 0) {
|
||||
addTagPseudoColumnData(&pInfo->readHandle, pInfo->pPseudoExpr, pInfo->numOfPseudoExpr, pInfo->pRes);
|
||||
}
|
||||
|
||||
doFilter(pInfo->pCondition, pInfo->pRes);
|
||||
blockDataUpdateTsWindow(pInfo->pRes, pInfo->primaryTsIndex);
|
||||
if (pBlockInfo->rows > 0) {
|
||||
return 0;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
static SSDataBlock* doStreamScan(SOperatorInfo* pOperator) {
|
||||
// NOTE: this operator never checks whether the current status is done
|
||||
SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
|
||||
SStreamScanInfo* pInfo = pOperator->info;
|
||||
|
||||
pTaskInfo->code = pOperator->fpSet._openFn(pOperator);
|
||||
if (pTaskInfo->code != TSDB_CODE_SUCCESS || pOperator->status == OP_EXEC_DONE) {
|
||||
return NULL;
|
||||
/*pTaskInfo->code = pOperator->fpSet._openFn(pOperator);*/
|
||||
/*if (pTaskInfo->code != TSDB_CODE_SUCCESS || pOperator->status == OP_EXEC_DONE) {*/
|
||||
/*return NULL;*/
|
||||
/*}*/
|
||||
|
||||
if (pTaskInfo->streamInfo.prepareStatus.type == TMQ_OFFSET__LOG) {
|
||||
while (1) {
|
||||
SFetchRet ret = {0};
|
||||
tqNextBlock(pInfo->tqReader, &ret);
|
||||
if (ret.fetchType == FETCH_TYPE__DATA) {
|
||||
blockDataCleanup(pInfo->pRes);
|
||||
if (setBlockIntoRes(pInfo, &ret.data) < 0) {
|
||||
ASSERT(0);
|
||||
}
|
||||
/*pTaskInfo->streamInfo.lastStatus = ret.offset;*/
|
||||
if (pInfo->pRes->info.rows > 0) {
|
||||
return pInfo->pRes;
|
||||
/*} else {*/
|
||||
/*tDeleteSSDataBlock(&ret.data);*/
|
||||
}
|
||||
} else if (ret.fetchType == FETCH_TYPE__META) {
|
||||
ASSERT(0);
|
||||
pTaskInfo->streamInfo.lastStatus = ret.offset;
|
||||
pTaskInfo->streamInfo.metaBlk = ret.meta;
|
||||
return NULL;
|
||||
} else if (ret.fetchType == FETCH_TYPE__NONE) {
|
||||
if (ret.offset.version == -1) {
|
||||
pTaskInfo->streamInfo.lastStatus.type = TMQ_OFFSET__LOG;
|
||||
pTaskInfo->streamInfo.lastStatus.version = pTaskInfo->streamInfo.prepareStatus.version - 1;
|
||||
} else {
|
||||
pTaskInfo->streamInfo.lastStatus = ret.offset;
|
||||
}
|
||||
return NULL;
|
||||
} else {
|
||||
ASSERT(0);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
size_t total = taosArrayGetSize(pInfo->pBlockLists);
|
||||
|
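The loop above maps each requested output column onto whatever columns actually arrived in the submit block, and pads the slots of absent columns with NULLs so the result block always matches the expected schema. A minimal, self-contained sketch of the same idea, using hypothetical MatchInfo/Column types rather than TDengine's SColMatchInfo/SColumnInfoData:

```c
// Illustrative sketch only, not TDengine internals.
#include <stdbool.h>
#include <stdio.h>

typedef struct { int colId; int targetSlot; bool output; } MatchInfo;
typedef struct { int colId; const int *data; int rows; } Column;

// Copy each requested column from the source block into its target slot;
// columns missing from the source are marked all-NULL for that slot.
static void mapColumns(const MatchInfo *req, int nReq, const Column *src, int nSrc,
                       const Column **dst, bool *dstIsNull) {
  for (int i = 0; i < nReq; ++i) {
    if (!req[i].output) continue;
    bool found = false;
    for (int j = 0; j < nSrc; ++j) {
      if (src[j].colId == req[i].colId) {
        dst[req[i].targetSlot] = &src[j];
        found = true;
        break;
      }
    }
    dstIsNull[req[i].targetSlot] = !found;  // absent column -> all-NULL slot
  }
}

int main(void) {
  int ts[] = {1, 2, 3};
  Column src[] = {{.colId = 1, .data = ts, .rows = 3}};
  MatchInfo req[] = {{.colId = 1, .targetSlot = 0, .output = true},
                     {.colId = 2, .targetSlot = 1, .output = true}};
  const Column *dst[2] = {0};
  bool isNull[2] = {false, false};
  mapColumns(req, 2, src, 1, dst, isNull);
  printf("slot 0 null=%d, slot 1 null=%d\n", isNull[0], isNull[1]);  // 0, 1
  return 0;
}
```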
@ -1142,7 +1253,7 @@ static SSDataBlock* doStreamScan(SOperatorInfo* pOperator) {
|
|||
if (pInfo->blockType == STREAM_INPUT__DATA_BLOCK) {
|
||||
if (pInfo->validBlockIndex >= total) {
|
||||
/*doClearBufferedBlocks(pInfo);*/
|
||||
pOperator->status = OP_EXEC_DONE;
|
||||
/*pOperator->status = OP_EXEC_DONE;*/
|
||||
return NULL;
|
||||
}
|
||||
|
||||
|
@ -1251,12 +1362,6 @@ static SSDataBlock* doStreamScan(SOperatorInfo* pOperator) {
|
|||
pInfo->pRes->info.type = STREAM_NORMAL;
|
||||
pInfo->pRes->info.capacity = block.info.rows;
|
||||
|
||||
// for generating rollup SMA result, each time is an independent time serie.
|
||||
// TODO temporarily used, when the statement of "partition by tbname" is ready, remove this
|
||||
if (pInfo->assignBlockUid) {
|
||||
pInfo->pRes->info.groupId = block.info.uid;
|
||||
}
|
||||
|
||||
uint64_t* groupIdPre = taosHashGet(pOperator->pTaskInfo->tableqinfoList.map, &block.info.uid, sizeof(int64_t));
|
||||
if (groupIdPre) {
|
||||
pInfo->pRes->info.groupId = *groupIdPre;
|
||||
|
@ -1264,6 +1369,12 @@ static SSDataBlock* doStreamScan(SOperatorInfo* pOperator) {
|
|||
pInfo->pRes->info.groupId = 0;
|
||||
}
|
||||
|
||||
// for generating rollup SMA result, each time is an independent time serie.
|
||||
// TODO temporarily used, when the statement of "partition by tbname" is ready, remove this
|
||||
if (pInfo->assignBlockUid) {
|
||||
pInfo->pRes->info.groupId = block.info.uid;
|
||||
}
|
||||
|
||||
// todo extract method
|
||||
for (int32_t i = 0; i < taosArrayGetSize(pInfo->pColMatchInfo); ++i) {
|
||||
SColMatchInfo* pColMatchInfo = taosArrayGet(pInfo->pColMatchInfo, i);
|
||||
|
@ -1289,6 +1400,9 @@ static SSDataBlock* doStreamScan(SOperatorInfo* pOperator) {
|
|||
}
|
||||
|
||||
taosArrayDestroy(block.pDataBlock);
|
||||
|
||||
ASSERT(pInfo->pRes->pDataBlock != NULL);
|
||||
#if 0
|
||||
if (pInfo->pRes->pDataBlock == NULL) {
|
||||
// TODO add log
|
||||
updateInfoDestoryColseWinSBF(pInfo->pUpdateInfo);
|
||||
|
@ -1296,6 +1410,7 @@ static SSDataBlock* doStreamScan(SOperatorInfo* pOperator) {
|
|||
pTaskInfo->code = terrno;
|
||||
return NULL;
|
||||
}
|
||||
#endif
|
||||
|
||||
// currently only the tbname pseudo column
|
||||
if (pInfo->numOfPseudoExpr > 0) {
|
||||
|
@ -1315,7 +1430,7 @@ static SSDataBlock* doStreamScan(SOperatorInfo* pOperator) {
|
|||
|
||||
if (pBlockInfo->rows == 0) {
|
||||
updateInfoDestoryColseWinSBF(pInfo->pUpdateInfo);
|
||||
pOperator->status = OP_EXEC_DONE;
|
||||
/*pOperator->status = OP_EXEC_DONE;*/
|
||||
} else if (pInfo->pUpdateInfo) {
|
||||
pInfo->tsArrayIndex = 0;
|
||||
checkUpdateData(pInfo, true, pInfo->pRes, true);
|
||||
|
@ -1333,7 +1448,7 @@ static SSDataBlock* doStreamScan(SOperatorInfo* pOperator) {
|
|||
|
||||
return (pBlockInfo->rows == 0) ? NULL : pInfo->pRes;
|
||||
|
||||
} else if (pInfo->blockType == STREAM_INPUT__DATA_SCAN) {
|
||||
} else if (pInfo->blockType == STREAM_INPUT__TABLE_SCAN) {
|
||||
// check reader last status
|
||||
// if not match, reset status
|
||||
SSDataBlock* pResult = doTableScan(pInfo->pTableScanOp);
|
||||
|
@ -1491,6 +1606,8 @@ static void destroySysScanOperator(void* param, int32_t numOfOutput) {
|
|||
|
||||
taosArrayDestroy(pInfo->scanCols);
|
||||
taosMemoryFreeClear(pInfo->pUser);
|
||||
|
||||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
static int32_t getSysTableDbNameColId(const char* pTable) {
|
||||
|
@ -2169,6 +2286,8 @@ static SSDataBlock* doTagScan(SOperatorInfo* pOperator) {
|
|||
static void destroyTagScanOperatorInfo(void* param, int32_t numOfOutput) {
|
||||
STagScanInfo* pInfo = (STagScanInfo*)param;
|
||||
pInfo->pRes = blockDataDestroy(pInfo->pRes);
|
||||
|
||||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
SOperatorInfo* createTagScanOperatorInfo(SReadHandle* pReadHandle, STagScanPhysiNode* pPhyNode,
|
||||
|
@ -2659,6 +2778,8 @@ void destroyTableMergeScanOperatorInfo(void* param, int32_t numOfOutput) {
|
|||
pTableScanInfo->pSortInputBlock = blockDataDestroy(pTableScanInfo->pSortInputBlock);
|
||||
|
||||
taosArrayDestroy(pTableScanInfo->pSortInfo);
|
||||
|
||||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
typedef struct STableMergeScanExecInfo {
|
||||
|
@ -2790,6 +2911,8 @@ static void destroyLastrowScanOperator(void* param, int32_t numOfOutput) {
|
|||
SLastrowScanInfo* pInfo = (SLastrowScanInfo*)param;
|
||||
blockDataDestroy(pInfo->pRes);
|
||||
tsdbLastrowReaderClose(pInfo->pLastrowReader);
|
||||
|
||||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
SOperatorInfo* createLastrowScanOperator(SLastRowScanPhysiNode* pScanNode, SReadHandle* readHandle, SArray* pTableList,
|
||||
|
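destroyLastrowScanOperator above, like the other destroy callbacks touched in this change, ends by freeing the operator's param and clearing the caller's pointer. A small generic sketch of that free-and-NULL idiom; freeClear here is an illustrative stand-in for taosMemoryFreeClear, not the real helper:

```c
#include <stdio.h>
#include <stdlib.h>

// Free the pointee and reset the pointer so a later double-free or
// use-after-free shows up as an easy-to-spot NULL instead of heap corruption.
#define freeClear(p)      \
  do {                    \
    if ((p) != NULL) {    \
      free(p);            \
      (p) = NULL;         \
    }                     \
  } while (0)

int main(void) {
  int *p = malloc(sizeof(int));
  freeClear(p);
  freeClear(p);  // harmless: p is already NULL
  printf("p=%p\n", (void *)p);
  return 0;
}
```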
|
|
@ -235,6 +235,8 @@ void destroyOrderOperatorInfo(void* param, int32_t numOfOutput) {
|
|||
|
||||
taosArrayDestroy(pInfo->pSortInfo);
|
||||
taosArrayDestroy(pInfo->pColMatchInfo);
|
||||
|
||||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
int32_t getExplainExecInfo(SOperatorInfo* pOptr, void** pOptrExplain, uint32_t* len) {
|
||||
|
@ -451,6 +453,8 @@ void destroyGroupSortOperatorInfo(void* param, int32_t numOfOutput) {
|
|||
|
||||
taosArrayDestroy(pInfo->pSortInfo);
|
||||
taosArrayDestroy(pInfo->pColMatchInfo);
|
||||
|
||||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
SOperatorInfo* createGroupSortOperatorInfo(SOperatorInfo* downstream, SGroupSortPhysiNode* pSortPhyNode,
|
||||
|
@ -670,6 +674,8 @@ void destroyMultiwayMergeOperatorInfo(void* param, int32_t numOfOutput) {
|
|||
|
||||
taosArrayDestroy(pInfo->pSortInfo);
|
||||
taosArrayDestroy(pInfo->pColMatchInfo);
|
||||
|
||||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
int32_t getMultiwayMergeExplainExecInfo(SOperatorInfo* pOptr, void** pOptrExplain, uint32_t* len) {
|
||||
|
|
|
@ -1316,7 +1316,7 @@ bool doDeleteIntervalWindow(SAggSupporter* pAggSup, TSKEY ts, uint64_t groupId)
|
|||
// window has been closed
|
||||
return false;
|
||||
}
|
||||
SFilePage* bufPage = getBufPage(pAggSup->pResultBuf, p1->pageId);
|
||||
// SFilePage* bufPage = getBufPage(pAggSup->pResultBuf, p1->pageId);
|
||||
// dBufSetBufPageRecycled(pAggSup->pResultBuf, bufPage);
|
||||
taosHashRemove(pAggSup->pResultRowHashTable, pAggSup->keyBuf, GET_RES_WINDOW_KEY_LEN(bytes));
|
||||
return true;
|
||||
|
@ -1419,7 +1419,7 @@ static int32_t closeIntervalWindow(SHashObj* pHashMap, STimeWindowAggSupp* pSup,
|
|||
ASSERT(pRecyPages != NULL);
|
||||
taosArrayPush(pRecyPages, &pPos->pageId);
|
||||
} else {
|
||||
SFilePage* bufPage = getBufPage(pDiscBuf, pPos->pageId);
|
||||
// SFilePage* bufPage = getBufPage(pDiscBuf, pPos->pageId);
|
||||
// dBufSetBufPageRecycled(pDiscBuf, bufPage);
|
||||
}
|
||||
char keyBuf[GET_RES_WINDOW_KEY_LEN(sizeof(TSKEY))];
|
||||
|
@ -1446,7 +1446,7 @@ static void freeAllPages(SArray* pageIds, SDiskbasedBuf* pDiskBuf) {
|
|||
int32_t size = taosArrayGetSize(pageIds);
|
||||
for (int32_t i = 0; i < size; i++) {
|
||||
int32_t pageId = *(int32_t*)taosArrayGet(pageIds, i);
|
||||
SFilePage* bufPage = getBufPage(pDiskBuf, pageId);
|
||||
// SFilePage* bufPage = getBufPage(pDiskBuf, pageId);
|
||||
// dBufSetBufPageRecycled(pDiskBuf, bufPage);
|
||||
}
|
||||
taosArrayClear(pageIds);
|
||||
|
@ -1557,6 +1557,8 @@ static void destroyStateWindowOperatorInfo(void* param, int32_t numOfOutput) {
|
|||
SStateWindowOperatorInfo* pInfo = (SStateWindowOperatorInfo*)param;
|
||||
cleanupBasicInfo(&pInfo->binfo);
|
||||
taosMemoryFreeClear(pInfo->stateKey.pData);
|
||||
|
||||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
void destroyIntervalOperatorInfo(void* param, int32_t numOfOutput) {
|
||||
|
@ -1564,6 +1566,8 @@ void destroyIntervalOperatorInfo(void* param, int32_t numOfOutput) {
|
|||
cleanupBasicInfo(&pInfo->binfo);
|
||||
cleanupAggSup(&pInfo->aggSup);
|
||||
taosArrayDestroy(pInfo->pRecycledPages);
|
||||
|
||||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
void destroyStreamFinalIntervalOperatorInfo(void* param, int32_t numOfOutput) {
|
||||
|
@ -1586,6 +1590,8 @@ void destroyStreamFinalIntervalOperatorInfo(void* param, int32_t numOfOutput) {
|
|||
}
|
||||
}
|
||||
nodesDestroyNode((SNode*)pInfo->pPhyNode);
|
||||
|
||||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
static bool allInvertible(SqlFunctionCtx* pFCtx, int32_t numOfCols) {
|
||||
|
@ -2319,6 +2325,8 @@ _error:
|
|||
void destroySWindowOperatorInfo(void* param, int32_t numOfOutput) {
|
||||
SSessionAggOperatorInfo* pInfo = (SSessionAggOperatorInfo*)param;
|
||||
cleanupBasicInfo(&pInfo->binfo);
|
||||
|
||||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
SOperatorInfo* createSessionAggOperatorInfo(SOperatorInfo* downstream, SExprInfo* pExprInfo, int32_t numOfCols,
|
||||
|
@ -2995,6 +3003,8 @@ void destroyStreamSessionAggOperatorInfo(void* param, int32_t numOfOutput) {
|
|||
taosMemoryFreeClear(pChInfo);
|
||||
}
|
||||
}
|
||||
|
||||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
int32_t initBasicInfoEx(SOptrBasicInfo* pBasicInfo, SExprSupp* pSup, SExprInfo* pExprInfo, int32_t numOfCols,
|
||||
|
@ -3954,6 +3964,8 @@ void destroyStreamStateOperatorInfo(void* param, int32_t numOfOutput) {
|
|||
taosMemoryFreeClear(pChInfo);
|
||||
}
|
||||
}
|
||||
|
||||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
int64_t getStateWinTsKey(void* data, int32_t index) {
|
||||
|
@ -4357,23 +4369,27 @@ _error:
|
|||
}
|
||||
|
||||
typedef struct SMergeAlignedIntervalAggOperatorInfo {
|
||||
SIntervalAggOperatorInfo intervalAggOperatorInfo;
|
||||
SIntervalAggOperatorInfo *intervalAggOperatorInfo;
|
||||
|
||||
bool hasGroupId;
|
||||
uint64_t groupId;
|
||||
SSDataBlock* prefetchedBlock;
|
||||
bool inputBlocksFinished;
|
||||
|
||||
SNode* pCondition;
|
||||
} SMergeAlignedIntervalAggOperatorInfo;
|
||||
|
||||
void destroyMergeAlignedIntervalOperatorInfo(void* param, int32_t numOfOutput) {
|
||||
SMergeAlignedIntervalAggOperatorInfo* miaInfo = (SMergeAlignedIntervalAggOperatorInfo*)param;
|
||||
destroyIntervalOperatorInfo(&miaInfo->intervalAggOperatorInfo, numOfOutput);
|
||||
destroyIntervalOperatorInfo(miaInfo->intervalAggOperatorInfo, numOfOutput);
|
||||
|
||||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
static int32_t outputMergeAlignedIntervalResult(SOperatorInfo* pOperatorInfo, uint64_t tableGroupId,
|
||||
SSDataBlock* pResultBlock, TSKEY wstartTs) {
|
||||
SMergeAlignedIntervalAggOperatorInfo* miaInfo = pOperatorInfo->info;
|
||||
SIntervalAggOperatorInfo* iaInfo = &miaInfo->intervalAggOperatorInfo;
|
||||
SIntervalAggOperatorInfo* iaInfo = miaInfo->intervalAggOperatorInfo;
|
||||
SExecTaskInfo* pTaskInfo = pOperatorInfo->pTaskInfo;
|
||||
|
||||
SExprSupp* pSup = &pOperatorInfo->exprSupp;
|
||||
|
@ -4394,7 +4410,7 @@ static int32_t outputMergeAlignedIntervalResult(SOperatorInfo* pOperatorInfo, ui
|
|||
static void doMergeAlignedIntervalAggImpl(SOperatorInfo* pOperatorInfo, SResultRowInfo* pResultRowInfo,
|
||||
SSDataBlock* pBlock, int32_t scanFlag, SSDataBlock* pResultBlock) {
|
||||
SMergeAlignedIntervalAggOperatorInfo* miaInfo = pOperatorInfo->info;
|
||||
SIntervalAggOperatorInfo* iaInfo = &miaInfo->intervalAggOperatorInfo;
|
||||
SIntervalAggOperatorInfo* iaInfo = miaInfo->intervalAggOperatorInfo;
|
||||
|
||||
SExecTaskInfo* pTaskInfo = pOperatorInfo->pTaskInfo;
|
||||
SExprSupp* pSup = &pOperatorInfo->exprSupp;
|
||||
|
@ -4459,7 +4475,7 @@ static SSDataBlock* doMergeAlignedIntervalAgg(SOperatorInfo* pOperator) {
|
|||
SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
|
||||
|
||||
SMergeAlignedIntervalAggOperatorInfo* miaInfo = pOperator->info;
|
||||
SIntervalAggOperatorInfo* iaInfo = &miaInfo->intervalAggOperatorInfo;
|
||||
SIntervalAggOperatorInfo* iaInfo = miaInfo->intervalAggOperatorInfo;
|
||||
if (pOperator->status == OP_EXEC_DONE) {
|
||||
return NULL;
|
||||
}
|
||||
|
@ -4498,8 +4514,8 @@ static SSDataBlock* doMergeAlignedIntervalAgg(SOperatorInfo* pOperator) {
|
|||
getTableScanInfo(pOperator, &iaInfo->order, &scanFlag);
|
||||
setInputDataBlock(pOperator, pSup->pCtx, pBlock, iaInfo->order, scanFlag, true);
|
||||
doMergeAlignedIntervalAggImpl(pOperator, &iaInfo->binfo.resultRowInfo, pBlock, scanFlag, pRes);
|
||||
|
||||
if (pRes->info.rows >= pOperator->resultInfo.threshold) {
|
||||
doFilter(miaInfo->pCondition, pRes);
|
||||
if (pRes->info.rows > 0) {
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
@ -4507,7 +4523,7 @@ static SSDataBlock* doMergeAlignedIntervalAgg(SOperatorInfo* pOperator) {
|
|||
pRes->info.groupId = miaInfo->groupId;
|
||||
}
|
||||
|
||||
if (pRes->info.rows == 0) {
|
||||
if (miaInfo->inputBlocksFinished) {
|
||||
doSetOperatorCompleted(pOperator);
|
||||
}
|
||||
|
||||
|
@ -4518,16 +4534,22 @@ static SSDataBlock* doMergeAlignedIntervalAgg(SOperatorInfo* pOperator) {
|
|||
|
||||
SOperatorInfo* createMergeAlignedIntervalOperatorInfo(SOperatorInfo* downstream, SExprInfo* pExprInfo,
|
||||
int32_t numOfCols, SSDataBlock* pResBlock, SInterval* pInterval,
|
||||
int32_t primaryTsSlotId, SExecTaskInfo* pTaskInfo) {
|
||||
int32_t primaryTsSlotId, SNode* pCondition, SExecTaskInfo* pTaskInfo) {
|
||||
SMergeAlignedIntervalAggOperatorInfo* miaInfo = taosMemoryCalloc(1, sizeof(SMergeAlignedIntervalAggOperatorInfo));
|
||||
SOperatorInfo* pOperator = taosMemoryCalloc(1, sizeof(SOperatorInfo));
|
||||
if (miaInfo == NULL || pOperator == NULL) {
|
||||
goto _error;
|
||||
}
|
||||
|
||||
SIntervalAggOperatorInfo* iaInfo = &miaInfo->intervalAggOperatorInfo;
|
||||
miaInfo->intervalAggOperatorInfo = taosMemoryCalloc(1, sizeof(SIntervalAggOperatorInfo));
|
||||
if (miaInfo->intervalAggOperatorInfo == NULL) {
|
||||
goto _error;
|
||||
}
|
||||
|
||||
SIntervalAggOperatorInfo* iaInfo = miaInfo->intervalAggOperatorInfo;
|
||||
SExprSupp* pSup = &pOperator->exprSupp;
|
||||
|
||||
miaInfo->pCondition = pCondition;
|
||||
iaInfo->win = pTaskInfo->window;
|
||||
iaInfo->order = TSDB_ORDER_ASC;
|
||||
iaInfo->interval = *pInterval;
|
||||
|
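SMergeAlignedIntervalAggOperatorInfo now holds its SIntervalAggOperatorInfo by pointer: the create function callocs it (jumping to _error if the allocation fails) and the destroy path releases it through destroyIntervalOperatorInfo. A compact sketch of the same owning-pointer pattern, with made-up Outer/Inner types standing in for the operator structs:

```c
#include <stdlib.h>

typedef struct { int order; } Inner;     // stand-in for SIntervalAggOperatorInfo
typedef struct { Inner *inner; } Outer;  // stand-in for the merge-aligned info

static Outer *createOuter(void) {
  Outer *o = calloc(1, sizeof(Outer));
  if (o == NULL) return NULL;
  o->inner = calloc(1, sizeof(Inner));   // owned by Outer
  if (o->inner == NULL) {                // partial-allocation failure path
    free(o);
    return NULL;
  }
  return o;
}

static void destroyOuter(Outer *o) {
  if (o == NULL) return;
  free(o->inner);                        // release the owned member first
  free(o);
}

int main(void) {
  Outer *o = createOuter();
  destroyOuter(o);
  return 0;
}
```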
@ -4602,6 +4624,8 @@ void destroyMergeIntervalOperatorInfo(void* param, int32_t numOfOutput) {
|
|||
SMergeIntervalAggOperatorInfo* miaInfo = (SMergeIntervalAggOperatorInfo*)param;
|
||||
tdListFree(miaInfo->groupIntervals);
|
||||
destroyIntervalOperatorInfo(&miaInfo->intervalAggOperatorInfo, numOfOutput);
|
||||
|
||||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
static int32_t finalizeWindowResult(SOperatorInfo* pOperatorInfo, uint64_t tableGroupId, STimeWindow* win,
|
||||
|
|
|
@@ -10,12 +10,14 @@
extern "C" {
#endif

#define fnFatal(...) { if (fnDebugFlag & DEBUG_FATAL) { taosPrintLog("FN FATAL ", DEBUG_FATAL, 255, __VA_ARGS__); }}
#define fnError(...) { if (fnDebugFlag & DEBUG_ERROR) { taosPrintLog("FN ERROR ", DEBUG_ERROR, 255, __VA_ARGS__); }}
#define fnWarn(...) { if (fnDebugFlag & DEBUG_WARN) { taosPrintLog("FN WARN ", DEBUG_WARN, 255, __VA_ARGS__); }}
#define fnInfo(...) { if (fnDebugFlag & DEBUG_INFO) { taosPrintLog("FN ", DEBUG_INFO, 255, __VA_ARGS__); }}
#define fnDebug(...) { if (fnDebugFlag & DEBUG_DEBUG) { taosPrintLog("FN ", DEBUG_DEBUG, dDebugFlag, __VA_ARGS__); }}
#define fnTrace(...) { if (fnDebugFlag & DEBUG_TRACE) { taosPrintLog("FN ", DEBUG_TRACE, dDebugFlag, __VA_ARGS__); }}
// clang-format off
#define fnFatal(...) { if (udfDebugFlag & DEBUG_FATAL) { taosPrintLog("UDF FATAL ", DEBUG_FATAL, 255, __VA_ARGS__); }}
#define fnError(...) { if (udfDebugFlag & DEBUG_ERROR) { taosPrintLog("UDF ERROR ", DEBUG_ERROR, 255, __VA_ARGS__); }}
#define fnWarn(...) { if (udfDebugFlag & DEBUG_WARN) { taosPrintLog("UDF WARN ", DEBUG_WARN, 255, __VA_ARGS__); }}
#define fnInfo(...) { if (udfDebugFlag & DEBUG_INFO) { taosPrintLog("UDF ", DEBUG_INFO, 255, __VA_ARGS__); }}
#define fnDebug(...) { if (udfDebugFlag & DEBUG_DEBUG) { taosPrintLog("UDF ", DEBUG_DEBUG, udfDebugFlag, __VA_ARGS__); }}
#define fnTrace(...) { if (udfDebugFlag & DEBUG_TRACE) { taosPrintLog("UDF ", DEBUG_TRACE, udfDebugFlag, __VA_ARGS__); }}
// clang-format on

#ifdef __cplusplus
}
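The hunk above switches the function-module log macros over to a dedicated udfDebugFlag. The underlying pattern is a macro that checks a runtime verbosity mask before formatting anything, so disabled levels cost almost nothing. A stand-alone sketch of that pattern; logFlag and the LOG_* names are illustrative, not TDengine's:

```c
#include <stdarg.h>
#include <stdio.h>

#define LOG_ERROR 1
#define LOG_DEBUG 2

static int logFlag = LOG_ERROR;  // runtime-tunable verbosity mask

static void printLog(const char *prefix, const char *fmt, ...) {
  va_list ap;
  va_start(ap, fmt);
  printf("%s", prefix);
  vprintf(fmt, ap);
  va_end(ap);
}

// Only format the message when the corresponding bit is enabled.
#define udfError(...) { if (logFlag & LOG_ERROR) { printLog("UDF ERROR ", __VA_ARGS__); } }
#define udfDebug(...) { if (logFlag & LOG_DEBUG) { printLog("UDF ", __VA_ARGS__); } }

int main(void) {
  udfError("bad input: %d\n", 42);                       // printed
  udfDebug("not printed, DEBUG bit is off: %d\n", 7);    // skipped cheaply
  return 0;
}
```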
|
|
|
@ -3605,7 +3605,7 @@ static int32_t databaseOptionsToJson(const void* pObj, SJson* pJson) {
|
|||
|
||||
int32_t code = tjsonAddIntegerToObject(pJson, jkDatabaseOptionsBuffer, pNode->buffer);
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code = tjsonAddIntegerToObject(pJson, jkDatabaseOptionsCachelast, pNode->cachelast);
|
||||
code = tjsonAddIntegerToObject(pJson, jkDatabaseOptionsCachelast, pNode->cacheLast);
|
||||
}
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code = tjsonAddIntegerToObject(pJson, jkDatabaseOptionsCompressionLevel, pNode->compressionLevel);
|
||||
|
@ -3667,7 +3667,7 @@ static int32_t jsonToDatabaseOptions(const SJson* pJson, void* pObj) {
|
|||
|
||||
int32_t code = tjsonGetIntValue(pJson, jkDatabaseOptionsBuffer, &pNode->buffer);
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code = tjsonGetTinyIntValue(pJson, jkDatabaseOptionsCachelast, &pNode->cachelast);
|
||||
code = tjsonGetTinyIntValue(pJson, jkDatabaseOptionsCachelast, &pNode->cacheLast);
|
||||
}
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code = tjsonGetTinyIntValue(pJson, jkDatabaseOptionsCompressionLevel, &pNode->compressionLevel);
|
||||
|
|
|
@ -39,6 +39,7 @@ typedef struct SAstCreateContext {
|
|||
typedef enum EDatabaseOptionType {
|
||||
DB_OPTION_BUFFER = 1,
|
||||
DB_OPTION_CACHELAST,
|
||||
DB_OPTION_CACHELASTSIZE,
|
||||
DB_OPTION_COMP,
|
||||
DB_OPTION_DAYS,
|
||||
DB_OPTION_FSYNC,
|
||||
|
|
|
@ -172,6 +172,7 @@ exists_opt(A) ::= .
|
|||
db_options(A) ::= . { A = createDefaultDatabaseOptions(pCxt); }
|
||||
db_options(A) ::= db_options(B) BUFFER NK_INTEGER(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_BUFFER, &C); }
|
||||
db_options(A) ::= db_options(B) CACHELAST NK_INTEGER(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_CACHELAST, &C); }
|
||||
db_options(A) ::= db_options(B) CACHELASTSIZE NK_INTEGER(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_CACHELASTSIZE, &C); }
|
||||
db_options(A) ::= db_options(B) COMP NK_INTEGER(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_COMP, &C); }
|
||||
db_options(A) ::= db_options(B) DURATION NK_INTEGER(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_DAYS, &C); }
|
||||
db_options(A) ::= db_options(B) DURATION NK_VARIABLE(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_DAYS, &C); }
|
||||
|
@ -198,6 +199,7 @@ alter_db_options(A) ::= alter_db_options(B) alter_db_option(C).
|
|||
%destructor alter_db_option { }
|
||||
alter_db_option(A) ::= BUFFER NK_INTEGER(B). { A.type = DB_OPTION_BUFFER; A.val = B; }
|
||||
alter_db_option(A) ::= CACHELAST NK_INTEGER(B). { A.type = DB_OPTION_CACHELAST; A.val = B; }
|
||||
alter_db_option(A) ::= CACHELASTSIZE NK_INTEGER(B). { A.type = DB_OPTION_CACHELASTSIZE; A.val = B; }
|
||||
alter_db_option(A) ::= FSYNC NK_INTEGER(B). { A.type = DB_OPTION_FSYNC; A.val = B; }
|
||||
alter_db_option(A) ::= KEEP integer_list(B). { A.type = DB_OPTION_KEEP; A.pList = B; }
|
||||
alter_db_option(A) ::= KEEP variable_list(B). { A.type = DB_OPTION_KEEP; A.pList = B; }
|
||||
|
|
|
@ -746,7 +746,8 @@ SNode* createDefaultDatabaseOptions(SAstCreateContext* pCxt) {
|
|||
SDatabaseOptions* pOptions = (SDatabaseOptions*)nodesMakeNode(QUERY_NODE_DATABASE_OPTIONS);
|
||||
CHECK_OUT_OF_MEM(pOptions);
|
||||
pOptions->buffer = TSDB_DEFAULT_BUFFER_PER_VNODE;
|
||||
pOptions->cachelast = TSDB_DEFAULT_CACHE_LAST_ROW;
|
||||
pOptions->cacheLast = TSDB_DEFAULT_CACHE_LAST_ROW;
|
||||
pOptions->cacheLastSize = TSDB_DEFAULT_LAST_ROW_MEM;
|
||||
pOptions->compressionLevel = TSDB_DEFAULT_COMP_LEVEL;
|
||||
pOptions->daysPerFile = TSDB_DEFAULT_DAYS_PER_FILE;
|
||||
pOptions->fsyncPeriod = TSDB_DEFAULT_FSYNC_PERIOD;
|
||||
|
@ -772,7 +773,8 @@ SNode* createAlterDatabaseOptions(SAstCreateContext* pCxt) {
|
|||
SDatabaseOptions* pOptions = (SDatabaseOptions*)nodesMakeNode(QUERY_NODE_DATABASE_OPTIONS);
|
||||
CHECK_OUT_OF_MEM(pOptions);
|
||||
pOptions->buffer = -1;
|
||||
pOptions->cachelast = -1;
|
||||
pOptions->cacheLast = -1;
|
||||
pOptions->cacheLastSize = -1;
|
||||
pOptions->compressionLevel = -1;
|
||||
pOptions->daysPerFile = -1;
|
||||
pOptions->fsyncPeriod = -1;
|
||||
|
@ -800,7 +802,10 @@ SNode* setDatabaseOption(SAstCreateContext* pCxt, SNode* pOptions, EDatabaseOpti
|
|||
((SDatabaseOptions*)pOptions)->buffer = taosStr2Int32(((SToken*)pVal)->z, NULL, 10);
|
||||
break;
|
||||
case DB_OPTION_CACHELAST:
|
||||
((SDatabaseOptions*)pOptions)->cachelast = taosStr2Int8(((SToken*)pVal)->z, NULL, 10);
|
||||
((SDatabaseOptions*)pOptions)->cacheLast = taosStr2Int8(((SToken*)pVal)->z, NULL, 10);
|
||||
break;
|
||||
case DB_OPTION_CACHELASTSIZE:
|
||||
((SDatabaseOptions*)pOptions)->cacheLastSize = taosStr2Int32(((SToken*)pVal)->z, NULL, 10);
|
||||
break;
|
||||
case DB_OPTION_COMP:
|
||||
((SDatabaseOptions*)pOptions)->compressionLevel = taosStr2Int8(((SToken*)pVal)->z, NULL, 10);
|
||||
|
|
|
@ -53,6 +53,7 @@ static SKeyword keywordTable[] = {
|
|||
{"BY", TK_BY},
|
||||
{"CACHE", TK_CACHE},
|
||||
{"CACHELAST", TK_CACHELAST},
|
||||
{"CACHELASTSIZE", TK_CACHELASTSIZE},
|
||||
{"CAST", TK_CAST},
|
||||
{"CLIENT_VERSION", TK_CLIENT_VERSION},
|
||||
{"CLUSTER", TK_CLUSTER},
|
||||
|
|
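The keyword table above is an array of {text, token} pairs, and the new CACHELASTSIZE entry is simply added alongside CACHELAST so the tokenizer can recognize it. A tiny sketch of such a table with a linear lookup; a real tokenizer would typically binary-search or hash, and the names here are invented for illustration:

```c
#include <stdio.h>
#include <string.h>

typedef struct { const char *name; int token; } Keyword;

enum { TK_NONE = 0, TK_CACHE, TK_CACHELAST, TK_CACHELASTSIZE };

static const Keyword kKeywords[] = {
    {"CACHE", TK_CACHE},
    {"CACHELAST", TK_CACHELAST},
    {"CACHELASTSIZE", TK_CACHELASTSIZE},
};

// Return the token id for s, or TK_NONE when s is not a keyword.
static int lookupKeyword(const char *s) {
  int n = (int)(sizeof(kKeywords) / sizeof(kKeywords[0]));
  for (int i = 0; i < n; ++i) {
    if (strcmp(kKeywords[i].name, s) == 0) return kKeywords[i].token;
  }
  return TK_NONE;
}

int main(void) {
  printf("%d %d\n", lookupKeyword("CACHELASTSIZE"), lookupKeyword("FOO"));  // 3 0
  return 0;
}
```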
|
@ -2988,7 +2988,8 @@ static int32_t buildCreateDbReq(STranslateContext* pCxt, SCreateDatabaseStmt* pS
|
|||
pReq->compression = pStmt->pOptions->compressionLevel;
|
||||
pReq->replications = pStmt->pOptions->replica;
|
||||
pReq->strict = pStmt->pOptions->strict;
|
||||
pReq->cacheLastRow = pStmt->pOptions->cachelast;
|
||||
pReq->cacheLastRow = pStmt->pOptions->cacheLast;
|
||||
pReq->lastRowMem = pStmt->pOptions->cacheLastSize;
|
||||
pReq->schemaless = pStmt->pOptions->schemaless;
|
||||
pReq->ignoreExist = pStmt->ignoreExists;
|
||||
return buildCreateDbRetentions(pStmt->pOptions->pRetentions, pReq);
|
||||
|
@ -3149,9 +3150,13 @@ static int32_t checkDatabaseOptions(STranslateContext* pCxt, const char* pDbName
|
|||
int32_t code =
|
||||
checkRangeOption(pCxt, "buffer", pOptions->buffer, TSDB_MIN_BUFFER_PER_VNODE, TSDB_MAX_BUFFER_PER_VNODE);
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code = checkRangeOption(pCxt, "cacheLast", pOptions->cachelast, TSDB_MIN_DB_CACHE_LAST_ROW,
|
||||
code = checkRangeOption(pCxt, "cacheLast", pOptions->cacheLast, TSDB_MIN_DB_CACHE_LAST_ROW,
|
||||
TSDB_MAX_DB_CACHE_LAST_ROW);
|
||||
}
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code = checkRangeOption(pCxt, "cacheLastSize", pOptions->cacheLastSize, TSDB_MIN_DB_LAST_ROW_MEM,
|
||||
TSDB_MAX_DB_LAST_ROW_MEM);
|
||||
}
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code = checkRangeOption(pCxt, "compression", pOptions->compressionLevel, TSDB_MIN_COMP_LEVEL, TSDB_MAX_COMP_LEVEL);
|
||||
}
|
||||
|
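checkDatabaseOptions above validates each CREATE DATABASE option against a [min, max] range, and the new cacheLast/cacheLastSize checks follow exactly the same shape as the existing buffer and compression checks. A hedged sketch of that kind of validator, with invented names, codes, and limits rather than TDengine's:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define CODE_SUCCESS       0
#define CODE_OUT_OF_RANGE (-1)

// Reject a value that falls outside [minVal, maxVal]; mirrors the shape of the
// checkRangeOption() calls above, but with hypothetical names and limits.
static int32_t checkRange(const char *name, int64_t val, int64_t minVal, int64_t maxVal) {
  if (val < minVal || val > maxVal) {
    fprintf(stderr, "invalid option %s: %" PRId64 ", valid range: [%" PRId64 ", %" PRId64 "]\n",
            name, val, minVal, maxVal);
    return CODE_OUT_OF_RANGE;
  }
  return CODE_SUCCESS;
}

int main(void) {
  int32_t code = checkRange("cacheLast", 2, 0, 3);          // ok
  if (code == CODE_SUCCESS) {
    code = checkRange("cacheLastSize", 99999, 1, 65536);    // rejected
  }
  printf("code=%d\n", code);
  return 0;
}
```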
@ -3271,7 +3276,8 @@ static void buildAlterDbReq(STranslateContext* pCxt, SAlterDatabaseStmt* pStmt,
|
|||
pReq->fsyncPeriod = pStmt->pOptions->fsyncPeriod;
|
||||
pReq->walLevel = pStmt->pOptions->walLevel;
|
||||
pReq->strict = pStmt->pOptions->strict;
|
||||
pReq->cacheLastRow = pStmt->pOptions->cachelast;
|
||||
pReq->cacheLastRow = pStmt->pOptions->cacheLast;
|
||||
pReq->lastRowMem = pStmt->pOptions->cacheLastSize;
|
||||
pReq->replications = pStmt->pOptions->replica;
|
||||
return;
|
||||
}
|
||||
|
|
|
@ -39,6 +39,9 @@ bool qIsInsertValuesSql(const char* pStr, size_t length) {
|
|||
if (TK_USING == t.type || TK_VALUES == t.type) {
|
||||
return true;
|
||||
}
|
||||
if (0 == t.type) {
|
||||
break;
|
||||
}
|
||||
} while (pStr - pSql < length);
|
||||
return false;
|
||||
}
|
||||
|
|
File diff suppressed because it is too large
|
@ -77,6 +77,7 @@ TEST_F(ParserInitialCTest, createDatabase) {
|
|||
expect.ignoreExist = igExists;
|
||||
expect.buffer = TSDB_DEFAULT_BUFFER_PER_VNODE;
|
||||
expect.cacheLastRow = TSDB_DEFAULT_CACHE_LAST_ROW;
|
||||
expect.lastRowMem = TSDB_DEFAULT_LAST_ROW_MEM;
|
||||
expect.compression = TSDB_DEFAULT_COMP_LEVEL;
|
||||
expect.daysPerFile = TSDB_DEFAULT_DAYS_PER_FILE;
|
||||
expect.fsyncPeriod = TSDB_DEFAULT_FSYNC_PERIOD;
|
||||
|
@ -97,7 +98,8 @@ TEST_F(ParserInitialCTest, createDatabase) {
|
|||
};
|
||||
|
||||
auto setDbBufferFunc = [&](int32_t buffer) { expect.buffer = buffer; };
|
||||
auto setDbCachelastFunc = [&](int8_t CACHELAST) { expect.cacheLastRow = CACHELAST; };
|
||||
auto setDbCachelastFunc = [&](int8_t cachelast) { expect.cacheLastRow = cachelast; };
|
||||
auto setDbCachelastSize = [&](int8_t cachelastSize) { expect.lastRowMem = cachelastSize; };
|
||||
auto setDbCompressionFunc = [&](int8_t compressionLevel) { expect.compression = compressionLevel; };
|
||||
auto setDbDaysFunc = [&](int32_t daysPerFile) { expect.daysPerFile = daysPerFile; };
|
||||
auto setDbFsyncFunc = [&](int32_t fsyncPeriod) { expect.fsyncPeriod = fsyncPeriod; };
|
||||
|
@ -154,6 +156,7 @@ TEST_F(ParserInitialCTest, createDatabase) {
|
|||
ASSERT_EQ(req.replications, expect.replications);
|
||||
ASSERT_EQ(req.strict, expect.strict);
|
||||
ASSERT_EQ(req.cacheLastRow, expect.cacheLastRow);
|
||||
ASSERT_EQ(req.lastRowMem, expect.lastRowMem);
|
||||
// ASSERT_EQ(req.schemaless, expect.schemaless);
|
||||
ASSERT_EQ(req.ignoreExist, expect.ignoreExist);
|
||||
ASSERT_EQ(req.numOfRetensions, expect.numOfRetensions);
|
||||
|
@ -179,6 +182,7 @@ TEST_F(ParserInitialCTest, createDatabase) {
|
|||
setCreateDbReqFunc("wxy_db", 1);
|
||||
setDbBufferFunc(64);
|
||||
setDbCachelastFunc(2);
|
||||
setDbCachelastSize(20);
|
||||
setDbCompressionFunc(1);
|
||||
setDbDaysFunc(100 * 1440);
|
||||
setDbFsyncFunc(100);
|
||||
|
@ -200,6 +204,7 @@ TEST_F(ParserInitialCTest, createDatabase) {
|
|||
run("CREATE DATABASE IF NOT EXISTS wxy_db "
|
||||
"BUFFER 64 "
|
||||
"CACHELAST 2 "
|
||||
"CACHELASTSIZE 20 "
|
||||
"COMP 1 "
|
||||
"DURATION 100 "
|
||||
"FSYNC 100 "
|
||||
|
|
|
@ -1340,6 +1340,17 @@ static void doSetLogicNodeParent(SLogicNode* pNode, SLogicNode* pParent) {
|
|||
|
||||
static void setLogicNodeParent(SLogicNode* pNode) { doSetLogicNodeParent(pNode, NULL); }
|
||||
|
||||
static void setLogicSubplanType(SLogicSubplan* pSubplan) {
|
||||
if (QUERY_NODE_LOGIC_PLAN_VNODE_MODIFY != nodeType(pSubplan->pNode)) {
|
||||
pSubplan->subplanType = SUBPLAN_TYPE_SCAN;
|
||||
} else {
|
||||
SVnodeModifyLogicNode* pModify = (SVnodeModifyLogicNode*)pSubplan->pNode;
|
||||
pSubplan->subplanType = (MODIFY_TABLE_TYPE_INSERT == pModify->modifyType && NULL != pModify->node.pChildren)
|
||||
? SUBPLAN_TYPE_SCAN
|
||||
: SUBPLAN_TYPE_MODIFY;
|
||||
}
|
||||
}
|
||||
|
||||
int32_t createLogicPlan(SPlanContext* pCxt, SLogicSubplan** pLogicSubplan) {
|
||||
SLogicPlanContext cxt = {.pPlanCxt = pCxt};
|
||||
|
||||
|
@ -1354,11 +1365,7 @@ int32_t createLogicPlan(SPlanContext* pCxt, SLogicSubplan** pLogicSubplan) {
|
|||
int32_t code = createQueryLogicNode(&cxt, pCxt->pAstRoot, &pSubplan->pNode);
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
setLogicNodeParent(pSubplan->pNode);
|
||||
if (QUERY_NODE_LOGIC_PLAN_VNODE_MODIFY == nodeType(pSubplan->pNode)) {
|
||||
pSubplan->subplanType = SUBPLAN_TYPE_MODIFY;
|
||||
} else {
|
||||
pSubplan->subplanType = SUBPLAN_TYPE_SCAN;
|
||||
}
|
||||
setLogicSubplanType(pSubplan);
|
||||
}
|
||||
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
|
|
|
@ -1566,7 +1566,7 @@ static int32_t buildInsertSelectSubplan(SPhysiPlanContext* pCxt, SVnodeModifyLog
|
|||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code = createQueryInserter(pCxt, pModify, pSubplan, &pSubplan->pDataSink);
|
||||
}
|
||||
pSubplan->msgType = TDMT_VND_SUBMIT;
|
||||
pSubplan->msgType = TDMT_SCH_MERGE_QUERY;
|
||||
return code;
|
||||
}
|
||||
|
||||
|
|
|
@ -39,7 +39,6 @@ typedef struct SSplitRule {
|
|||
FSplit splitFunc;
|
||||
} SSplitRule;
|
||||
|
||||
// typedef bool (*FSplFindSplitNode)(SSplitContext* pCxt, SLogicSubplan* pSubplan, void* pInfo);
|
||||
typedef bool (*FSplFindSplitNode)(SSplitContext* pCxt, SLogicSubplan* pSubplan, SLogicNode* pNode, void* pInfo);
|
||||
|
||||
static void splSetSubplanVgroups(SLogicSubplan* pSubplan, SLogicNode* pNode) {
|
||||
|
@ -67,6 +66,19 @@ static SLogicSubplan* splCreateScanSubplan(SSplitContext* pCxt, SLogicNode* pNod
|
|||
return pSubplan;
|
||||
}
|
||||
|
||||
static SLogicSubplan* splCreateSubplan(SSplitContext* pCxt, SLogicNode* pNode, ESubplanType subplanType) {
|
||||
SLogicSubplan* pSubplan = (SLogicSubplan*)nodesMakeNode(QUERY_NODE_LOGIC_SUBPLAN);
|
||||
if (NULL == pSubplan) {
|
||||
return NULL;
|
||||
}
|
||||
pSubplan->id.queryId = pCxt->queryId;
|
||||
pSubplan->id.groupId = pCxt->groupId;
|
||||
pSubplan->subplanType = subplanType;
|
||||
pSubplan->pNode = pNode;
|
||||
pNode->pParent = NULL;
|
||||
return pSubplan;
|
||||
}
|
||||
|
||||
static int32_t splCreateExchangeNode(SSplitContext* pCxt, SLogicNode* pChild, SExchangeLogicNode** pOutput) {
|
||||
SExchangeLogicNode* pExchange = (SExchangeLogicNode*)nodesMakeNode(QUERY_NODE_LOGIC_PLAN_EXCHANGE);
|
||||
if (NULL == pExchange) {
|
||||
|
@ -98,6 +110,43 @@ static int32_t splCreateExchangeNodeForSubplan(SSplitContext* pCxt, SLogicSubpla
|
|||
return code;
|
||||
}
|
||||
|
||||
static bool splIsChildSubplan(SLogicNode* pLogicNode, int32_t groupId) {
|
||||
if (QUERY_NODE_LOGIC_PLAN_EXCHANGE == nodeType(pLogicNode)) {
|
||||
return ((SExchangeLogicNode*)pLogicNode)->srcGroupId == groupId;
|
||||
}
|
||||
|
||||
if (QUERY_NODE_LOGIC_PLAN_MERGE == nodeType(pLogicNode)) {
|
||||
return ((SMergeLogicNode*)pLogicNode)->srcGroupId == groupId;
|
||||
}
|
||||
|
||||
SNode* pChild;
|
||||
FOREACH(pChild, pLogicNode->pChildren) {
|
||||
bool isChild = splIsChildSubplan((SLogicNode*)pChild, groupId);
|
||||
if (isChild) {
|
||||
return isChild;
|
||||
}
|
||||
}
|
||||
return false;
|
||||
}
|
||||
|
||||
static int32_t splMountSubplan(SLogicSubplan* pParent, SNodeList* pChildren) {
|
||||
SNode* pChild = NULL;
|
||||
WHERE_EACH(pChild, pChildren) {
|
||||
if (splIsChildSubplan(pParent->pNode, ((SLogicSubplan*)pChild)->id.groupId)) {
|
||||
int32_t code = nodesListMakeAppend(&pParent->pChildren, pChild);
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
REPLACE_NODE(NULL);
|
||||
ERASE_NODE(pChildren);
|
||||
continue;
|
||||
} else {
|
||||
return code;
|
||||
}
|
||||
}
|
||||
WHERE_NEXT;
|
||||
}
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
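splIsChildSubplan above walks a logic-node tree looking for an exchange or merge node whose srcGroupId matches a child subplan's group id, and splMountSubplan uses it to re-attach existing child subplans under their new parent after a split. A stripped-down sketch of the same recursive containment test over a generic tree; Node and its fields are illustrative:

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct Node {
  int          srcGroupId;   // meaningful only on "exchange"-like nodes; 0 otherwise
  int          numOfChildren;
  struct Node *children[4];
} Node;

// True if any node in this subtree references the given group id,
// mirroring the recursion in splIsChildSubplan.
static bool refersToGroup(const Node *n, int groupId) {
  if (n == NULL) return false;
  if (n->srcGroupId == groupId) return true;
  for (int i = 0; i < n->numOfChildren; ++i) {
    if (refersToGroup(n->children[i], groupId)) return true;
  }
  return false;
}

int main(void) {
  Node leaf = {.srcGroupId = 3};
  Node root = {.numOfChildren = 1, .children = {&leaf}};
  printf("%d %d\n", refersToGroup(&root, 3), refersToGroup(&root, 9));  // 1 0
  return 0;
}
```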
static bool splMatchByNode(SSplitContext* pCxt, SLogicSubplan* pSubplan, SLogicNode* pNode, FSplFindSplitNode func,
|
||||
void* pInfo) {
|
||||
if (func(pCxt, pSubplan, pNode, pInfo)) {
|
||||
|
@ -982,56 +1031,6 @@ static int32_t singleTableJoinSplit(SSplitContext* pCxt, SLogicSubplan* pSubplan
|
|||
return code;
|
||||
}
|
||||
|
||||
static bool unionIsChildSubplan(SLogicNode* pLogicNode, int32_t groupId) {
|
||||
if (QUERY_NODE_LOGIC_PLAN_EXCHANGE == nodeType(pLogicNode)) {
|
||||
return ((SExchangeLogicNode*)pLogicNode)->srcGroupId == groupId;
|
||||
}
|
||||
|
||||
if (QUERY_NODE_LOGIC_PLAN_MERGE == nodeType(pLogicNode)) {
|
||||
return ((SMergeLogicNode*)pLogicNode)->srcGroupId == groupId;
|
||||
}
|
||||
|
||||
SNode* pChild;
|
||||
FOREACH(pChild, pLogicNode->pChildren) {
|
||||
bool isChild = unionIsChildSubplan((SLogicNode*)pChild, groupId);
|
||||
if (isChild) {
|
||||
return isChild;
|
||||
}
|
||||
}
|
||||
return false;
|
||||
}
|
||||
|
||||
static int32_t unionMountSubplan(SLogicSubplan* pParent, SNodeList* pChildren) {
|
||||
SNode* pChild = NULL;
|
||||
WHERE_EACH(pChild, pChildren) {
|
||||
if (unionIsChildSubplan(pParent->pNode, ((SLogicSubplan*)pChild)->id.groupId)) {
|
||||
int32_t code = nodesListMakeAppend(&pParent->pChildren, pChild);
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
REPLACE_NODE(NULL);
|
||||
ERASE_NODE(pChildren);
|
||||
continue;
|
||||
} else {
|
||||
return code;
|
||||
}
|
||||
}
|
||||
WHERE_NEXT;
|
||||
}
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
static SLogicSubplan* unionCreateSubplan(SSplitContext* pCxt, SLogicNode* pNode, ESubplanType subplanType) {
|
||||
SLogicSubplan* pSubplan = (SLogicSubplan*)nodesMakeNode(QUERY_NODE_LOGIC_SUBPLAN);
|
||||
if (NULL == pSubplan) {
|
||||
return NULL;
|
||||
}
|
||||
pSubplan->id.queryId = pCxt->queryId;
|
||||
pSubplan->id.groupId = pCxt->groupId;
|
||||
pSubplan->subplanType = subplanType;
|
||||
pSubplan->pNode = pNode;
|
||||
pNode->pParent = NULL;
|
||||
return pSubplan;
|
||||
}
|
||||
|
||||
static int32_t unionSplitSubplan(SSplitContext* pCxt, SLogicSubplan* pUnionSubplan, SLogicNode* pSplitNode) {
|
||||
SNodeList* pSubplanChildren = pUnionSubplan->pChildren;
|
||||
pUnionSubplan->pChildren = NULL;
|
||||
|
@ -1040,11 +1039,11 @@ static int32_t unionSplitSubplan(SSplitContext* pCxt, SLogicSubplan* pUnionSubpl
|
|||
|
||||
SNode* pChild = NULL;
|
||||
FOREACH(pChild, pSplitNode->pChildren) {
|
||||
SLogicSubplan* pNewSubplan = unionCreateSubplan(pCxt, (SLogicNode*)pChild, pUnionSubplan->subplanType);
|
||||
SLogicSubplan* pNewSubplan = splCreateSubplan(pCxt, (SLogicNode*)pChild, pUnionSubplan->subplanType);
|
||||
code = nodesListMakeStrictAppend(&pUnionSubplan->pChildren, (SNode*)pNewSubplan);
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
REPLACE_NODE(NULL);
|
||||
code = unionMountSubplan(pNewSubplan, pSubplanChildren);
|
||||
code = splMountSubplan(pNewSubplan, pSubplanChildren);
|
||||
}
|
||||
if (TSDB_CODE_SUCCESS != code) {
|
||||
break;
|
||||
|
@ -1219,14 +1218,24 @@ static int32_t insertSelectSplit(SSplitContext* pCxt, SLogicSubplan* pSubplan) {
|
|||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
int32_t code = splCreateExchangeNodeForSubplan(pCxt, info.pSubplan, info.pQueryRoot, info.pSubplan->subplanType);
|
||||
SLogicSubplan* pNewSubplan = NULL;
|
||||
SNodeList* pSubplanChildren = info.pSubplan->pChildren;
|
||||
ESubplanType subplanType = info.pSubplan->subplanType;
|
||||
int32_t code = splCreateExchangeNodeForSubplan(pCxt, info.pSubplan, info.pQueryRoot, SUBPLAN_TYPE_MODIFY);
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code = nodesListMakeStrictAppend(&info.pSubplan->pChildren, (SNode*)splCreateScanSubplan(pCxt, info.pQueryRoot, 0));
|
||||
pNewSubplan = splCreateSubplan(pCxt, info.pQueryRoot, subplanType);
|
||||
if (NULL == pNewSubplan) {
|
||||
code = TSDB_CODE_OUT_OF_MEMORY;
|
||||
}
|
||||
}
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
info.pSubplan->subplanType = SUBPLAN_TYPE_MODIFY;
|
||||
SPLIT_FLAG_SET_MASK(info.pSubplan->splitFlag, SPLIT_FLAG_INSERT_SPLIT);
|
||||
code = nodesListMakeStrictAppend(&info.pSubplan->pChildren, (SNode*)pNewSubplan);
|
||||
}
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code = splMountSubplan(pNewSubplan, pSubplanChildren);
|
||||
}
|
||||
|
||||
SPLIT_FLAG_SET_MASK(info.pSubplan->splitFlag, SPLIT_FLAG_INSERT_SPLIT);
|
||||
++(pCxt->groupId);
|
||||
pCxt->split = true;
|
||||
return code;
|
||||
|
|
|
@ -80,12 +80,19 @@ typedef struct SQWDebug {
|
|||
|
||||
extern SQWDebug gQWDebug;
|
||||
|
||||
typedef struct SQWMsgInfo {
|
||||
int8_t taskType;
|
||||
int8_t explain;
|
||||
int8_t needFetch;
|
||||
} SQWMsgInfo;
|
||||
|
||||
typedef struct SQWMsg {
|
||||
void *node;
|
||||
int32_t code;
|
||||
int32_t msgType;
|
||||
char *msg;
|
||||
int32_t msgLen;
|
||||
SQWMsgInfo msgInfo;
|
||||
SRpcHandleInfo connInfo;
|
||||
} SQWMsg;
|
||||
|
||||
|
@ -122,15 +129,18 @@ typedef struct SQWTaskCtx {
|
|||
int8_t phase;
|
||||
int8_t taskType;
|
||||
int8_t explain;
|
||||
int8_t needFetch;
|
||||
int32_t queryType;
|
||||
int32_t fetchType;
|
||||
int32_t execId;
|
||||
|
||||
bool queryRsped;
|
||||
bool queryFetched;
|
||||
bool queryEnd;
|
||||
bool queryContinue;
|
||||
bool queryInQueue;
|
||||
int32_t rspCode;
|
||||
int64_t affectedRows; // for insert ...select stmt
|
||||
|
||||
SRpcHandleInfo ctrlConnInfo;
|
||||
SRpcHandleInfo dataConnInfo;
|
||||
|
@ -162,7 +172,7 @@ typedef struct SQWMsgStat {
|
|||
uint64_t queryProcessed;
|
||||
uint64_t cqueryProcessed;
|
||||
uint64_t fetchProcessed;
|
||||
uint64_t fetchRspProcessed;
|
||||
uint64_t rspProcessed;
|
||||
uint64_t cancelProcessed;
|
||||
uint64_t dropProcessed;
|
||||
uint64_t hbProcessed;
|
||||
|
@ -212,8 +222,8 @@ typedef struct SQWorkerMgmt {
|
|||
#define QW_STAT_GET(_item) atomic_load_64(&(_item))
|
||||
|
||||
#define QW_GET_EVENT(ctx, event) atomic_load_8(&(ctx)->events[event])
|
||||
#define QW_IS_EVENT_RECEIVED(ctx, event) (QW_GET_EVENT(ctx, event) == QW_EVENT_RECEIVED)
|
||||
#define QW_IS_EVENT_PROCESSED(ctx, event) (QW_GET_EVENT(ctx, event) == QW_EVENT_PROCESSED)
|
||||
#define QW_EVENT_RECEIVED(ctx, event) (QW_GET_EVENT(ctx, event) == QW_EVENT_RECEIVED)
|
||||
#define QW_EVENT_PROCESSED(ctx, event) (QW_GET_EVENT(ctx, event) == QW_EVENT_PROCESSED)
|
||||
#define QW_SET_EVENT_RECEIVED(ctx, event) atomic_store_8(&(ctx)->events[event], QW_EVENT_RECEIVED)
|
||||
#define QW_SET_EVENT_PROCESSED(ctx, event) atomic_store_8(&(ctx)->events[event], QW_EVENT_PROCESSED)
|
||||
|
||||
|
@ -222,13 +232,8 @@ typedef struct SQWorkerMgmt {
|
|||
#define QW_SET_RSP_CODE(ctx, code) atomic_store_32(&(ctx)->rspCode, code)
|
||||
#define QW_UPDATE_RSP_CODE(ctx, code) atomic_val_compare_exchange_32(&(ctx)->rspCode, 0, code)
|
||||
|
||||
#define QW_IS_QUERY_RUNNING(ctx) (QW_GET_PHASE(ctx) == QW_PHASE_PRE_QUERY || QW_GET_PHASE(ctx) == QW_PHASE_PRE_CQUERY)
|
||||
#define QW_QUERY_RUNNING(ctx) (QW_GET_PHASE(ctx) == QW_PHASE_PRE_QUERY || QW_GET_PHASE(ctx) == QW_PHASE_PRE_CQUERY)
|
||||
|
||||
#define QW_TASK_NOT_EXIST(code) (TSDB_CODE_QRY_SCH_NOT_EXIST == (code) || TSDB_CODE_QRY_TASK_NOT_EXIST == (code))
|
||||
#define QW_TASK_ALREADY_EXIST(code) (TSDB_CODE_QRY_TASK_ALREADY_EXIST == (code))
|
||||
#define QW_TASK_READY(status) \
|
||||
(status == JOB_TASK_STATUS_SUCC || status == JOB_TASK_STATUS_FAIL || status == JOB_TASK_STATUS_CANCELLED || \
|
||||
status == JOB_TASK_STATUS_PART_SUCC)
|
||||
#define QW_SET_QTID(id, qId, tId, eId) \
|
||||
do { \
|
||||
*(uint64_t *)(id) = (qId); \
|
||||
|
|
|
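The QW_*_EVENT macros above read and write per-task event flags with atomic_load_8/atomic_store_8, so the query thread and the control path observe a consistent RECEIVED/PROCESSED state without a lock. A C11 `<stdatomic.h>` sketch of the same idea; the event names and layout are illustrative, not the qworker's:

```c
#include <stdatomic.h>
#include <stdio.h>

enum { EVT_INIT = 0, EVT_RECEIVED = 1, EVT_PROCESSED = 2 };

typedef struct {
  atomic_schar events[4];  // one slot per event kind, e.g. DROP, FETCH, ...
} TaskCtx;

#define GET_EVENT(ctx, e)          atomic_load(&(ctx)->events[e])
#define EVENT_RECEIVED(ctx, e)     (GET_EVENT(ctx, e) == EVT_RECEIVED)
#define SET_EVENT_RECEIVED(ctx, e) atomic_store(&(ctx)->events[e], EVT_RECEIVED)

int main(void) {
  TaskCtx ctx = {0};
  SET_EVENT_RECEIVED(&ctx, 1);  // e.g. a FETCH request arrived
  printf("fetch received: %d\n", EVENT_RECEIVED(&ctx, 1));
  return 0;
}
```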
@ -25,7 +25,7 @@ extern "C" {
|
|||
|
||||
int32_t qwAbortPrerocessQuery(QW_FPARAMS_DEF);
|
||||
int32_t qwPrerocessQuery(QW_FPARAMS_DEF, SQWMsg *qwMsg);
|
||||
int32_t qwProcessQuery(QW_FPARAMS_DEF, SQWMsg *qwMsg, int8_t taskType, int8_t explain, const char* sql);
|
||||
int32_t qwProcessQuery(QW_FPARAMS_DEF, SQWMsg *qwMsg, const char* sql);
|
||||
int32_t qwProcessCQuery(QW_FPARAMS_DEF, SQWMsg *qwMsg);
|
||||
int32_t qwProcessReady(QW_FPARAMS_DEF, SQWMsg *qwMsg);
|
||||
int32_t qwProcessFetch(QW_FPARAMS_DEF, SQWMsg *qwMsg);
|
||||
|
@ -38,7 +38,7 @@ int32_t qwBuildAndSendCancelRsp(SRpcHandleInfo *pConn, int32_t code);
|
|||
int32_t qwBuildAndSendFetchRsp(int32_t rspType, SRpcHandleInfo *pConn, SRetrieveTableRsp *pRsp, int32_t dataLength, int32_t code);
|
||||
void qwBuildFetchRsp(void *msg, SOutputData *input, int32_t len, bool qComplete);
|
||||
int32_t qwBuildAndSendCQueryMsg(QW_FPARAMS_DEF, SRpcHandleInfo *pConn);
|
||||
int32_t qwBuildAndSendQueryRsp(int32_t rspType, SRpcHandleInfo *pConn, int32_t code, STbVerInfo* tbInfo);
|
||||
int32_t qwBuildAndSendQueryRsp(int32_t rspType, SRpcHandleInfo *pConn, int32_t code, SQWTaskCtx *ctx);
|
||||
int32_t qwBuildAndSendExplainRsp(SRpcHandleInfo *pConn, SExplainExecInfo *execInfo, int32_t num);
|
||||
void qwFreeFetchRsp(void *msg);
|
||||
int32_t qwMallocFetchRsp(int32_t length, SRetrieveTableRsp **rsp);
|
||||
|
|
|
@@ -43,13 +43,16 @@ void qwFreeFetchRsp(void *msg) {
  }
}

int32_t qwBuildAndSendQueryRsp(int32_t rspType, SRpcHandleInfo *pConn, int32_t code, STbVerInfo* tbInfo) {
int32_t qwBuildAndSendQueryRsp(int32_t rspType, SRpcHandleInfo *pConn, int32_t code, SQWTaskCtx *ctx) {
  STbVerInfo* tbInfo = ctx ? &ctx->tbInfo : NULL;
  int64_t affectedRows = ctx ? ctx->affectedRows : 0;
  SQueryTableRsp *pRsp = (SQueryTableRsp *)rpcMallocCont(sizeof(SQueryTableRsp));
  pRsp->code = code;
  pRsp->code = htonl(code);
  pRsp->affectedRows = htobe64(affectedRows);
  if (tbInfo) {
    strcpy(pRsp->tbFName, tbInfo->tbFName);
    pRsp->sversion = tbInfo->sversion;
    pRsp->tversion = tbInfo->tversion;
    pRsp->sversion = htonl(tbInfo->sversion);
    pRsp->tversion = htonl(tbInfo->tversion);
  }

  SRpcMsg rpcRsp = {
|
||||
|
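The response above is now serialized with htonl()/htobe64() before it goes on the wire, and the scheduler side (in schHandleResponseMsg, further down) undoes it with ntohl()/be64toh() before using the fields. A minimal round-trip sketch of that convention; htobe64/be64toh live in the non-portable `<endian.h>` on glibc, so this sticks to the portable 32-bit calls, and WireRsp is an invented stand-in for SQueryTableRsp:

```c
#include <arpa/inet.h>  // htonl, ntohl
#include <stdint.h>
#include <stdio.h>

typedef struct {
  int32_t code;      // carried in network byte order
  int32_t sversion;  // carried in network byte order
} WireRsp;           // illustrative stand-in for SQueryTableRsp

int main(void) {
  // sender: convert host -> network before handing the buffer to the RPC layer
  WireRsp rsp = {.code = (int32_t)htonl(0), .sversion = (int32_t)htonl(7)};

  // receiver: convert network -> host before reading the fields
  int32_t code = (int32_t)ntohl((uint32_t)rsp.code);
  int32_t sversion = (int32_t)ntohl((uint32_t)rsp.sversion);
  printf("code=%d sversion=%d\n", code, sversion);
  return 0;
}
```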
@ -366,10 +369,14 @@ int32_t qWorkerProcessQueryMsg(void *node, void *qWorkerMgmt, SRpcMsg *pMsg, int
|
|||
int32_t eId = msg->execId;
|
||||
|
||||
SQWMsg qwMsg = {.node = node, .msg = msg->msg + msg->sqlLen, .msgLen = msg->phyLen, .connInfo = pMsg->info, .msgType = pMsg->msgType};
|
||||
qwMsg.msgInfo.explain = msg->explain;
|
||||
qwMsg.msgInfo.taskType = msg->taskType;
|
||||
qwMsg.msgInfo.needFetch = msg->needFetch;
|
||||
|
||||
char * sql = strndup(msg->msg, msg->sqlLen);
|
||||
QW_SCH_TASK_DLOG("processQuery start, node:%p, type:%s, handle:%p, sql:%s", node, TMSG_INFO(pMsg->msgType), pMsg->info.handle, sql);
|
||||
|
||||
QW_ERR_RET(qwProcessQuery(QW_FPARAMS(), &qwMsg, msg->taskType, msg->explain, sql));
|
||||
QW_ERR_RET(qwProcessQuery(QW_FPARAMS(), &qwMsg, sql));
|
||||
QW_SCH_TASK_DLOG("processQuery end, node:%p", node);
|
||||
|
||||
return TSDB_CODE_SUCCESS;
|
||||
|
@ -447,14 +454,14 @@ int32_t qWorkerProcessFetchMsg(void *node, void *qWorkerMgmt, SRpcMsg *pMsg, int
|
|||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
int32_t qWorkerProcessFetchRsp(void *node, void *qWorkerMgmt, SRpcMsg *pMsg, int64_t ts) {
|
||||
int32_t qWorkerProcessRspMsg(void *node, void *qWorkerMgmt, SRpcMsg *pMsg, int64_t ts) {
|
||||
SQWorker * mgmt = (SQWorker *)qWorkerMgmt;
|
||||
if (mgmt) {
|
||||
qwUpdateTimeInQueue(mgmt, ts, FETCH_QUEUE);
|
||||
QW_STAT_INC(mgmt->stat.msgStat.fetchRspProcessed, 1);
|
||||
QW_STAT_INC(mgmt->stat.msgStat.rspProcessed, 1);
|
||||
}
|
||||
|
||||
qProcessFetchRsp(NULL, pMsg, NULL);
|
||||
qProcessRspMsg(NULL, pMsg, NULL);
|
||||
pMsg->pCont = NULL;
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
|
|
@ -57,6 +57,10 @@ int32_t qwHandleTaskComplete(QW_FPARAMS_DEF, SQWTaskCtx *ctx) {
|
|||
connInfo.ahandle = NULL;
|
||||
QW_ERR_RET(qwBuildAndSendExplainRsp(&connInfo, execInfo, resNum));
|
||||
}
|
||||
|
||||
if (!ctx->needFetch) {
|
||||
dsGetDataLength(ctx->sinkHandle, &ctx->affectedRows, NULL);
|
||||
}
|
||||
}
|
||||
|
||||
return TSDB_CODE_SUCCESS;
|
||||
|
@ -123,11 +127,11 @@ int32_t qwExecTask(QW_FPARAMS_DEF, SQWTaskCtx *ctx, bool *queryEnd) {
|
|||
break;
|
||||
}
|
||||
|
||||
if (QW_IS_EVENT_RECEIVED(ctx, QW_EVENT_READY) && execNum >= QW_DEFAULT_SHORT_RUN_TIMES) {
|
||||
if (ctx->needFetch && (!ctx->queryRsped) && execNum >= QW_DEFAULT_SHORT_RUN_TIMES) {
|
||||
break;
|
||||
}
|
||||
|
||||
if (QW_IS_EVENT_RECEIVED(ctx, QW_EVENT_FETCH)) {
|
||||
if (QW_EVENT_RECEIVED(ctx, QW_EVENT_FETCH)) {
|
||||
break;
|
||||
}
|
||||
|
||||
|
@ -184,7 +188,7 @@ int32_t qwGenerateSchHbRsp(SQWorker *mgmt, SQWSchStatus *sch, SQWHbInfo *hbInfo)
|
|||
}
|
||||
|
||||
int32_t qwGetQueryResFromSink(QW_FPARAMS_DEF, SQWTaskCtx *ctx, int32_t *dataLen, void **rspMsg, SOutputData *pOutput) {
|
||||
int32_t len = 0;
|
||||
int64_t len = 0;
|
||||
SRetrieveTableRsp *rsp = NULL;
|
||||
bool queryEnd = false;
|
||||
int32_t code = 0;
|
||||
|
@ -243,7 +247,7 @@ int32_t qwGetQueryResFromSink(QW_FPARAMS_DEF, SQWTaskCtx *ctx, int32_t *dataLen,
|
|||
}
|
||||
|
||||
int32_t qwGetDeleteResFromSink(QW_FPARAMS_DEF, SQWTaskCtx *ctx, SDeleteRes *pRes) {
|
||||
int32_t len = 0;
|
||||
int64_t len = 0;
|
||||
bool queryEnd = false;
|
||||
int32_t code = 0;
|
||||
SOutputData output = {0};
|
||||
|
@ -251,7 +255,7 @@ int32_t qwGetDeleteResFromSink(QW_FPARAMS_DEF, SQWTaskCtx *ctx, SDeleteRes *pRes
|
|||
dsGetDataLength(ctx->sinkHandle, &len, &queryEnd);
|
||||
|
||||
if (len <= 0 || len != sizeof(SDeleterRes)) {
|
||||
QW_TASK_ELOG("invalid length from dsGetDataLength, length:%d", len);
|
||||
QW_TASK_ELOG("invalid length from dsGetDataLength, length:%" PRId64, len);
|
||||
QW_ERR_RET(TSDB_CODE_QRY_INVALID_INPUT);
|
||||
}
|
||||
|
||||
|
@ -282,7 +286,6 @@ int32_t qwGetDeleteResFromSink(QW_FPARAMS_DEF, SQWTaskCtx *ctx, SDeleteRes *pRes
|
|||
int32_t qwHandlePrePhaseEvents(QW_FPARAMS_DEF, int8_t phase, SQWPhaseInput *input, SQWPhaseOutput *output) {
|
||||
int32_t code = 0;
|
||||
SQWTaskCtx *ctx = NULL;
|
||||
SRpcHandleInfo *cancelConnection = NULL;
|
||||
|
||||
QW_TASK_DLOG("start to handle event at phase %s", qwPhaseStr(phase));
|
||||
|
||||
|
@ -303,13 +306,13 @@ int32_t qwHandlePrePhaseEvents(QW_FPARAMS_DEF, int8_t phase, SQWPhaseInput *inpu
|
|||
|
||||
switch (phase) {
|
||||
case QW_PHASE_PRE_QUERY: {
|
||||
if (QW_IS_EVENT_PROCESSED(ctx, QW_EVENT_DROP)) {
|
||||
if (QW_EVENT_PROCESSED(ctx, QW_EVENT_DROP)) {
|
||||
QW_TASK_ELOG("task already dropped at wrong phase %s", qwPhaseStr(phase));
|
||||
QW_ERR_JRET(TSDB_CODE_QRY_TASK_STATUS_ERROR);
|
||||
break;
|
||||
}
|
||||
|
||||
if (QW_IS_EVENT_RECEIVED(ctx, QW_EVENT_DROP)) {
|
||||
if (QW_EVENT_RECEIVED(ctx, QW_EVENT_DROP)) {
|
||||
QW_ERR_JRET(qwDropTask(QW_FPARAMS()));
|
||||
|
||||
//qwBuildAndSendDropRsp(&ctx->ctrlConnInfo, code);
|
||||
|
@ -323,29 +326,29 @@ int32_t qwHandlePrePhaseEvents(QW_FPARAMS_DEF, int8_t phase, SQWPhaseInput *inpu
|
|||
break;
|
||||
}
|
||||
case QW_PHASE_PRE_FETCH: {
|
||||
if (QW_IS_EVENT_PROCESSED(ctx, QW_EVENT_DROP) || QW_IS_EVENT_RECEIVED(ctx, QW_EVENT_DROP)) {
|
||||
if (QW_EVENT_PROCESSED(ctx, QW_EVENT_DROP) || QW_EVENT_RECEIVED(ctx, QW_EVENT_DROP)) {
|
||||
QW_TASK_WLOG("task dropping or already dropped, phase:%s", qwPhaseStr(phase));
|
||||
QW_ERR_JRET(TSDB_CODE_QRY_TASK_DROPPED);
|
||||
}
|
||||
|
||||
if (QW_IS_EVENT_RECEIVED(ctx, QW_EVENT_FETCH)) {
|
||||
if (QW_EVENT_RECEIVED(ctx, QW_EVENT_FETCH)) {
|
||||
QW_TASK_WLOG("last fetch still not processed, phase:%s", qwPhaseStr(phase));
|
||||
QW_ERR_JRET(TSDB_CODE_QRY_DUPLICATTED_OPERATION);
|
||||
}
|
||||
|
||||
if (!QW_IS_EVENT_PROCESSED(ctx, QW_EVENT_READY)) {
|
||||
if (!ctx->queryRsped) {
|
||||
QW_TASK_ELOG("ready msg has not been processed, phase:%s", qwPhaseStr(phase));
|
||||
QW_ERR_JRET(TSDB_CODE_QRY_TASK_MSG_ERROR);
|
||||
}
|
||||
break;
|
||||
}
|
||||
case QW_PHASE_PRE_CQUERY: {
|
||||
if (QW_IS_EVENT_PROCESSED(ctx, QW_EVENT_DROP)) {
|
||||
if (QW_EVENT_PROCESSED(ctx, QW_EVENT_DROP)) {
|
||||
QW_TASK_WLOG("task already dropped, phase:%s", qwPhaseStr(phase));
|
||||
QW_ERR_JRET(TSDB_CODE_QRY_TASK_DROPPED);
|
||||
}
|
||||
|
||||
if (QW_IS_EVENT_RECEIVED(ctx, QW_EVENT_DROP)) {
|
||||
if (QW_EVENT_RECEIVED(ctx, QW_EVENT_DROP)) {
|
||||
QW_ERR_JRET(qwDropTask(QW_FPARAMS()));
|
||||
|
||||
//qwBuildAndSendDropRsp(&ctx->ctrlConnInfo, code);
|
||||
|
@ -374,11 +377,6 @@ _return:
|
|||
qwReleaseTaskCtx(mgmt, ctx);
|
||||
}
|
||||
|
||||
if (cancelConnection) {
|
||||
qwBuildAndSendCancelRsp(cancelConnection, code);
|
||||
QW_TASK_DLOG("cancel rsp send, handle:%p, code:%x - %s", cancelConnection->handle, code, tstrerror(code));
|
||||
}
|
||||
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
QW_TASK_ELOG("end to handle event at phase %s, code:%s", qwPhaseStr(phase), tstrerror(code));
|
||||
} else {
|
||||
|
@ -400,7 +398,7 @@ int32_t qwHandlePostPhaseEvents(QW_FPARAMS_DEF, int8_t phase, SQWPhaseInput *inp
|
|||
|
||||
QW_LOCK(QW_WRITE, &ctx->lock);
|
||||
|
||||
if (QW_IS_EVENT_PROCESSED(ctx, QW_EVENT_DROP)) {
|
||||
if (QW_EVENT_PROCESSED(ctx, QW_EVENT_DROP)) {
|
||||
QW_TASK_WLOG("task already dropped, phase:%s", qwPhaseStr(phase));
|
||||
QW_ERR_JRET(TSDB_CODE_QRY_TASK_DROPPED);
|
||||
}
|
||||
|
@ -409,10 +407,10 @@ int32_t qwHandlePostPhaseEvents(QW_FPARAMS_DEF, int8_t phase, SQWPhaseInput *inp
|
|||
connInfo = ctx->ctrlConnInfo;
|
||||
rspConnection = &connInfo;
|
||||
|
||||
QW_SET_EVENT_PROCESSED(ctx, QW_EVENT_READY);
|
||||
ctx->queryRsped = true;
|
||||
}
|
||||
|
||||
if (QW_IS_EVENT_RECEIVED(ctx, QW_EVENT_DROP)) {
|
||||
if (QW_EVENT_RECEIVED(ctx, QW_EVENT_DROP)) {
|
||||
if (QW_PHASE_POST_FETCH == phase) {
|
||||
QW_TASK_WLOG("drop received at wrong phase %s", qwPhaseStr(phase));
|
||||
QW_ERR_JRET(TSDB_CODE_QRY_APP_ERROR);
|
||||
|
@ -440,7 +438,7 @@ _return:
|
|||
}
|
||||
|
||||
if (rspConnection) {
|
||||
qwBuildAndSendQueryRsp(input->msgType + 1, rspConnection, code, ctx ? &ctx->tbInfo : NULL);
|
||||
qwBuildAndSendQueryRsp(input->msgType + 1, rspConnection, code, ctx);
|
||||
QW_TASK_DLOG("query msg rsped, handle:%p, code:%x - %s", rspConnection->handle, code, tstrerror(code));
|
||||
}
|
||||
|
||||
|
@ -501,7 +499,7 @@ _return:
|
|||
}
|
||||
|
||||
|
||||
int32_t qwProcessQuery(QW_FPARAMS_DEF, SQWMsg *qwMsg, int8_t taskType, int8_t explain, const char* sql) {
|
||||
int32_t qwProcessQuery(QW_FPARAMS_DEF, SQWMsg *qwMsg, const char* sql) {
|
||||
int32_t code = 0;
|
||||
bool queryRsped = false;
|
||||
SSubplan *plan = NULL;
|
||||
|
@ -514,8 +512,9 @@ int32_t qwProcessQuery(QW_FPARAMS_DEF, SQWMsg *qwMsg, int8_t taskType, int8_t ex
|
|||
|
||||
QW_ERR_JRET(qwGetTaskCtx(QW_FPARAMS(), &ctx));
|
||||
|
||||
ctx->taskType = taskType;
|
||||
ctx->explain = explain;
|
||||
ctx->taskType = qwMsg->msgInfo.taskType;
|
||||
ctx->explain = qwMsg->msgInfo.explain;
|
||||
ctx->needFetch = qwMsg->msgInfo.needFetch;
|
||||
ctx->queryType = qwMsg->msgType;
|
||||
|
||||
QW_TASK_DLOGL("subplan json string, len:%d, %s", qwMsg->msgLen, qwMsg->msg);
|
||||
|
@ -585,7 +584,7 @@ int32_t qwProcessCQuery(QW_FPARAMS_DEF, SQWMsg *qwMsg) {
|
|||
|
||||
QW_ERR_JRET(qwExecTask(QW_FPARAMS(), ctx, &queryEnd));
|
||||
|
||||
if (QW_IS_EVENT_RECEIVED(ctx, QW_EVENT_FETCH)) {
|
||||
if (QW_EVENT_RECEIVED(ctx, QW_EVENT_FETCH)) {
|
||||
SOutputData sOutput = {0};
|
||||
QW_ERR_JRET(qwGetQueryResFromSink(QW_FPARAMS(), ctx, &dataLen, &rsp, &sOutput));
|
||||
|
||||
|
@ -622,7 +621,7 @@ int32_t qwProcessCQuery(QW_FPARAMS_DEF, SQWMsg *qwMsg) {
|
|||
break;
|
||||
}
|
||||
|
||||
if (code && QW_IS_EVENT_RECEIVED(ctx, QW_EVENT_FETCH)) {
|
||||
if (code && QW_EVENT_RECEIVED(ctx, QW_EVENT_FETCH)) {
|
||||
QW_SET_EVENT_PROCESSED(ctx, QW_EVENT_FETCH);
|
||||
qwFreeFetchRsp(rsp);
|
||||
rsp = NULL;
|
||||
|
@ -686,7 +685,7 @@ int32_t qwProcessFetch(QW_FPARAMS_DEF, SQWMsg *qwMsg) {
|
|||
locked = true;
|
||||
|
||||
// RC WARNING
|
||||
if (QW_IS_QUERY_RUNNING(ctx)) {
|
||||
if (QW_QUERY_RUNNING(ctx)) {
|
||||
atomic_store_8((int8_t *)&ctx->queryContinue, 1);
|
||||
} else if (0 == atomic_load_8((int8_t *)&ctx->queryInQueue)) {
|
||||
qwUpdateTaskStatus(QW_FPARAMS(), JOB_TASK_STATUS_EXEC);
|
||||
|
@ -714,7 +713,7 @@ _return:
|
|||
|
||||
if (code || rsp) {
|
||||
qwBuildAndSendFetchRsp(qwMsg->msgType + 1, &qwMsg->connInfo, rsp, dataLen, code);
|
||||
QW_TASK_DLOG("fetch rsp send, handle:%p, code:%x - %s, dataLen:%d", qwMsg->connInfo.handle, code, tstrerror(code),
|
||||
QW_TASK_DLOG("%s send, handle:%p, code:%x - %s, dataLen:%d", TMSG_INFO(qwMsg->msgType + 1), qwMsg->connInfo.handle, code, tstrerror(code),
|
||||
dataLen);
|
||||
}
|
||||
|
||||
|
@ -733,12 +732,12 @@ int32_t qwProcessDrop(QW_FPARAMS_DEF, SQWMsg *qwMsg) {
|
|||
|
||||
locked = true;
|
||||
|
||||
if (QW_IS_EVENT_RECEIVED(ctx, QW_EVENT_DROP)) {
|
||||
if (QW_EVENT_RECEIVED(ctx, QW_EVENT_DROP)) {
|
||||
QW_TASK_WLOG_E("task already dropping");
|
||||
QW_ERR_JRET(TSDB_CODE_QRY_DUPLICATTED_OPERATION);
|
||||
}
|
||||
|
||||
if (QW_IS_QUERY_RUNNING(ctx)) {
|
||||
if (QW_QUERY_RUNNING(ctx)) {
|
||||
QW_ERR_JRET(qwKillTaskHandle(QW_FPARAMS(), ctx));
|
||||
qwUpdateTaskStatus(QW_FPARAMS(), JOB_TASK_STATUS_DROP);
|
||||
} else if (ctx->phase > 0) {
|
||||
|
|
|
@ -332,7 +332,7 @@ void qwtEndPut(DataSinkHandle handle, uint64_t useconds) {
|
|||
qwtTestSinkQueryEnd = true;
|
||||
}
|
||||
|
||||
void qwtGetDataLength(DataSinkHandle handle, int32_t* pLen, bool* pQueryEnd) {
|
||||
void qwtGetDataLength(DataSinkHandle handle, int64_t* pLen, bool* pQueryEnd) {
|
||||
static int32_t in = 0;
|
||||
|
||||
if (in > 0) {
|
||||
|
|
|
@ -296,8 +296,8 @@ extern SSchedulerMgmt schMgmt;
|
|||
#define SCH_TASK_ID(_task) ((_task) ? (_task)->taskId : -1)
|
||||
#define SCH_TASK_EID(_task) ((_task) ? (_task)->execId : -1)
|
||||
|
||||
#define SCH_IS_DATA_SRC_QRY_TASK(task) ((task)->plan->subplanType == SUBPLAN_TYPE_SCAN)
|
||||
#define SCH_IS_DATA_SRC_TASK(task) (((task)->plan->subplanType == SUBPLAN_TYPE_SCAN) || ((task)->plan->subplanType == SUBPLAN_TYPE_MODIFY))
|
||||
#define SCH_IS_DATA_BIND_QRY_TASK(task) ((task)->plan->subplanType == SUBPLAN_TYPE_SCAN)
|
||||
#define SCH_IS_DATA_BIND_TASK(task) (((task)->plan->subplanType == SUBPLAN_TYPE_SCAN) || ((task)->plan->subplanType == SUBPLAN_TYPE_MODIFY))
|
||||
#define SCH_IS_LEAF_TASK(_job, _task) (((_task)->level->level + 1) == (_job)->levelNum)
|
||||
|
||||
#define SCH_SET_TASK_STATUS(task, st) atomic_store_8(&(task)->status, st)
|
||||
|
@ -317,8 +317,9 @@ extern SSchedulerMgmt schMgmt;
|
|||
|
||||
#define SCH_SET_JOB_NEED_FLOW_CTRL(_job) (_job)->attr.needFlowCtrl = true
|
||||
#define SCH_JOB_NEED_FLOW_CTRL(_job) ((_job)->attr.needFlowCtrl)
|
||||
#define SCH_TASK_NEED_FLOW_CTRL(_job, _task) (SCH_IS_DATA_SRC_QRY_TASK(_task) && SCH_JOB_NEED_FLOW_CTRL(_job) && SCH_IS_LEVEL_UNFINISHED((_task)->level))
|
||||
#define SCH_FETCH_TYPE(_pSrcTask) (SCH_IS_DATA_SRC_QRY_TASK(_pSrcTask) ? TDMT_SCH_FETCH : TDMT_SCH_MERGE_FETCH)
|
||||
#define SCH_TASK_NEED_FLOW_CTRL(_job, _task) (SCH_IS_DATA_BIND_QRY_TASK(_task) && SCH_JOB_NEED_FLOW_CTRL(_job) && SCH_IS_LEVEL_UNFINISHED((_task)->level))
|
||||
#define SCH_FETCH_TYPE(_pSrcTask) (SCH_IS_DATA_BIND_QRY_TASK(_pSrcTask) ? TDMT_SCH_FETCH : TDMT_SCH_MERGE_FETCH)
|
||||
#define SCH_TASK_NEED_FETCH(_task) ((_task)->plan->subplanType != SUBPLAN_TYPE_MODIFY)
|
||||
|
||||
#define SCH_SET_JOB_TYPE(_job, type) do { if ((type) != SUBPLAN_TYPE_MODIFY) { (_job)->attr.queryJob = true; } } while (0)
|
||||
#define SCH_IS_QUERY_JOB(_job) ((_job)->attr.queryJob)
|
||||
|
|
|
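The scheduler macros above were renamed from SCH_IS_DATA_SRC_* to SCH_IS_DATA_BIND_*, but their shape is unchanged: thin predicates over the subplan type that keep the call sites readable. A hedged sketch of the same style of predicate macros, using a made-up enum rather than the real ESubplanType values:

```c
#include <stdio.h>

typedef enum { SUBPLAN_SCAN, SUBPLAN_MERGE, SUBPLAN_MODIFY } SubplanType;

typedef struct { SubplanType subplanType; } Plan;
typedef struct { Plan *plan; } Task;

// "Data-bind" tasks are the ones pinned to a data node: scans and modifies.
#define IS_DATA_BIND_QRY_TASK(t) ((t)->plan->subplanType == SUBPLAN_SCAN)
#define IS_DATA_BIND_TASK(t) \
  ((t)->plan->subplanType == SUBPLAN_SCAN || (t)->plan->subplanType == SUBPLAN_MODIFY)

int main(void) {
  Plan p = {.subplanType = SUBPLAN_MODIFY};
  Task t = {.plan = &p};
  printf("%d %d\n", IS_DATA_BIND_QRY_TASK(&t), IS_DATA_BIND_TASK(&t));  // 0 1
  return 0;
}
```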
@ -247,7 +247,7 @@ int32_t schBuildTaskRalation(SSchJob *pJob, SHashObj *planToTask) {
|
|||
|
||||
|
||||
int32_t schAppendJobDataSrc(SSchJob *pJob, SSchTask *pTask) {
|
||||
if (!SCH_IS_DATA_SRC_QRY_TASK(pTask)) {
|
||||
if (!SCH_IS_DATA_BIND_QRY_TASK(pTask)) {
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
|
@ -879,7 +879,7 @@ int32_t schProcessOnCbBegin(SSchJob** job, SSchTask** task, uint64_t qId, int64_
|
|||
}
|
||||
|
||||
if (schJobNeedToStop(pJob, &status)) {
|
||||
SCH_TASK_ELOG("will not do further processing cause of job status %s", jobTaskStatusStr(status));
|
||||
SCH_TASK_DLOG("will not do further processing cause of job status %s", jobTaskStatusStr(status));
|
||||
SCH_ERR_JRET(TSDB_CODE_SCH_IGNORE_ERROR);
|
||||
}
|
||||
|
||||
|
|
|
@@ -21,44 +21,30 @@
#include "tref.h"
#include "trpc.h"

int32_t schValidateReceivedMsgType(SSchJob *pJob, SSchTask *pTask, int32_t msgType) {
int32_t schValidateRspMsgType(SSchJob *pJob, SSchTask *pTask, int32_t msgType) {
  int32_t lastMsgType = pTask->lastMsgType;
  int32_t taskStatus = SCH_GET_TASK_STATUS(pTask);
  int32_t reqMsgType = msgType - 1;
  int32_t reqMsgType = (msgType & 1U) ? msgType : (msgType - 1);
  switch (msgType) {
    case TDMT_SCH_LINK_BROKEN:
    case TDMT_SCH_EXPLAIN_RSP:
      return TSDB_CODE_SUCCESS;
    case TDMT_SCH_MERGE_QUERY_RSP:
    case TDMT_SCH_QUERY_RSP:  // query_rsp may be processed later than ready_rsp
      if (lastMsgType != reqMsgType && -1 != lastMsgType) {
        SCH_TASK_DLOG("rsp msg type mis-match, last sent msgType:%s, rspType:%s", TMSG_INFO(lastMsgType),
                      TMSG_INFO(msgType));
      }

      if (taskStatus != JOB_TASK_STATUS_EXEC && taskStatus != JOB_TASK_STATUS_PART_SUCC) {
        SCH_TASK_DLOG("rsp msg conflicted with task status, status:%s, rspType:%s", jobTaskStatusStr(taskStatus),
                      TMSG_INFO(msgType));
      }

      // SCH_SET_TASK_LASTMSG_TYPE(pTask, -1);
      return TSDB_CODE_SUCCESS;
    case TDMT_SCH_FETCH_RSP:
    case TDMT_SCH_MERGE_FETCH_RSP:
      if (lastMsgType != reqMsgType && -1 != lastMsgType) {
      if (lastMsgType != reqMsgType) {
        SCH_TASK_ELOG("rsp msg type mis-match, last sent msgType:%s, rspType:%s", TMSG_INFO(lastMsgType),
                      TMSG_INFO(msgType));
        SCH_ERR_RET(TSDB_CODE_SCH_STATUS_ERROR);
      }

      if (taskStatus != JOB_TASK_STATUS_EXEC && taskStatus != JOB_TASK_STATUS_PART_SUCC) {
      }
      if (taskStatus != JOB_TASK_STATUS_PART_SUCC) {
        SCH_TASK_ELOG("rsp msg conflicted with task status, status:%s, rspType:%s", jobTaskStatusStr(taskStatus),
                      TMSG_INFO(msgType));
        SCH_ERR_RET(TSDB_CODE_SCH_STATUS_ERROR);
      }

      // SCH_SET_TASK_LASTMSG_TYPE(pTask, -1);
      return TSDB_CODE_SUCCESS;
    case TDMT_SCH_MERGE_QUERY_RSP:
    case TDMT_SCH_QUERY_RSP:
    case TDMT_VND_CREATE_TABLE_RSP:
    case TDMT_VND_DROP_TABLE_RSP:
    case TDMT_VND_ALTER_TABLE_RSP:
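The new reqMsgType expression relies on request and response message types being numbered as adjacent pairs; the helper below is a standalone restatement of that expression (a sketch, not a TDengine API).

#include <stdint.h>

/* Restates the expression from the diff: an even msgType is mapped back to
 * the preceding value, an odd one is kept as-is, recovering the request type
 * that a response belongs to (pairing convention assumed from the code). */
static int32_t schGetReqMsgType(int32_t msgType) {
  return (msgType & 1U) ? msgType : (msgType - 1);
}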
@@ -77,14 +63,12 @@ int32_t schValidateReceivedMsgType(SSchJob *pJob, SSchTask *pTask, int32_t msgTy
        SCH_ERR_RET(TSDB_CODE_SCH_STATUS_ERROR);
      }

      if (taskStatus != JOB_TASK_STATUS_EXEC && taskStatus != JOB_TASK_STATUS_PART_SUCC) {
      if (taskStatus != JOB_TASK_STATUS_EXEC) {
        SCH_TASK_ELOG("rsp msg conflicted with task status, status:%s, rspType:%s", jobTaskStatusStr(taskStatus),
                      TMSG_INFO(msgType));
        SCH_ERR_RET(TSDB_CODE_SCH_STATUS_ERROR);
      }

      // SCH_SET_TASK_LASTMSG_TYPE(pTask, -1);

      return TSDB_CODE_SUCCESS;
}

@@ -98,7 +82,7 @@ int32_t schHandleResponseMsg(SSchJob *pJob, SSchTask *pTask, int32_t execId, SDa
  bool dropExecNode = (msgType == TDMT_SCH_LINK_BROKEN || SCH_NETWORK_ERR(rspCode));
  SCH_ERR_JRET(schUpdateTaskHandle(pJob, pTask, dropExecNode, pMsg->handle, execId));

  SCH_ERR_JRET(schValidateReceivedMsgType(pJob, pTask, msgType));
  SCH_ERR_JRET(schValidateRspMsgType(pJob, pTask, msgType));

  int32_t reqType = IsReq(pMsg) ? pMsg->msgType : (pMsg->msgType - 1);
  if (SCH_NEED_REDIRECT(reqType, rspCode, pMsg->len)) {
@@ -259,18 +243,25 @@ int32_t schHandleResponseMsg(SSchJob *pJob, SSchTask *pTask, int32_t execId, SDa
    }
    case TDMT_SCH_QUERY_RSP:
    case TDMT_SCH_MERGE_QUERY_RSP: {
      SQueryTableRsp *rsp = (SQueryTableRsp *)msg;

      SCH_ERR_JRET(rspCode);
      if (NULL == msg) {
        SCH_ERR_JRET(TSDB_CODE_QRY_INVALID_INPUT);
      }

      SQueryTableRsp *rsp = (SQueryTableRsp *)msg;
      rsp->code = ntohl(rsp->code);
      rsp->sversion = ntohl(rsp->sversion);
      rsp->tversion = ntohl(rsp->tversion);
      rsp->affectedRows = be64toh(rsp->affectedRows);

      SCH_ERR_JRET(rsp->code);

      SCH_ERR_JRET(schSaveJobQueryRes(pJob, rsp));

      taosMemoryFreeClear(msg);
      atomic_add_fetch_32(&pJob->resNumOfRows, rsp->affectedRows);

      taosMemoryFreeClear(msg);

      SCH_ERR_RET(schProcessOnTaskSuccess(pJob, pTask));

      break;
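The reordering in this hunk (reading affectedRows before calling taosMemoryFreeClear) looks like a use-after-free fix; the fragment below spells out the constraint with the same names, purely as an illustration.

/* rsp aliases msg, so every field must be read while the buffer is alive;
 * only afterwards may the buffer be released (illustrative restatement). */
atomic_add_fetch_32(&pJob->resNumOfRows, rsp->affectedRows);  /* read first */
taosMemoryFreeClear(msg);                                     /* free afterwards */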
@@ -1010,6 +1001,7 @@ int32_t schBuildAndSendMsg(SSchJob *pJob, SSchTask *pTask, SQueryNodeAddr *addr,
      pMsg->execId = htonl(pTask->execId);
      pMsg->taskType = TASK_TYPE_TEMP;
      pMsg->explain = SCH_IS_EXPLAIN_JOB(pJob);
      pMsg->needFetch = SCH_TASK_NEED_FETCH(pTask);
      pMsg->phyLen = htonl(pTask->msgLen);
      pMsg->sqlLen = htonl(len);

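As a side note on the htonl calls in this hunk: integer fields are put into network byte order when the request is built and converted back with ntohl/be64toh on receipt, as in the response-handling hunk above. A minimal self-contained sketch follows; the struct and function names are illustrative, not the real message layout.

#include <arpa/inet.h>
#include <stdint.h>

/* Illustrative only: pack integer fields in network byte order before they
 * go on the wire, mirroring the htonl calls above. */
typedef struct {
  uint32_t execId;
  uint32_t phyLen;
  uint32_t sqlLen;
} SWireFieldsSketch;

static void packWireFields(SWireFieldsSketch *m, uint32_t execId, uint32_t phyLen, uint32_t sqlLen) {
  m->execId = htonl(execId);
  m->phyLen = htonl(phyLen);
  m->sqlLen = htonl(sqlLen);
}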
@@ -276,7 +276,7 @@ int32_t schProcessOnTaskSuccess(SSchJob *pJob, SSchTask *pTask) {
}

int32_t schRescheduleTask(SSchJob *pJob, SSchTask *pTask) {
  if (SCH_IS_DATA_SRC_QRY_TASK(pTask)) {
  if (SCH_IS_DATA_BIND_TASK(pTask)) {
    return TSDB_CODE_SUCCESS;
  }

@@ -313,7 +313,7 @@ int32_t schDoTaskRedirect(SSchJob *pJob, SSchTask *pTask, SDataBuf* pData, int32
  pTask->lastMsgType = 0;
  memset(&pTask->succeedAddr, 0, sizeof(pTask->succeedAddr));

  if (SCH_IS_DATA_SRC_QRY_TASK(pTask)) {
  if (SCH_IS_DATA_BIND_TASK(pTask)) {
    if (pData) {
      SCH_ERR_JRET(schUpdateTaskCandidateAddr(pJob, pTask, pData->pEpSet));
    }

@@ -358,7 +358,7 @@ _return:
int32_t schHandleRedirect(SSchJob *pJob, SSchTask *pTask, SDataBuf* pData, int32_t rspCode) {
  int32_t code = 0;

  if (SCH_IS_DATA_SRC_QRY_TASK(pTask)) {
  if (SCH_IS_DATA_BIND_TASK(pTask)) {
    if (NULL == pData->pEpSet) {
      SCH_TASK_ELOG("no epset updated while got error %s", tstrerror(rspCode));
      SCH_ERR_JRET(rspCode);

@@ -492,7 +492,7 @@ int32_t schTaskCheckSetRetry(SSchJob *pJob, SSchTask *pTask, int32_t errCode, bo
    return TSDB_CODE_SUCCESS;
  }

  if (SCH_IS_DATA_SRC_TASK(pTask)) {
  if (SCH_IS_DATA_BIND_TASK(pTask)) {
    if ((pTask->execId + 1) >= SCH_TASK_NUM_OF_EPS(&pTask->plan->execNode)) {
      *needRetry = false;
      SCH_TASK_DLOG("task no more retry since all ep tried, execId:%d, epNum:%d", pTask->execId,

@@ -528,7 +528,7 @@ int32_t schHandleTaskRetry(SSchJob *pJob, SSchTask *pTask) {

  schDeregisterTaskHb(pJob, pTask);

  if (SCH_IS_DATA_SRC_TASK(pTask)) {
  if (SCH_IS_DATA_BIND_TASK(pTask)) {
    SCH_SWITCH_EPSET(&pTask->plan->execNode);
  } else {
    int32_t candidateNum = taosArrayGetSize(pTask->candidateAddrs);

@@ -596,7 +596,7 @@ int32_t schSetTaskCandidateAddrs(SSchJob *pJob, SSchTask *pTask) {
    return TSDB_CODE_SUCCESS;
  }

  if (SCH_IS_DATA_SRC_QRY_TASK(pTask)) {
  if (SCH_IS_DATA_BIND_TASK(pTask)) {
    SCH_TASK_ELOG("no execNode specifed for data src task, numOfEps:%d", pTask->plan->execNode.epSet.numOfEps);
    SCH_ERR_RET(TSDB_CODE_QRY_APP_ERROR);
  }

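To summarise the retry rule encoded by the execId/epNum check and SCH_SWITCH_EPSET above: a data-bind task is retried on the next endpoint of its endpoint set and gives up once every endpoint has been tried. A minimal standalone sketch, with all names illustrative:

#include <stdbool.h>
#include <stdint.h>

typedef struct {
  int32_t inUse;     /* endpoint currently in use */
  int32_t numOfEps;  /* number of endpoints in the set */
} SEpSetSketch;

/* Return true and advance to the next endpoint if another retry is allowed;
 * return false once every endpoint has been attempted. */
static bool retryOnNextEp(SEpSetSketch *eps, int32_t attemptsSoFar) {
  if (attemptsSoFar + 1 >= eps->numOfEps) {
    return false;
  }
  eps->inUse = (eps->inUse + 1) % eps->numOfEps;
  return true;
}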
@@ -1,72 +0,0 @@
/*
 * Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
 *
 * This program is free software: you can use, redistribute, and/or modify
 * it under the terms of the GNU Affero General Public License, version 3
 * or later ("AGPL"), as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.
 *
 * You should have received a copy of the GNU Affero General Public License
 * along with this program. If not, see <http://www.gnu.org/licenses/>.
 */

#ifndef _TD_LIBS_SYNC_ON_MESSAGE_H
#define _TD_LIBS_SYNC_ON_MESSAGE_H

#ifdef __cplusplus
extern "C" {
#endif

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include "taosdef.h"

// TLA+ Spec
// Receive(m) ==
//     LET i == m.mdest
//         j == m.msource
//     IN \* Any RPC with a newer term causes the recipient to advance
//        \* its term first. Responses with stale terms are ignored.
//        \/ UpdateTerm(i, j, m)
//        \/ /\ m.mtype = RequestVoteRequest
//           /\ HandleRequestVoteRequest(i, j, m)
//        \/ /\ m.mtype = RequestVoteResponse
//           /\ \/ DropStaleResponse(i, j, m)
//              \/ HandleRequestVoteResponse(i, j, m)
//        \/ /\ m.mtype = AppendEntriesRequest
//           /\ HandleAppendEntriesRequest(i, j, m)
//        \/ /\ m.mtype = AppendEntriesResponse
//           /\ \/ DropStaleResponse(i, j, m)
//              \/ HandleAppendEntriesResponse(i, j, m)

// DuplicateMessage(m) ==
//     /\ Send(m)
//     /\ UNCHANGED <<serverVars, candidateVars, leaderVars, logVars>>

// DropMessage(m) ==
//     /\ Discard(m)
//     /\ UNCHANGED <<serverVars, candidateVars, leaderVars, logVars>>

// Next == /\ \/ \E i \in Server : Restart(i)
//            \/ \E i \in Server : Timeout(i)
//            \/ \E i,j \in Server : RequestVote(i, j)
//            \/ \E i \in Server : BecomeLeader(i)
//            \/ \E i \in Server, v \in Value : ClientRequest(i, v)
//            \/ \E i \in Server : AdvanceCommitIndex(i)
//            \/ \E i,j \in Server : AppendEntries(i, j)
//            \/ \E m \in DOMAIN messages : Receive(m)
//            \/ \E m \in DOMAIN messages : DuplicateMessage(m)
//            \/ \E m \in DOMAIN messages : DropMessage(m)
//        \* History variable that tracks every log ever:
//        /\ allLogs' = allLogs \cup {log[i] : i \in Server}
//

#ifdef __cplusplus
}
#endif

#endif /*_TD_LIBS_SYNC_ON_MESSAGE_H*/
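The deleted header quoted the Raft TLA+ Receive(m) action. In C that action is usually realised as a dispatch over the message type, roughly like the self-contained sketch below; the names are placeholders, not TDengine functions.

#include <stdio.h>

/* Placeholder message kinds mirroring the four RPCs in the quoted spec. */
typedef enum {
  SKETCH_REQUEST_VOTE,
  SKETCH_REQUEST_VOTE_REPLY,
  SKETCH_APPEND_ENTRIES,
  SKETCH_APPEND_ENTRIES_REPLY,
} ESketchMsgKind;

/* Receive(m): update the term first (not shown), then hand the message to the
 * matching handler; stale or unknown messages are ignored. */
static void sketchReceive(ESketchMsgKind kind) {
  switch (kind) {
    case SKETCH_REQUEST_VOTE:         printf("HandleRequestVoteRequest\n");    break;
    case SKETCH_REQUEST_VOTE_REPLY:   printf("HandleRequestVoteResponse\n");   break;
    case SKETCH_APPEND_ENTRIES:       printf("HandleAppendEntriesRequest\n");  break;
    case SKETCH_APPEND_ENTRIES_REPLY: printf("HandleAppendEntriesResponse\n"); break;
    default:                          /* ignore */                             break;
  }
}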
@@ -162,6 +162,17 @@ int32_t syncNodeOnAppendEntriesCb(SSyncNode* ths, SyncAppendEntries* pMsg) {
    pReply->success = false;
    pReply->matchIndex = SYNC_INDEX_INVALID;

    // msg event log
    do {
      char     host[128];
      uint16_t port;
      syncUtilU642Addr(pReply->destId.addr, host, sizeof(host), &port);
      sDebug(
          "vgId:%d, send sync-append-entries-reply to %s:%d, {term:%lu, pterm:%lu, success:%d, "
          "match-index:%ld}",
          ths->vgId, host, port, pReply->term, pReply->privateTerm, pReply->success, pReply->matchIndex);
    } while (0);

    SRpcMsg rpcMsg;
    syncAppendEntriesReply2RpcMsg(pReply, &rpcMsg);
    syncNodeSendMsgById(&pReply->destId, ths, &rpcMsg);
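The same "msg event log" block is inserted verbatim in every reply path of this diff; if one wanted to avoid the repetition, a helper of the following shape would do. This is only a suggestion sketch that reuses the calls shown above, not part of the patch.

/* Suggestion only: shared logger for outgoing append-entries replies, built
 * from the syncUtilU642Addr/sDebug calls that the repeated block uses. */
static void logSendAppendEntriesReply(SSyncNode* ths, SyncAppendEntriesReply* pReply) {
  char     host[128];
  uint16_t port;
  syncUtilU642Addr(pReply->destId.addr, host, sizeof(host), &port);
  sDebug(
      "vgId:%d, send sync-append-entries-reply to %s:%d, {term:%lu, pterm:%lu, success:%d, "
      "match-index:%ld}",
      ths->vgId, host, port, pReply->term, pReply->privateTerm, pReply->success, pReply->matchIndex);
}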
@@ -334,270 +345,16 @@ int32_t syncNodeOnAppendEntriesCb(SSyncNode* ths, SyncAppendEntries* pMsg) {
      pReply->matchIndex = pMsg->prevLogIndex;
    }

    SRpcMsg rpcMsg;
    syncAppendEntriesReply2RpcMsg(pReply, &rpcMsg);
    syncNodeSendMsgById(&pReply->destId, ths, &rpcMsg);
    syncAppendEntriesReplyDestroy(pReply);

    // maybe update commit index from leader
    if (pMsg->commitIndex > ths->commitIndex) {
      // has commit entry in local
      if (pMsg->commitIndex <= ths->pLogStore->getLastIndex(ths->pLogStore)) {
        SyncIndex beginIndex = ths->commitIndex + 1;
        SyncIndex endIndex = pMsg->commitIndex;

        // update commit index
        ths->commitIndex = pMsg->commitIndex;

        // call back Wal
        ths->pLogStore->updateCommitIndex(ths->pLogStore, ths->commitIndex);

        int32_t code = syncNodeCommit(ths, beginIndex, endIndex, ths->state);
        ASSERT(code == 0);
      }
    }
  }

  return ret;
}

#if 0

int32_t syncNodeOnAppendEntriesCb(SSyncNode* ths, SyncAppendEntries* pMsg) {
  int32_t ret = 0;

  char logBuf[128] = {0};
  snprintf(logBuf, sizeof(logBuf), "==syncNodeOnAppendEntriesCb== term:%lu", ths->pRaftStore->currentTerm);
  syncAppendEntriesLog2(logBuf, pMsg);

  if (pMsg->term > ths->pRaftStore->currentTerm) {
    syncNodeUpdateTerm(ths, pMsg->term);
  }
  ASSERT(pMsg->term <= ths->pRaftStore->currentTerm);

  // reset elect timer
  if (pMsg->term == ths->pRaftStore->currentTerm) {
    ths->leaderCache = pMsg->srcId;
    syncNodeResetElectTimer(ths);
  }
  ASSERT(pMsg->dataLen >= 0);

  SyncTerm localPreLogTerm = 0;
  if (pMsg->prevLogIndex >= SYNC_INDEX_BEGIN && pMsg->prevLogIndex <= ths->pLogStore->getLastIndex(ths->pLogStore)) {
    SSyncRaftEntry* pEntry = ths->pLogStore->getEntry(ths->pLogStore, pMsg->prevLogIndex);
    if (pEntry == NULL) {
      char logBuf[128];
      snprintf(logBuf, sizeof(logBuf), "getEntry error, index:%ld, since %s", pMsg->prevLogIndex, terrstr());
      syncNodeErrorLog(ths, logBuf);
      return -1;
    }

    localPreLogTerm = pEntry->term;
    syncEntryDestory(pEntry);
  }

  bool logOK =
      (pMsg->prevLogIndex == SYNC_INDEX_INVALID) ||
      ((pMsg->prevLogIndex >= SYNC_INDEX_BEGIN) &&
       (pMsg->prevLogIndex <= ths->pLogStore->getLastIndex(ths->pLogStore)) && (pMsg->prevLogTerm == localPreLogTerm));

  // reject request
  if ((pMsg->term < ths->pRaftStore->currentTerm) ||
      ((pMsg->term == ths->pRaftStore->currentTerm) && (ths->state == TAOS_SYNC_STATE_FOLLOWER) && !logOK)) {
    sTrace(
        "syncNodeOnAppendEntriesCb --> reject, pMsg->term:%lu, ths->pRaftStore->currentTerm:%lu, ths->state:%d, "
        "logOK:%d",
        pMsg->term, ths->pRaftStore->currentTerm, ths->state, logOK);

    SyncAppendEntriesReply* pReply = syncAppendEntriesReplyBuild(ths->vgId);
    pReply->srcId = ths->myRaftId;
    pReply->destId = pMsg->srcId;
    pReply->term = ths->pRaftStore->currentTerm;
    pReply->success = false;
    pReply->matchIndex = SYNC_INDEX_INVALID;

    SRpcMsg rpcMsg;
    syncAppendEntriesReply2RpcMsg(pReply, &rpcMsg);
    syncNodeSendMsgById(&pReply->destId, ths, &rpcMsg);
    syncAppendEntriesReplyDestroy(pReply);

    return ret;
  }

  // return to follower state
  if (pMsg->term == ths->pRaftStore->currentTerm && ths->state == TAOS_SYNC_STATE_CANDIDATE) {
    sTrace(
        "syncNodeOnAppendEntriesCb --> return to follower, pMsg->term:%lu, ths->pRaftStore->currentTerm:%lu, "
        "ths->state:%d, logOK:%d",
        pMsg->term, ths->pRaftStore->currentTerm, ths->state, logOK);

    syncNodeBecomeFollower(ths, "from candidate by append entries");

    // ret or reply?
    return ret;
  }

  // accept request
  if (pMsg->term == ths->pRaftStore->currentTerm && ths->state == TAOS_SYNC_STATE_FOLLOWER && logOK) {
    // preIndex = -1, or has preIndex entry in local log
    ASSERT(pMsg->prevLogIndex <= ths->pLogStore->getLastIndex(ths->pLogStore));

    // has extra entries (> preIndex) in local log
    bool hasExtraEntries = pMsg->prevLogIndex < ths->pLogStore->getLastIndex(ths->pLogStore);

    // has entries in SyncAppendEntries msg
    bool hasAppendEntries = pMsg->dataLen > 0;

    sTrace(
        "syncNodeOnAppendEntriesCb --> accept, pMsg->term:%lu, ths->pRaftStore->currentTerm:%lu, ths->state:%d, "
        "logOK:%d, hasExtraEntries:%d, hasAppendEntries:%d",
        pMsg->term, ths->pRaftStore->currentTerm, ths->state, logOK, hasExtraEntries, hasAppendEntries);

    if (hasExtraEntries && hasAppendEntries) {
      // not conflict by default
      bool conflict = false;

      SyncIndex extraIndex = pMsg->prevLogIndex + 1;
      SSyncRaftEntry* pExtraEntry = ths->pLogStore->getEntry(ths->pLogStore, extraIndex);
      if (pExtraEntry == NULL) {
        char logBuf[128];
        snprintf(logBuf, sizeof(logBuf), "getEntry error2, index:%ld, since %s", extraIndex, terrstr());
        syncNodeErrorLog(ths, logBuf);
        return -1;
      }

      SSyncRaftEntry* pAppendEntry = syncEntryDeserialize(pMsg->data, pMsg->dataLen);
      if (pAppendEntry == NULL) {
        syncNodeErrorLog(ths, "syncEntryDeserialize pAppendEntry error");
        return -1;
      }

      // log not match, conflict
      ASSERT(extraIndex == pAppendEntry->index);
      if (pExtraEntry->term != pAppendEntry->term) {
        conflict = true;
      }

      if (conflict) {
        // roll back
        SyncIndex delBegin = ths->pLogStore->getLastIndex(ths->pLogStore);
        SyncIndex delEnd = extraIndex;

        sTrace("syncNodeOnAppendEntriesCb --> conflict:%d, delBegin:%ld, delEnd:%ld", conflict, delBegin, delEnd);

        // notice! reverse roll back!
        for (SyncIndex index = delEnd; index >= delBegin; --index) {
          if (ths->pFsm->FpRollBackCb != NULL) {
            SSyncRaftEntry* pRollBackEntry = ths->pLogStore->getEntry(ths->pLogStore, index);
            if (pRollBackEntry == NULL) {
              char logBuf[128];
              snprintf(logBuf, sizeof(logBuf), "getEntry error3, index:%ld, since %s", index, terrstr());
              syncNodeErrorLog(ths, logBuf);
              return -1;
            }

            // if (pRollBackEntry->msgType != TDMT_SYNC_NOOP) {
            if (syncUtilUserRollback(pRollBackEntry->msgType)) {
              SRpcMsg rpcMsg;
              syncEntry2OriginalRpc(pRollBackEntry, &rpcMsg);

              SFsmCbMeta cbMeta = {0};
              cbMeta.index = pRollBackEntry->index;
              cbMeta.lastConfigIndex = syncNodeGetSnapshotConfigIndex(ths, cbMeta.index);
              cbMeta.isWeak = pRollBackEntry->isWeak;
              cbMeta.code = 0;
              cbMeta.state = ths->state;
              cbMeta.seqNum = pRollBackEntry->seqNum;
              ths->pFsm->FpRollBackCb(ths->pFsm, &rpcMsg, cbMeta);
              rpcFreeCont(rpcMsg.pCont);
            }

            syncEntryDestory(pRollBackEntry);
          }
        }

        // delete confict entries
        ths->pLogStore->truncate(ths->pLogStore, extraIndex);

        // append new entries
        ths->pLogStore->appendEntry(ths->pLogStore, pAppendEntry);

        // pre commit
        SRpcMsg rpcMsg;
        syncEntry2OriginalRpc(pAppendEntry, &rpcMsg);
        if (ths->pFsm != NULL) {
          // if (ths->pFsm->FpPreCommitCb != NULL && pAppendEntry->originalRpcType != TDMT_SYNC_NOOP) {
          if (ths->pFsm->FpPreCommitCb != NULL && syncUtilUserPreCommit(pAppendEntry->originalRpcType)) {
            SFsmCbMeta cbMeta = {0};
            cbMeta.index = pAppendEntry->index;
            cbMeta.lastConfigIndex = syncNodeGetSnapshotConfigIndex(ths, cbMeta.index);
            cbMeta.isWeak = pAppendEntry->isWeak;
            cbMeta.code = 2;
            cbMeta.state = ths->state;
            cbMeta.seqNum = pAppendEntry->seqNum;
            ths->pFsm->FpPreCommitCb(ths->pFsm, &rpcMsg, cbMeta);
          }
        }
        rpcFreeCont(rpcMsg.pCont);
      }

      // free memory
      syncEntryDestory(pExtraEntry);
      syncEntryDestory(pAppendEntry);

    } else if (hasExtraEntries && !hasAppendEntries) {
      // do nothing

    } else if (!hasExtraEntries && hasAppendEntries) {
      SSyncRaftEntry* pAppendEntry = syncEntryDeserialize(pMsg->data, pMsg->dataLen);
      if (pAppendEntry == NULL) {
        syncNodeErrorLog(ths, "syncEntryDeserialize pAppendEntry2 error");
        return -1;
      }

      // append new entries
      ths->pLogStore->appendEntry(ths->pLogStore, pAppendEntry);

      // pre commit
      SRpcMsg rpcMsg;
      syncEntry2OriginalRpc(pAppendEntry, &rpcMsg);
      if (ths->pFsm != NULL) {
        // if (ths->pFsm->FpPreCommitCb != NULL && pAppendEntry->originalRpcType != TDMT_SYNC_NOOP) {
        if (ths->pFsm->FpPreCommitCb != NULL && syncUtilUserPreCommit(pAppendEntry->originalRpcType)) {
          SFsmCbMeta cbMeta = {0};
          cbMeta.index = pAppendEntry->index;
          cbMeta.lastConfigIndex = syncNodeGetSnapshotConfigIndex(ths, cbMeta.index);
          cbMeta.isWeak = pAppendEntry->isWeak;
          cbMeta.code = 3;
          cbMeta.state = ths->state;
          cbMeta.seqNum = pAppendEntry->seqNum;
          ths->pFsm->FpPreCommitCb(ths->pFsm, &rpcMsg, cbMeta);
        }
      }
      rpcFreeCont(rpcMsg.pCont);

      // free memory
      syncEntryDestory(pAppendEntry);

    } else if (!hasExtraEntries && !hasAppendEntries) {
      // do nothing

    } else {
      syncNodeLog3("", ths);
      ASSERT(0);
    }

    SyncAppendEntriesReply* pReply = syncAppendEntriesReplyBuild(ths->vgId);
    pReply->srcId = ths->myRaftId;
    pReply->destId = pMsg->srcId;
    pReply->term = ths->pRaftStore->currentTerm;
    pReply->success = true;

    if (hasAppendEntries) {
      pReply->matchIndex = pMsg->prevLogIndex + 1;
    } else {
      pReply->matchIndex = pMsg->prevLogIndex;
    }
    // msg event log
    do {
      char     host[128];
      uint16_t port;
      syncUtilU642Addr(pReply->destId.addr, host, sizeof(host), &port);
      sDebug(
          "vgId:%d, send sync-append-entries-reply to %s:%d, {term:%lu, pterm:%lu, success:%d, "
          "match-index:%ld}",
          ths->vgId, host, port, pReply->term, pReply->privateTerm, pReply->success, pReply->matchIndex);
    } while (0);

    SRpcMsg rpcMsg;
    syncAppendEntriesReply2RpcMsg(pReply, &rpcMsg);
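The "maybe update commit index from leader" block near the top of this hunk implements the follower side of Raft's commit rule; a compact standalone restatement, with illustrative types and simplified logic, is:

#include <stdint.h>

typedef int64_t SketchIndex;

/* Follower commit rule, as in the block above: only advance when the leader's
 * commit index is ahead, and never past what is present in the local log. */
static SketchIndex advanceCommitIndex(SketchIndex localCommit, SketchIndex leaderCommit, SketchIndex localLastIndex) {
  if (leaderCommit > localCommit && leaderCommit <= localLastIndex) {
    return leaderCommit;  /* entries (localCommit+1 .. leaderCommit) become committed */
  }
  return localCommit;
}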
@@ -626,8 +383,6 @@ int32_t syncNodeOnAppendEntriesCb(SSyncNode* ths, SyncAppendEntries* pMsg) {
  return ret;
}

#endif

static int32_t syncNodeMakeLogSame(SSyncNode* ths, SyncAppendEntries* pMsg) {
  int32_t code;

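syncNodeMakeLogSame, visible here, plays the role the removed conflict/rollback branch played above: drop the conflicting suffix of the local log, then append the leader's entries. A schematic restatement using the log-store calls visible in this diff (simplified, with error handling omitted; not the real implementation):

/* Schematic only: truncate from the first conflicting index, then append the
 * incoming entry, mirroring the truncate/appendEntry calls in the removed
 * code above. */
static void makeLogSameSketch(SSyncNode* ths, SyncIndex firstConflictIndex, SSyncRaftEntry* pAppendEntry) {
  ths->pLogStore->truncate(ths->pLogStore, firstConflictIndex);
  ths->pLogStore->appendEntry(ths->pLogStore, pAppendEntry);
}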
@@ -897,6 +652,17 @@ int32_t syncNodeOnAppendEntriesSnapshot2Cb(SSyncNode* ths, SyncAppendEntriesBatc
    pReply->success = true;
    pReply->matchIndex = matchIndex;

    // msg event log
    do {
      char     host[128];
      uint16_t port;
      syncUtilU642Addr(pReply->destId.addr, host, sizeof(host), &port);
      sDebug(
          "vgId:%d, send sync-append-entries-reply to %s:%d, {term:%lu, pterm:%lu, success:%d, "
          "match-index:%ld}",
          ths->vgId, host, port, pReply->term, pReply->privateTerm, pReply->success, pReply->matchIndex);
    } while (0);

    // send response
    SRpcMsg rpcMsg;
    syncAppendEntriesReply2RpcMsg(pReply, &rpcMsg);
@@ -945,6 +711,17 @@ int32_t syncNodeOnAppendEntriesSnapshot2Cb(SSyncNode* ths, SyncAppendEntriesBatc
    pReply->success = false;
    pReply->matchIndex = SYNC_INDEX_INVALID;

    // msg event log
    do {
      char     host[128];
      uint16_t port;
      syncUtilU642Addr(pReply->destId.addr, host, sizeof(host), &port);
      sDebug(
          "vgId:%d, send sync-append-entries-reply to %s:%d, {term:%lu, pterm:%lu, success:%d, "
          "match-index:%ld}",
          ths->vgId, host, port, pReply->term, pReply->privateTerm, pReply->success, pReply->matchIndex);
    } while (0);

    // send response
    SRpcMsg rpcMsg;
    syncAppendEntriesReply2RpcMsg(pReply, &rpcMsg);
@@ -977,7 +754,7 @@ int32_t syncNodeOnAppendEntriesSnapshot2Cb(SSyncNode* ths, SyncAppendEntriesBatc
    do {
      char logBuf[128];
      snprintf(logBuf, sizeof(logBuf),
               "recv sync-append-entries, match, {pre-index:%ld, pre-term:%lu, datalen:%d, datacount:%d}",
               "recv sync-append-entries-batch, match, {pre-index:%ld, pre-term:%lu, datalen:%d, datacount:%d}",
               pMsg->prevLogIndex, pMsg->prevLogTerm, pMsg->dataLen, pMsg->dataCount);
      syncNodeEventLog(ths, logBuf);
    } while (0);
@@ -1018,6 +795,17 @@ int32_t syncNodeOnAppendEntriesSnapshot2Cb(SSyncNode* ths, SyncAppendEntriesBatc
    pReply->success = true;
    pReply->matchIndex = hasAppendEntries ? pMsg->prevLogIndex + pMsg->dataCount : pMsg->prevLogIndex;

    // msg event log
    do {
      char     host[128];
      uint16_t port;
      syncUtilU642Addr(pReply->destId.addr, host, sizeof(host), &port);
      sDebug(
          "vgId:%d, send sync-append-entries-reply to %s:%d, {term:%lu, pterm:%lu, success:%d, "
          "match-index:%ld}",
          ths->vgId, host, port, pReply->term, pReply->privateTerm, pReply->success, pReply->matchIndex);
    } while (0);

    // send response
    SRpcMsg rpcMsg;
    syncAppendEntriesReply2RpcMsg(pReply, &rpcMsg);
@@ -1227,6 +1015,17 @@ int32_t syncNodeOnAppendEntriesSnapshotCb(SSyncNode* ths, SyncAppendEntries* pMs
    pReply->success = true;
    pReply->matchIndex = matchIndex;

    // msg event log
    do {
      char     host[128];
      uint16_t port;
      syncUtilU642Addr(pReply->destId.addr, host, sizeof(host), &port);
      sDebug(
          "vgId:%d, send sync-append-entries-reply to %s:%d, {term:%lu, pterm:%lu, success:%d, "
          "match-index:%ld}",
          ths->vgId, host, port, pReply->term, pReply->privateTerm, pReply->success, pReply->matchIndex);
    } while (0);

    // send response
    SRpcMsg rpcMsg;
    syncAppendEntriesReply2RpcMsg(pReply, &rpcMsg);
@@ -1272,6 +1071,17 @@ int32_t syncNodeOnAppendEntriesSnapshotCb(SSyncNode* ths, SyncAppendEntries* pMs
    pReply->success = false;
    pReply->matchIndex = SYNC_INDEX_INVALID;

    // msg event log
    do {
      char     host[128];
      uint16_t port;
      syncUtilU642Addr(pReply->destId.addr, host, sizeof(host), &port);
      sDebug(
          "vgId:%d, send sync-append-entries-reply to %s:%d, {term:%lu, pterm:%lu, success:%d, "
          "match-index:%ld}",
          ths->vgId, host, port, pReply->term, pReply->privateTerm, pReply->success, pReply->matchIndex);
    } while (0);

    // send response
    SRpcMsg rpcMsg;
    syncAppendEntriesReply2RpcMsg(pReply, &rpcMsg);
@@ -1337,6 +1147,17 @@ int32_t syncNodeOnAppendEntriesSnapshotCb(SSyncNode* ths, SyncAppendEntries* pMs
    pReply->success = true;
    pReply->matchIndex = hasAppendEntries ? pMsg->prevLogIndex + 1 : pMsg->prevLogIndex;

    // msg event log
    do {
      char     host[128];
      uint16_t port;
      syncUtilU642Addr(pReply->destId.addr, host, sizeof(host), &port);
      sDebug(
          "vgId:%d, send sync-append-entries-reply to %s:%d, {term:%lu, pterm:%lu, success:%d, "
          "match-index:%ld}",
          ths->vgId, host, port, pReply->term, pReply->privateTerm, pReply->success, pReply->matchIndex);
    } while (0);

    // send response
    SRpcMsg rpcMsg;
    syncAppendEntriesReply2RpcMsg(pReply, &rpcMsg);

@@ -193,9 +193,10 @@ int32_t syncNodeOnAppendEntriesReplySnapshot2Cb(SSyncNode* ths, SyncAppendEntrie
      // start snapshot <match+1, old snapshot.end>
      SSnapshot oldSnapshot;
      ths->pFsm->FpGetSnapshotInfo(ths->pFsm, &oldSnapshot);
      ASSERT(oldSnapshot.lastApplyIndex >= newMatchIndex + 1);
      syncNodeStartSnapshotOnce(ths, newMatchIndex + 1, oldSnapshot.lastApplyIndex, oldSnapshot.lastApplyTerm,
                                pMsg);  // term maybe not ok?
      if (oldSnapshot.lastApplyIndex > newMatchIndex) {
        syncNodeStartSnapshotOnce(ths, newMatchIndex + 1, oldSnapshot.lastApplyIndex, oldSnapshot.lastApplyTerm,
                                  pMsg);  // term maybe not ok?
      }

      syncIndexMgrSetIndex(ths->pNextIndex, &(pMsg->srcId), oldSnapshot.lastApplyIndex + 1);
      syncIndexMgrSetIndex(ths->pMatchIndex, &(pMsg->srcId), newMatchIndex);

@@ -57,8 +57,7 @@ void syncMaybeAdvanceCommitIndex(SSyncNode* pSyncNode) {
    pSyncNode->commitIndex = snapshot.lastApplyIndex;

    char eventLog[128];
    snprintf(eventLog, sizeof(eventLog), "commit by snapshot from index:%ld to index:%ld", pSyncNode->commitIndex,
             snapshot.lastApplyIndex);
    snprintf(eventLog, sizeof(eventLog), "commit by snapshot from index:%ld to index:%ld", commitBegin, commitEnd);
    syncNodeEventLog(pSyncNode, eventLog);
  }

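The last hunk switches the log message to commitBegin/commitEnd, presumably values captured before commitIndex is overwritten; the sketch below shows that capture order explicitly (illustrative, based only on the lines shown above).

/* Capture the range first, then mutate, then log the captured values. */
SyncIndex commitBegin = pSyncNode->commitIndex;
SyncIndex commitEnd   = snapshot.lastApplyIndex;
pSyncNode->commitIndex = snapshot.lastApplyIndex;

char eventLog[128];
snprintf(eventLog, sizeof(eventLog), "commit by snapshot from index:%ld to index:%ld", commitBegin, commitEnd);
syncNodeEventLog(pSyncNode, eventLog);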