Merge branch 'main' into enh/analysis

Haojun Liao 2025-01-23 18:55:32 +08:00
commit 2f6977dfda
77 changed files with 2245 additions and 619 deletions

View File

@ -10,7 +10,36 @@
Simplified Chinese | [English](README.md) | [TDengine Cloud Service](https://cloud.taosdata.com/?utm_medium=cn&utm_source=github) | Many positions are open for hire, see [here](https://www.taosdata.com/careers/)

# Table of Contents

1. [Introduction to TDengine](#1-tdengine-简介)
1. [Documentation](#2-文档)
1. [Prerequisites](#3-必备工具)
   - [3.1 Linux](#31-linux系统)
   - [3.2 macOS](#32-macos系统)
   - [3.3 Windows](#33-windows系统)
   - [3.4 Clone the repository](#34-克隆仓库)
1. [Building](#4-构建)
   - [4.1 Building on Linux](#41-linux系统上构建)
   - [4.2 Building on macOS](#42-macos系统上构建)
   - [4.3 Building on Windows](#43-windows系统上构建)
1. [Packaging](#5-打包)
1. [Installation](#6-安装)
   - [6.1 Installing on Linux](#61-linux系统上安装)
   - [6.2 Installing on macOS](#62-macos系统上安装)
   - [6.3 Installing on Windows](#63-windows系统上安装)
1. [Quick Start](#7-快速运行)
   - [7.1 Running on Linux](#71-linux系统上运行)
   - [7.2 Running on macOS](#72-macos系统上运行)
   - [7.3 Running on Windows](#73-windows系统上运行)
1. [Testing](#8-测试)
1. [Releases](#9-版本发布)
1. [Workflows](#10-工作流)
1. [Coverage](#11-覆盖率)
1. [Become a Contributor](#12-成为社区贡献者)

# 1. Introduction

TDengine is an open-source, high-performance, cloud-native time-series database (TSDB). It can be widely used in IoT, Industrial IoT, connected vehicles, IT operations, finance, and other fields. Beyond the core time-series database features, TDengine also provides caching, data subscription, stream processing, and other capabilities. It is a minimalist platform for time-series data processing that minimizes the complexity of system design and reduces R&D and operating costs. Compared with other time-series databases, TDengine's main advantages are as follows:

@ -26,12 +55,82 @@ TDengine is an open-source, high-performance, cloud-native time-series database (Time-Series

- **Open-Source Core**: TDengine's core code, including cluster functionality, is fully open source. As of August 1, 2022, there were more than 135.9k running instances worldwide, with 18.7k GitHub stars, 4.4k forks, and an active community.

For the complete list of TDengine's advanced features, please [click here](https://tdengine.com/tdengine/). The easiest way to experience TDengine is through [TDengine Cloud](https://cloud.tdengine.com).

# 2. Documentation

For the complete user manual, system architecture, and more details, please refer to [TDengine](https://www.taosdata.com/) or the [TDengine official documentation](https://docs.taosdata.com).
# 3. Prerequisites

## 3.1 Linux

<details>

<summary>Install prerequisites on Linux</summary>

### Ubuntu 18.04, 20.04, 22.04

```bash
sudo apt-get update
sudo apt-get install -y gcc cmake build-essential git libjansson-dev \
  libsnappy-dev liblzma-dev zlib1g-dev pkg-config
```

### CentOS 8

```bash
sudo yum update
yum install -y epel-release gcc gcc-c++ make cmake git perl dnf-plugins-core
yum config-manager --set-enabled powertools
yum install -y zlib-static xz-devel snappy-devel jansson-devel pkgconfig libatomic-static libstdc++-static
```
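As an optional sanity check (our suggestion, not part of the official steps), you can confirm that the toolchain is present and that CMake meets the 3.13.0 minimum required for the build:

```bash
gcc --version
cmake --version
git --version
```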
</details>
## 3.2 macOS

<details>

<summary>Install prerequisites on macOS</summary>

Install the dependency tool [brew](https://brew.sh/) as prompted.

```bash
brew install argp-standalone gflags pkgconfig
```

</details>

## 3.3 Windows

<details>

<summary>Install prerequisites on Windows</summary>

Work in progress.

</details>

## 3.4 Clone the repository

<details>

<summary>Clone the repository</summary>

Clone the TDengine repository to the target machine with the following commands:
```bash
git clone https://github.com/taosdata/TDengine.git
cd TDengine
```
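If cloning over HTTPS is slow, a shallow clone is usually enough for building; this is a standard Git option, not a TDengine-specific requirement:

```bash
git clone --depth 1 https://github.com/taosdata/TDengine.git
cd TDengine
```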
> **Note:**
> For more TDengine connectors, see the following repositories: [JDBC connector](https://github.com/taosdata/taos-connector-jdbc), [Go connector](https://github.com/taosdata/driver-go), [Python connector](https://github.com/taosdata/taos-connector-python), [Node.js connector](https://github.com/taosdata/taos-connector-node), [C# connector](https://github.com/taosdata/taos-connector-dotnet), [Rust connector](https://github.com/taosdata/taos-connector-rust).

</details>

# 4. Building

TDengine can currently be installed and run on Linux, Windows, and macOS. Applications on any OS can also choose to connect to the server taosd through taosAdapter's RESTful interface. TDengine supports the X64/ARM64 CPU architectures, with support for MIPS64, Alpha64, ARM32, RISC-V, and other architectures planned. Building with a cross-compiler is currently not supported.

You can choose to install TDengine from source, via a [container](https://docs.taosdata.com/get-started/docker/), an [installation package](https://docs.taosdata.com/get-started/package/), or [Kubernetes](https://docs.taosdata.com/deployment/k8s/). This quick guide only applies to installing from source.

@ -40,309 +139,257 @@ TDengine also provides a set of auxiliary tools, taosTools, which currently includes taosBench

To build TDengine, please use [CMake](https://cmake.org/) 3.13.0 or later.
## 4.1 Building on Linux

<details>

<summary>Detailed steps for building on Linux</summary>

You can compile TDengine and taosTools (including taosBenchmark and taosdump) with the `build.sh` script:

```bash
./build.sh
```

You can also build with the following commands:

```bash
mkdir debug && cd debug
cmake .. -DBUILD_TOOLS=true -DBUILD_CONTRIB=true
make
```

You can use Jemalloc as the memory allocator instead of glibc:

```bash
apt install autoconf
cmake .. -DJEMALLOC_ENABLED=true
```

The TDengine build script can detect the host architecture automatically on x86, x86-64, and arm64 platforms.
You can also specify the architecture manually with the CPUTYPE option, for example aarch64:

```bash
cmake .. -DCPUTYPE=aarch64 && cmake --build .
```
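To shorten compile times, the build step can also be run in parallel; this is plain CMake usage rather than a TDengine-specific option:

```bash
cd debug
cmake --build . -j "$(nproc)"
```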
</details>

## 4.2 Building on macOS

<details>

<summary>Detailed steps for building on macOS</summary>

Please install the XCode command line tools and cmake. Verified with XCode 11.4+ on Catalina and Big Sur.

```shell
mkdir debug && cd debug
cmake .. && cmake --build .
```

</details>

## 4.3 Building on Windows

<details>

<summary>Detailed steps for building on Windows</summary>

If you are using Visual Studio 2013, run "cmd.exe" to open a command window and execute the commands below.
When executing vcvarsall.bat, specify "amd64" for 64-bit Windows and "x86" for 32-bit Windows.

```cmd
mkdir debug && cd debug
"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat" < amd64 | x86 >
cmake .. -G "NMake Makefiles"
nmake
```

If you are using Visual Studio 2019 or 2017:
Run "cmd.exe" to open a command window and execute the commands below.
When executing vcvarsall.bat, specify "x64" for 64-bit Windows and "x86" for 32-bit Windows.

```cmd
mkdir debug && cd debug
"c:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\Build\vcvarsall.bat" < x64 | x86 >
cmake .. -G "NMake Makefiles"
nmake
```

Alternatively, open a command window from the Windows Start menu -> "Visual Studio < 2019 | 2017 >" folder -> "x64 Native Tools Command Prompt for VS < 2019 | 2017 >" or "x86 Native Tools Command Prompt for VS < 2019 | 2017 >", depending on your Windows architecture, and then execute:

```cmd
mkdir debug && cd debug
cmake .. -G "NMake Makefiles"
nmake
```

</details>
# 5. Packaging

Because of some component dependencies, the TDengine community installer cannot be created from this repository alone. We are still working on improving this.

# 6. Installation

## 6.1 Installing on Linux

<details>

<summary>Detailed steps for installing on Linux</summary>

After a successful build, TDengine can be installed with:

```bash
sudo make install
```

Installing from source also configures service management for TDengine. You can also install TDengine from the [installation package](https://docs.taosdata.com/get-started/package/).

</details>

## 6.2 Installing on macOS

<details>

<summary>Detailed steps for installing on macOS</summary>

After a successful build, TDengine can be installed with:

```bash
sudo make install
```

</details>

## 6.3 Installing on Windows

<details>

<summary>Detailed steps for installing on Windows</summary>

After a successful build, TDengine can be installed with:

```cmd
nmake install
```

</details>

# 7. Quick Start

## 7.1 Running on Linux

<details>

<summary>Detailed steps for running on Linux</summary>

After TDengine is installed on Linux, start the service by running the following command in a terminal:

```bash
sudo systemctl start taosd
```

Then connect to the service with the TDengine CLI:

```bash
taos
```

If the TDengine CLI connects to the server successfully, it prints a welcome message and version information; otherwise, it displays a connection error message.

If you prefer not to run TDengine as a service, you can run it in the current terminal. For example, to start the TDengine server right after the build completes, run the following in a terminal (using Linux as an example; on Windows the command is `taosd.exe`):

```bash
./build/bin/taosd -c test/cfg
```

In another terminal, connect to the server with the TDengine CLI:

```bash
./build/bin/taos -c test/cfg
```

The option `-c test/cfg` specifies the directory of the system configuration file.
</details>

## 7.2 Running on macOS

<details>

<summary>Detailed steps for running on macOS</summary>

After installation on macOS, start the service either by double-clicking the launcher under /applications/TDengine or by running the following command in a terminal:

```bash
sudo launchctl start com.tdengine.taosd
```

Then connect to the TDengine server with the TDengine CLI:

```bash
taos
```

If the TDengine CLI connects to the server successfully, it prints a welcome message and version information; otherwise, it displays an error message.

</details>

## 7.3 Running on Windows

<details>

<summary>Detailed steps for running on Windows</summary>

You can start the TDengine server on Windows with:

```cmd
.\build\bin\taosd.exe -c test\cfg
```

In another terminal, connect to the server with the TDengine CLI:

```cmd
.\build\bin\taos.exe -c test\cfg
```

The option `-c test\cfg` specifies the directory of the system configuration file.

</details>
# 8. Testing

For how to run the different kinds of tests on TDengine, please refer to [Testing TDengine](./tests/README-CN.md).

# 9. Releases

For the complete list of TDengine releases, please see [Releases](https://github.com/taosdata/TDengine/releases).

# 10. Workflows

The TDengine build-check workflow can be found at [GitHub Action](https://github.com/taosdata/TDengine/actions/workflows/taosd-ci-build.yml). More workflows are being created and will be available soon.

# 11. Coverage

The latest TDengine test coverage report is available at [coveralls.io](https://coveralls.io/github/taosdata/TDengine).

<details>

<summary>How to run the coverage report locally?</summary>

To generate the test coverage report locally (in HTML format), run:

```bash
cd tests
bash setup-lcov.sh -v 1.16 && ./run_local_coverage.sh -b main -c task
# on the main branch, run the cases in longtimeruning_cases.task
# for more information about the options, see ./run_local_coverage.sh -h
```
> **Note:**
> The -b and -i options recompile TDengine with -DCOVER=true, which may take some time.
</details>

# 12. Become a Contributor
Click [here](https://www.taosdata.com/contributor) to learn how to become a TDengine contributor.

View File

@ -33,6 +33,7 @@ The JDBC driver implementation for TDengine strives to be consistent with relati
| taos-jdbcdriver Version | Major Changes | TDengine Version |
| ----------------------- | ------------- | ---------------- |
| 3.5.3 | Support unsigned data types in WebSocket connections. | - |
| 3.5.2 | Fixed WebSocket result set free bug. | - |
| 3.5.1 | Fixed the getObject issue in data subscription. | - |
| 3.5.0 | 1. Optimized the performance of WebSocket connection parameter binding, supporting parameter binding queries using binary data. <br/> 2. Optimized the performance of small queries in WebSocket connection. <br/> 3. Added support for setting time zone and app info on WebSocket connection. | 3.3.5.0 and higher |
@ -128,24 +129,27 @@ Please refer to the specific error codes:
TDengine currently supports timestamp, numeric, character, and boolean types; the corresponding Java type conversions are as follows:
| TDengine DataType | JDBCType             | Remark                                  |
| ----------------- | -------------------- | --------------------------------------- |
| TIMESTAMP         | java.sql.Timestamp   |                                          |
| BOOL              | java.lang.Boolean    |                                          |
| TINYINT           | java.lang.Byte       |                                          |
| TINYINT UNSIGNED  | java.lang.Short      | only supported in WebSocket connections  |
| SMALLINT          | java.lang.Short      |                                          |
| SMALLINT UNSIGNED | java.lang.Integer    | only supported in WebSocket connections  |
| INT               | java.lang.Integer    |                                          |
| INT UNSIGNED      | java.lang.Long       | only supported in WebSocket connections  |
| BIGINT            | java.lang.Long       |                                          |
| BIGINT UNSIGNED   | java.math.BigInteger | only supported in WebSocket connections  |
| FLOAT             | java.lang.Float      |                                          |
| DOUBLE            | java.lang.Double     |                                          |
| BINARY            | byte array           |                                          |
| NCHAR             | java.lang.String     |                                          |
| JSON              | java.lang.String     | only supported in tags                   |
| VARBINARY         | byte[]               |                                          |
| GEOMETRY          | byte[]               |                                          |
**Note**: Due to historical reasons, the BINARY type in TDengine is not truly binary data and is no longer recommended. Please use the VARBINARY type instead.
GEOMETRY type is binary data in little endian byte order, complying with the WKB standard. For more details, please refer to [Data Types](../../sql-manual/data-types/)
For the WKB standard, please refer to [Well-Known Binary (WKB)](https://libgeos.org/specifications/wkb/)
For the Java connector, you can use the jts library to conveniently create GEOMETRY type objects, serialize them, and write to TDengine. Here is an example [Geometry Example](https://github.com/taosdata/TDengine/blob/main/docs/examples/java/src/main/java/com/taos/example/GeometryDemo.java)
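As an illustrative sketch (not an official sample) of how the unsigned mappings above surface in application code, assume a WebSocket connection and a hypothetical table `meters_u` with a `BIGINT UNSIGNED` column `u`; host and credentials below are placeholders:

```java
import java.math.BigInteger;
import java.sql.*;

public class UnsignedReadSketch {
    public static void main(String[] args) throws SQLException {
        // WebSocket (TAOS-WS) connection through taosAdapter on port 6041
        String url = "jdbc:TAOS-WS://localhost:6041/power?user=root&password=taosdata";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT u FROM meters_u")) {
            while (rs.next()) {
                // Per the mapping table above, BIGINT UNSIGNED arrives as java.math.BigInteger
                BigInteger u = (BigInteger) rs.getObject(1);
                System.out.println(u);
            }
        }
    }
}
```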

View File

@ -19,7 +19,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.3</version>
</dependency>
<dependency>
<groupId>org.locationtech.jts</groupId>

View File

@ -47,7 +47,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.3</version>
</dependency>
</dependencies>

View File

@ -18,7 +18,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.3</version>
</dependency>
<!-- druid -->
<dependency>

View File

@ -17,7 +17,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.3</version>
</dependency>
<dependency>
<groupId>com.google.guava</groupId>

View File

@ -47,7 +47,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.3</version>
</dependency>
<dependency>

View File

@ -70,7 +70,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.3</version>
</dependency>
<dependency>

View File

@ -67,7 +67,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.3</version>
<!-- <scope>system</scope>-->
<!-- <systemPath>${project.basedir}/src/main/resources/lib/taos-jdbcdriver-2.0.15-dist.jar</systemPath>-->
</dependency>

View File

@ -22,7 +22,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.3</version>
</dependency>
<!-- ANCHOR_END: dep-->

View File

@ -2,6 +2,7 @@ package com.taos.example;
import com.taosdata.jdbc.ws.TSWSPreparedStatement;
import java.math.BigInteger;
import java.sql.*;
import java.util.Random;
@ -26,7 +27,12 @@ public class WSParameterBindingFullDemo {
"binary_col BINARY(100), " + "binary_col BINARY(100), " +
"nchar_col NCHAR(100), " + "nchar_col NCHAR(100), " +
"varbinary_col VARBINARY(100), " + "varbinary_col VARBINARY(100), " +
"geometry_col GEOMETRY(100)) " + "geometry_col GEOMETRY(100)," +
"utinyint_col tinyint unsigned," +
"usmallint_col smallint unsigned," +
"uint_col int unsigned," +
"ubigint_col bigint unsigned" +
") " +
"tags (" + "tags (" +
"int_tag INT, " + "int_tag INT, " +
"double_tag DOUBLE, " + "double_tag DOUBLE, " +
@ -34,7 +40,12 @@ public class WSParameterBindingFullDemo {
"binary_tag BINARY(100), " + "binary_tag BINARY(100), " +
"nchar_tag NCHAR(100), " + "nchar_tag NCHAR(100), " +
"varbinary_tag VARBINARY(100), " + "varbinary_tag VARBINARY(100), " +
"geometry_tag GEOMETRY(100))" "geometry_tag GEOMETRY(100)," +
"utinyint_tag tinyint unsigned," +
"usmallint_tag smallint unsigned," +
"uint_tag int unsigned," +
"ubigint_tag bigint unsigned" +
")"
};
private static final int numOfSubTable = 10, numOfRow = 10;
@ -79,7 +90,7 @@ public class WSParameterBindingFullDemo {
// set table name
pstmt.setTableName("ntb_json_" + i);
// set tags
pstmt.setTagJson(0, "{\"device\":\"device_" + i + "\"}");
// set columns
long current = System.currentTimeMillis();
for (int j = 0; j < numOfRow; j++) {
@ -94,25 +105,29 @@ public class WSParameterBindingFullDemo {
}
private static void stmtAll(Connection conn) throws SQLException {
String sql = "INSERT INTO ? using stb tags(?,?,?,?,?,?,?,?,?,?,?) VALUES (?,?,?,?,?,?,?,?,?,?,?,?)";
try (TSWSPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSWSPreparedStatement.class)) {
// set table name
pstmt.setTableName("ntb");
// set tags
pstmt.setTagInt(0, 1);
pstmt.setTagDouble(1, 1.1);
pstmt.setTagBoolean(2, true);
pstmt.setTagString(3, "binary_value");
pstmt.setTagNString(4, "nchar_value");
pstmt.setTagVarbinary(5, new byte[] { (byte) 0x98, (byte) 0xf4, 0x6e });
pstmt.setTagGeometry(6, new byte[] {
0x01, 0x01, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x59,
0x40, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x59, 0x40 });
pstmt.setTagShort(7, (short)255);
pstmt.setTagInt(8, 65535);
pstmt.setTagLong(9, 4294967295L);
pstmt.setTagBigInteger(10, new BigInteger("18446744073709551615"));
long current = System.currentTimeMillis();
@ -129,6 +144,10 @@ public class WSParameterBindingFullDemo {
0x00, 0x00, 0x00, 0x59,
0x40, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x59, 0x40 });
pstmt.setShort(9, (short)255);
pstmt.setInt(10, 65535);
pstmt.setLong(11, 4294967295L);
pstmt.setObject(12, new BigInteger("18446744073709551615"));
pstmt.addBatch();
pstmt.executeBatch();
System.out.println("Successfully inserted rows to example_all_type_stmt.ntb");

View File

@ -89,7 +89,7 @@ TDengine provides a rich set of application development interfaces; to help users get started quickly
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.3</version>
</dependency>
```

View File

@ -69,7 +69,7 @@ On the taosExplorer service page, go to the "System Management - Backup" page; in the
1. Database: the name of the database to back up. A backup plan can back up only one database/supertable.
2. Supertable: the name of the supertable to back up. If left empty, the entire database is backed up.
3. Next execution time: the date and time at which the backup task first runs.
4. Backup cycle: the interval between backup points. Note: the backup cycle must be less than the database's WAL_RETENTION_PERIOD value.
5. Error retry count: for errors that can be resolved by retrying, the system retries this many times.
6. Error retry interval: the interval between retries.
7. Directory: the directory where the backup files are stored.
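For reference, the retention window that the backup cycle is checked against can be inspected and adjusted in SQL; this is a sketch, and the database name `power` is a placeholder:

```sql
-- inspect current settings (see the wal_retention_period column)
SELECT * FROM information_schema.ins_databases;
-- widen the window to 7 days (value in seconds)
ALTER DATABASE power WAL_RETENTION_PERIOD 604800;
```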

View File

@ -11,50 +11,25 @@ Tableau is a well-known business intelligence tool that supports a wide range of data sources and can
- A TDengine cluster, version 3.3.5.4 or later, is deployed and running normally (both Enterprise and Community editions work)
- taosAdapter is running normally; see the [taosAdapter user manual](../../../reference/components/taosadapter) for details
- Tableau Desktop is installed and running (if not, download and install the 32/64-bit Windows [Tableau Desktop](https://www.tableau.com/products/desktop/download); for installation, see the [official documentation](https://www.tableau.com))
- The ODBC driver is installed successfully; see [Installing the ODBC driver](../../../reference/connector/odbc/#安装) for details
- The ODBC data source is configured successfully; see [Configuring an ODBC data source](../../../reference/connector/odbc/#配置数据源) for details

## Loading and Analyzing TDengine Data

**Step 1**: Start Tableau in a Windows environment, search for "ODBC" on its connection page, and select "Other Databases (ODBC)".

**Step 2**: Click the DSN radio button, select the configured data source (MyTDengine), and click the Connect button. After the connection succeeds, delete the content of the string-attachment section and click the Sign In button.

![tableau-odbc](./tableau/tableau-odbc.jpg)

**Step 3**: In the workbook page that appears, the connected data source is shown. Click the database drop-down list to display the database to analyze, then click the search button in the table options to list all tables in that database. Drag the tables to analyze into the right-hand area to display their schema.

![tableau-workbook](./tableau/tableau-table.jpg)

**Step 4**: Click the "Update Now" button at the bottom to display the data in the table.

![tableau-workbook](./tableau/tableau-data.jpg)

**Step 5**: Click "Worksheet" at the bottom of the window to open the data analysis window, which shows all fields of the analyzed table; drag fields to the rows and columns shelves to produce charts.

![tableau-workbook](./tableau/tableau-analysis.jpg)

Binary file not shown. (Before: 236 KiB | After: 235 KiB)

Binary file not shown. (Before: 103 KiB | After: 237 KiB)

Binary file not shown. (Before: 230 KiB)

Binary file not shown. (Before: 38 KiB)

View File

@ -33,6 +33,7 @@ The TDengine JDBC driver implementation stays as consistent as possible with relational database drivers

| taos-jdbcdriver Version | Major Changes | TDengine Version |
| ------------------ | ------------- | ---------------- |
| 3.5.3 | Supported unsigned data types over WebSocket connections | - |
| 3.5.2 | Fixed a WebSocket result set release bug | - |
| 3.5.1 | Fixed the timestamp object type issue in data subscription | - |
| 3.5.0 | 1. Optimized the performance of parameter binding over WebSocket connections; parameter-binding queries support binary data <br/> 2. Optimized the performance of small queries over WebSocket connections <br/> 3. Supported setting the time zone and application info on WebSocket connections | 3.3.5.0 and higher |
@ -128,24 +129,27 @@ The JDBC connector may report four kinds of error codes:

TDengine currently supports timestamp, numeric, character, and boolean types; the corresponding Java type conversions are as follows:

| TDengine DataType | JDBCType             | Remark                                   |
| ----------------- | -------------------- | ----------------------------------------- |
| TIMESTAMP         | java.sql.Timestamp   |                                           |
| BOOL              | java.lang.Boolean    |                                           |
| TINYINT           | java.lang.Byte       |                                           |
| TINYINT UNSIGNED  | java.lang.Short      | only supported over WebSocket connections |
| SMALLINT          | java.lang.Short      |                                           |
| SMALLINT UNSIGNED | java.lang.Integer    | only supported over WebSocket connections |
| INT               | java.lang.Integer    |                                           |
| INT UNSIGNED      | java.lang.Long       | only supported over WebSocket connections |
| BIGINT            | java.lang.Long       |                                           |
| BIGINT UNSIGNED   | java.math.BigInteger | only supported over WebSocket connections |
| FLOAT             | java.lang.Float      |                                           |
| DOUBLE            | java.lang.Double     |                                           |
| BINARY            | byte array           |                                           |
| NCHAR             | java.lang.String     |                                           |
| JSON              | java.lang.String     | only supported in tags                    |
| VARBINARY         | byte[]               |                                           |
| GEOMETRY          | byte[]               |                                           |

**Note**: Due to historical reasons, the BINARY type in TDengine is not truly binary data and is no longer recommended; use the VARBINARY type instead.

The GEOMETRY type is binary data in little-endian byte order, conforming to the WKB standard. For details, see [Data Types](../../taos-sql/data-type/#数据类型).
For the WKB standard, see [Well-Known Binary (WKB)](https://libgeos.org/specifications/wkb/).
For the Java connector, the jts library can be used to conveniently create GEOMETRY objects, serialize them, and write them to TDengine; see the [Geometry example](https://github.com/taosdata/TDengine/blob/main/docs/examples/java/src/main/java/com/taos/example/GeometryDemo.java).

View File

@ -268,7 +268,7 @@ typedef struct SStoreMeta {
const void* (*extractTagVal)(const void* tag, int16_t type, STagVal* tagVal); // todo remove it
int32_t (*getTableUidByName)(void* pVnode, char* tbName, uint64_t* uid);
int32_t (*getTableTypeSuidByName)(void* pVnode, char* tbName, ETableType* tbType, uint64_t* suid);
int32_t (*getTableNameByUid)(void* pVnode, uint64_t uid, char* tbName);
bool (*isTableExisted)(void* pVnode, tb_uid_t uid);

View File

@ -314,8 +314,8 @@ typedef struct SOrderByExprNode {
typedef struct SLimitNode {
ENodeType type; // QUERY_NODE_LIMIT
SValueNode* limit;
SValueNode* offset;
} SLimitNode;
typedef struct SStateWindowNode {
@ -681,6 +681,7 @@ int32_t nodesValueNodeToVariant(const SValueNode* pNode, SVariant* pVal);
int32_t nodesMakeValueNodeFromString(char* literal, SValueNode** ppValNode);
int32_t nodesMakeValueNodeFromBool(bool b, SValueNode** ppValNode);
int32_t nodesMakeValueNodeFromInt32(int32_t value, SNode** ppNode);
int32_t nodesMakeValueNodeFromInt64(int64_t value, SNode** ppNode);
char* nodesGetFillModeString(EFillMode mode);
int32_t nodesMergeConds(SNode** pDst, SNodeList** pSrc);
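Since SLimitNode now stores SValueNode pointers instead of raw integers, limit/offset values are built as value nodes. A hedged sketch of populating the new field with the helper declared above (allocation of the limit node and error handling are omitted):

```c
// Sketch only: wrap a literal LIMIT 10 in a value node for the reworked SLimitNode.
SNode*  pVal = NULL;
int32_t code = nodesMakeValueNodeFromInt64(10, &pVal);
if (code == TSDB_CODE_SUCCESS) {
  pLimit->limit = (SValueNode*)pVal;  // pLimit is a hypothetical SLimitNode* created elsewhere
}
```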

View File

@ -253,7 +253,7 @@ void taos_cleanup(void) {
taosCloseRef(id);
nodesDestroyAllocatorSet();
cleanupAppInfo();
rpcCleanup();
tscDebug("rpc cleanup");

View File

@ -76,6 +76,7 @@ enum {
typedef struct {
tmr_h timer;
int32_t rsetId;
TdThreadMutex lock;
} SMqMgmt;
struct tmq_list_t {
@ -1600,16 +1601,25 @@ void tmqFreeImpl(void* handle) {
static void tmqMgmtInit(void) {
tmqInitRes = 0;
if (taosThreadMutexInit(&tmqMgmt.lock, NULL) != 0){
goto END;
}
tmqMgmt.timer = taosTmrInit(1000, 100, 360000, "TMQ");
if (tmqMgmt.timer == NULL) {
goto END;
}
tmqMgmt.rsetId = taosOpenRef(10000, tmqFreeImpl);
if (tmqMgmt.rsetId < 0) {
goto END;
}
return;
END:
tmqInitRes = terrno;
}
void tmqMgmtClose(void) {
@ -1618,9 +1628,27 @@ void tmqMgmtClose(void) {
tmqMgmt.timer = NULL;
}
if (tmqMgmt.rsetId > 0) {
(void) taosThreadMutexLock(&tmqMgmt.lock);
tmq_t *tmq = taosIterateRef(tmqMgmt.rsetId, 0);
int64_t refId = 0;
while (tmq) {
refId = tmq->refId;
if (refId == 0) {
break;
}
atomic_store_8(&tmq->status, TMQ_CONSUMER_STATUS__CLOSED);
if (taosRemoveRef(tmqMgmt.rsetId, tmq->refId) != 0) {
qWarn("taosRemoveRef tmq refId:%" PRId64 " failed, error:%s", refId, tstrerror(terrno));
}
tmq = taosIterateRef(tmqMgmt.rsetId, refId);
}
taosCloseRef(tmqMgmt.rsetId);
tmqMgmt.rsetId = -1;
(void)taosThreadMutexUnlock(&tmqMgmt.lock);
}
}
@ -2617,8 +2645,13 @@ int32_t tmq_unsubscribe(tmq_t* tmq) {
int32_t tmq_consumer_close(tmq_t* tmq) {
if (tmq == NULL) return TSDB_CODE_INVALID_PARA;
int32_t code = 0;
code = taosThreadMutexLock(&tmqMgmt.lock);
if (atomic_load_8(&tmq->status) == TMQ_CONSUMER_STATUS__CLOSED){
goto end;
}
tqInfoC("consumer:0x%" PRIx64 " start to close consumer, status:%d", tmq->consumerId, tmq->status);
code = tmq_unsubscribe(tmq);
if (code == 0) {
atomic_store_8(&tmq->status, TMQ_CONSUMER_STATUS__CLOSED);
code = taosRemoveRef(tmqMgmt.rsetId, tmq->refId);
@ -2626,6 +2659,9 @@ int32_t tmq_consumer_close(tmq_t* tmq) {
tqErrorC("tmq close failed to remove ref:%" PRId64 ", code:%d", tmq->refId, code);
}
}
end:
code = taosThreadMutexUnlock(&tmqMgmt.lock);
return code;
}

View File

@ -237,6 +237,63 @@ int main(int argc, char** argv) {
return RUN_ALL_TESTS();
}
TEST(stmt2Case, stmt2_test_limit) {
TAOS* taos = taos_connect("localhost", "root", "taosdata", "", 0);
ASSERT_NE(taos, nullptr);
do_query(taos, "drop database if exists stmt2_testdb_7");
do_query(taos, "create database IF NOT EXISTS stmt2_testdb_7");
do_query(taos, "create stable stmt2_testdb_7.stb (ts timestamp, b binary(10)) tags(t1 int, t2 binary(10))");
do_query(taos,
"insert into stmt2_testdb_7.tb2 using stmt2_testdb_7.stb tags(2,'xyz') values(1591060628000, "
"'abc'),(1591060628001,'def'),(1591060628004, 'hij')");
do_query(taos, "use stmt2_testdb_7");
TAOS_STMT2_OPTION option = {0, true, true, NULL, NULL};
TAOS_STMT2* stmt = taos_stmt2_init(taos, &option);
ASSERT_NE(stmt, nullptr);
const char* sql = "select * from stmt2_testdb_7.tb2 where ts > ? and ts < ? limit ?";
int code = taos_stmt2_prepare(stmt, sql, 0);
checkError(stmt, code);
int t64_len[1] = {sizeof(int64_t)};
int b_len[1] = {3};
int x = 2;
int x_len = sizeof(int);
int64_t ts[2] = {1591060627000, 1591060628005};
TAOS_STMT2_BIND params[3] = {{TSDB_DATA_TYPE_TIMESTAMP, &ts[0], t64_len, NULL, 1},
{TSDB_DATA_TYPE_TIMESTAMP, &ts[1], t64_len, NULL, 1},
{TSDB_DATA_TYPE_INT, &x, &x_len, NULL, 1}};
TAOS_STMT2_BIND* paramv = &params[0];
TAOS_STMT2_BINDV bindv = {1, NULL, NULL, &paramv};
code = taos_stmt2_bind_param(stmt, &bindv, -1);
checkError(stmt, code);
code = taos_stmt2_exec(stmt, NULL);
checkError(stmt, code);
TAOS_RES* pRes = taos_stmt2_result(stmt);
ASSERT_NE(pRes, nullptr);
int getRecordCounts = 0;
while ((taos_fetch_row(pRes))) {
getRecordCounts++;
}
ASSERT_EQ(getRecordCounts, 2);
taos_stmt2_close(stmt);
do_query(taos, "drop database if exists stmt2_testdb_7");
taos_close(taos);
}
TEST(stmt2Case, insert_stb_get_fields_Test) {
TAOS* taos = taos_connect("localhost", "root", "taosdata", NULL, 0);
ASSERT_NE(taos, nullptr);

View File

@ -159,6 +159,7 @@ void removeTasksInBuf(SArray *pTaskIds, SStreamExecInfo *pExecInfo);
int32_t mndFindChangedNodeInfo(SMnode *pMnode, const SArray *pPrevNodeList, const SArray *pNodeList,
SVgroupChangeInfo *pInfo);
void killAllCheckpointTrans(SMnode *pMnode, SVgroupChangeInfo *pChangeInfo);
bool isNodeUpdateTransActive();
int32_t createStreamTaskIter(SStreamObj *pStream, SStreamTaskIter **pIter);
void destroyStreamTaskIter(SStreamTaskIter *pIter);
@ -175,8 +176,8 @@ void removeStreamTasksInBuf(SStreamObj *pStream, SStreamExecInfo *pExecNode);
int32_t mndGetConsensusInfo(SHashObj *pHash, int64_t streamId, int32_t numOfTasks, SCheckpointConsensusInfo **pInfo);
void mndAddConsensusTasks(SCheckpointConsensusInfo *pInfo, const SRestoreCheckpointInfo *pRestoreInfo);
void mndClearConsensusRspEntry(SCheckpointConsensusInfo *pInfo);
int32_t mndClearConsensusCheckpointId(SHashObj *pHash, int64_t streamId);
int32_t mndClearChkptReportInfo(SHashObj *pHash, int64_t streamId);
int32_t mndResetChkptReportInfo(SHashObj *pHash, int64_t streamId);
int32_t setStreamAttrInResBlock(SStreamObj *pStream, SSDataBlock *pBlock, int32_t numOfRows);

View File

@ -65,8 +65,6 @@ static int32_t mndProcessDropOrphanTaskReq(SRpcMsg *pReq);
static void saveTaskAndNodeInfoIntoBuf(SStreamObj *pStream, SStreamExecInfo *pExecNode);
static void addAllStreamTasksIntoBuf(SMnode *pMnode, SStreamExecInfo *pExecInfo);
static SSdbRow *mndStreamActionDecode(SSdbRaw *pRaw);
SSdbRaw *mndStreamSeqActionEncode(SStreamObj *pStream);
@ -801,6 +799,13 @@ static int32_t mndProcessCreateStreamReq(SRpcMsg *pReq) {
TSDB_CHECK_NULL(sql, code, lino, _OVER, terrno);
}
// check for the taskEp update trans
if (isNodeUpdateTransActive()) {
mError("stream:%s failed to create stream, node update trans is active", createReq.name);
code = TSDB_CODE_STREAM_TASK_IVLD_STATUS;
goto _OVER;
}
SDbObj *pSourceDb = mndAcquireDb(pMnode, createReq.sourceDB);
if (pSourceDb == NULL) {
code = terrno;
@ -2416,8 +2421,8 @@ static bool validateChkptReport(const SCheckpointReport *pReport, int64_t report
return true;
}
static void doAddReportStreamTask(SArray *pList, int64_t reportedChkptId, const SCheckpointReport *pReport) {
bool valid = validateChkptReport(pReport, reportedChkptId);
if (!valid) {
return;
}
@ -2433,7 +2438,7 @@ static void doAddReportStreamTask(SArray *pList, int64_t reportChkptId, const SC
mError("s-task:0x%x invalid checkpoint-report msg, existed:%" PRId64 " req checkpointId:%" PRId64 ", discard", mError("s-task:0x%x invalid checkpoint-report msg, existed:%" PRId64 " req checkpointId:%" PRId64 ", discard",
pReport->taskId, p->checkpointId, pReport->checkpointId); pReport->taskId, p->checkpointId, pReport->checkpointId);
} else if (p->checkpointId < pReport->checkpointId) { // expired checkpoint-report msg, update it } else if (p->checkpointId < pReport->checkpointId) { // expired checkpoint-report msg, update it
mDebug("s-task:0x%x expired checkpoint-report msg in checkpoint-report list update from %" PRId64 "->%" PRId64, mInfo("s-task:0x%x expired checkpoint-report info in checkpoint-report list update from %" PRId64 "->%" PRId64,
pReport->taskId, p->checkpointId, pReport->checkpointId); pReport->taskId, p->checkpointId, pReport->checkpointId);
// update the checkpoint report info // update the checkpoint report info
@ -2465,7 +2470,8 @@ static void doAddReportStreamTask(SArray *pList, int64_t reportChkptId, const SC
mError("failed to put into task list, taskId:0x%x", pReport->taskId); mError("failed to put into task list, taskId:0x%x", pReport->taskId);
} else { } else {
int32_t size = taosArrayGetSize(pList); int32_t size = taosArrayGetSize(pList);
mDebug("stream:0x%" PRIx64 " %d tasks has send checkpoint-report", pReport->streamId, size); mDebug("stream:0x%" PRIx64 " taskId:0x%x checkpoint-report recv, %d tasks has send checkpoint-report",
pReport->streamId, pReport->taskId, size);
}
}
@ -2491,7 +2497,7 @@ int32_t mndProcessCheckpointReport(SRpcMsg *pReq) {
" checkpointVer:%" PRId64 " transId:%d", " checkpointVer:%" PRId64 " transId:%d",
req.nodeId, req.taskId, req.checkpointId, req.checkpointVer, req.transId); req.nodeId, req.taskId, req.checkpointId, req.checkpointVer, req.transId);
// register to the stream task done map, if all tasks have sent this kind of message, start the checkpoint trans.
streamMutexLock(&execInfo.lock);
SStreamObj *pStream = NULL;
@ -2500,7 +2506,7 @@ int32_t mndProcessCheckpointReport(SRpcMsg *pReq) {
mWarn("failed to find the stream:0x%" PRIx64 ", not handle checkpoint-report, try to acquire in buf", req.streamId); mWarn("failed to find the stream:0x%" PRIx64 ", not handle checkpoint-report, try to acquire in buf", req.streamId);
// not in meta-store yet, try to acquire the task in exec buffer // not in meta-store yet, try to acquire the task in exec buffer
// the checkpoint req arrives too soon before the completion of the create stream trans. // the checkpoint req arrives too soon before the completion of the creation of stream trans.
STaskId id = {.streamId = req.streamId, .taskId = req.taskId};
void *p = taosHashGet(execInfo.pTaskMap, &id, sizeof(id));
if (p == NULL) {
@ -2533,7 +2539,7 @@ int32_t mndProcessCheckpointReport(SRpcMsg *pReq) {
}
int32_t total = taosArrayGetSize(pInfo->pTaskList);
if (total == numOfTasks) { // all tasks have sent the reqs
mInfo("stream:0x%" PRIx64 " %s all %d tasks send checkpoint-report, checkpoint meta-info for checkpointId:%" PRId64
" will be issued soon",
req.streamId, pStream->name, total, req.checkpointId);

View File

@ -292,6 +292,25 @@ int32_t setTransAction(STrans *pTrans, void *pCont, int32_t contLen, int32_t msg
return mndTransAppendRedoAction(pTrans, &action);
}
bool isNodeUpdateTransActive() {
bool exist = false;
void *pIter = NULL;
streamMutexLock(&execInfo.lock);
while ((pIter = taosHashIterate(execInfo.transMgmt.pDBTrans, pIter)) != NULL) {
SStreamTransInfo *pTransInfo = (SStreamTransInfo *)pIter;
if (strcmp(pTransInfo->name, MND_STREAM_TASK_UPDATE_NAME) == 0) {
mDebug("stream:0x%" PRIx64 " %s st:%" PRId64 " is in task nodeEp update, create new stream not allowed",
pTransInfo->streamId, pTransInfo->name, pTransInfo->startTime);
exist = true;
}
}
streamMutexUnlock(&execInfo.lock);
return exist;
}
int32_t doKillCheckpointTrans(SMnode *pMnode, const char *pDBName, size_t len) {
void *pIter = NULL;

View File

@ -658,6 +658,72 @@ int32_t removeExpiredNodeEntryAndTaskInBuf(SArray *pNodeSnapshot) {
return 0;
}
static int32_t allTasksSendChkptReport(SChkptReportInfo* pReportInfo, int32_t numOfTasks, const char* pName) {
int64_t checkpointId = -1;
int32_t transId = -1;
int32_t taskId = -1;
int32_t existed = (int32_t)taosArrayGetSize(pReportInfo->pTaskList);
if (existed != numOfTasks) {
mDebug("stream:0x%" PRIx64 " %s %d/%d tasks send checkpoint-report, %d not send", pReportInfo->streamId, pName,
existed, numOfTasks, numOfTasks - existed);
return -1;
}
// acquire current active checkpointId, and do cross-check checkpointId info in exec.pTaskList
for(int32_t i = 0; i < numOfTasks; ++i) {
STaskChkptInfo *pInfo = taosArrayGet(pReportInfo->pTaskList, i);
if (pInfo == NULL) {
continue;
}
if (checkpointId == -1) {
checkpointId = pInfo->checkpointId;
transId = pInfo->transId;
taskId = pInfo->taskId;
} else if (checkpointId != pInfo->checkpointId) {
mError("stream:0x%" PRIx64
" checkpointId in checkpoint-report list are not identical, type 1 taskId:0x%x checkpointId:%" PRId64
", type 2 taskId:0x%x checkpointId:%" PRId64,
pReportInfo->streamId, taskId, checkpointId, pInfo->taskId, pInfo->checkpointId);
return -1;
}
}
// check for the correct checkpointId for current task info in STaskChkptInfo
STaskChkptInfo *p = taosArrayGet(pReportInfo->pTaskList, 0);
STaskId id = {.streamId = p->streamId, .taskId = p->taskId};
STaskStatusEntry *pe = taosHashGet(execInfo.pTaskMap, &id, sizeof(id));
// cross-check failed, there must be something unknown wrong
SStreamTransInfo *pTransInfo = taosHashGet(execInfo.transMgmt.pDBTrans, &id.streamId, sizeof(id.streamId));
if (pTransInfo == NULL) {
mWarn("stream:0x%" PRIx64 " no active trans exists for checkpoint transId:%d, it may have been cleared already",
id.streamId, transId);
if (pe->checkpointInfo.activeId != 0 && pe->checkpointInfo.activeId != checkpointId) {
mWarn("stream:0x%" PRIx64 " active checkpointId is not equalled to the required, current:%" PRId64
", req:%" PRId64 " recheck next time",
id.streamId, pe->checkpointInfo.activeId, checkpointId);
return -1;
} else {
// do nothing
}
} else {
if (pTransInfo->transId != transId) {
mError("stream:0x%" PRIx64
" checkpoint-report list info are expired, active transId:%d trans in list:%d, recheck next time",
id.streamId, pTransInfo->transId, transId);
return -1;
}
}
mDebug("stream:0x%" PRIx64 " %s all %d tasks send checkpoint-report, start to update checkpoint-info", id.streamId,
pName, numOfTasks);
return TSDB_CODE_SUCCESS;
}
int32_t mndScanCheckpointReportInfo(SRpcMsg *pReq) {
SMnode *pMnode = pReq->info.node;
void *pIter = NULL;
@ -668,6 +734,7 @@ int32_t mndScanCheckpointReportInfo(SRpcMsg *pReq) {
}
mDebug("start to scan checkpoint report info"); mDebug("start to scan checkpoint report info");
streamMutexLock(&execInfo.lock); streamMutexLock(&execInfo.lock);
while ((pIter = taosHashIterate(execInfo.pChkptStreams, pIter)) != NULL) { while ((pIter = taosHashIterate(execInfo.pChkptStreams, pIter)) != NULL) {
@ -693,30 +760,27 @@ int32_t mndScanCheckpointReportInfo(SRpcMsg *pReq) {
}
int32_t total = mndGetNumOfStreamTasks(pStream);
int32_t ret = allTasksSendChkptReport(px, total, pStream->name);
if (ret == 0) {
code = mndStreamTransConflictCheck(pMnode, pStream->uid, MND_STREAM_CHKPT_UPDATE_NAME, false);
if (code == 0) {
code = mndCreateStreamChkptInfoUpdateTrans(pMnode, pStream, px->pTaskList);
if (code == TSDB_CODE_SUCCESS || code == TSDB_CODE_ACTION_IN_PROGRESS) { // remove this entry
taosArrayClear(px->pTaskList);
mInfo("stream:0x%" PRIx64 " clear checkpoint-report list and update the report checkpointId from:%" PRId64
" to %" PRId64,
pInfo->streamId, px->reportChkpt, pInfo->checkpointId);
px->reportChkpt = pInfo->checkpointId;
} else {
mDebug("stream:0x%" PRIx64 " not launch chkpt-info update trans, due to checkpoint not finished yet",
pInfo->streamId);
}
break;
} else {
mDebug("stream:0x%" PRIx64 " active checkpoint trans not finished yet, wait", pInfo->streamId);
}
}
sdbRelease(pMnode->pSdb, pStream);
@ -743,6 +807,8 @@ int32_t mndScanCheckpointReportInfo(SRpcMsg *pReq) {
streamMutexUnlock(&execInfo.lock);
taosArrayDestroy(pDropped);
mDebug("end to scan checkpoint report info")
return TSDB_CODE_SUCCESS;
}
@ -836,7 +902,8 @@ void mndAddConsensusTasks(SCheckpointConsensusInfo *pInfo, const SRestoreCheckpo
SCheckpointConsensusEntry info = {.ts = taosGetTimestampMs()};
memcpy(&info.req, pRestoreInfo, sizeof(info.req));
int32_t num = (int32_t) taosArrayGetSize(pInfo->pTaskList);
for (int32_t i = 0; i < num; ++i) {
SCheckpointConsensusEntry *p = taosArrayGet(pInfo->pTaskList, i);
if (p == NULL) {
continue;
@ -844,10 +911,12 @@ void mndAddConsensusTasks(SCheckpointConsensusInfo *pInfo, const SRestoreCheckpo
if (p->req.taskId == info.req.taskId) { if (p->req.taskId == info.req.taskId) {
mDebug("s-task:0x%x already in consensus-checkpointId list for stream:0x%" PRIx64 ", update ts %" PRId64 mDebug("s-task:0x%x already in consensus-checkpointId list for stream:0x%" PRIx64 ", update ts %" PRId64
"->%" PRId64 " total existed:%d", "->%" PRId64 " checkpointId:%" PRId64 " -> %" PRId64 " total existed:%d",
pRestoreInfo->taskId, pRestoreInfo->streamId, p->req.startTs, info.req.startTs, pRestoreInfo->taskId, pRestoreInfo->streamId, p->req.startTs, info.req.startTs, p->req.checkpointId,
(int32_t)taosArrayGetSize(pInfo->pTaskList)); info.req.checkpointId, num);
p->req.startTs = info.req.startTs; p->req.startTs = info.req.startTs;
p->req.checkpointId = info.req.checkpointId;
p->req.transId = info.req.transId;
return; return;
} }
} }
@@ -856,7 +925,7 @@ void mndAddConsensusTasks(SCheckpointConsensusInfo *pInfo, const SRestoreCheckpo
   if (p == NULL) {
     mError("s-task:0x%x failed to put task into consensus-checkpointId list, code: out of memory", info.req.taskId);
   } else {
-    int32_t num = taosArrayGetSize(pInfo->pTaskList);
+    num = taosArrayGetSize(pInfo->pTaskList);
     mDebug("s-task:0x%x checkpointId:%" PRId64 " added into consensus-checkpointId list, stream:0x%" PRIx64
            " waiting tasks:%d",
            pRestoreInfo->taskId, pRestoreInfo->checkpointId, pRestoreInfo->streamId, num);
@@ -868,7 +937,7 @@ void mndClearConsensusRspEntry(SCheckpointConsensusInfo *pInfo) {
   pInfo->pTaskList = NULL;
 }
 
-int64_t mndClearConsensusCheckpointId(SHashObj *pHash, int64_t streamId) {
+int32_t mndClearConsensusCheckpointId(SHashObj *pHash, int64_t streamId) {
   int32_t code = 0;
   int32_t numOfStreams = taosHashGetSize(pHash);
   if (numOfStreams == 0) {
@@ -885,7 +954,7 @@ int64_t mndClearConsensusCheckpointId(SHashObj *pHash, int64_t streamId) {
   return code;
 }
 
-int64_t mndClearChkptReportInfo(SHashObj *pHash, int64_t streamId) {
+int32_t mndClearChkptReportInfo(SHashObj *pHash, int64_t streamId) {
   int32_t code = 0;
   int32_t numOfStreams = taosHashGetSize(pHash);
   if (numOfStreams == 0) {
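Note: the scan loop above now delegates the completeness check to `allTasksSendChkptReport()`, whose body is not part of this hunk. A minimal sketch of what it plausibly does, reconstructed from the deleted inline branch (the signature and the struct type name are assumptions):

```c
/* Hypothetical reconstruction, NOT the actual implementation. */
static int32_t allTasksSendChkptReport(SChkptReportInfo *px, int32_t total, const char *pName) {
  int32_t existed = (int32_t)taosArrayGetSize(px->pTaskList);
  if (existed == total) {
    mDebug("stream:%s all %d tasks send checkpoint-report, start to update checkpoint-info", pName, total);
    return 0;  // ready to launch the checkpoint-info update transaction
  }

  mDebug("stream:%s %d/%d tasks send checkpoint-report, %d not send", pName, existed, total, total - existed);
  return -1;  // keep waiting for the remaining tasks
}
```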


@@ -130,7 +130,7 @@ int32_t metaGetTableNameByUid(void *pVnode, uint64_t uid, char *tbName);
 int      metaGetTableSzNameByUid(void *meta, uint64_t uid, char *tbName);
 int      metaGetTableUidByName(void *pVnode, char *tbName, uint64_t *uid);
-int      metaGetTableTypeByName(void *meta, char *tbName, ETableType *tbType);
+int      metaGetTableTypeSuidByName(void *meta, char *tbName, ETableType *tbType, uint64_t* suid);
 int      metaGetTableTtlByUid(void *meta, uint64_t uid, int64_t *ttlDays);
 bool     metaIsTableExist(void *pVnode, tb_uid_t uid);
 int32_t  metaGetCachedTableUidList(void *pVnode, tb_uid_t suid, const uint8_t *key, int32_t keyLen, SArray *pList,


@@ -190,13 +190,20 @@ int metaGetTableUidByName(void *pVnode, char *tbName, uint64_t *uid) {
   return 0;
 }
 
-int metaGetTableTypeByName(void *pVnode, char *tbName, ETableType *tbType) {
+int metaGetTableTypeSuidByName(void *pVnode, char *tbName, ETableType *tbType, uint64_t* suid) {
   int         code = 0;
   SMetaReader mr = {0};
   metaReaderDoInit(&mr, ((SVnode *)pVnode)->pMeta, META_READER_LOCK);
 
   code = metaGetTableEntryByName(&mr, tbName);
   if (code == 0) *tbType = mr.me.type;
+  if (TSDB_CHILD_TABLE == mr.me.type) {
+    *suid = mr.me.ctbEntry.suid;
+  } else if (TSDB_SUPER_TABLE == mr.me.type) {
+    *suid = mr.me.uid;
+  } else {
+    *suid = 0;
+  }
 
   metaReaderClear(&mr);
   return code;
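With the extended lookup, a caller can fetch a table's type and its super-table uid in a single meta read. A minimal caller-side sketch (the vnode handle, table name, and `expectedSuid` are assumptions for illustration):

```c
ETableType tbType = TSDB_TABLE_MAX;
uint64_t   suid = 0;

// pVnode is assumed to be a valid vnode handle owned by the caller.
if (metaGetTableTypeSuidByName(pVnode, "d1001", &tbType, &suid) == 0) {
  // For a child table, suid is the parent super table's uid; for a super
  // table, it is the table's own uid; for anything else it is set to 0.
  if (tbType == TSDB_CHILD_TABLE && suid != expectedSuid /* hypothetical */) {
    // The table exists but belongs to a different super table -- skip it.
  }
}
```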


@@ -559,6 +559,7 @@ struct STsdbSnapWriter {
     SIterMerger* tombIterMerger;
 
     // writer
+    bool         toSttOnly;
     SFSetWriter* fsetWriter;
   } ctx[1];
 };
@@ -622,6 +623,7 @@ static int32_t tsdbSnapWriteFileSetOpenReader(STsdbSnapWriter* writer) {
   int32_t code = 0;
   int32_t lino = 0;
 
+  writer->ctx->toSttOnly = false;
   if (writer->ctx->fset) {
 #if 0
     // open data reader
@@ -656,6 +658,14 @@ static int32_t tsdbSnapWriteFileSetOpenReader(STsdbSnapWriter* writer) {
     // open stt reader array
     SSttLvl* lvl;
     TARRAY2_FOREACH(writer->ctx->fset->lvlArr, lvl) {
+      if (lvl->level != 0) {
+        if (TARRAY2_SIZE(lvl->fobjArr) > 0) {
+          writer->ctx->toSttOnly = true;
+        }
+
+        continue;  // Only merge level 0
+      }
+
       STFileObj* fobj;
       TARRAY2_FOREACH(lvl->fobjArr, fobj) {
         SSttFileReader* reader;
@@ -782,7 +792,7 @@ static int32_t tsdbSnapWriteFileSetOpenWriter(STsdbSnapWriter* writer) {
   SFSetWriterConfig config = {
       .tsdb = writer->tsdb,
-      .toSttOnly = false,
+      .toSttOnly = writer->ctx->toSttOnly,
       .compactVersion = writer->compactVersion,
       .minRow = writer->minRow,
       .maxRow = writer->maxRow,
@@ -791,7 +801,7 @@ static int32_t tsdbSnapWriteFileSetOpenWriter(STsdbSnapWriter* writer) {
       .fid = writer->ctx->fid,
      .cid = writer->commitID,
      .did = writer->ctx->did,
-      .level = 0,
+      .level = writer->ctx->toSttOnly ? 1 : 0,
  };
 
  // merge stt files to either data or a new stt file
  if (writer->ctx->fset) {


@@ -94,7 +94,7 @@ void initMetadataAPI(SStoreMeta* pMeta) {
   pMeta->getTableTagsByUid = metaGetTableTagsByUids;
   pMeta->getTableUidByName = metaGetTableUidByName;
-  pMeta->getTableTypeByName = metaGetTableTypeByName;
+  pMeta->getTableTypeSuidByName = metaGetTableTypeSuidByName;
   pMeta->getTableNameByUid = metaGetTableNameByUid;
   pMeta->getTableSchema = vnodeGetTableSchema;


@@ -202,13 +202,13 @@ do { \
 #define EXPLAIN_SUM_ROW_END() do { varDataSetLen(tbuf, tlen); tlen += VARSTR_HEADER_SIZE; } while (0)
 
 #define EXPLAIN_ROW_APPEND_LIMIT_IMPL(_pLimit, sl) do { \
-  if (_pLimit) { \
+  if (_pLimit && ((SLimitNode*)_pLimit)->limit) { \
     EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT); \
     SLimitNode* pLimit = (SLimitNode*)_pLimit; \
-    EXPLAIN_ROW_APPEND(((sl) ? EXPLAIN_SLIMIT_FORMAT : EXPLAIN_LIMIT_FORMAT), pLimit->limit); \
+    EXPLAIN_ROW_APPEND(((sl) ? EXPLAIN_SLIMIT_FORMAT : EXPLAIN_LIMIT_FORMAT), pLimit->limit->datum.i); \
     if (pLimit->offset) { \
       EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT); \
-      EXPLAIN_ROW_APPEND(((sl) ? EXPLAIN_SOFFSET_FORMAT : EXPLAIN_OFFSET_FORMAT), pLimit->offset);\
+      EXPLAIN_ROW_APPEND(((sl) ? EXPLAIN_SOFFSET_FORMAT : EXPLAIN_OFFSET_FORMAT), pLimit->offset->datum.i);\
     } \
   } \
 } while (0)
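After this change an SLimitNode carries SValueNode children instead of raw integers, so every consumer must null-check before dereferencing `datum.i`. A small illustrative helper capturing the access pattern applied throughout this diff (the helper itself is not part of the patch):

```c
// Read a LIMIT value with the null-safe pattern used above; both the node
// and its limit child may be NULL when the clause was not specified.
static int64_t limitValueOrDefault(const SLimitNode* pNode, int64_t def) {
  if (pNode == NULL || pNode->limit == NULL) {
    return def;  // clause absent
  }
  return pNode->limit->datum.i;  // SValueNode keeps the integer in datum.i
}
```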


@@ -676,9 +676,9 @@ static int32_t qExplainResNodeToRowsImpl(SExplainResNode *pResNode, SExplainCtx
       EXPLAIN_ROW_APPEND(EXPLAIN_WIN_OFFSET_FORMAT, pStart->literal, pEnd->literal);
       EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
     }
-    if (NULL != pJoinNode->pJLimit) {
+    if (NULL != pJoinNode->pJLimit && NULL != ((SLimitNode*)pJoinNode->pJLimit)->limit) {
       SLimitNode* pJLimit = (SLimitNode*)pJoinNode->pJLimit;
-      EXPLAIN_ROW_APPEND(EXPLAIN_JLIMIT_FORMAT, pJLimit->limit);
+      EXPLAIN_ROW_APPEND(EXPLAIN_JLIMIT_FORMAT, pJLimit->limit->datum.i);
       EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
     }
     if (IS_WINDOW_JOIN(pJoinNode->subType)) {


@@ -49,13 +49,13 @@ typedef enum {
 static FilterCondType checkTagCond(SNode* cond);
 static int32_t optimizeTbnameInCond(void* metaHandle, int64_t suid, SArray* list, SNode* pTagCond, SStorageAPI* pAPI);
-static int32_t optimizeTbnameInCondImpl(void* metaHandle, SArray* list, SNode* pTagCond, SStorageAPI* pStoreAPI);
+static int32_t optimizeTbnameInCondImpl(void* metaHandle, SArray* list, SNode* pTagCond, SStorageAPI* pStoreAPI, uint64_t suid);
 
 static int32_t getTableList(void* pVnode, SScanPhysiNode* pScanNode, SNode* pTagCond, SNode* pTagIndexCond,
                             STableListInfo* pListInfo, uint8_t* digest, const char* idstr, SStorageAPI* pStorageAPI);
 
-static int64_t getLimit(const SNode* pLimit) { return NULL == pLimit ? -1 : ((SLimitNode*)pLimit)->limit; }
-static int64_t getOffset(const SNode* pLimit) { return NULL == pLimit ? -1 : ((SLimitNode*)pLimit)->offset; }
+static int64_t getLimit(const SNode* pLimit) { return (NULL == pLimit || NULL == ((SLimitNode*)pLimit)->limit) ? -1 : ((SLimitNode*)pLimit)->limit->datum.i; }
+static int64_t getOffset(const SNode* pLimit) { return (NULL == pLimit || NULL == ((SLimitNode*)pLimit)->offset) ? -1 : ((SLimitNode*)pLimit)->offset->datum.i; }
 static void    releaseColInfoData(void* pCol);
 
 void initResultRowInfo(SResultRowInfo* pResultRowInfo) {
@@ -1061,7 +1061,7 @@ static int32_t optimizeTbnameInCond(void* pVnode, int64_t suid, SArray* list, SN
   int32_t ntype = nodeType(cond);
 
   if (ntype == QUERY_NODE_OPERATOR) {
-    ret = optimizeTbnameInCondImpl(pVnode, list, cond, pAPI);
+    ret = optimizeTbnameInCondImpl(pVnode, list, cond, pAPI, suid);
   }
 
   if (ntype != QUERY_NODE_LOGIC_CONDITION || ((SLogicConditionNode*)cond)->condType != LOGIC_COND_TYPE_AND) {
@@ -1080,7 +1080,7 @@ static int32_t optimizeTbnameInCond(void* pVnode, int64_t suid, SArray* list, SN
   SListCell* cell = pList->pHead;
   for (int i = 0; i < len; i++) {
     if (cell == NULL) break;
-    if (optimizeTbnameInCondImpl(pVnode, list, cell->pNode, pAPI) == 0) {
+    if (optimizeTbnameInCondImpl(pVnode, list, cell->pNode, pAPI, suid) == 0) {
       hasTbnameCond = true;
       break;
     }
@@ -1099,7 +1099,7 @@ static int32_t optimizeTbnameInCond(void* pVnode, int64_t suid, SArray* list, SN
 // only return uid that does not contained in pExistedUidList
 static int32_t optimizeTbnameInCondImpl(void* pVnode, SArray* pExistedUidList, SNode* pTagCond,
-                                        SStorageAPI* pStoreAPI) {
+                                        SStorageAPI* pStoreAPI, uint64_t suid) {
   if (nodeType(pTagCond) != QUERY_NODE_OPERATOR) {
     return -1;
   }
@@ -1148,10 +1148,13 @@ static int32_t optimizeTbnameInCondImpl(void* pVnode, SArray* pExistedUidList, S
     for (int i = 0; i < numOfTables; i++) {
       char* name = taosArrayGetP(pTbList, i);
 
-      uint64_t uid = 0;
+      uint64_t uid = 0, csuid = 0;
       if (pStoreAPI->metaFn.getTableUidByName(pVnode, name, &uid) == 0) {
         ETableType tbType = TSDB_TABLE_MAX;
-        if (pStoreAPI->metaFn.getTableTypeByName(pVnode, name, &tbType) == 0 && tbType == TSDB_CHILD_TABLE) {
+        if (pStoreAPI->metaFn.getTableTypeSuidByName(pVnode, name, &tbType, &csuid) == 0 && tbType == TSDB_CHILD_TABLE) {
+          if (suid != csuid) {
+            continue;
+          }
           if (NULL == uHash || taosHashGet(uHash, &uid, sizeof(uid)) == NULL) {
             STUidTagInfo s = {.uid = uid, .name = name, .pTagVal = NULL};
             void* tmp = taosArrayPush(pExistedUidList, &s);


@@ -1185,7 +1185,7 @@ int32_t createHashJoinOperatorInfo(SOperatorInfo** pDownstream, int32_t numOfDow
   pInfo->tblTimeRange.skey = pJoinNode->timeRange.skey;
   pInfo->tblTimeRange.ekey = pJoinNode->timeRange.ekey;
 
-  pInfo->ctx.limit = pJoinNode->node.pLimit ? ((SLimitNode*)pJoinNode->node.pLimit)->limit : INT64_MAX;
+  pInfo->ctx.limit = (pJoinNode->node.pLimit && ((SLimitNode*)pJoinNode->node.pLimit)->limit) ? ((SLimitNode*)pJoinNode->node.pLimit)->limit->datum.i : INT64_MAX;
 
   setOperatorInfo(pOperator, "HashJoinOperator", QUERY_NODE_PHYSICAL_PLAN_HASH_JOIN, false, OP_NOT_OPENED, pInfo, pTaskInfo);


@@ -3592,7 +3592,7 @@ int32_t mJoinInitWindowCtx(SMJoinOperatorInfo* pJoin, SSortMergeJoinPhysiNode* p
   switch (pJoinNode->subType) {
     case JOIN_STYPE_ASOF:
       pCtx->asofOpType = pJoinNode->asofOpType;
-      pCtx->jLimit = pJoinNode->pJLimit ? ((SLimitNode*)pJoinNode->pJLimit)->limit : 1;
+      pCtx->jLimit = (pJoinNode->pJLimit && ((SLimitNode*)pJoinNode->pJLimit)->limit) ? ((SLimitNode*)pJoinNode->pJLimit)->limit->datum.i : 1;
       pCtx->eqRowsAcq = ASOF_EQ_ROW_INCLUDED(pCtx->asofOpType);
       pCtx->lowerRowsAcq = (JOIN_TYPE_RIGHT != pJoin->joinType) ? ASOF_LOWER_ROW_INCLUDED(pCtx->asofOpType) : ASOF_GREATER_ROW_INCLUDED(pCtx->asofOpType);
       pCtx->greaterRowsAcq = (JOIN_TYPE_RIGHT != pJoin->joinType) ? ASOF_GREATER_ROW_INCLUDED(pCtx->asofOpType) : ASOF_LOWER_ROW_INCLUDED(pCtx->asofOpType);
@@ -3609,7 +3609,7 @@ int32_t mJoinInitWindowCtx(SMJoinOperatorInfo* pJoin, SSortMergeJoinPhysiNode* p
       SWindowOffsetNode* pOffsetNode = (SWindowOffsetNode*)pJoinNode->pWindowOffset;
       SValueNode* pWinBegin = (SValueNode*)pOffsetNode->pStartOffset;
       SValueNode* pWinEnd = (SValueNode*)pOffsetNode->pEndOffset;
-      pCtx->jLimit = pJoinNode->pJLimit ? ((SLimitNode*)pJoinNode->pJLimit)->limit : INT64_MAX;
+      pCtx->jLimit = (pJoinNode->pJLimit && ((SLimitNode*)pJoinNode->pJLimit)->limit) ? ((SLimitNode*)pJoinNode->pJLimit)->limit->datum.i : INT64_MAX;
       pCtx->winBeginOffset = pWinBegin->datum.i;
       pCtx->winEndOffset = pWinEnd->datum.i;
       pCtx->eqRowsAcq = (pCtx->winBeginOffset <= 0 && pCtx->winEndOffset >= 0);
@@ -3662,7 +3662,7 @@ int32_t mJoinInitMergeCtx(SMJoinOperatorInfo* pJoin, SSortMergeJoinPhysiNode* pJ
   pCtx->hashCan = pJoin->probe->keyNum > 0;
 
   if (JOIN_STYPE_ASOF == pJoinNode->subType || JOIN_STYPE_WIN == pJoinNode->subType) {
-    pCtx->jLimit = pJoinNode->pJLimit ? ((SLimitNode*)pJoinNode->pJLimit)->limit : 1;
+    pCtx->jLimit = (pJoinNode->pJLimit && ((SLimitNode*)pJoinNode->pJLimit)->limit) ? ((SLimitNode*)pJoinNode->pJLimit)->limit->datum.i : 1;
     pJoin->subType = JOIN_STYPE_OUTER;
     pJoin->build->eqRowLimit = pCtx->jLimit;
     pJoin->grpResetFp = mLeftJoinGroupReset;


@@ -986,7 +986,7 @@ static int32_t mJoinInitTableInfo(SMJoinOperatorInfo* pJoin, SSortMergeJoinPhysi
     pTable->multiEqGrpRows = !((JOIN_STYPE_SEMI == pJoin->subType || JOIN_STYPE_ANTI == pJoin->subType) && NULL == pJoin->pFPreFilter);
     pTable->multiRowsGrp = !((JOIN_STYPE_SEMI == pJoin->subType || JOIN_STYPE_ANTI == pJoin->subType) && NULL == pJoin->pPreFilter);
     if (JOIN_STYPE_ASOF == pJoinNode->subType) {
-      pTable->eqRowLimit = pJoinNode->pJLimit ? ((SLimitNode*)pJoinNode->pJLimit)->limit : 1;
+      pTable->eqRowLimit = (pJoinNode->pJLimit && ((SLimitNode*)pJoinNode->pJLimit)->limit) ? ((SLimitNode*)pJoinNode->pJLimit)->limit->datum.i : 1;
     }
   } else {
     pTable->multiEqGrpRows = true;
@@ -1169,7 +1169,7 @@ static FORCE_INLINE SSDataBlock* mJoinRetrieveImpl(SMJoinOperatorInfo* pJoin, SM
 static int32_t mJoinInitCtx(SMJoinOperatorInfo* pJoin, SSortMergeJoinPhysiNode* pJoinNode) {
   pJoin->ctx.mergeCtx.groupJoin = pJoinNode->grpJoin;
-  pJoin->ctx.mergeCtx.limit = pJoinNode->node.pLimit ? ((SLimitNode*)pJoinNode->node.pLimit)->limit : INT64_MAX;
+  pJoin->ctx.mergeCtx.limit = (pJoinNode->node.pLimit && ((SLimitNode*)pJoinNode->node.pLimit)->limit) ? ((SLimitNode*)pJoinNode->node.pLimit)->limit->datum.i : INT64_MAX;
   pJoin->retrieveFp = pJoinNode->grpJoin ? mJoinGrpRetrieveImpl : mJoinRetrieveImpl;
   pJoin->outBlkId = pJoinNode->node.pOutputDataBlockDesc->dataBlockId;


@@ -84,9 +84,11 @@ int32_t createSortOperatorInfo(SOperatorInfo* downstream, SSortPhysiNode* pSortN
   calcSortOperMaxTupleLength(pInfo, pSortNode->pSortKeys);
   pInfo->maxRows = -1;
-  if (pSortNode->node.pLimit) {
+  if (pSortNode->node.pLimit && ((SLimitNode*)pSortNode->node.pLimit)->limit) {
     SLimitNode* pLimit = (SLimitNode*)pSortNode->node.pLimit;
-    if (pLimit->limit > 0) pInfo->maxRows = pLimit->limit + pLimit->offset;
+    if (pLimit->limit->datum.i > 0) {
+      pInfo->maxRows = pLimit->limit->datum.i + (pLimit->offset ? pLimit->offset->datum.i : 0);
+    }
   }
 
   pOperator->exprSupp.pCtx =
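Worked example of the bound computed above (values assumed): for `ORDER BY ts LIMIT 10 OFFSET 5` the sorter must retain limit plus offset rows, while a missing OFFSET clause now shows up as a NULL offset node:

```c
// ORDER BY ts LIMIT 10 OFFSET 5 : limit->datum.i == 10, offset->datum.i == 5
//                                 => maxRows = 15
// ORDER BY ts LIMIT 10          : offset == NULL => maxRows = 10
pInfo->maxRows = pLimit->limit->datum.i + (pLimit->offset ? pLimit->offset->datum.i : 0);
```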


@@ -63,7 +63,7 @@ static void doKeepPrevRows(STimeSliceOperatorInfo* pSliceInfo, const SSDataBlock
       pkey->isNull = false;
       char* val = colDataGetData(pColInfoData, rowIndex);
       if (IS_VAR_DATA_TYPE(pkey->type)) {
-        memcpy(pkey->pData, val, varDataLen(val));
+        memcpy(pkey->pData, val, varDataTLen(val));
       } else {
         memcpy(pkey->pData, val, pkey->bytes);
       }
@@ -87,7 +87,7 @@ static void doKeepNextRows(STimeSliceOperatorInfo* pSliceInfo, const SSDataBlock
       if (!IS_VAR_DATA_TYPE(pkey->type)) {
         memcpy(pkey->pData, val, pkey->bytes);
       } else {
-        memcpy(pkey->pData, val, varDataLen(val));
+        memcpy(pkey->pData, val, varDataTLen(val));
       }
     } else {
       pkey->isNull = true;
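The varDataTLen() fix matters because a var-type datum is stored with a length header in front of the payload: varDataLen() counts the payload only, while varDataTLen() counts header plus payload, which is the full buffer a copy of the raw datum must preserve. Sketch of the layout:

```c
// Conceptual layout of a TDengine var-type datum:
//   [ length header (VARSTR_HEADER_SIZE) | payload bytes ... ]
// varDataLen(v)  -> payload length only
// varDataTLen(v) -> VARSTR_HEADER_SIZE + payload length (the whole buffer)
// Copying only varDataLen(val) bytes starting at val keeps the header but
// drops the tail of the payload; the total length is the correct amount.
memcpy(pkey->pData, val, varDataTLen(val));
```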


@@ -1417,15 +1417,15 @@ int32_t createIntervalOperatorInfo(SOperatorInfo* downstream, SIntervalPhysiNode
   pInfo->interval = interval;
   pInfo->twAggSup = as;
   pInfo->binfo.mergeResultBlock = pPhyNode->window.mergeDataBlock;
-  if (pPhyNode->window.node.pLimit) {
+  if (pPhyNode->window.node.pLimit && ((SLimitNode*)pPhyNode->window.node.pLimit)->limit) {
     SLimitNode* pLimit = (SLimitNode*)pPhyNode->window.node.pLimit;
     pInfo->limited = true;
-    pInfo->limit = pLimit->limit + pLimit->offset;
+    pInfo->limit = pLimit->limit->datum.i + (pLimit->offset ? pLimit->offset->datum.i : 0);
   }
-  if (pPhyNode->window.node.pSlimit) {
+  if (pPhyNode->window.node.pSlimit && ((SLimitNode*)pPhyNode->window.node.pSlimit)->limit) {
     SLimitNode* pLimit = (SLimitNode*)pPhyNode->window.node.pSlimit;
     pInfo->slimited = true;
-    pInfo->slimit = pLimit->limit + pLimit->offset;
+    pInfo->slimit = pLimit->limit->datum.i + (pLimit->offset ? pLimit->offset->datum.i : 0);
     pInfo->curGroupId = UINT64_MAX;
   }


@@ -864,7 +864,11 @@ SSortMergeJoinPhysiNode* createDummySortMergeJoinPhysiNode(SJoinTestParam* param
     SLimitNode* limitNode = NULL;
     code = nodesMakeNode(QUERY_NODE_LIMIT, (SNode**)&limitNode);
     assert(limitNode);
-    limitNode->limit = param->jLimit;
+    code = nodesMakeNode(QUERY_NODE_VALUE, (SNode**)&limitNode->limit);
+    assert(limitNode->limit);
+    limitNode->limit->node.resType.type = TSDB_DATA_TYPE_BIGINT;
+    limitNode->limit->node.resType.bytes = tDataTypes[TSDB_DATA_TYPE_BIGINT].bytes;
+    limitNode->limit->datum.i = param->jLimit;
     p->pJLimit = (SNode*)limitNode;
   }


@@ -1418,6 +1418,7 @@ SNode* qptMakeExprNode(SNode** ppNode) {
 SNode* qptMakeLimitNode(SNode** ppNode) {
   SNode* pNode = NULL;
+  int32_t code = 0;
   if (QPT_NCORRECT_LOW_PROB()) {
     return qptMakeRandNode(&pNode);
   }
@@ -1429,15 +1430,27 @@ SNode* qptMakeLimitNode(SNode** ppNode) {
   if (!qptCtx.param.correctExpected) {
     if (taosRand() % 2) {
-      pLimit->limit = taosRand() * ((taosRand() % 2) ? 1 : -1);
+      code = nodesMakeNode(QUERY_NODE_VALUE, (SNode**)&pLimit->limit);
+      assert(pLimit->limit);
+      pLimit->limit->node.resType.type = TSDB_DATA_TYPE_BIGINT;
+      pLimit->limit->node.resType.bytes = tDataTypes[TSDB_DATA_TYPE_BIGINT].bytes;
+      pLimit->limit->datum.i = taosRand() * ((taosRand() % 2) ? 1 : -1);
     }
     if (taosRand() % 2) {
-      pLimit->offset = taosRand() * ((taosRand() % 2) ? 1 : -1);
+      code = nodesMakeNode(QUERY_NODE_VALUE, (SNode**)&pLimit->offset);
+      assert(pLimit->offset);
+      pLimit->offset->node.resType.type = TSDB_DATA_TYPE_BIGINT;
+      pLimit->offset->node.resType.bytes = tDataTypes[TSDB_DATA_TYPE_BIGINT].bytes;
+      pLimit->offset->datum.i = taosRand() * ((taosRand() % 2) ? 1 : -1);
     }
   } else {
-    pLimit->limit = taosRand();
+    pLimit->limit->datum.i = taosRand();
     if (taosRand() % 2) {
-      pLimit->offset = taosRand();
+      code = nodesMakeNode(QUERY_NODE_VALUE, (SNode**)&pLimit->offset);
+      assert(pLimit->offset);
+      pLimit->offset->node.resType.type = TSDB_DATA_TYPE_BIGINT;
+      pLimit->offset->node.resType.bytes = tDataTypes[TSDB_DATA_TYPE_BIGINT].bytes;
+      pLimit->offset->datum.i = taosRand();
     }
   }


@@ -52,7 +52,7 @@
     if (NULL == (pSrc)->fldname) { \
       break; \
     } \
-    int32_t code = nodesCloneNode((pSrc)->fldname, &((pDst)->fldname)); \
+    int32_t code = nodesCloneNode((SNode*)(pSrc)->fldname, (SNode**)&((pDst)->fldname)); \
     if (NULL == (pDst)->fldname) { \
       return code; \
     } \
@@ -346,8 +346,8 @@ static int32_t orderByExprNodeCopy(const SOrderByExprNode* pSrc, SOrderByExprNod
 }
 
 static int32_t limitNodeCopy(const SLimitNode* pSrc, SLimitNode* pDst) {
-  COPY_SCALAR_FIELD(limit);
-  COPY_SCALAR_FIELD(offset);
+  CLONE_NODE_FIELD(limit);
+  CLONE_NODE_FIELD(offset);
   return TSDB_CODE_SUCCESS;
 }


@@ -4933,9 +4933,9 @@ static const char* jkLimitOffset = "Offset";
 static int32_t limitNodeToJson(const void* pObj, SJson* pJson) {
   const SLimitNode* pNode = (const SLimitNode*)pObj;
 
-  int32_t code = tjsonAddIntegerToObject(pJson, jkLimitLimit, pNode->limit);
-  if (TSDB_CODE_SUCCESS == code) {
-    code = tjsonAddIntegerToObject(pJson, jkLimitOffset, pNode->offset);
+  int32_t code = tjsonAddObject(pJson, jkLimitLimit, nodeToJson, pNode->limit);
+  if (TSDB_CODE_SUCCESS == code && pNode->offset) {
+    code = tjsonAddObject(pJson, jkLimitOffset, nodeToJson, pNode->offset);
   }
 
   return code;
@@ -4944,9 +4944,9 @@ static int32_t limitNodeToJson(const void* pObj, SJson* pJson) {
 static int32_t jsonToLimitNode(const SJson* pJson, void* pObj) {
   SLimitNode* pNode = (SLimitNode*)pObj;
 
-  int32_t code = tjsonGetBigIntValue(pJson, jkLimitLimit, &pNode->limit);
+  int32_t code = jsonToNodeObject(pJson, jkLimitLimit, (SNode**)&pNode->limit);
   if (TSDB_CODE_SUCCESS == code) {
-    code = tjsonGetBigIntValue(pJson, jkLimitOffset, &pNode->offset);
+    code = jsonToNodeObject(pJson, jkLimitOffset, (SNode**)&pNode->offset);
   }
 
   return code;
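Since limit and offset are now serialized as nested value nodes rather than bare integers, JSON produced by older versions with plain numbers under these keys will no longer decode through this path. A hedged round-trip sketch using the generic node serializers (error handling shortened; assumes `pLimit` was built as in the test helpers above):

```c
// Assumes nodesNodeToString()/nodesStringToNode() from nodes.h.
char*   pStr = NULL;
int32_t len = 0;
if (nodesNodeToString((SNode*)pLimit, false, &pStr, &len) == TSDB_CODE_SUCCESS) {
  SNode* pBack = NULL;
  // "Limit"/"Offset" now hold full value-node objects, not bare integers.
  if (nodesStringToNode(pStr, &pBack) == TSDB_CODE_SUCCESS) {
    nodesDestroyNode(pBack);
  }
  taosMemoryFree(pStr);
}
```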


@@ -1246,9 +1246,9 @@ enum { LIMIT_CODE_LIMIT = 1, LIMIT_CODE_OFFSET };
 static int32_t limitNodeToMsg(const void* pObj, STlvEncoder* pEncoder) {
   const SLimitNode* pNode = (const SLimitNode*)pObj;
 
-  int32_t code = tlvEncodeI64(pEncoder, LIMIT_CODE_LIMIT, pNode->limit);
-  if (TSDB_CODE_SUCCESS == code) {
-    code = tlvEncodeI64(pEncoder, LIMIT_CODE_OFFSET, pNode->offset);
+  int32_t code = tlvEncodeObj(pEncoder, LIMIT_CODE_LIMIT, nodeToMsg, pNode->limit);
+  if (TSDB_CODE_SUCCESS == code && pNode->offset) {
+    code = tlvEncodeObj(pEncoder, LIMIT_CODE_OFFSET, nodeToMsg, pNode->offset);
   }
 
   return code;
@@ -1262,10 +1262,10 @@ static int32_t msgToLimitNode(STlvDecoder* pDecoder, void* pObj) {
   tlvForEach(pDecoder, pTlv, code) {
     switch (pTlv->type) {
       case LIMIT_CODE_LIMIT:
-        code = tlvDecodeI64(pTlv, &pNode->limit);
+        code = msgToNodeFromTlv(pTlv, (void**)&pNode->limit);
         break;
       case LIMIT_CODE_OFFSET:
-        code = tlvDecodeI64(pTlv, &pNode->offset);
+        code = msgToNodeFromTlv(pTlv, (void**)&pNode->offset);
         break;
       default:
         break;


@@ -1106,8 +1106,12 @@ void nodesDestroyNode(SNode* pNode) {
     case QUERY_NODE_ORDER_BY_EXPR:
       nodesDestroyNode(((SOrderByExprNode*)pNode)->pExpr);
       break;
-    case QUERY_NODE_LIMIT:  // no pointer field
+    case QUERY_NODE_LIMIT: {
+      SLimitNode* pLimit = (SLimitNode*)pNode;
+      nodesDestroyNode((SNode*)pLimit->limit);
+      nodesDestroyNode((SNode*)pLimit->offset);
       break;
+    }
     case QUERY_NODE_STATE_WINDOW: {
       SStateWindowNode* pState = (SStateWindowNode*)pNode;
       nodesDestroyNode(pState->pCol);
@@ -3097,6 +3101,25 @@ int32_t nodesMakeValueNodeFromInt32(int32_t value, SNode** ppNode) {
   return code;
 }
 
+int32_t nodesMakeValueNodeFromInt64(int64_t value, SNode** ppNode) {
+  SValueNode* pValNode = NULL;
+  int32_t     code = nodesMakeNode(QUERY_NODE_VALUE, (SNode**)&pValNode);
+  if (TSDB_CODE_SUCCESS == code) {
+    pValNode->node.resType.type = TSDB_DATA_TYPE_BIGINT;
+    pValNode->node.resType.bytes = tDataTypes[TSDB_DATA_TYPE_BIGINT].bytes;
+    code = nodesSetValueNodeValue(pValNode, &value);
+    if (TSDB_CODE_SUCCESS == code) {
+      pValNode->translate = true;
+      pValNode->isNull = false;
+      *ppNode = (SNode*)pValNode;
+    } else {
+      nodesDestroyNode((SNode*)pValNode);
+    }
+  }
+  return code;
+}
+
 bool nodesIsStar(SNode* pNode) {
   return (QUERY_NODE_COLUMN == nodeType(pNode)) && ('\0' == ((SColumnNode*)pNode)->tableAlias[0]) &&
          (0 == strcmp(((SColumnNode*)pNode)->colName, "*"));
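Usage sketch for the new helper: building the node tree for `LIMIT 20 OFFSET 4` programmatically (error handling abbreviated, values illustrative):

```c
SLimitNode* pLimit = NULL;
int32_t     code = nodesMakeNode(QUERY_NODE_LIMIT, (SNode**)&pLimit);
if (TSDB_CODE_SUCCESS == code) {
  code = nodesMakeValueNodeFromInt64(20, (SNode**)&pLimit->limit);
}
if (TSDB_CODE_SUCCESS == code) {
  code = nodesMakeValueNodeFromInt64(4, (SNode**)&pLimit->offset);
}
if (TSDB_CODE_SUCCESS != code) {
  nodesDestroyNode((SNode*)pLimit);  // also releases any attached value nodes
  pLimit = NULL;
}
```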


@@ -152,7 +152,7 @@ SNode* createTempTableNode(SAstCreateContext* pCxt, SNode* pSubquery, SToken
 SNode* createJoinTableNode(SAstCreateContext* pCxt, EJoinType type, EJoinSubType stype, SNode* pLeft, SNode* pRight,
                            SNode* pJoinCond);
 SNode* createViewNode(SAstCreateContext* pCxt, SToken* pDbName, SToken* pViewName);
-SNode* createLimitNode(SAstCreateContext* pCxt, const SToken* pLimit, const SToken* pOffset);
+SNode* createLimitNode(SAstCreateContext* pCxt, SNode* pLimit, SNode* pOffset);
 SNode* createOrderByExprNode(SAstCreateContext* pCxt, SNode* pExpr, EOrder order, ENullOrder nullOrder);
 SNode* createSessionWindowNode(SAstCreateContext* pCxt, SNode* pCol, SNode* pGap);
 SNode* createStateWindowNode(SAstCreateContext* pCxt, SNode* pExpr);

source/libs/parser/inc/sql.y (Normal file → Executable file)

@@ -1078,6 +1078,10 @@ signed_integer(A) ::= NK_MINUS(B) NK_INTEGER(C).
                                                      A = createValueNode(pCxt, TSDB_DATA_TYPE_BIGINT, &t);
                                                    }
 
+unsigned_integer(A) ::= NK_INTEGER(B).             { A = createValueNode(pCxt, TSDB_DATA_TYPE_BIGINT, &B); }
+unsigned_integer(A) ::= NK_QUESTION(B).            { A = releaseRawExprNode(pCxt, createRawExprNode(pCxt, &B, createPlaceholderValueNode(pCxt, &B))); }
+
 signed_float(A) ::= NK_FLOAT(B).                   { A = createValueNode(pCxt, TSDB_DATA_TYPE_DOUBLE, &B); }
 signed_float(A) ::= NK_PLUS NK_FLOAT(B).           { A = createValueNode(pCxt, TSDB_DATA_TYPE_DOUBLE, &B); }
 signed_float(A) ::= NK_MINUS(B) NK_FLOAT(C).       {
@@ -1098,6 +1102,7 @@ signed_literal(A) ::= NULL(B).
 signed_literal(A) ::= literal_func(B).             { A = releaseRawExprNode(pCxt, B); }
 signed_literal(A) ::= NK_QUESTION(B).              { A = createPlaceholderValueNode(pCxt, &B); }
 
 %type literal_list { SNodeList* }
 %destructor literal_list { nodesDestroyList($$); }
 literal_list(A) ::= signed_literal(B).             { A = createNodeList(pCxt, B); }
@@ -1480,7 +1485,7 @@ window_offset_literal(A) ::= NK_MINUS(B) NK_VARIABLE(C).
                                                    }
 jlimit_clause_opt(A) ::= .                         { A = NULL; }
-jlimit_clause_opt(A) ::= JLIMIT NK_INTEGER(B).     { A = createLimitNode(pCxt, &B, NULL); }
+jlimit_clause_opt(A) ::= JLIMIT unsigned_integer(B). { A = createLimitNode(pCxt, B, NULL); }
 
 /************************************************ query_specification *************************************************/
 query_specification(A) ::=
@@ -1660,14 +1665,14 @@ order_by_clause_opt(A) ::= .
 order_by_clause_opt(A) ::= ORDER BY sort_specification_list(B).          { A = B; }
 
 slimit_clause_opt(A) ::= .                                               { A = NULL; }
-slimit_clause_opt(A) ::= SLIMIT NK_INTEGER(B).                           { A = createLimitNode(pCxt, &B, NULL); }
-slimit_clause_opt(A) ::= SLIMIT NK_INTEGER(B) SOFFSET NK_INTEGER(C).     { A = createLimitNode(pCxt, &B, &C); }
-slimit_clause_opt(A) ::= SLIMIT NK_INTEGER(C) NK_COMMA NK_INTEGER(B).    { A = createLimitNode(pCxt, &B, &C); }
+slimit_clause_opt(A) ::= SLIMIT unsigned_integer(B).                     { A = createLimitNode(pCxt, B, NULL); }
+slimit_clause_opt(A) ::= SLIMIT unsigned_integer(B) SOFFSET unsigned_integer(C). { A = createLimitNode(pCxt, B, C); }
+slimit_clause_opt(A) ::= SLIMIT unsigned_integer(C) NK_COMMA unsigned_integer(B). { A = createLimitNode(pCxt, B, C); }
 
 limit_clause_opt(A) ::= .                                                { A = NULL; }
-limit_clause_opt(A) ::= LIMIT NK_INTEGER(B).                             { A = createLimitNode(pCxt, &B, NULL); }
-limit_clause_opt(A) ::= LIMIT NK_INTEGER(B) OFFSET NK_INTEGER(C).        { A = createLimitNode(pCxt, &B, &C); }
-limit_clause_opt(A) ::= LIMIT NK_INTEGER(C) NK_COMMA NK_INTEGER(B).      { A = createLimitNode(pCxt, &B, &C); }
+limit_clause_opt(A) ::= LIMIT unsigned_integer(B).                       { A = createLimitNode(pCxt, B, NULL); }
+limit_clause_opt(A) ::= LIMIT unsigned_integer(B) OFFSET unsigned_integer(C). { A = createLimitNode(pCxt, B, C); }
+limit_clause_opt(A) ::= LIMIT unsigned_integer(C) NK_COMMA unsigned_integer(B). { A = createLimitNode(pCxt, B, C); }
 
 /************************************************ subquery ************************************************************/
 subquery(A) ::= NK_LP(B) query_expression(C) NK_RP(D).                   { A = createRawExprNodeExt(pCxt, &B, &D, C); }
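Because `unsigned_integer` now also derives from `NK_QUESTION`, LIMIT/SLIMIT/JLIMIT values can be supplied as statement parameters. A hedged client-side sketch using the C `stmt` API (the exact TAOS_MULTI_BIND field usage may vary by client version; treat this as an assumption, not verified against every release):

```c
#include "taos.h"

// Sketch only: bind the LIMIT value as a placeholder; assumes `taos` is a
// connected handle and the 3.x parameter-binding API is available.
void query_with_limit_param(TAOS *taos) {
  TAOS_STMT *stmt = taos_stmt_init(taos);
  if (taos_stmt_prepare(stmt, "select * from test.meters limit ?", 0) == 0) {
    int64_t         limit = 10;
    TAOS_MULTI_BIND bind = {0};
    bind.buffer_type = TSDB_DATA_TYPE_BIGINT;
    bind.buffer = &limit;
    bind.buffer_length = sizeof(limit);
    bind.num = 1;
    if (taos_stmt_bind_param(stmt, &bind) == 0 && taos_stmt_execute(stmt) == 0) {
      // consume the rows via taos_stmt_use_result(stmt) ...
    }
  }
  (void)taos_stmt_close(stmt);
}
```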


@@ -1287,14 +1287,14 @@ _err:
   return NULL;
 }
 
-SNode* createLimitNode(SAstCreateContext* pCxt, const SToken* pLimit, const SToken* pOffset) {
+SNode* createLimitNode(SAstCreateContext* pCxt, SNode* pLimit, SNode* pOffset) {
   CHECK_PARSER_STATUS(pCxt);
   SLimitNode* limitNode = NULL;
   pCxt->errCode = nodesMakeNode(QUERY_NODE_LIMIT, (SNode**)&limitNode);
   CHECK_MAKE_NODE(limitNode);
-  limitNode->limit = taosStr2Int64(pLimit->z, NULL, 10);
+  limitNode->limit = (SValueNode*)pLimit;
   if (NULL != pOffset) {
-    limitNode->offset = taosStr2Int64(pOffset->z, NULL, 10);
+    limitNode->offset = (SValueNode*)pOffset;
   }
   return (SNode*)limitNode;
 _err:


@@ -4729,16 +4729,20 @@ static int32_t translateJoinTable(STranslateContext* pCxt, SJoinTableNode* pJoin
     return buildInvalidOperationMsg(&pCxt->msgBuf, "WINDOW_OFFSET required for WINDOW join");
   }
 
-  if (TSDB_CODE_SUCCESS == code && NULL != pJoinTable->pJLimit) {
+  if (TSDB_CODE_SUCCESS == code && NULL != pJoinTable->pJLimit && NULL != ((SLimitNode*)pJoinTable->pJLimit)->limit) {
     if (*pSType != JOIN_STYPE_ASOF && *pSType != JOIN_STYPE_WIN) {
       return buildInvalidOperationMsgExt(&pCxt->msgBuf, "JLIMIT not supported for %s join",
                                          getFullJoinTypeString(type, *pSType));
     }
     SLimitNode* pJLimit = (SLimitNode*)pJoinTable->pJLimit;
-    if (pJLimit->limit > JOIN_JLIMIT_MAX_VALUE || pJLimit->limit < 0) {
+    code = translateExpr(pCxt, (SNode**)&pJLimit->limit);
+    if (TSDB_CODE_SUCCESS != code) {
+      return code;
+    }
+    if (pJLimit->limit->datum.i > JOIN_JLIMIT_MAX_VALUE || pJLimit->limit->datum.i < 0) {
       return buildInvalidOperationMsg(&pCxt->msgBuf, "JLIMIT value is out of valid range [0, 1024]");
     }
-    if (0 == pJLimit->limit) {
+    if (0 == pJLimit->limit->datum.i) {
       pCurrSmt->isEmptyResult = true;
     }
   }
@@ -6994,16 +6998,32 @@ static int32_t translateFrom(STranslateContext* pCxt, SNode** pTable) {
 }
 
 static int32_t checkLimit(STranslateContext* pCxt, SSelectStmt* pSelect) {
-  if ((NULL != pSelect->pLimit && pSelect->pLimit->offset < 0) ||
-      (NULL != pSelect->pSlimit && pSelect->pSlimit->offset < 0)) {
+  int32_t code = 0;
+
+  if (pSelect->pLimit && pSelect->pLimit->limit) {
+    code = translateExpr(pCxt, (SNode**)&pSelect->pLimit->limit);
+  }
+  if (TSDB_CODE_SUCCESS == code && pSelect->pLimit && pSelect->pLimit->offset) {
+    code = translateExpr(pCxt, (SNode**)&pSelect->pLimit->offset);
+  }
+  if (TSDB_CODE_SUCCESS == code && pSelect->pSlimit && pSelect->pSlimit->limit) {
+    code = translateExpr(pCxt, (SNode**)&pSelect->pSlimit->limit);
+  }
+  if (TSDB_CODE_SUCCESS == code && pSelect->pSlimit && pSelect->pSlimit->offset) {
+    code = translateExpr(pCxt, (SNode**)&pSelect->pSlimit->offset);
+  }
+
+  if ((TSDB_CODE_SUCCESS == code) &&
+      ((NULL != pSelect->pLimit && pSelect->pLimit->offset && pSelect->pLimit->offset->datum.i < 0) ||
+       (NULL != pSelect->pSlimit && pSelect->pSlimit->offset && pSelect->pSlimit->offset->datum.i < 0))) {
     return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_OFFSET_LESS_ZERO);
   }
 
-  if (NULL != pSelect->pSlimit && (NULL == pSelect->pPartitionByList && NULL == pSelect->pGroupByList)) {
+  if ((TSDB_CODE_SUCCESS == code) && NULL != pSelect->pSlimit && (NULL == pSelect->pPartitionByList && NULL == pSelect->pGroupByList)) {
     return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_SLIMIT_LEAK_PARTITION_GROUP_BY);
   }
 
-  return TSDB_CODE_SUCCESS;
+  return code;
 }
 
 static int32_t createPrimaryKeyColByTable(STranslateContext* pCxt, STableNode* pTable, SNode** pPrimaryKey) {
@@ -7482,7 +7502,14 @@ static int32_t translateSetOperOrderBy(STranslateContext* pCxt, SSetOperator* pS
 }
 
 static int32_t checkSetOperLimit(STranslateContext* pCxt, SLimitNode* pLimit) {
-  if ((NULL != pLimit && pLimit->offset < 0)) {
+  int32_t code = 0;
+  if (pLimit && pLimit->limit) {
+    code = translateExpr(pCxt, (SNode**)&pLimit->limit);
+  }
+  if (TSDB_CODE_SUCCESS == code && pLimit && pLimit->offset) {
+    code = translateExpr(pCxt, (SNode**)&pLimit->offset);
+  }
+  if (TSDB_CODE_SUCCESS == code && (NULL != pLimit && NULL != pLimit->offset && pLimit->offset->datum.i < 0)) {
     return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_OFFSET_LESS_ZERO);
   }
   return TSDB_CODE_SUCCESS;


@@ -3705,8 +3705,14 @@ static int32_t rewriteTailOptCreateLimit(SNode* pLimit, SNode* pOffset, SNode**
   if (NULL == pLimitNode) {
     return code;
   }
-  pLimitNode->limit = NULL == pLimit ? -1 : ((SValueNode*)pLimit)->datum.i;
-  pLimitNode->offset = NULL == pOffset ? 0 : ((SValueNode*)pOffset)->datum.i;
+  code = nodesMakeValueNodeFromInt64(NULL == pLimit ? -1 : ((SValueNode*)pLimit)->datum.i, (SNode**)&pLimitNode->limit);
+  if (TSDB_CODE_SUCCESS != code) {
+    return code;
+  }
+  code = nodesMakeValueNodeFromInt64(NULL == pOffset ? 0 : ((SValueNode*)pOffset)->datum.i, (SNode**)&pLimitNode->offset);
+  if (TSDB_CODE_SUCCESS != code) {
+    return code;
+  }
   *pOutput = (SNode*)pLimitNode;
   return TSDB_CODE_SUCCESS;
 }


@@ -1823,9 +1823,9 @@ static int32_t createAggPhysiNode(SPhysiPlanContext* pCxt, SNodeList* pChildren,
   if (NULL == pAgg) {
     return terrno;
   }
-  if (pAgg->node.pSlimit) {
+  if (pAgg->node.pSlimit && ((SLimitNode*)pAgg->node.pSlimit)->limit) {
     pSubPlan->dynamicRowThreshold = true;
-    pSubPlan->rowsThreshold = ((SLimitNode*)pAgg->node.pSlimit)->limit;
+    pSubPlan->rowsThreshold = ((SLimitNode*)pAgg->node.pSlimit)->limit->datum.i;
   }
 
   pAgg->mergeDataBlock = (GROUP_ACTION_KEEP == pAggLogicNode->node.groupAction ? false : true);


@@ -133,8 +133,12 @@ static int32_t splCreateExchangeNode(SSplitContext* pCxt, SLogicNode* pChild, SE
       nodesDestroyNode((SNode*)pExchange);
       return code;
     }
-    ((SLimitNode*)pChild->pLimit)->limit += ((SLimitNode*)pChild->pLimit)->offset;
-    ((SLimitNode*)pChild->pLimit)->offset = 0;
+    if (((SLimitNode*)pChild->pLimit)->limit && ((SLimitNode*)pChild->pLimit)->offset) {
+      ((SLimitNode*)pChild->pLimit)->limit->datum.i += ((SLimitNode*)pChild->pLimit)->offset->datum.i;
+    }
+    if (((SLimitNode*)pChild->pLimit)->offset) {
+      ((SLimitNode*)pChild->pLimit)->offset->datum.i = 0;
+    }
   }
 
   *pOutput = pExchange;
@@ -679,8 +683,12 @@ static int32_t stbSplCreateMergeNode(SSplitContext* pCxt, SLogicSubplan* pSubpla
   if (TSDB_CODE_SUCCESS == code && NULL != pSplitNode->pLimit) {
     pMerge->node.pLimit = NULL;
     code = nodesCloneNode(pSplitNode->pLimit, &pMerge->node.pLimit);
-    ((SLimitNode*)pSplitNode->pLimit)->limit += ((SLimitNode*)pSplitNode->pLimit)->offset;
-    ((SLimitNode*)pSplitNode->pLimit)->offset = 0;
+    if (((SLimitNode*)pSplitNode->pLimit)->limit && ((SLimitNode*)pSplitNode->pLimit)->offset) {
+      ((SLimitNode*)pSplitNode->pLimit)->limit->datum.i += ((SLimitNode*)pSplitNode->pLimit)->offset->datum.i;
+    }
+    if (((SLimitNode*)pSplitNode->pLimit)->offset) {
+      ((SLimitNode*)pSplitNode->pLimit)->offset->datum.i = 0;
+    }
   }
   if (TSDB_CODE_SUCCESS == code) {
     code = stbSplRewriteFromMergeNode(pMerge, pSplitNode);
@@ -1427,8 +1435,12 @@ static int32_t stbSplGetSplitNodeForScan(SStableSplitInfo* pInfo, SLogicNode** p
     if (NULL == (*pSplitNode)->pLimit) {
       return code;
     }
-    ((SLimitNode*)pInfo->pSplitNode->pLimit)->limit += ((SLimitNode*)pInfo->pSplitNode->pLimit)->offset;
-    ((SLimitNode*)pInfo->pSplitNode->pLimit)->offset = 0;
+    if (((SLimitNode*)pInfo->pSplitNode->pLimit)->limit && ((SLimitNode*)pInfo->pSplitNode->pLimit)->offset) {
+      ((SLimitNode*)pInfo->pSplitNode->pLimit)->limit->datum.i += ((SLimitNode*)pInfo->pSplitNode->pLimit)->offset->datum.i;
+    }
+    if (((SLimitNode*)pInfo->pSplitNode->pLimit)->offset) {
+      ((SLimitNode*)pInfo->pSplitNode->pLimit)->offset->datum.i = 0;
+    }
   }
 }
 return TSDB_CODE_SUCCESS;
@@ -1579,8 +1591,12 @@ static int32_t stbSplSplitMergeScanNode(SSplitContext* pCxt, SLogicSubplan* pSub
   int32_t code = stbSplCreateMergeScanNode(pScan, &pMergeScan, &pMergeKeys);
   if (TSDB_CODE_SUCCESS == code) {
     if (NULL != pMergeScan->pLimit) {
-      ((SLimitNode*)pMergeScan->pLimit)->limit += ((SLimitNode*)pMergeScan->pLimit)->offset;
-      ((SLimitNode*)pMergeScan->pLimit)->offset = 0;
+      if (((SLimitNode*)pMergeScan->pLimit)->limit && ((SLimitNode*)pMergeScan->pLimit)->offset) {
+        ((SLimitNode*)pMergeScan->pLimit)->limit->datum.i += ((SLimitNode*)pMergeScan->pLimit)->offset->datum.i;
+      }
+      if (((SLimitNode*)pMergeScan->pLimit)->offset) {
+        ((SLimitNode*)pMergeScan->pLimit)->offset->datum.i = 0;
+      }
     }
     code = stbSplCreateMergeNode(pCxt, pSubplan, (SLogicNode*)pScan, pMergeKeys, pMergeScan, groupSort, true);
   }
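Every guarded block above applies the usual LIMIT pushdown identity: when a limit is copied below an exchange/merge boundary, the child must produce `limit + offset` rows and leave the skipping to the parent. For example (values and variable name illustrative):

```c
// Parent keeps:  LIMIT 10 OFFSET 5  (skip 5 rows, then emit 10)
// Child gets:    LIMIT 15 OFFSET 0  (ship enough rows upstream)
// With SValueNode children, and both clauses present, that is:
pChildLimit->limit->datum.i += pChildLimit->offset->datum.i;  // 10 + 5 -> 15
pChildLimit->offset->datum.i = 0;  // pChildLimit is a hypothetical name
```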


@@ -592,8 +592,12 @@ int32_t cloneLimit(SLogicNode* pParent, SLogicNode* pChild, uint8_t cloneWhat, b
   if (pParent->pLimit && (cloneWhat & CLONE_LIMIT)) {
     code = nodesCloneNode(pParent->pLimit, (SNode**)&pLimit);
     if (TSDB_CODE_SUCCESS == code) {
-      pLimit->limit += pLimit->offset;
-      pLimit->offset = 0;
+      if (pLimit->limit && pLimit->offset) {
+        pLimit->limit->datum.i += pLimit->offset->datum.i;
+      }
+      if (pLimit->offset) {
+        pLimit->offset->datum.i = 0;
+      }
       cloned = true;
     }
   }
@@ -601,8 +605,12 @@ int32_t cloneLimit(SLogicNode* pParent, SLogicNode* pChild, uint8_t cloneWhat, b
   if (pParent->pSlimit && (cloneWhat & CLONE_SLIMIT)) {
     code = nodesCloneNode(pParent->pSlimit, (SNode**)&pSlimit);
     if (TSDB_CODE_SUCCESS == code) {
-      pSlimit->limit += pSlimit->offset;
-      pSlimit->offset = 0;
+      if (pSlimit->limit && pSlimit->offset) {
+        pSlimit->limit->datum.i += pSlimit->offset->datum.i;
+      }
+      if (pSlimit->offset) {
+        pSlimit->offset->datum.i = 0;
+      }
       cloned = true;
     }
   }


@@ -604,6 +604,17 @@ int32_t streamTaskUpdateTaskCheckpointInfo(SStreamTask* pTask, bool restored, SV
   streamMutexLock(&pTask->lock);
 
+  // not update the checkpoint info if the checkpointId is less than the failed checkpointId
+  if (pReq->checkpointId < pInfo->pActiveInfo->failedId) {
+    stWarn("s-task:%s vgId:%d not update the checkpoint-info, since update checkpointId:%" PRId64
+           " is less than the failed checkpointId:%" PRId64 ", discard the update info",
+           id, vgId, pReq->checkpointId, pInfo->pActiveInfo->failedId);
+    streamMutexUnlock(&pTask->lock);
+
+    // always return success
+    return TSDB_CODE_SUCCESS;
+  }
+
   if (pReq->checkpointId <= pInfo->checkpointId) {
     stDebug("s-task:%s vgId:%d latest checkpointId:%" PRId64 " Ver:%" PRId64
             " no need to update checkpoint info, updated checkpointId:%" PRId64 " Ver:%" PRId64 " transId:%d ignored",
@@ -638,9 +649,9 @@ int32_t streamTaskUpdateTaskCheckpointInfo(SStreamTask* pTask, bool restored, SV
             pInfo->checkpointTime, pReq->checkpointTs);
   } else {  // not in restore status, must be in checkpoint status
     if ((pStatus.state == TASK_STATUS__CK) || (pMeta->role == NODE_ROLE_FOLLOWER)) {
-      stDebug("s-task:%s vgId:%d status:%s start to update the checkpoint-info, checkpointId:%" PRId64 "->%" PRId64
+      stDebug("s-task:%s vgId:%d status:%s role:%d start to update the checkpoint-info, checkpointId:%" PRId64 "->%" PRId64
              " checkpointVer:%" PRId64 "->%" PRId64 " checkpointTs:%" PRId64 "->%" PRId64,
-             id, vgId, pStatus.name, pInfo->checkpointId, pReq->checkpointId, pInfo->checkpointVer,
+             id, vgId, pStatus.name, pMeta->role, pInfo->checkpointId, pReq->checkpointId, pInfo->checkpointVer,
              pReq->checkpointVer, pInfo->checkpointTime, pReq->checkpointTs);
     } else {
       stDebug("s-task:%s vgId:%d status:%s NOT update the checkpoint-info, checkpointId:%" PRId64 "->%" PRId64


@@ -152,9 +152,9 @@ int64_t taosQueueMemorySize(STaosQueue *queue) {
 int32_t taosAllocateQitem(int32_t size, EQItype itype, int64_t dataSize, void **item) {
   int64_t alloced = -1;
 
-  if (alloced > tsQueueMemoryAllowed) {
-  alloced = atomic_add_fetch_64(&tsQueueMemoryUsed, size + dataSize);
   if (itype == RPC_QITEM) {
+    alloced = atomic_add_fetch_64(&tsQueueMemoryUsed, size + dataSize);
+    if (alloced > tsQueueMemoryAllowed) {
       uError("failed to alloc qitem, size:%" PRId64 " alloc:%" PRId64 " allowed:%" PRId64, size + dataSize, alloced,
              tsQueueMemoryAllowed);
       (void)atomic_sub_fetch_64(&tsQueueMemoryUsed, size + dataSize);
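The relocation makes the accounting effective: `alloced` starts at -1, so the over-limit check in the old placement could never fire before the counter was bumped, and non-RPC items no longer touch the global counter at all. Condensed sketch of the corrected flow (the error code name is an assumption):

```c
// Only RPC queue items are charged against the global memory budget, and
// the check runs after the atomic add so an over-limit add is rolled back.
int64_t alloced = -1;
if (itype == RPC_QITEM) {
  alloced = atomic_add_fetch_64(&tsQueueMemoryUsed, size + dataSize);
  if (alloced > tsQueueMemoryAllowed) {
    (void)atomic_sub_fetch_64(&tsQueueMemoryUsed, size + dataSize);  // roll back
    return TSDB_CODE_OUT_OF_RPC_MEMORY_QUEUE;  // assumed error code
  }
}
```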

tests/README-CN.md (new file)

@@ -0,0 +1,232 @@
# Table of Contents

1. [Introduction](#1-introduction)
2. [Prerequisites](#2-prerequisites)
3. [Testing Guide](#3-testing-guide)
   - [3.1 Unit Tests](#31-unit-tests)
   - [3.2 System Tests](#32-system-tests)
   - [3.3 TSIM Tests](#33-tsim-tests)
   - [3.4 Smoke Tests](#34-smoke-tests)
   - [3.5 Chaos Tests](#35-chaos-tests)
   - [3.6 CI Tests](#36-ci-tests)

# 1. Introduction

This manual is intended to give developers comprehensive guidance for testing TDengine effectively. It is divided into three main sections: introduction, prerequisites, and testing guide.

> [!NOTE]
> - All commands and scripts in this document have been verified on Linux (Ubuntu 18.04/20.04/22.04).
> - All commands and scripts in this document are intended to run tests on a single host.

# 2. Prerequisites

- Install Python3

```bash
apt install python3
apt install python3-pip
```

- Install the Python dependency packages

```bash
pip3 install pandas psutil fabric2 requests faker simplejson \
  toml pexpect tzlocal distro decorator loguru hyperloglog
```

- Install the TDengine Python connector

```bash
pip3 install taospy taos-ws-py
```

- Build

Before testing, make sure a build with the options "-DBUILD_TOOLS=true -DBUILD_TEST=true -DBUILD_CONTRIB=true" has been completed; if not, run the following commands:

```bash
cd debug
cmake .. -DBUILD_TOOLS=true -DBUILD_TEST=true -DBUILD_CONTRIB=true
make && make install
```

# 3. Testing Guide

In the `tests` directory, TDengine has several different types of tests. Below is a brief introduction to how to run them and how to add new test cases.

## 3.1 Unit Tests

Unit tests are the smallest testable units, used to test functions, methods, or classes in the TDengine code.

### 3.1.1 How to run a single test case?

```bash
cd debug/build/bin
./osTimeTests
```

### 3.1.2 How to run all test cases?

```bash
cd tests/unit-test/
bash test.sh -e 0
```

### 3.1.3 How to add a test case?

<details>

<summary>Detailed steps to add a new unit test case</summary>

The Google Test framework is used for unit testing specific functional modules. Follow the steps below to add a new test case:

##### a. Create a test case file and develop the test script

In the test directory of the target functional module, create a test file in CPP format and write the corresponding test cases.

##### b. Update the build configuration

Modify the CMakeLists.txt file in that directory to ensure the new test file is included in the build. See `source/os/test/CMakeLists.txt` for a configuration example.

##### c. Build the test code

In the project root, create a build directory (e.g. debug), switch to it, and run cmake (e.g. `cmake .. -DBUILD_TEST=1`) to generate the build files, then run make (e.g. `make`) to build the test code.

##### d. Run the tests

Find the executable in the build directory and run it (e.g. under `TDengine/debug/build/bin/`).

##### e. Integrate the case into CI tests

Use the add_test command to add the newly built test case to the CI test collection, so that the new case runs on every build.

</details>

## 3.2 System Tests

System tests are end-to-end test cases written in Python. Some of the features are supported and tested only in the enterprise edition, so the corresponding cases may fail when run against the community edition. We will gradually address this by splitting the cases into different groups.

### 3.2.1 How to run a single test case?

Taking the test file `system-test/2-query/avg.py` as an example, a single test case can be run with:

```bash
cd tests/system-test
python3 ./test.py -f 2-query/avg.py
```

### 3.2.2 How to run all test cases?

```bash
cd tests
./run_all_ci_cases.sh -t python # all python cases
```

### 3.2.3 How to add a test case?

<details>

<summary>Detailed steps to add a new system test case</summary>

The Python test framework was developed by the TDengine team; test.py is the entry program for test case execution and monitoring. Use `python3 ./test.py -h` to see more features.

Follow the steps below to add a new test case:

##### a. Create a test case file and develop the test case

Create a test case file in one of the feature directories under `tests/system-test`, and refer to the case template `tests/system-test/0-others/test_case_template.py` to add a new test case.

##### b. Run the test case

Run the case with the command below and make sure it passes.

``` bash
cd tests/system-test && python3 ./test.py -f 0-others/test_case_template.py
```

##### c. Integrate the case into CI tests

Edit `tests/parallel_test/cases.task` and add the test case path in the specified format. The third column of the file indicates whether to run the test in Address Sanitizer mode.

```bash
#caseID,rerunTimes,Run with Sanitizer,casePath,caseCommand
,,n,system-test, python3 ./test.py -f 0-others/test_case_template.py
```

</details>

## 3.3 TSIM Tests

In the early stage of TDengine development, TSIM, an internal test framework developed in C++ by the TDengine team, was used.

### 3.3.1 How to run a single test case?

To run a TSIM test case, execute the following commands:

```bash
cd tests/script
./test.sh -f tsim/db/basic1.sim
```

### 3.3.2 How to run all TSIM test cases?

```bash
cd tests
./run_all_ci_cases.sh -t legacy # all legacy cases
```

### 3.3.3 How to add a TSIM test case?

> [!NOTE]
> The TSIM test framework has been deprecated in favor of system tests; please add new test cases as system tests instead, see [System Tests](#32-system-tests).

## 3.4 Smoke Tests

Smoke tests are a set of test cases selected from the system tests, also known as basic functionality tests, which verify TDengine's key features.

### 3.4.1 How to run the smoke tests?

```bash
cd /root/TDengine/packaging/smokeTest
./test_smoking_selfhost.sh
```

### 3.4.2 How to add a smoke test case?

New cases can be added by updating the value of the `commands` variable in `test_smoking_selfhost.sh`.

## 3.5 Chaos Tests

A simple tool that exercises various functions of the system in a random fashion, aiming to expose potential problems without predefined test scenarios.

### 3.5.1 How to run the chaos tests?

```bash
cd tests/pytest
python3 auto_crash_gen.py
```

### 3.5.2 How to add a chaos test case?

1. Add a function such as `TaskCreateNewFunction` in `pytest/crash_gen/crash_gen_main.py`.
2. Integrate `TaskCreateNewFunction` into the `balance_pickTaskType` function in `crash_gen_main.py`.

## 3.6 CI Tests

CI (continuous integration) testing is an important practice in software development: code is frequently and automatically integrated into a shared repository, then built and tested to ensure its quality and stability.

TDengine CI testing runs all cases of the following three test types: unit tests, system tests, and TSIM tests.

### 3.6.1 How to run all CI test cases?

If this is the first time to run all CI test cases, it is recommended to add the test branch and run them with:

```bash
cd tests
./run_all_ci_cases.sh -b main # on main branch
```

### 3.6.2 How to add new CI test cases?

Refer to the [Unit Tests](#31-unit-tests), [System Tests](#32-system-tests), and [TSIM Tests](#33-tsim-tests) sections for detailed steps to add new test cases; cases added to those suites will run automatically in CI testing.


@@ -1015,3 +1015,108 @@ taos> select _irowts, _isfilled, interp(c1) from test.td32861 where ts between '
taos> select _irowts, _isfilled, interp(c1) from test.td32861 where ts between '2020-01-01 00:00:22' and '2020-01-01 00:00:30' range('2020-01-01 00:00:00', '2020-01-01 00:00:21') every(1s) fill(linear);

taos> select _irowts, interp(c1), t1 from test.ts5941_child range('2020-02-01 00:00:00', '2020-02-01 00:00:20') every(1s) fill(prev);
_irowts | interp(c1) | t1 |
===========================================================================
2020-02-01 00:00:05.000 | 5 | testts5941 |
2020-02-01 00:00:06.000 | 5 | testts5941 |
2020-02-01 00:00:07.000 | 5 | testts5941 |
2020-02-01 00:00:08.000 | 5 | testts5941 |
2020-02-01 00:00:09.000 | 5 | testts5941 |
2020-02-01 00:00:10.000 | 10 | testts5941 |
2020-02-01 00:00:11.000 | 10 | testts5941 |
2020-02-01 00:00:12.000 | 10 | testts5941 |
2020-02-01 00:00:13.000 | 10 | testts5941 |
2020-02-01 00:00:14.000 | 10 | testts5941 |
2020-02-01 00:00:15.000 | 15 | testts5941 |
2020-02-01 00:00:16.000 | 15 | testts5941 |
2020-02-01 00:00:17.000 | 15 | testts5941 |
2020-02-01 00:00:18.000 | 15 | testts5941 |
2020-02-01 00:00:19.000 | 15 | testts5941 |
2020-02-01 00:00:20.000 | 15 | testts5941 |
taos> select _irowts, interp(c1), t1 from test.ts5941_child range('2020-02-01 00:00:00', '2020-02-01 00:00:20') every(1s) fill(next);
_irowts | interp(c1) | t1 |
===========================================================================
2020-02-01 00:00:00.000 | 5 | testts5941 |
2020-02-01 00:00:01.000 | 5 | testts5941 |
2020-02-01 00:00:02.000 | 5 | testts5941 |
2020-02-01 00:00:03.000 | 5 | testts5941 |
2020-02-01 00:00:04.000 | 5 | testts5941 |
2020-02-01 00:00:05.000 | 5 | testts5941 |
2020-02-01 00:00:06.000 | 10 | testts5941 |
2020-02-01 00:00:07.000 | 10 | testts5941 |
2020-02-01 00:00:08.000 | 10 | testts5941 |
2020-02-01 00:00:09.000 | 10 | testts5941 |
2020-02-01 00:00:10.000 | 10 | testts5941 |
2020-02-01 00:00:11.000 | 15 | testts5941 |
2020-02-01 00:00:12.000 | 15 | testts5941 |
2020-02-01 00:00:13.000 | 15 | testts5941 |
2020-02-01 00:00:14.000 | 15 | testts5941 |
2020-02-01 00:00:15.000 | 15 | testts5941 |
taos> select _irowts, interp(c1), t1 from test.ts5941_child range('2020-02-01 00:00:00', '2020-02-01 00:00:20') every(1s) fill(linear);
_irowts | interp(c1) | t1 |
===========================================================================
2020-02-01 00:00:05.000 | 5 | testts5941 |
2020-02-01 00:00:06.000 | 6 | testts5941 |
2020-02-01 00:00:07.000 | 7 | testts5941 |
2020-02-01 00:00:08.000 | 8 | testts5941 |
2020-02-01 00:00:09.000 | 9 | testts5941 |
2020-02-01 00:00:10.000 | 10 | testts5941 |
2020-02-01 00:00:11.000 | 11 | testts5941 |
2020-02-01 00:00:12.000 | 12 | testts5941 |
2020-02-01 00:00:13.000 | 13 | testts5941 |
2020-02-01 00:00:14.000 | 14 | testts5941 |
2020-02-01 00:00:15.000 | 15 | testts5941 |
taos> select _irowts, interp(c1), t1 from test.ts5941_child range('2020-02-01 00:00:00', '2020-02-01 00:00:20') every(1s) fill(null);
_irowts | interp(c1) | t1 |
===========================================================================
2020-02-01 00:00:00.000 | NULL | testts5941 |
2020-02-01 00:00:01.000 | NULL | testts5941 |
2020-02-01 00:00:02.000 | NULL | testts5941 |
2020-02-01 00:00:03.000 | NULL | testts5941 |
2020-02-01 00:00:04.000 | NULL | testts5941 |
2020-02-01 00:00:05.000 | 5 | testts5941 |
2020-02-01 00:00:06.000 | NULL | testts5941 |
2020-02-01 00:00:07.000 | NULL | testts5941 |
2020-02-01 00:00:08.000 | NULL | testts5941 |
2020-02-01 00:00:09.000 | NULL | testts5941 |
2020-02-01 00:00:10.000 | 10 | testts5941 |
2020-02-01 00:00:11.000 | NULL | testts5941 |
2020-02-01 00:00:12.000 | NULL | testts5941 |
2020-02-01 00:00:13.000 | NULL | testts5941 |
2020-02-01 00:00:14.000 | NULL | testts5941 |
2020-02-01 00:00:15.000 | 15 | testts5941 |
2020-02-01 00:00:16.000 | NULL | testts5941 |
2020-02-01 00:00:17.000 | NULL | testts5941 |
2020-02-01 00:00:18.000 | NULL | testts5941 |
2020-02-01 00:00:19.000 | NULL | testts5941 |
2020-02-01 00:00:20.000 | NULL | testts5941 |
taos> select _irowts, interp(c1), t1 from test.ts5941_child range('2020-02-01 00:00:00', '2020-02-01 00:00:20') every(1s) fill(value, 1);
_irowts | interp(c1) | t1 |
===========================================================================
2020-02-01 00:00:00.000 | 1 | testts5941 |
2020-02-01 00:00:01.000 | 1 | testts5941 |
2020-02-01 00:00:02.000 | 1 | testts5941 |
2020-02-01 00:00:03.000 | 1 | testts5941 |
2020-02-01 00:00:04.000 | 1 | testts5941 |
2020-02-01 00:00:05.000 | 5 | testts5941 |
2020-02-01 00:00:06.000 | 1 | testts5941 |
2020-02-01 00:00:07.000 | 1 | testts5941 |
2020-02-01 00:00:08.000 | 1 | testts5941 |
2020-02-01 00:00:09.000 | 1 | testts5941 |
2020-02-01 00:00:10.000 | 10 | testts5941 |
2020-02-01 00:00:11.000 | 1 | testts5941 |
2020-02-01 00:00:12.000 | 1 | testts5941 |
2020-02-01 00:00:13.000 | 1 | testts5941 |
2020-02-01 00:00:14.000 | 1 | testts5941 |
2020-02-01 00:00:15.000 | 15 | testts5941 |
2020-02-01 00:00:16.000 | 1 | testts5941 |
2020-02-01 00:00:17.000 | 1 | testts5941 |
2020-02-01 00:00:18.000 | 1 | testts5941 |
2020-02-01 00:00:19.000 | 1 | testts5941 |
2020-02-01 00:00:20.000 | 1 | testts5941 |


View File

@ -63,3 +63,8 @@ select _irowts, _isfilled, interp(c1) from test.td32861 where ts between '2020-0
select _irowts, _isfilled, interp(c1) from test.td32861 where ts between '2020-01-01 00:00:22' and '2020-01-01 00:00:30' range('2020-01-01 00:00:00', '2020-01-01 00:00:21') every(1s) fill(prev);
select _irowts, _isfilled, interp(c1) from test.td32861 where ts between '2020-01-01 00:00:22' and '2020-01-01 00:00:30' range('2020-01-01 00:00:00', '2020-01-01 00:00:21') every(1s) fill(next);
select _irowts, _isfilled, interp(c1) from test.td32861 where ts between '2020-01-01 00:00:22' and '2020-01-01 00:00:30' range('2020-01-01 00:00:00', '2020-01-01 00:00:21') every(1s) fill(linear);
select _irowts, interp(c1), t1 from test.ts5941_child range('2020-02-01 00:00:00', '2020-02-01 00:00:20') every(1s) fill(prev);
select _irowts, interp(c1), t1 from test.ts5941_child range('2020-02-01 00:00:00', '2020-02-01 00:00:20') every(1s) fill(next);
select _irowts, interp(c1), t1 from test.ts5941_child range('2020-02-01 00:00:00', '2020-02-01 00:00:20') every(1s) fill(linear);
select _irowts, interp(c1), t1 from test.ts5941_child range('2020-02-01 00:00:00', '2020-02-01 00:00:20') every(1s) fill(null);
select _irowts, interp(c1), t1 from test.ts5941_child range('2020-02-01 00:00:00', '2020-02-01 00:00:20') every(1s) fill(value, 1);

View File

@ -40,6 +40,9 @@ class TDTestCase(TBase):
)
tdSql.execute("create table if not exists test.td32861(ts timestamp, c1 int);")
tdSql.execute("create stable if not exists test.ts5941(ts timestamp, c1 int, c2 int) tags (t1 varchar(30));")
tdSql.execute("create table if not exists test.ts5941_child using test.ts5941 tags ('testts5941');")
tdLog.printNoPrefix("==========step2:insert data")
tdSql.execute(f"insert into test.td32727 values ('2020-02-01 00:00:05', 5, 5, 5, 5, 5.0, 5.0, true, 'varchar', 'nchar', 5, 5, 5, 5)")
@ -56,6 +59,9 @@ class TDTestCase(TBase):
('2020-01-01 00:00:15', 15),
('2020-01-01 00:00:21', 21);"""
)
tdSql.execute(f"insert into test.ts5941_child values ('2020-02-01 00:00:05', 5, 5)")
tdSql.execute(f"insert into test.ts5941_child values ('2020-02-01 00:00:10', 10, 10)")
tdSql.execute(f"insert into test.ts5941_child values ('2020-02-01 00:00:15', 15, 15)")
def test_normal_query_new(self, testCase):
# read sql from .sql file and execute

View File

@ -1,4 +1,5 @@
#!/bin/bash
set -e
pgrep taosd || taosd >> /dev/null 2>&1 &
pgrep taosadapter || taosadapter >> /dev/null 2>&1 &
@ -6,11 +7,12 @@ cd ../../docs/examples/java
mvn clean test > jdbc-out.log 2>&1
tail -n 20 jdbc-out.log
totalJDBCCases=`grep 'Tests run' jdbc-out.log | awk -F"[:,]" 'END{ print $2 }'`
failed=`grep 'Tests run' jdbc-out.log | awk -F"[:,]" 'END{ print $4 }'`
error=`grep 'Tests run' jdbc-out.log | awk -F"[:,]" 'END{ print $6 }'`
totalJDBCFailed=`expr $failed + $error`
totalJDBCFailed=$((failed + error))
totalJDBCSuccess=`expr $totalJDBCCases - $totalJDBCFailed`
totalJDBCSuccess=$((totalJDBCCases - totalJDBCFailed))
if [ "$totalJDBCSuccess" -gt "0" ]; then
echo -e "\n${GREEN} ### Total $totalJDBCSuccess JDBC case(s) succeed! ### ${NC}"

View File

@ -176,6 +176,11 @@
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tsma2.py -Q 2
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tsma2.py -Q 3
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tsma2.py -Q 4
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tbnameIn.py
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tbnameIn.py -R
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tbnameIn.py -Q 2
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tbnameIn.py -Q 3
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tbnameIn.py -Q 4
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/nestedQuery2.py
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/interp_extension.py
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/interp_extension.py -R

View File

@ -33,37 +33,6 @@ function printHelp() {
exit 0
}
# Initialization parameter
PROJECT_DIR=""
BRANCH=""
TEST_TYPE=""
SAVE_LOG="notsave"
# Parse command line parameters
while getopts "hb:d:t:s:" arg; do
case $arg in
d)
PROJECT_DIR=$OPTARG
;;
b)
BRANCH=$OPTARG
;;
t)
TEST_TYPE=$OPTARG
;;
s)
SAVE_LOG=$OPTARG
;;
h)
printHelp
;;
?)
echo "Usage: ./$(basename $0) -h"
exit 1
;;
esac
done
function get_DIR() {
today=`date +"%Y%m%d"`
if [ -z "$PROJECT_DIR" ]; then
@ -102,13 +71,6 @@ function get_DIR() {
fi
}
get_DIR
echo "PROJECT_DIR = $PROJECT_DIR"
echo "TDENGINE_DIR = $TDENGINE_DIR"
echo "BUILD_DIR = $BUILD_DIR"
echo "BACKUP_DIR = $BACKUP_DIR"
function buildTDengine() {
print_color "$GREEN" "TDengine build start"
@ -118,14 +80,14 @@ function buildTDengine() {
# pull tdinternal code
cd "$TDENGINE_DIR/../"
print_color "$GREEN" "Git pull TDinternal code..."
git remote prune origin > /dev/null
# git remote prune origin > /dev/null
git remote update > /dev/null
# git remote update > /dev/null
# pull tdengine code
cd $TDENGINE_DIR
print_color "$GREEN" "Git pull TDengine code..."
git remote prune origin > /dev/null
# git remote prune origin > /dev/null
git remote update > /dev/null
# git remote update > /dev/null
REMOTE_COMMIT=`git rev-parse --short remotes/origin/$branch`
LOCAL_COMMIT=`git rev-parse --short @`
print_color "$GREEN" " LOCAL: $LOCAL_COMMIT"
@ -137,12 +99,12 @@ function buildTDengine() {
print_color "$GREEN" "Repo need to pull"
fi
git reset --hard
# git reset --hard
git checkout -- .
# git checkout -- .
git checkout $branch
git checkout -- .
# git checkout -- .
git clean -f
# git clean -f
git pull
# git pull
[ -d $TDENGINE_DIR/../debug ] || mkdir $TDENGINE_DIR/../debug
cd $TDENGINE_DIR/../debug
@ -155,15 +117,15 @@ function buildTDengine() {
print_color "$GREEN" "$makecmd"
$makecmd
make -j 8 install
make -j $(nproc) install
else
TDENGINE_DIR="$PROJECT_DIR"
# pull tdengine code
cd $TDENGINE_DIR
print_color "$GREEN" "Git pull TDengine code..."
git remote prune origin > /dev/null
# git remote prune origin > /dev/null
git remote update > /dev/null
# git remote update > /dev/null
REMOTE_COMMIT=`git rev-parse --short remotes/origin/$branch`
LOCAL_COMMIT=`git rev-parse --short @`
print_color "$GREEN" " LOCAL: $LOCAL_COMMIT"
@ -175,12 +137,12 @@ function buildTDengine() {
print_color "$GREEN" "Repo need to pull"
fi
git reset --hard
# git reset --hard
git checkout -- .
# git checkout -- .
git checkout $branch
git checkout -- .
# git checkout -- .
git clean -f
# git clean -f
git pull
# git pull
[ -d $TDENGINE_DIR/debug ] || mkdir $TDENGINE_DIR/debug
cd $TDENGINE_DIR/debug
@ -193,24 +155,12 @@ function buildTDengine() {
print_color "$GREEN" "$makecmd"
$makecmd
make -j 8 install
make -j $(nproc) install
fi
print_color "$GREEN" "TDengine build end"
}
# Check and get the branch name
if [ -n "$BRANCH" ] ; then
branch="$BRANCH"
print_color "$GREEN" "Testing branch: $branch "
print_color "$GREEN" "Build is required for this test"
buildTDengine
else
print_color "$GREEN" "Build is not required for this test"
fi
function runCasesOneByOne () {
while read -r line; do
if [[ "$line" != "#"* ]]; then
@ -264,7 +214,7 @@ function runUnitTest() {
cd $BUILD_DIR
pgrep taosd || taosd >> /dev/null 2>&1 &
sleep 10
ctest -E "cunit_test" -j8
ctest -E "cunit_test" -j4
print_color "$GREEN" "3.0 unit test done"
}
@ -314,7 +264,6 @@ function runPythonCases() {
fi
}
function runTest() {
print_color "$GREEN" "run Test"
@ -344,7 +293,7 @@ function stopTaosd {
sleep 1
PID=`ps -ef|grep -w taosd | grep -v grep | awk '{print $2}'`
done
print_color "$GREEN" "Stop tasod end"
print_color "$GREEN" "Stop taosd end"
}
function stopTaosadapter {
@ -357,10 +306,52 @@ function stopTaosadapter {
sleep 1
PID=`ps -ef|grep -w taosd | grep -v grep | awk '{print $2}'`
done
print_color "$GREEN" "Stop tasoadapter end"
print_color "$GREEN" "Stop taosadapter end"
}
######################
# main entry
######################
# Initialization parameter
PROJECT_DIR=""
BRANCH=""
TEST_TYPE=""
SAVE_LOG="notsave"
# Parse command line parameters
while getopts "hb:d:t:s:" arg; do
case $arg in
d)
PROJECT_DIR=$OPTARG
;;
b)
BRANCH=$OPTARG
;;
t)
TEST_TYPE=$OPTARG
;;
s)
SAVE_LOG=$OPTARG
;;
h)
printHelp
;;
?)
echo "Usage: ./$(basename $0) -h"
exit 1
;;
esac
done
get_DIR
echo "PROJECT_DIR = $PROJECT_DIR"
echo "TDENGINE_DIR = $TDENGINE_DIR"
echo "BUILD_DIR = $BUILD_DIR"
echo "BACKUP_DIR = $BACKUP_DIR"
# Run all ci case
WORK_DIR=$TDENGINE_DIR
date >> $WORK_DIR/date.log
@ -368,6 +359,17 @@ print_color "$GREEN" "Run all ci test cases" | tee -a $WORK_DIR/date.log
stopTaosd
# Check and get the branch name
if [ -n "$BRANCH" ] ; then
branch="$BRANCH"
print_color "$GREEN" "Testing branch: $branch "
print_color "$GREEN" "Build is required for this test!"
buildTDengine
else
print_color "$GREEN" "Build is not required for this test!"
fi
# Run different types of case
if [ -z "$TEST_TYPE" -o "$TEST_TYPE" = "all" -o "$TEST_TYPE" = "ALL" ]; then if [ -z "$TEST_TYPE" -o "$TEST_TYPE" = "all" -o "$TEST_TYPE" = "ALL" ]; then
runTest runTest
elif [ "$TEST_TYPE" = "python" -o "$TEST_TYPE" = "PYTHON" ]; then elif [ "$TEST_TYPE" = "python" -o "$TEST_TYPE" = "PYTHON" ]; then

View File

@ -41,49 +41,6 @@ function printHelp() {
}
PROJECT_DIR=""
CAPTURE_GCDA_DIR=""
TEST_CASE="task"
UNIT_TEST_CASE=""
BRANCH=""
BRANCH_BUILD=""
LCOV_DIR="/usr/local/bin"
# Parse command line parameters
while getopts "hd:b:f:c:u:i:l:" arg; do
case $arg in
d)
PROJECT_DIR=$OPTARG
;;
b)
BRANCH=$OPTARG
;;
f)
CAPTURE_GCDA_DIR=$OPTARG
;;
c)
TEST_CASE=$OPTARG
;;
u)
UNIT_TEST_CASE=$OPTARG
;;
i)
BRANCH_BUILD=$OPTARG
;;
l)
LCOV_DIR=$OPTARG
;;
h)
printHelp
;;
?)
echo "Usage: ./$(basename $0) -h"
exit 1
;;
esac
done
# Find the project/tdengine/build/capture directory
function get_DIR() {
today=`date +"%Y%m%d"`
@ -118,18 +75,6 @@ function get_DIR() {
}
# Show all parameters
get_DIR
echo "PROJECT_DIR = $PROJECT_DIR"
echo "TDENGINE_DIR = $TDENGINE_DIR"
echo "BUILD_DIR = $BUILD_DIR"
echo "CAPTURE_GCDA_DIR = $CAPTURE_GCDA_DIR"
echo "TEST_CASE = $TEST_CASE"
echo "UNIT_TEST_CASE = $UNIT_TEST_CASE"
echo "BRANCH_BUILD = $BRANCH_BUILD"
echo "LCOV_DIR = $LCOV_DIR"
function buildTDengine() {
print_color "$GREEN" "TDengine build start"
@ -139,14 +84,14 @@ function buildTDengine() {
# pull tdinternal code
cd "$TDENGINE_DIR/../"
print_color "$GREEN" "Git pull TDinternal code..."
git remote prune origin > /dev/null
# git remote prune origin > /dev/null
git remote update > /dev/null
# git remote update > /dev/null
# pull tdengine code
cd $TDENGINE_DIR
print_color "$GREEN" "Git pull TDengine code..."
git remote prune origin > /dev/null
# git remote prune origin > /dev/null
git remote update > /dev/null
# git remote update > /dev/null
REMOTE_COMMIT=`git rev-parse --short remotes/origin/$branch`
LOCAL_COMMIT=`git rev-parse --short @`
print_color "$GREEN" " LOCAL: $LOCAL_COMMIT"
@ -158,12 +103,12 @@ function buildTDengine() {
print_color "$GREEN" "Repo need to pull"
fi
git reset --hard
# git reset --hard
git checkout -- .
# git checkout -- .
git checkout $branch
git checkout -- .
# git checkout -- .
git clean -f
# git clean -f
git pull
# git pull
[ -d $TDENGINE_DIR/../debug ] || mkdir $TDENGINE_DIR/../debug
cd $TDENGINE_DIR/../debug
@ -183,8 +128,8 @@ function buildTDengine() {
# pull tdengine code
cd $TDENGINE_DIR
print_color "$GREEN" "Git pull TDengine code..."
git remote prune origin > /dev/null
# git remote prune origin > /dev/null
git remote update > /dev/null
# git remote update > /dev/null
REMOTE_COMMIT=`git rev-parse --short remotes/origin/$branch`
LOCAL_COMMIT=`git rev-parse --short @`
print_color "$GREEN" " LOCAL: $LOCAL_COMMIT"
@ -196,12 +141,12 @@ function buildTDengine() {
print_color "$GREEN" "Repo need to pull"
fi
git reset --hard
# git reset --hard
git checkout -- .
# git checkout -- .
git checkout $branch
git checkout -- .
# git checkout -- .
git clean -f
# git clean -f
git pull
# git pull
[ -d $TDENGINE_DIR/debug ] || mkdir $TDENGINE_DIR/debug
cd $TDENGINE_DIR/debug
@ -220,44 +165,6 @@ function buildTDengine() {
print_color "$GREEN" "TDengine build end"
}
# Check and get the branch name and build branch
if [ -n "$BRANCH" ] && [ -z "$BRANCH_BUILD" ] ; then
branch="$BRANCH"
print_color "$GREEN" "Testing branch: $branch "
print_color "$GREEN" "Build is required for this test!"
buildTDengine
elif [ -n "$BRANCH_BUILD" ] && [ "$BRANCH_BUILD" = "YES" -o "$BRANCH_BUILD" = "yes" ] ; then
CURRENT_DIR=$(pwd)
echo "CURRENT_DIR: $CURRENT_DIR"
if [ -d .git ]; then
CURRENT_BRANCH=$(git rev-parse --abbrev-ref HEAD)
echo "CURRENT_BRANCH: $CURRENT_BRANCH"
else
echo "The current directory is not a Git repository"
fi
branch="$CURRENT_BRANCH"
print_color "$GREEN" "Testing branch: $branch "
print_color "$GREEN" "Build is required for this test!"
buildTDengine
elif [ -n "$BRANCH_BUILD" ] && [ "$BRANCH_BUILD" = "ONLY_INSTALL" -o "$BRANCH_BUILD" = "only_install" ] ; then
CURRENT_DIR=$(pwd)
echo "CURRENT_DIR: $CURRENT_DIR"
if [ -d .git ]; then
CURRENT_BRANCH=$(git rev-parse --abbrev-ref HEAD)
echo "CURRENT_BRANCH: $CURRENT_BRANCH"
else
echo "The current directory is not a Git repository"
fi
branch="$CURRENT_BRANCH"
print_color "$GREEN" "Testing branch: $branch "
print_color "$GREEN" "not build,only install!"
cd $TDENGINE_DIR/debug
make -j $(nproc) install
elif [ -z "$BRANCH" ] && [ -z "$BRANCH_BUILD" ] ; then
print_color "$GREEN" "Build is not required for this test!"
fi
function runCasesOneByOne () {
while read -r line; do
if [[ "$line" != "#"* ]]; then
@ -481,10 +388,108 @@ function stopTaosadapter {
}
######################
# main entry
######################
# Initialization parameter
PROJECT_DIR=""
CAPTURE_GCDA_DIR=""
TEST_CASE="task"
UNIT_TEST_CASE=""
BRANCH=""
BRANCH_BUILD=""
LCOV_DIR="/usr/local/bin"
# Parse command line parameters
while getopts "hd:b:f:c:u:i:l:" arg; do
case $arg in
d)
PROJECT_DIR=$OPTARG
;;
b)
BRANCH=$OPTARG
;;
f)
CAPTURE_GCDA_DIR=$OPTARG
;;
c)
TEST_CASE=$OPTARG
;;
u)
UNIT_TEST_CASE=$OPTARG
;;
i)
BRANCH_BUILD=$OPTARG
;;
l)
LCOV_DIR=$OPTARG
;;
h)
printHelp
;;
?)
echo "Usage: ./$(basename $0) -h"
exit 1
;;
esac
done
# Show all parameters
get_DIR
echo "PROJECT_DIR = $PROJECT_DIR"
echo "TDENGINE_DIR = $TDENGINE_DIR"
echo "BUILD_DIR = $BUILD_DIR"
echo "CAPTURE_GCDA_DIR = $CAPTURE_GCDA_DIR"
echo "TEST_CASE = $TEST_CASE"
echo "UNIT_TEST_CASE = $UNIT_TEST_CASE"
echo "BRANCH_BUILD = $BRANCH_BUILD"
echo "LCOV_DIR = $LCOV_DIR"
date >> $TDENGINE_DIR/date.log
print_color "$GREEN" "Run local coverage test cases" | tee -a $TDENGINE_DIR/date.log
# Check and get the branch name and build branch
if [ -n "$BRANCH" ] && [ -z "$BRANCH_BUILD" ] ; then
branch="$BRANCH"
print_color "$GREEN" "Testing branch: $branch "
print_color "$GREEN" "Build is required for this test!"
buildTDengine
elif [ -n "$BRANCH_BUILD" ] && [ "$BRANCH_BUILD" = "YES" -o "$BRANCH_BUILD" = "yes" ] ; then
CURRENT_DIR=$(pwd)
echo "CURRENT_DIR: $CURRENT_DIR"
if [ -d .git ]; then
CURRENT_BRANCH=$(git rev-parse --abbrev-ref HEAD)
echo "CURRENT_BRANCH: $CURRENT_BRANCH"
else
echo "The current directory is not a Git repository"
fi
branch="$CURRENT_BRANCH"
print_color "$GREEN" "Testing branch: $branch "
print_color "$GREEN" "Build is required for this test!"
buildTDengine
elif [ -n "$BRANCH_BUILD" ] && [ "$BRANCH_BUILD" = "ONLY_INSTALL" -o "$BRANCH_BUILD" = "only_install" ] ; then
CURRENT_DIR=$(pwd)
echo "CURRENT_DIR: $CURRENT_DIR"
if [ -d .git ]; then
CURRENT_BRANCH=$(git rev-parse --abbrev-ref HEAD)
echo "CURRENT_BRANCH: $CURRENT_BRANCH"
else
echo "The current directory is not a Git repository"
fi
branch="$CURRENT_BRANCH"
print_color "$GREEN" "Testing branch: $branch "
print_color "$GREEN" "not build,only install!"
cd $TDENGINE_DIR/debug
make -j $(nproc) install
elif [ -z "$BRANCH" ] && [ -z "$BRANCH_BUILD" ] ; then
print_color "$GREEN" "Build is not required for this test!"
fi
stopTaosd
runTest

View File

@ -0,0 +1,6 @@
EXCLUDE_IP="192.168.1.10"
SERVER_IP="192.168.1.11"
HTTP_SERV_IP="192.168.1.12"
HTTP_SERV_PORT=8080
FEISHU_MSG_URL="https://open.feishu.cn/open-apis/bot/v2/hook/*******"
OWNER="Jayden Jia"

View File

@ -0,0 +1,308 @@
from datetime import date
from datetime import timedelta
import os
import json
import re
import requests
import subprocess
from dotenv import load_dotenv
# load .env
# You should have a .env file in the same directory as this script
# You can exec: cp .env.example .env
load_dotenv()
# define version
version = "3.3.2.*"
version_pattern_str = version.replace('.', r'\.').replace('*', r'\d+')
version_pattern = re.compile(rf'^{version_pattern_str}$')
version_stack_list = list()
# define ip
ip = os.getenv("EXCLUDE_IP")
server_ip = os.getenv("SERVER_IP")
http_serv_ip = os.getenv("HTTP_SERV_IP")
http_serv_port = os.getenv("HTTP_SERV_PORT")
owner = os.getenv("OWNER")
# feishu-msg url
feishu_msg_url = os.getenv("FEISHU_MSG_URL")
# get today
today = date.today()
# Define the file and parameters
path="/data/telemetry/crash-report/"
trace_report_path = path + "trace_report"
os.makedirs(path, exist_ok=True)
os.makedirs(trace_report_path, exist_ok=True)
assert_script_path = path + "filter_assert.sh"
nassert_script_path = path + "filter_nassert.sh"
# get files for the past 7 days
def get_files():
files = ""
for i in range(1,8):
#print ((today - timedelta(days=i)).strftime("%Y%m%d"))
files = files + path + (today - timedelta(days=i)).strftime("%Y%m%d") + ".txt "
return files.strip().split(" ")
# Define the AWK script as a string with proper escaping
def get_res(file_path):
# Execute the script
command = ['bash', file_path, version, ip] + get_files()
process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True)
# Capture the output and errors
output, errors = process.communicate()
# Check for errors
if process.returncode != 0:
return errors
else:
return output.rstrip()
def get_sum(output):
# Split the output into lines
lines = output.strip().split('\n')
# Initialize the sum
total_sum = 0
# Iterate over each line
for line in lines:
# Split each line by space to separate the columns
parts = line.split()
# The first part of the line is the number, convert it to integer
if parts: # Check if there are any elements in the parts list
number = int(parts[0])
total_sum += number
return total_sum
def convert_html(data):
# convert data to json
start_time = get_files()[6].split("/")[-1].split(".")[0]
end_time = get_files()[0].split("/")[-1].split(".")[0]
html_report_file = f'{start_time}_{end_time}.html'
json_data = json.dumps(data)
# Create HTML content
html_content = f'''
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Stack Trace Report</title>
<style>
body {{
font-family: Arial, sans-serif;
margin: 20px;
background-color: #f0f0f5;
}}
h1 {{
color: #2c3e50;
text-align: center;
}}
table {{
width: 100%;
border-collapse: collapse;
margin-bottom: 20px;
box-shadow: 0 2px 10px rgba(0, 0, 0, 0.1);
}}
th, td {{
border: 1px solid #ddd;
padding: 10px;
text-align: left;
}}
th {{
background-color: #3498db;
color: white;
}}
tr:nth-child(even) {{
background-color: #ecf0f1;
}}
tr:hover {{
background-color: #d1e7fd;
}}
pre {{
background-color: #f7f7f7;
padding: 10px;
border: 1px solid #ddd;
overflow-x: auto;
white-space: pre-wrap;
border-radius: 5px;
}}
</style>
</head>
<body>
<h1>Stack Trace Report From {start_time} To {end_time} </h1>
<table>
<thead>
<tr>
<th>Key Stack Info</th>
<th>Versions</th>
<th>Num Of Crashes</th>
<th>Full Stack Info</th>
</tr>
</thead>
<tbody id="report">
</tbody>
</table>
<script>
const data = {json_data};
const reportBody = document.getElementById('report');
data.forEach(entry => {{
const row = document.createElement('tr');
row.innerHTML = `
<td>${{entry.key_stack_info}}</td>
<td>${{entry.version_list.join('<br>')}}</td>
<td>${{entry.count}}</td>
<td><pre>${{entry.full_stack_info}}</pre></td>
`;
reportBody.appendChild(row);
}});
</script>
</body>
</html>
'''
# Write the HTML content to a file
with open(f'{trace_report_path}/{html_report_file}', 'w') as f:
f.write(html_content)
return html_report_file
def get_version_stack_list(res):
for line in res.strip().split('\n'):
version_list = list()
version_stack_dict = dict()
count = line.split()[0]
key_stack_info = line.split()[1]
for file in get_files():
with open(file, 'r') as infile:
for line in infile:
line = line.strip()
data = json.loads(line)
# print(line)
if ip not in line and version_pattern.search(data["version"]) and key_stack_info in line:
if data["version"] not in version_list:
version_list.append(data["version"])
full_stack_info = data["stackInfo"]
version_stack_dict["key_stack_info"] = key_stack_info
version_stack_dict["full_stack_info"] = full_stack_info
version_stack_dict["version_list"] = version_list
version_stack_dict["count"] = count
# print(version_stack_dict)
version_stack_list.append(version_stack_dict)
return version_stack_list
# get msg info
def get_msg(text):
return {
"msg_type": "post",
"content": {
"post": {
"zh_cn": {
"title": "Telemetry Statistics",
"content": [
[{
"tag": "text",
"text": text
}
]]
}
}
}
}
# post msg
def send_msg(json):
headers = {
'Content-Type': 'application/json'
}
req = requests.post(url=feishu_msg_url, headers=headers, json=json)
inf = req.json()
if "StatusCode" in inf and inf["StatusCode"] == 0:
pass
else:
print(inf)
def format_results(results):
# Split the results into lines
lines = results.strip().split('\n')
# Parse lines into a list of tuples (number, rest_of_line)
parsed_lines = []
for line in lines:
parts = line.split(maxsplit=1)
if len(parts) == 2:
number = int(parts[0]) # Convert the number part to an integer
parsed_lines.append((number, parts[1]))
# Sort the parsed lines by the first element (number) in descending order
parsed_lines.sort(reverse=True, key=lambda x: x[0])
# Determine the maximum width of the first column for alignment
# max_width = max(len(str(item[0])) for item in parsed_lines)
if parsed_lines:
max_width = max(len(str(item[0])) for item in parsed_lines)
else:
max_width = 0
# Format each line to align the numbers and function names with indentation
formatted_lines = []
for number, text in parsed_lines:
formatted_line = f" {str(number).rjust(max_width)} {text}"
formatted_lines.append(formatted_line)
# Join the formatted lines into a single string
return '\n'.join(formatted_lines)
# # send report to feishu
def send_report(res, sum, html_report_file):
content = f'''
version: v{version}
from: {get_files()[6].split("/")[-1].split(".")[0]}
to: {get_files()[0].split("/")[-1].split(".")[0]}
ip: {server_ip}
owner: {owner}
result: \n{format_results(res)}\n
total crashes: {sum}\n
details: http://{http_serv_ip}:{http_serv_port}/{html_report_file}
'''
print(get_msg(content))
send_msg(get_msg(content))
# print(content)
# for none-taosAssertDebug
nassert_res = get_res(nassert_script_path)
# print(nassert_res)
# for taosAssertDebug
assert_res = get_res(assert_script_path)
# print(assert_res)
# combine the results
res = nassert_res + assert_res
# get version stack list
version_stack_list = get_version_stack_list(res) if len(res) > 0 else list()
# convert to html
html_report_file = convert_html(version_stack_list)
# get sum
sum = get_sum(res)
# send report
send_report(res, sum, html_report_file)

View File

@ -0,0 +1,128 @@
from datetime import date
from datetime import timedelta
import os
import re
import requests
from dotenv import load_dotenv
# load .env
load_dotenv()
# define version
version = "3.3.*"
ip = os.getenv("EXCLUDE_IP")
server_ip = os.getenv("SERVER_IP")
owner = os.getenv("OWNER")
# feishu-msg url
feishu_msg_url = os.getenv("FEISHU_MSG_URL")
today = date.today()
#today = date(2023,8,7)
path="/data/telemetry/crash-report/"
# get files for the past 7 days
def get_files():
files = ""
for i in range(1,8):
#print ((today - timedelta(days=i)).strftime("%Y%m%d"))
files = files + path + (today - timedelta(days=i)).strftime("%Y%m%d") + ".txt "
return files
# for none-taosAssertDebug
filter1_cmd = '''grep '"version":"%s"' %s \
| grep "taosd(" \
| awk -F "stackInfo" '{print $2}' \
| grep -v "taosAssertDebug" \
| grep -v %s \
| awk -F "taosd" '{print $3}' \
| cut -d")" -f 1 \
| cut -d"(" -f 2 \
| sort | uniq -c ''' % (version, get_files(), ip)
# for taosAssertDebug
filter2_cmd = '''grep '"version":"%s"' %s \
| grep "taosd(" \
| awk -F "stackInfo" '{print $2}' \
| grep "taosAssertDebug" \
| grep -v %s \
| awk -F "taosd" '{print $3}' \
| cut -d")" -f 1 \
| cut -d"(" -f 2 \
| sort | uniq -c ''' % (version, get_files(), ip)
# get msg info
def get_msg(text):
return {
"msg_type": "post",
"content": {
"post": {
"zh_cn": {
"title": "Telemetry Statistics",
"content": [
[{
"tag": "text",
"text": text
}
]]
}
}
}
}
# post msg
def send_msg(json):
headers = {
'Content-Type': 'application/json'
}
req = requests.post(url=feishu_msg_url, headers=headers, json=json)
inf = req.json()
if "StatusCode" in inf and inf["StatusCode"] == 0:
pass
else:
print(inf)
# exec cmd and return res
def get_output(cmd):
text = os.popen(cmd)
lines = text.read()
text.close()
return lines
# get sum
def get_count(output):
res = re.findall(" \d+ ", output)
sum1 = 0
for r in res:
sum1 = sum1 + int(r.strip())
return sum1
# print total crash count
def print_result():
#print(f"Files for statistics: {get_files()}\n")
sum1 = get_count(get_output(filter1_cmd))
sum2 = get_count(get_output(filter2_cmd))
total = sum1 + sum2
#print(f"total crashes: {total}")
return total
# send report to feishu
def send_report():
content = f'''
test scope: Telemetry Statistics
owner: {owner}
ip: {server_ip}
from: {get_files().split(" ")[6].split("/")[4].split(".")[0]}
to: {get_files().split(" ")[0].split("/")[4].split(".")[0]}
filter1 result: {get_output(filter1_cmd)}
filter2 result: {get_output(filter2_cmd)}
total crashes: {print_result()}
'''
#send_msg(get_msg(content))
print(content)
print_result()
send_report()

View File

@ -0,0 +1,61 @@
# Table of Contents
1. [Introduction](#1-introduction)
1. [Prerequisites](#2-prerequisites)
1. [Running](#3-running)
# 1. Introduction
This manual is intended to give developers comprehensive guidance for collecting the crash information of the past 7 days and reporting it to a Feishu notification group.
> [!NOTE]
> - The commands and scripts below have been verified on Linux (CentOS 7.9.2009).
# 2. Prerequisites
- Install Python3
```bash
yum install python3
yum install python3-pip
```
- Install Python dependencies
```bash
pip3 install requests python-dotenv
```
- Adjust the .env file
```bash
cd $DIR/telemetry/crash-report
cp .env.example .env
vim .env
...
```
- Example .env
```bash
# IP excluded by the filters (the company's network egress IP)
EXCLUDE_IP="192.168.1.10"
# IP of the server hosting the English official website
SERVER_IP="192.168.1.11"
# Internal IP and port of the HTTP service used to browse the HTML reports
HTTP_SERV_IP="192.168.1.12"
HTTP_SERV_PORT=8080
# Webhook address of the Feishu group bot
FEISHU_MSG_URL="https://open.feishu.cn/open-apis/bot/v2/hook/*******"
# Owner
OWNER="Jayden Jia"
```
# 3. Running
In the `$DIR/telemetry/crash-report` directory there are files with names like 202501**.txt. The Python script collects crash information from these text files and sends the report to your Feishu bot group.
```bash
cd $DIR/telemetry/crash-report
python3 CrashCounter.py
```

View File

@ -0,0 +1,61 @@
# Table of Contents
1. [Introduction](#1-introduction)
1. [Prerequisites](#2-prerequisites)
1. [Running](#3-running)
# 1. Introduction
This manual is intended to give developers comprehensive guidance to collect crash information from the past 7 days and report it to the Feishu notification group.
> [!NOTE]
> - The commands and scripts below are verified on Linux (CentOS 7.9.2009).
# 2. Prerequisites
- Install Python3
```bash
yum install python3
yum install python3-pip
```
- Install Python dependencies
```bash
pip3 install requests python-dotenv
```
- Adjust the .env file
```bash
cd $DIR/telemetry/crash-report
cp .env.example .env
vim .env
...
```
- Example .env
```bash
# Filter to exclude IP (Company network export IP)
EXCLUDE_IP="192.168.1.10"
# Official website server IP
SERVER_IP="192.168.1.11"
# Internal network providing HTTP service IP and port, used for HTML report browsing
HTTP_SERV_IP="192.168.1.12"
HTTP_SERV_PORT=8080
# Webhook address for Feishu group bot
FEISHU_MSG_URL="https://open.feishu.cn/open-apis/bot/v2/hook/*******"
# Owner
OWNER="Jayden Jia"
```
# 3. Running
In the `$DIR/telemetry/crash-report` directory, there are several files with names like 202501**.txt. The Python script collects crash information from these text files and sends a report to your Feishu bot group.
```bash
cd $DIR/telemetry/crash-report
python3 CrashCounter.py
```
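To produce the weekly report automatically, a cron entry along the following lines can be used (a sketch; the installation path is an assumption):

```bash
# Run the crash counter every Monday at 09:00 (path is an example)
0 9 * * 1 cd /data/telemetry/crash-report && python3 CrashCounter.py
```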

View File

@ -0,0 +1,15 @@
#!/bin/bash
source .env
filesPath="/data/telemetry/crash-report"
version="3.0.4.1"
taosdataIp=$EXCLUDE_IP
grep "\"version\":\"${version}\"" ${filesPath}/*.txt \
| grep "taosd(" \
| awk -F "stackInfo" '{print $2}' \
| grep -v "taosAssertDebug" \
| grep -v ${taosdataIp} \
| awk -F "taosd" '{print $2}' \
| cut -d")" -f 1 \
| cut -d"(" -f 2 \
| sort | uniq -c

View File

@ -0,0 +1,14 @@
#!/bin/bash
source .env
filesPath="/data/telemetry/crash-report"
version="3.0.4.1"
taosdataIp=$EXCLUDE_IP
grep "\"version\":\"${version}\"" ${filesPath}/*.txt \
| grep "taosd(" \
| awk -F "stackInfo" '{print $2}' \
| grep "taosAssertDebug" \
| grep -v ${taosdataIp} \
| awk -F "taosd" '{print $3}' \
| cut -d")" -f 1 \
| cut -d"(" -f 2 \
| sort | uniq -c

View File

@ -0,0 +1,67 @@
#!/bin/bash
# Extract version and IP from the first two arguments
version="$1"
ip="$2"
shift 2 # Remove the first two arguments, leaving only file paths
# All remaining arguments are considered as file paths
file_paths="$@"
# Execute the awk script and capture the output
readarray -t output < <(awk -v version="$version" -v ip="$ip" '
BEGIN {
RS = "\\n"; # Set the record separator to newline
FS = ","; # Set the field separator to comma
total = 0; # Initialize total count
version_regex = version; # Use the passed version pattern
ip_regex = ip; # Use the passed IP pattern
}
{
start_collecting = 0;
version_matched = 0;
ip_excluded = 0;
# Check each field within a record
for (i = 1; i <= NF; i++) {
if ($i ~ /"ip":"[^"]*"/ && $i ~ ip_regex) {
ip_excluded = 1;
}
if ($i ~ /"version":"[^"]*"/ && $i ~ version_regex) {
version_matched = 1;
}
}
if (!ip_excluded && version_matched) {
for (i = 1; i <= NF; i++) {
if ($i ~ /taosAssertDebug/ && start_collecting == 0) {
start_collecting = 1;
continue;
}
if (start_collecting == 1 && $i ~ /taosd\(([^)]+)\)/) {
match($i, /taosd\(([^)]+)\)/, arr);
if (arr[1] != "") {
count[arr[1]]++;
total++;
break;
}
}
}
}
}
END {
for (c in count) {
printf "%d %s\n", count[c], c;
}
print "Total count:", total;
}' $file_paths)
# Capture the function details and total count into separate variables
function_details=$(printf "%s\n" "${output[@]::${#output[@]}-1}")
total_count="${output[-1]}"
# Output or use the variables as needed
#echo "Function Details:"
echo "$function_details"
#echo "Total Count:"
#echo "$total_count"

View File

@ -0,0 +1,74 @@
#!/bin/bash
# Pass version, ip, and file paths as arguments
version="$1"
ip="$2"
shift 2 # Shift the first two arguments to get file paths
file_paths="$@"
# Execute awk and capture the output
readarray -t output < <(awk -v version="$version" -v ip="$ip" '
BEGIN {
RS = "\\n"; # Set the record separator to newline
total = 0; # Initialize total count
version_regex = "\"version\":\"" version; # Construct the regex for version
ip_regex = "\"ip\":\"" ip "\""; # Construct the regex for IP
}
{
found = 0; # Initialize the found flag to false
start_collecting = 1; # Start collecting by default, unless taosAssertDebug is encountered
split($0, parts, "\\n"); # Split each record by newline
# Check for version and IP in each part
version_matched = 0;
ip_excluded = 0;
for (i in parts) {
if (parts[i] ~ version_regex) {
version_matched = 1; # Set flag if version is matched
}
if (parts[i] ~ ip_regex) {
ip_excluded = 1; # Set flag if IP is matched
break; # No need to continue if IP is excluded
}
}
# Process only if version is matched and IP is not excluded
if (version_matched && !ip_excluded) {
for (i in parts) {
if (parts[i] ~ /taosAssertDebug/) {
start_collecting = 0; # Skip this record if taosAssertDebug is encountered
break; # Exit the loop
}
}
if (start_collecting == 1) { # Continue processing if taosAssertDebug is not found
for (i in parts) {
if (found == 0 && parts[i] ~ /frame:.*taosd\([^)]+\)/) {
# Match the first frame that meets the condition
match(parts[i], /taosd\(([^)]+)\)/, a); # Extract the function name
if (a[1] != "") {
count[a[1]]++; # Increment the count for this function name
total++; # Increment the total count
found = 1; # Set found flag to true
break; # Exit the loop once the function is found
}
}
}
}
}
}
END {
for (c in count) {
printf "%d %s\n", count[c], c; # Print the count and function name formatted
}
print total; # Print the total count alone
}' $file_paths) # Note the removal of quotes around "$file_paths" to handle multiple paths
# Capture the function details and total count into separate variables
function_details=$(printf "%s\n" "${output[@]::${#output[@]}-1}") # Join array elements with newlines
total_count="${output[-1]}" # The last element
# Output or use the variables as needed
#echo "Function Details:"
echo "$function_details"
#echo "Total Count:"
#echo "$total_count"

View File

@ -158,8 +158,20 @@ class TDTestCase:
tdSql.query(f'show grants;')
tdSql.checkEqual(len(tdSql.queryResult), 1)
infoFile.write(";".join(map(str,tdSql.queryResult[0])) + "\n")
tdLog.info(f"show grants: {tdSql.queryResult[0]}")
expireTimeStr=tdSql.queryResult[0][1]
serviceTimeStr=tdSql.queryResult[0][2]
tdLog.info(f"expireTimeStr: {expireTimeStr}, serviceTimeStr: {serviceTimeStr}")
expireTime = time.mktime(time.strptime(expireTimeStr, "%Y-%m-%d %H:%M:%S"))
serviceTime = time.mktime(time.strptime(serviceTimeStr, "%Y-%m-%d %H:%M:%S"))
tdLog.info(f"expireTime: {expireTime}, serviceTime: {serviceTime}")
tdSql.checkEqual(True, abs(expireTime - serviceTime - 864000) < 15)
tdSql.query(f'show grants full;')
tdSql.checkEqual(len(tdSql.queryResult), 31)
nGrantItems = 31
tdSql.checkEqual(len(tdSql.queryResult), nGrantItems)
tdSql.checkEqual(tdSql.queryResult[0][2], serviceTimeStr)
for i in range(1, nGrantItems):
tdSql.checkEqual(tdSql.queryResult[i][2], expireTimeStr)
if infoFile:
infoFile.flush()

View File

@ -153,7 +153,7 @@ class TDTestCase:
tdSql.checkData(9, 1, '8')
tdSql.checkData(9, 2, 8)
tdSql.query('select * from d1.st order by ts limit 2;')
tdSql.query('select * from d1.st order by ts,pk limit 2;')
tdSql.checkRows(2)
tdSql.checkData(0, 0, datetime.datetime(2021, 4, 19, 0, 0))
tdSql.checkData(0, 1, '1')
@ -286,7 +286,7 @@ class TDTestCase:
tdSql.checkData(9, 1, '8')
tdSql.checkData(9, 2, 8)
tdSql.query('select * from d2.st order by ts limit 2;')
tdSql.query('select * from d2.st order by ts,pk limit 2;')
tdSql.checkRows(2)
tdSql.checkData(0, 0, datetime.datetime(2021, 4, 19, 0, 0))
tdSql.checkData(0, 1, '1')

View File

@ -0,0 +1,145 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
import taos
from util.log import *
from util.cases import *
from util.sql import *
class TDTestCase:
def init(self, conn, logSql, replicaVar=1):
self.replicaVar = int(replicaVar)
tdLog.debug(f"start to excute {__file__}")
tdSql.init(conn.cursor())
def inTest(self, dbname="db"):
tdSql.execute(f'drop database if exists {dbname}')
tdSql.execute(f'create database {dbname}')
tdSql.execute(f'use {dbname}')
tdSql.execute(f'CREATE STABLE {dbname}.`st1` (`ts` TIMESTAMP, `v1` INT) TAGS (`t1` INT);')
tdSql.execute(f'CREATE STABLE {dbname}.`st2` (`ts` TIMESTAMP, `v1` INT) TAGS (`t1` INT);')
tdSql.execute(f'CREATE TABLE {dbname}.`t11` USING {dbname}.`st1` (`t1`) TAGS (11);')
tdSql.execute(f'CREATE TABLE {dbname}.`t12` USING {dbname}.`st1` (`t1`) TAGS (12);')
tdSql.execute(f'CREATE TABLE {dbname}.`t21` USING {dbname}.`st2` (`t1`) TAGS (21);')
tdSql.execute(f'CREATE TABLE {dbname}.`t22` USING {dbname}.`st2` (`t1`) TAGS (22);')
tdSql.execute(f'CREATE TABLE {dbname}.`ta` (`ts` TIMESTAMP, `v1` INT);')
tdSql.execute(f"insert into {dbname}.t11 values ( '2025-01-21 00:11:01', 111 )")
tdSql.execute(f"insert into {dbname}.t11 values ( '2025-01-21 00:11:02', 112 )")
tdSql.execute(f"insert into {dbname}.t11 values ( '2025-01-21 00:11:03', 113 )")
tdSql.execute(f"insert into {dbname}.t12 values ( '2025-01-21 00:12:01', 121 )")
tdSql.execute(f"insert into {dbname}.t12 values ( '2025-01-21 00:12:02', 122 )")
tdSql.execute(f"insert into {dbname}.t12 values ( '2025-01-21 00:12:03', 123 )")
tdSql.execute(f"insert into {dbname}.t21 values ( '2025-01-21 00:21:01', 211 )")
tdSql.execute(f"insert into {dbname}.t21 values ( '2025-01-21 00:21:02', 212 )")
tdSql.execute(f"insert into {dbname}.t21 values ( '2025-01-21 00:21:03', 213 )")
tdSql.execute(f"insert into {dbname}.t22 values ( '2025-01-21 00:22:01', 221 )")
tdSql.execute(f"insert into {dbname}.t22 values ( '2025-01-21 00:22:02', 222 )")
tdSql.execute(f"insert into {dbname}.t22 values ( '2025-01-21 00:22:03', 223 )")
tdSql.execute(f"insert into {dbname}.ta values ( '2025-01-21 00:00:01', 1 )")
tdSql.execute(f"insert into {dbname}.ta values ( '2025-01-21 00:00:02', 2 )")
tdSql.execute(f"insert into {dbname}.ta values ( '2025-01-21 00:00:03', 3 )")
tdLog.debug(f"-------------- step1: normal table test ------------------")
tdSql.query(f"select last(*) from {dbname}.ta where tbname in ('ta');")
tdSql.checkRows(1)
tdSql.checkData(0, 0, '2025-01-21 00:00:03')
tdSql.checkData(0, 1, 3)
tdSql.query(f"select last(*) from {dbname}.ta where tbname in ('ta', 't21');")
tdSql.checkRows(1)
tdSql.checkData(0, 0, '2025-01-21 00:00:03')
tdSql.checkData(0, 1, 3)
tdSql.query(f"select last(*) from {dbname}.ta where tbname in ('t21');")
tdSql.checkRows(0)
tdSql.query(f"select last(*) from {dbname}.ta where tbname in ('ta') and tbname in ('ta');")
tdSql.checkRows(1)
tdSql.checkData(0, 0, '2025-01-21 00:00:03')
tdSql.checkData(0, 1, 3)
tdSql.query(f"select last(*) from {dbname}.ta where tbname in ('ta') or tbname in ('ta');")
tdSql.checkRows(1)
tdSql.checkData(0, 0, '2025-01-21 00:00:03')
tdSql.checkData(0, 1, 3)
tdSql.query(f"select last(*) from {dbname}.ta where tbname in ('ta') or tbname in ('tb');")
tdSql.checkRows(1)
tdSql.checkData(0, 0, '2025-01-21 00:00:03')
tdSql.checkData(0, 1, 3)
tdSql.query(f"select last(*) from {dbname}.ta where tbname in ('ta', 't21') and tbname in ('ta');")
tdSql.checkRows(1)
tdSql.checkData(0, 0, '2025-01-21 00:00:03')
tdSql.checkData(0, 1, 3)
tdSql.query(f"select last(*) from {dbname}.ta where tbname in ('ta', 't21') and tbname in ('t21');")
tdSql.checkRows(0)
tdSql.query(f"select last(*) from {dbname}.ta where tbname in ('t21') or tbname in ('ta');")
tdSql.checkRows(1)
tdSql.checkData(0, 0, '2025-01-21 00:00:03')
tdSql.checkData(0, 1, 3)
tdLog.debug(f"-------------- step2: super table test ------------------")
tdSql.query(f"select last(*) from {dbname}.st1 where tbname in ('t11');")
tdSql.checkRows(1)
tdSql.checkData(0, 0, '2025-01-21 00:11:03')
tdSql.checkData(0, 1, 113)
tdSql.query(f"select last(*) from {dbname}.st1 where tbname in ('t21');")
tdSql.checkRows(0)
tdSql.query(f"select last(*) from {dbname}.st1 where tbname in ('ta', 't21');")
tdSql.checkRows(0)
tdSql.query(f"select last(*) from {dbname}.st1 where tbname in ('t21', 't12');")
tdSql.checkRows(1)
tdSql.checkData(0, 0, '2025-01-21 00:12:03')
tdSql.checkData(0, 1, 123)
tdSql.query(f"select last(*) from {dbname}.st1 where tbname in ('ta') and tbname in ('t12');")
tdSql.checkRows(0)
tdSql.query(f"select last(*) from {dbname}.st1 where tbname in ('t12') or tbname in ('t11');")
tdSql.checkRows(1)
tdSql.checkData(0, 0, '2025-01-21 00:12:03')
tdSql.checkData(0, 1, 123)
tdSql.query(f"select last(*) from {dbname}.st1 where tbname in ('ta') or tbname in ('t21');")
tdSql.checkRows(0)
tdSql.query(f"select last(*) from {dbname}.st1 where tbname in ('t12', 't21') and tbname in ('t21');")
tdSql.checkRows(0)
tdSql.query(f"select last(*) from {dbname}.st1 where tbname in ('t12', 't11') and tbname in ('t11');")
tdSql.checkRows(1)
tdSql.checkData(0, 0, '2025-01-21 00:11:03')
tdSql.checkData(0, 1, 113)
def run(self):
self.inTest()
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())