diff --git a/README-CN.md b/README-CN.md index 99bbf9aabd..37c6f59c46 100644 --- a/README-CN.md +++ b/README-CN.md @@ -10,7 +10,36 @@ 简体中文 | [English](README.md) | [TDengine 云服务](https://cloud.taosdata.com/?utm_medium=cn&utm_source=github) | 很多职位正在热招中,请看[这里](https://www.taosdata.com/careers/) -# TDengine 简介 +# 目录 + +1. [TDengine 简介](#1-tdengine-简介) +1. [文档](#2-文档) +1. [必备工具](#3-必备工具) + - [3.1 Linux预备](#31-linux系统) + - [3.2 macOS预备](#32-macos系统) + - [3.3 Windows预备](#33-windows系统) + - [3.4 克隆仓库](#34-克隆仓库) +1. [构建](#4-构建) + - [4.1 Linux系统上构建](#41-linux系统上构建) + - [4.2 macOS系统上构建](#42-macos系统上构建) + - [4.3 Windows系统上构建](#43-windows系统上构建) +1. [打包](#5-打包) +1. [安装](#6-安装) + - [6.1 Linux系统上安装](#61-linux系统上安装) + - [6.2 macOS系统上安装](#62-macos系统上安装) + - [6.3 Windows系统上安装](#63-windows系统上安装) +1. [快速运行](#7-快速运行) + - [7.1 Linux系统上运行](#71-linux系统上运行) + - [7.2 macOS系统上运行](#72-macos系统上运行) + - [7.3 Windows系统上运行](#73-windows系统上运行) +1. [测试](#8-测试) +1. [版本发布](#9-版本发布) +1. [工作流](#10-工作流) +1. [覆盖率](#11-覆盖率) +1. [成为社区贡献者](#12-成为社区贡献者) + + +# 1. 简介 TDengine 是一款开源、高性能、云原生的时序数据库 (Time-Series Database, TSDB)。TDengine 能被广泛运用于物联网、工业互联网、车联网、IT 运维、金融等领域。除核心的时序数据库功能外,TDengine 还提供缓存、数据订阅、流式计算等功能,是一极简的时序数据处理平台,最大程度的减小系统设计的复杂度,降低研发和运营成本。与其他时序数据库相比,TDengine 的主要优势如下: @@ -26,12 +55,82 @@ TDengine 是一款开源、高性能、云原生的时序数据库 (Time-Series - **核心开源**:TDengine 的核心代码包括集群功能全部开源,截止到2022年8月1日,全球超过 135.9k 个运行实例,GitHub Star 18.7k,Fork 4.4k,社区活跃。 -# 文档 +了解TDengine高级功能的完整列表,请 [点击](https://tdengine.com/tdengine/)。体验TDengine最简单的方式是通过[TDengine云平台](https://cloud.tdengine.com)。 -关于完整的使用手册,系统架构和更多细节,请参考 [TDengine 文档](https://docs.taosdata.com) 或者 [TDengine Documentation](https://docs.tdengine.com)。 +# 2. 文档 -# 构建 +关于完整的使用手册,系统架构和更多细节,请参考 [TDengine](https://www.taosdata.com/) 或者 [TDengine 官方文档](https://docs.taosdata.com)。 +# 3. 必备工具 + +## 3.1 Linux系统 + +
+ +安装Linux必备工具 + +### Ubuntu 18.04、20.04、22.04 + +```bash +sudo apt-get udpate +sudo apt-get install -y gcc cmake build-essential git libjansson-dev \ + libsnappy-dev liblzma-dev zlib1g-dev pkg-config +``` + +### CentOS 8 + +```bash +sudo yum update +yum install -y epel-release gcc gcc-c++ make cmake git perl dnf-plugins-core +yum config-manager --set-enabled powertools +yum install -y zlib-static xz-devel snappy-devel jansson-devel pkgconfig libatomic-static libstdc++-static +``` + +
+ +## 3.2 macOS系统 + +
+ +安装macOS必备工具 + +根据提示安装依赖工具 [brew](https://brew.sh/). + +```bash +brew install argp-standalone gflags pkgconfig +``` + +
+ +## 3.3 Windows系统 + +
+ +安装Windows必备工具 + +进行中。 + +
+ +## 3.4 克隆仓库 + +
+ +克隆仓库 + +通过如下命令将TDengine仓库克隆到指定计算机: + +```bash +git clone https://github.com/taosdata/TDengine.git +cd TDengine +``` + +> **注意:** +> 想连接更多TDengine连接器,可参考以下仓库: [JDBC连接器](https://github.com/taosdata/taos-connector-jdbc), [Go连接器](https://github.com/taosdata/driver-go), [Python连接器](https://github.com/taosdata/taos-connector-python), [Node.js连接器](https://github.com/taosdata/taos-connector-node), [C#连接器](https://github.com/taosdata/taos-connector-dotnet), [Rust连接器](https://github.com/taosdata/taos-connector-rust). + +
+ +# 4. 构建 TDengine 目前可以在 Linux、 Windows、macOS 等平台上安装和运行。任何 OS 的应用也可以选择 taosAdapter 的 RESTful 接口连接服务端 taosd。CPU 支持 X64/ARM64,后续会支持 MIPS64、Alpha64、ARM32、RISC-V 等 CPU 架构。目前不支持使用交叉编译器构建。 用户可根据需求选择通过源码、[容器](https://docs.taosdata.com/get-started/docker/)、[安装包](https://docs.taosdata.com/get-started/package/)或[Kubernetes](https://docs.taosdata.com/deployment/k8s/)来安装。本快速指南仅适用于通过源码安装。 @@ -40,309 +139,257 @@ TDengine 还提供一组辅助工具软件 taosTools,目前它包含 taosBench 为了构建TDengine, 请使用 [CMake](https://cmake.org/) 3.13.0 或者更高版本。 -## 安装工具 +## 4.1 Linux系统上构建 -### Ubuntu 18.04 及以上版本 & Debian: +
-```bash -sudo apt-get install -y gcc cmake build-essential git libssl-dev libgflags2.2 libgflags-dev -``` +Linux系统上构建步骤 -#### 为 taos-tools 安装编译需要的软件 - -为了在 Ubuntu/Debian 系统上编译 [taos-tools](https://github.com/taosdata/taos-tools) 需要安装如下软件: - -```bash -sudo apt install build-essential libjansson-dev libsnappy-dev liblzma-dev libz-dev zlib1g pkg-config -``` - -### CentOS 7.9 - -```bash -sudo yum install epel-release -sudo yum update -sudo yum install -y gcc gcc-c++ make cmake3 gflags git openssl-devel -sudo ln -sf /usr/bin/cmake3 /usr/bin/cmake -``` - -### CentOS 8/Fedora/Rocky Linux - -```bash -sudo dnf install -y gcc gcc-c++ gflags make cmake epel-release git openssl-devel -``` - -#### 在 CentOS 上构建 taosTools 安装依赖软件 - - -#### CentOS 7.9 - - -``` -sudo yum install -y zlib-devel zlib-static xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libatomic-static libstdc++-static openssl-devel -``` - -#### CentOS 8/Fedora/Rocky Linux - -``` -sudo yum install -y epel-release -sudo yum install -y dnf-plugins-core -sudo yum config-manager --set-enabled powertools -sudo yum install -y zlib-devel zlib-static xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libatomic-static libstdc++-static openssl-devel -``` - -注意:由于 snappy 缺乏 pkg-config 支持(参考 [链接](https://github.com/google/snappy/pull/86)),会导致 cmake 提示无法发现 libsnappy,实际上工作正常。 - -若 powertools 安装失败,可以尝试改用: -``` -sudo yum config-manager --set-enabled powertools -``` - -#### CentOS + devtoolset - -除上述编译依赖包,需要执行以下命令: - -``` -sudo yum install centos-release-scl -sudo yum install devtoolset-9 devtoolset-9-libatomic-devel -scl enable devtoolset-9 -- bash -``` - -### macOS - -``` -brew install argp-standalone gflags pkgconfig -``` - -### 设置 golang 开发环境 - -TDengine 包含数个使用 Go 语言开发的组件,比如taosAdapter, 请参考 golang.org 官方文档设置 go 开发环境。 - -请使用 1.20 及以上版本。对于中国用户,我们建议使用代理来加速软件包下载。 - -``` -go env -w GO111MODULE=on -go env -w GOPROXY=https://goproxy.cn,direct -``` - -缺省是不会构建 taosAdapter, 但您可以使用以下命令选择构建 taosAdapter 作为 RESTful 接口的服务。 - -``` -cmake .. -DBUILD_HTTP=false -``` - -### 设置 rust 开发环境 - -TDengine 包含数个使用 Rust 语言开发的组件. 请参考 rust-lang.org 官方文档设置 rust 开发环境。 - -## 获取源码 - -首先,你需要从 GitHub 克隆源码: - -```bash -git clone https://github.com/taosdata/TDengine.git -cd TDengine -``` -如果使用 https 协议下载比较慢,可以通过修改 ~/.gitconfig 文件添加以下两行设置使用 ssh 协议下载。需要首先上传 ssh 密钥到 GitHub,详细方法请参考 GitHub 官方文档。 - -``` -[url "git@github.com:"] - insteadOf = https://github.com/ -``` -## 特别说明 - -[JDBC 连接器](https://github.com/taosdata/taos-connector-jdbc), [Go 连接器](https://github.com/taosdata/driver-go),[Python 连接器](https://github.com/taosdata/taos-connector-python),[Node.js 连接器](https://github.com/taosdata/taos-connector-node),[C# 连接器](https://github.com/taosdata/taos-connector-dotnet) ,[Rust 连接器](https://github.com/taosdata/taos-connector-rust) 和 [Grafana 插件](https://github.com/taosdata/grafanaplugin)已移到独立仓库。 - - -## 构建 TDengine - -### Linux 系统 - -可以运行代码仓库中的 `build.sh` 脚本编译出 TDengine 和 taosTools(包含 taosBenchmark 和 taosdump)。 +可以通过以下命令使用脚本 `build.sh` 编译TDengine和taosTools,包括taosBenchmark和taosdump: ```bash ./build.sh ``` -这个脚本等价于执行如下命令: +也可以通过以下命令进行构建: ```bash -mkdir debug -cd debug +mkdir debug && cd debug cmake .. -DBUILD_TOOLS=true -DBUILD_CONTRIB=true make ``` -您也可以选择使用 jemalloc 作为内存分配器,替代默认的 glibc: +可以使用Jemalloc作为内存分配器,而不是使用glibc: ```bash -apt install autoconf cmake .. -DJEMALLOC_ENABLED=true ``` - -在 X86-64、X86、arm64 平台上,TDengine 生成脚本可以自动检测机器架构。也可以手动配置 CPUTYPE 参数来指定 CPU 类型,如 aarch64 等。 - -aarch64: +TDengine构建脚本可以自动检测x86、x86-64、arm64平台上主机的体系结构。 +您也可以通过CPUTYPE选项手动指定架构: ```bash cmake .. -DCPUTYPE=aarch64 && cmake --build . ``` -### Windows 系统 +
-如果你使用的是 Visual Studio 2013 版本: +## 4.2 macOS系统上构建 -打开 cmd.exe,执行 vcvarsall.bat 时,为 64 位操作系统指定“x86_amd64”,为 32 位操作系统指定“x86”。 +
-```bash +macOS系统上构建步骤 + +请安装XCode命令行工具和cmake。使用XCode 11.4+在Catalina和Big Sur上完成验证。 + +```shell mkdir debug && cd debug -"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat" < x86_amd64 | x86 > +cmake .. && cmake --build . +``` + +
+ +## 4.3 Windows系统上构建 + +
+ +Windows系统上构建步骤 + +如果您使用的是Visual Studio 2013,请执行“cmd.exe”打开命令窗口执行如下命令。 +执行vcvarsall.bat时,64位的Windows请指定“amd64”,32位的Windows请指定“x86”。 + +```cmd +mkdir debug && cd debug +"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat" < amd64 | x86 > cmake .. -G "NMake Makefiles" nmake ``` -如果你使用的是 Visual Studio 2019 或 2017 版本: +如果您使用Visual Studio 2019或2017: -打开 cmd.exe,执行 vcvarsall.bat 时,为 64 位操作系统指定“x64”,为 32 位操作系统指定“x86”。 +请执行“cmd.exe”打开命令窗口执行如下命令。 +执行vcvarsall.bat时,64位的Windows请指定“x64”,32位的Windows请指定“x86”。 -```bash +```cmd mkdir debug && cd debug "c:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\Build\vcvarsall.bat" < x64 | x86 > cmake .. -G "NMake Makefiles" nmake ``` -你也可以从开始菜单中找到"Visual Studio < 2019 | 2017 >"菜单项,根据你的系统选择"x64 Native Tools Command Prompt for VS < 2019 | 2017 >"或"x86 Native Tools Command Prompt for VS < 2019 | 2017 >",打开命令行窗口,执行: +或者,您可以通过点击Windows开始菜单打开命令窗口->“Visual Studio < 2019 | 2017 >”文件夹->“x64原生工具命令提示符VS < 2019 | 2017 >”或“x86原生工具命令提示符VS < 2019 | 2017 >”取决于你的Windows是什么架构,然后执行命令如下: -```bash +```cmd mkdir debug && cd debug cmake .. -G "NMake Makefiles" nmake ``` +
-### macOS 系统 +# 5. 打包 -安装 XCode 命令行工具和 cmake. 在 Catalina 和 Big Sur 操作系统上,需要安装 XCode 11.4+ 版本。 +由于一些组件依赖关系,TDengine社区安装程序不能仅由该存储库创建。我们仍在努力改进。 + +# 6. 安装 + + +## 6.1 Linux系统上安装 + +
+ +Linux系统上安装详细步骤 + +构建成功后,TDengine可以通过以下命令进行安装: ```bash -mkdir debug && cd debug -cmake .. && cmake --build . +sudo make install ``` +从源代码安装还将为TDengine配置服务管理。用户也可以使用[TDengine安装包](https://docs.taosdata.com/get-started/package/)进行安装。 -# 安装 +
-## Linux 系统 +## 6.2 macOS系统上安装 -生成完成后,安装 TDengine: +
+ +macOS系统上安装详细步骤 + +构建成功后,TDengine可以通过以下命令进行安装: ```bash sudo make install ``` -用户可以在[文件目录结构](https://docs.taosdata.com/reference/directory/)中了解更多在操作系统中生成的目录或文件。 +
-从源代码安装也会为 TDengine 配置服务管理 ,用户也可以选择[从安装包中安装](https://docs.taosdata.com/get-started/package/)。 +## 6.3 Windows系统上安装 -安装成功后,在终端中启动 TDengine 服务: +
-```bash -sudo systemctl start taosd -``` +Windows系统上安装详细步骤 -用户可以使用 TDengine CLI 来连接 TDengine 服务,在终端中,输入: - -```bash -taos -``` - -如果 TDengine CLI 连接服务成功,将会打印出欢迎消息和版本信息。如果失败,则会打印出错误消息。 - -## Windows 系统 - -生成完成后,安装 TDengine: +构建成功后,TDengine可以通过以下命令进行安装: ```cmd nmake install ``` -## macOS 系统 +
-生成完成后,安装 TDengine: +# 7. 快速运行 + +## 7.1 Linux系统上运行 + +
+ +Linux系统上运行详细步骤 + +在Linux系统上安装TDengine完成后,在终端运行如下命令启动服务: ```bash -sudo make install +sudo systemctl start taosd ``` - -用户可以在[文件目录结构](https://docs.taosdata.com/reference/directory/)中了解更多在操作系统中生成的目录或文件。 - -从源代码安装也会为 TDengine 配置服务管理 ,用户也可以选择[从安装包中安装](https://docs.taosdata.com/get-started/package/)。 - -安装成功后,可以在应用程序中双击 TDengine 图标启动服务,或者在终端中启动 TDengine 服务: - -```bash -sudo launchctl start com.tdengine.taosd -``` - -用户可以使用 TDengine CLI 来连接 TDengine 服务,在终端中,输入: +然后用户可以通过如下命令使用TDengine命令行连接TDengine服务: ```bash taos ``` -如果 TDengine CLI 连接服务成功,将会打印出欢迎消息和版本信息。如果失败,则会打印出错误消息。 +如果TDengine 命令行连接服务器成功,系统将打印欢迎信息和版本信息。否则,将显示连接错误信息。 -## 快速运行 - -如果不希望以服务方式运行 TDengine,也可以在终端中直接运行它。也即在生成完成后,执行以下命令(在 Windows 下,生成的可执行文件会带有 .exe 后缀,例如会名为 taosd.exe ): +如果您不想将TDengine作为服务运行,您可以在当前终端中运行它。例如,要在构建完成后快速启动TDengine服务器,在终端中运行以下命令:(我们以Linux为例,Windows上的命令为 `taosd.exe`) ```bash ./build/bin/taosd -c test/cfg ``` -在另一个终端,使用 TDengine CLI 连接服务器: +在另一个终端上,使用TDengine命令行连接服务器: ```bash ./build/bin/taos -c test/cfg ``` -"-c test/cfg"指定系统配置文件所在目录。 +选项 `-c test/cfg` 指定系统配置文件的目录。 -# 体验 TDengine +
-在 TDengine 终端中,用户可以通过 SQL 命令来创建/删除数据库、表等,并进行插入查询操作。 +## 7.2 macOS系统上运行 -```sql -CREATE DATABASE demo; -USE demo; -CREATE TABLE t (ts TIMESTAMP, speed INT); -INSERT INTO t VALUES('2019-07-15 00:00:00', 10); -INSERT INTO t VALUES('2019-07-15 01:00:00', 20); -SELECT * FROM t; - ts | speed | -=================================== - 19-07-15 00:00:00.000| 10| - 19-07-15 01:00:00.000| 20| -Query OK, 2 row(s) in set (0.001700s) +
+ +macOS系统上运行详细步骤 + +在macOS上安装完成后启动服务,双击/applications/TDengine启动程序,或者在终端中执行如下命令: + +```bash +sudo launchctl start com.tdengine.taosd ``` -# 应用开发 +然后在终端中使用如下命令通过TDengine命令行连接TDengine服务器: -## 官方连接器 +```bash +taos +``` -TDengine 提供了丰富的应用程序开发接口,其中包括 C/C++、Java、Python、Go、Node.js、C# 、RESTful 等,便于用户快速开发应用: +如果TDengine命令行连接服务器成功,系统将打印欢迎信息和版本信息。否则,将显示错误信息。 -- [Java](https://docs.taosdata.com/reference/connector/java/) -- [C/C++](https://docs.taosdata.com/reference/connector/cpp/) -- [Python](https://docs.taosdata.com/reference/connector/python/) -- [Go](https://docs.taosdata.com/reference/connector/go/) -- [Node.js](https://docs.taosdata.com/reference/connector/node/) -- [Rust](https://docs.taosdata.com/reference/connector/rust/) -- [C#](https://docs.taosdata.com/reference/connector/csharp/) -- [RESTful API](https://docs.taosdata.com/reference/connector/rest-api/) +
-# 成为社区贡献者 + +## 7.3 Windows系统上运行 + +
+ +Windows系统上运行详细步骤 + +您可以使用以下命令在Windows平台上启动TDengine服务器: + +```cmd +.\build\bin\taosd.exe -c test\cfg +``` + +在另一个终端上,使用TDengine命令行连接服务器: + +```cmd +.\build\bin\taos.exe -c test\cfg +``` + +选项 `-c test/cfg` 指定系统配置文件的目录。 + +
+ +# 8. 测试 + +有关如何在TDengine上运行不同类型的测试,请参考 [TDengine测试](./tests/README-CN.md) + +# 9. 版本发布 + +TDengine发布版本的完整列表,请参考 [版本列表](https://github.com/taosdata/TDengine/releases) + +# 10. 工作流 + +TDengine构建检查工作流可以在参考 [Github Action](https://github.com/taosdata/TDengine/actions/workflows/taosd-ci-build.yml), 更多的工作流正在创建中,将很快可用。 + +# 11. 覆盖率 + +最新的TDengine测试覆盖率报告可参考 [coveralls.io](https://coveralls.io/github/taosdata/TDengine) + +
+ +如何在本地运行测试覆盖率报告? + +在本地创建测试覆盖率报告(HTML格式),请运行以下命令: + +```bash +cd tests +bash setup-lcov.sh -v 1.16 && ./run_local_coverage.sh -b main -c task +# on main branch and run cases in longtimeruning_cases.task +# for more infomation about options please refer to ./run_local_coverage.sh -h +``` +> **注意:** +> 请注意,-b和-i选项将使用-DCOVER=true选项重新编译TDengine,这可能需要花费一些时间。 + +
+ +# 12. 成为社区贡献者 点击 [这里](https://www.taosdata.com/contributor),了解如何成为 TDengine 的贡献者。 - -# 加入技术交流群 - -TDengine 官方社群「物联网大数据群」对外开放,欢迎您加入讨论。搜索微信号 "tdengine",加小 T 为好友,即可入群。 diff --git a/README.md b/README.md index e4814cee67..f540b1cc43 100644 --- a/README.md +++ b/README.md @@ -26,24 +26,33 @@ English | [简体中文](README-CN.md) | [TDengine Cloud](https://cloud.tdengine # Table of Contents -1. [What is TDengine?](#1-what-is-tdengine) -2. [Documentation](#2-documentation) -3. [Building](#3-building) - 1. [Install build tools](#31-install-build-tools) - 1. [Get the source codes](#32-get-the-source-codes) - 1. [Special Note](#33-special-note) - 1. [Build TDengine](#34-build-tdengine) -4. [Installing](#4-installing) - 1. [On Linux platform](#41-on-linux-platform) - 1. [On Windows platform](#42-on-windows-platform) - 1. [On macOS platform](#43-on-macos-platform) - 1. [Quick Run](#44-quick-run) -5. [Try TDengine](#5-try-tdengine) -6. [Developing with TDengine](#6-developing-with-tdengine) -7. [Contribute to TDengine](#7-contribute-to-tdengine) -8. [Join the TDengine Community](#8-join-the-tdengine-community) +1. [Introduction](#1-introduction) +1. [Documentation](#2-documentation) +1. [Prerequisites](#3-prerequisites) + - [3.1 Prerequisites On Linux](#31-on-linux) + - [3.2 Prerequisites On macOS](#32-on-macos) + - [3.3 Prerequisites On Windows](#33-on-windows) + - [3.4 Clone the repo](#34-clone-the-repo) +1. [Building](#4-building) + - [4.1 Build on Linux](#41-build-on-linux) + - [4.2 Build on macOS](#42-build-on-macos) + - [4.3 Build On Windows](#43-build-on-windows) +1. [Packaging](#5-packaging) +1. [Installation](#6-installation) + - [6.1 Install on Linux](#61-install-on-linux) + - [6.2 Install on macOS](#62-install-on-macos) + - [6.3 Install on Windows](#63-install-on-windows) +1. [Running](#7-running) + - [7.1 Run TDengine on Linux](#71-run-tdengine-on-linux) + - [7.2 Run TDengine on macOS](#72-run-tdengine-on-macos) + - [7.3 Run TDengine on Windows](#73-run-tdengine-on-windows) +1. [Testing](#8-testing) +1. [Releasing](#9-releasing) +1. [Workflow](#10-workflow) +1. [Coverage](#11-coverage) +1. [Contributing](#12-contributing) -# 1. What is TDengine? +# 1. Introduction TDengine is an open source, high-performance, cloud native [time-series database](https://tdengine.com/tsdb/) optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. TDengine differentiates itself from other time-series databases with the following advantages: @@ -65,132 +74,91 @@ For a full list of TDengine competitive advantages, please [check here](https:// For user manual, system design and architecture, please refer to [TDengine Documentation](https://docs.tdengine.com) ([TDengine 文档](https://docs.taosdata.com)) -# 3. Building +# 3. Prerequisites -At the moment, TDengine server supports running on Linux/Windows/macOS systems. Any application can also choose the RESTful interface provided by taosAdapter to connect the taosd service . TDengine supports X64/ARM64 CPU, and it will support MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures in the future. Right now we don't support build with cross-compiling environment. +## 3.1 On Linux -You can choose to install through source code, [container](https://docs.tdengine.com/get-started/docker/), [installation package](https://docs.tdengine.com/get-started/package/) or [Kubernetes](https://docs.tdengine.com/deployment/k8s/). This quick guide only applies to installing from source. +
-TDengine provide a few useful tools such as taosBenchmark (was named taosdemo) and taosdump. They were part of TDengine. By default, TDengine compiling does not include taosTools. You can use `cmake .. -DBUILD_TOOLS=true` to make them be compiled with TDengine. +Install required tools on Linux -To build TDengine, use [CMake](https://cmake.org/) 3.13.0 or higher versions in the project directory. - -## 3.1 Install build tools - -### Ubuntu 18.04 and above or Debian +### For Ubuntu 18.04、20.04、22.04 ```bash -sudo apt-get install -y gcc cmake build-essential git libssl-dev libgflags2.2 libgflags-dev +sudo apt-get udpate +sudo apt-get install -y gcc cmake build-essential git libjansson-dev \ + libsnappy-dev liblzma-dev zlib1g-dev pkg-config ``` -#### Install build dependencies for taosTools - -To build the [taosTools](https://github.com/taosdata/taos-tools) on Ubuntu/Debian, the following packages need to be installed. +### For CentOS 8 ```bash -sudo apt install build-essential libjansson-dev libsnappy-dev liblzma-dev libz-dev zlib1g pkg-config -``` - -### CentOS 7.9 - -```bash -sudo yum install epel-release sudo yum update -sudo yum install -y gcc gcc-c++ make cmake3 gflags git openssl-devel -sudo ln -sf /usr/bin/cmake3 /usr/bin/cmake +yum install -y epel-release gcc gcc-c++ make cmake git perl dnf-plugins-core +yum config-manager --set-enabled powertools +yum install -y zlib-static xz-devel snappy-devel jansson-devel pkgconfig libatomic-static libstdc++-static ``` -### CentOS 8/Fedora/Rocky Linux +
+ +## 3.2 On macOS + +
+ +Install required tools on macOS + +Please intall the dependencies with [brew](https://brew.sh/). ```bash -sudo dnf install -y gcc gcc-c++ make cmake epel-release gflags git openssl-devel -``` - -#### Install build dependencies for taosTools on CentOS - -#### CentOS 7.9 - -``` -sudo yum install -y zlib-devel zlib-static xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libatomic-static libstdc++-static openssl-devel -``` - -#### CentOS 8/Fedora/Rocky Linux - -``` -sudo yum install -y epel-release -sudo yum install -y dnf-plugins-core -sudo yum config-manager --set-enabled powertools -sudo yum install -y zlib-devel zlib-static xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libatomic-static libstdc++-static openssl-devel -``` - -Note: Since snappy lacks pkg-config support (refer to [link](https://github.com/google/snappy/pull/86)), it leads a cmake prompt libsnappy not found. But snappy still works well. - -If the PowerTools installation fails, you can try to use: - -``` -sudo yum config-manager --set-enabled powertools -``` - -#### For CentOS + devtoolset - -Besides above dependencies, please run following commands: - -``` -sudo yum install centos-release-scl -sudo yum install devtoolset-9 devtoolset-9-libatomic-devel -scl enable devtoolset-9 -- bash -``` - -### macOS - -``` brew install argp-standalone gflags pkgconfig ``` -### Setup golang environment +
-TDengine includes a few components like taosAdapter developed by Go language. Please refer to golang.org official documentation for golang environment setup. +## 3.3 On Windows -Please use version 1.20+. For the user in China, we recommend using a proxy to accelerate package downloading. +
-``` -go env -w GO111MODULE=on -go env -w GOPROXY=https://goproxy.cn,direct -``` +Install required tools on Windows -The default will not build taosAdapter, but you can use the following command to build taosAdapter as the service for RESTful interface. +Work in Progress. -``` -cmake .. -DBUILD_HTTP=false -``` +
-### Setup rust environment +## 3.4 Clone the repo -TDengine includes a few components developed by Rust language. Please refer to rust-lang.org official documentation for rust environment setup. +
-## 3.2 Get the source codes +Clone the repo -First of all, you may clone the source codes from github: +Clone the repository to the target machine: ```bash git clone https://github.com/taosdata/TDengine.git cd TDengine ``` -You can modify the file ~/.gitconfig to use ssh protocol instead of https for better download speed. You will need to upload ssh public key to GitHub first. Please refer to GitHub official documentation for detail. -``` -[url "git@github.com:"] - insteadOf = https://github.com/ -``` +> **NOTE:** +> TDengine Connectors can be found in following repositories: [JDBC Connector](https://github.com/taosdata/taos-connector-jdbc), [Go Connector](https://github.com/taosdata/driver-go), [Python Connector](https://github.com/taosdata/taos-connector-python), [Node.js Connector](https://github.com/taosdata/taos-connector-node), [C# Connector](https://github.com/taosdata/taos-connector-dotnet), [Rust Connector](https://github.com/taosdata/taos-connector-rust). -## 3.3 Special Note +
-[JDBC Connector](https://github.com/taosdata/taos-connector-jdbc), [Go Connector](https://github.com/taosdata/driver-go),[Python Connector](https://github.com/taosdata/taos-connector-python),[Node.js Connector](https://github.com/taosdata/taos-connector-node),[C# Connector](https://github.com/taosdata/taos-connector-dotnet) ,[Rust Connector](https://github.com/taosdata/taos-connector-rust) and [Grafana plugin](https://github.com/taosdata/grafanaplugin) has been moved to standalone repository. +# 4. Building -## 3.4 Build TDengine +At the moment, TDengine server supports running on Linux/Windows/MacOS systems. Any application can also choose the RESTful interface provided by taosAdapter to connect the taosd service. TDengine supports X64/ARM64 CPU, and it will support MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures in the future. Right now we don't support build with cross-compiling environment. -### On Linux platform +You can choose to install through source code, [container](https://docs.tdengine.com/get-started/deploy-in-docker/), [installation package](https://docs.tdengine.com/get-started/deploy-from-package/) or [Kubernetes](https://docs.tdengine.com/operations-and-maintenance/deploy-your-cluster/#kubernetes-deployment). This quick guide only applies to install from source. + +TDengine provide a few useful tools such as taosBenchmark (was named taosdemo) and taosdump. They were part of TDengine. By default, TDengine compiling does not include taosTools. You can use `cmake .. -DBUILD_TOOLS=true` to make them be compiled with TDengine. + +To build TDengine, use [CMake](https://cmake.org/) 3.13.0 or higher versions in the project directory. + +## 4.1 Build on Linux + +
+ +Detailed steps to build on Linux You can run the bash script `build.sh` to build both TDengine and taosTools including taosBenchmark and taosdump as below: @@ -201,29 +169,46 @@ You can run the bash script `build.sh` to build both TDengine and taosTools incl It equals to execute following commands: ```bash -mkdir debug -cd debug +mkdir debug && cd debug cmake .. -DBUILD_TOOLS=true -DBUILD_CONTRIB=true make ``` You can use Jemalloc as memory allocator instead of glibc: -``` -apt install autoconf +```bash cmake .. -DJEMALLOC_ENABLED=true ``` -TDengine build script can detect the host machine's architecture on X86-64, X86, arm64 platform. -You can also specify CPUTYPE option like aarch64 too if the detection result is not correct: - -aarch64: +TDengine build script can auto-detect the host machine's architecture on x86, x86-64, arm64 platform. +You can also specify architecture manually by CPUTYPE option: ```bash cmake .. -DCPUTYPE=aarch64 && cmake --build . ``` -### On Windows platform +
+ +## 4.2 Build on macOS + +
+ +Detailed steps to build on macOS + +Please install XCode command line tools and cmake. Verified with XCode 11.4+ on Catalina and Big Sur. + +```shell +mkdir debug && cd debug +cmake .. && cmake --build . +``` + +
+ +## 4.3 Build on Windows + +
+ +Detailed steps to build on Windows If you use the Visual Studio 2013, please open a command window by executing "cmd.exe". Please specify "amd64" for 64 bits Windows or specify "x86" for 32 bits Windows when you execute vcvarsall.bat. @@ -254,31 +239,67 @@ mkdir debug && cd debug cmake .. -G "NMake Makefiles" nmake ``` +
-### On macOS platform +# 5. Packaging -Please install XCode command line tools and cmake. Verified with XCode 11.4+ on Catalina and Big Sur. +The TDengine community installer can NOT be created by this repository only, due to some component dependencies. We are still working on this improvement. -```shell -mkdir debug && cd debug -cmake .. && cmake --build . -``` +# 6. Installation -# 4. Installing +## 6.1 Install on Linux -## 4.1 On Linux platform +
-After building successfully, TDengine can be installed by +Detailed steps to install on Linux + +After building successfully, TDengine can be installed by: ```bash sudo make install ``` -Users can find more information about directories installed on the system in the [directory and files](https://docs.tdengine.com/reference/directory/) section. +Installing from source code will also configure service management for TDengine. Users can also choose to [install from packages](https://docs.tdengine.com/get-started/deploy-from-package/) for it. -Installing from source code will also configure service management for TDengine.Users can also choose to [install from packages](https://docs.tdengine.com/get-started/package/) for it. +
-To start the service after installation, in a terminal, use: +## 6.2 Install on macOS + +
+ +Detailed steps to install on macOS + +After building successfully, TDengine can be installed by: + +```bash +sudo make install +``` + +
+ +## 6.3 Install on Windows + +
+ +Detailed steps to install on windows + +After building successfully, TDengine can be installed by: + +```cmd +nmake install +``` + +
+ +# 7. Running + +## 7.1 Run TDengine on Linux + +
+ +Detailed steps to run on Linux + +To start the service after installation on linux, in a terminal, use: ```bash sudo systemctl start taosd @@ -292,27 +313,29 @@ taos If TDengine CLI connects the server successfully, welcome messages and version info are printed. Otherwise, an error message is shown. -## 4.2 On Windows platform - -After building successfully, TDengine can be installed by: - -```cmd -nmake install -``` - -## 4.3 On macOS platform - -After building successfully, TDengine can be installed by: +If you don't want to run TDengine as a service, you can run it in current shell. For example, to quickly start a TDengine server after building, run the command below in terminal: (We take Linux as an example, command on Windows will be `taosd.exe`) ```bash -sudo make install +./build/bin/taosd -c test/cfg ``` -Users can find more information about directories installed on the system in the [directory and files](https://docs.tdengine.com/reference/directory/) section. +In another terminal, use the TDengine CLI to connect the server: -Installing from source code will also configure service management for TDengine.Users can also choose to [install from packages](https://docs.tdengine.com/get-started/package/) for it. +```bash +./build/bin/taos -c test/cfg +``` -To start the service after installation, double-click the /applications/TDengine to start the program, or in a terminal, use: +Option `-c test/cfg` specifies the system configuration file directory. + +
+ +## 7.2 Run TDengine on macOS + +
+ +Detailed steps to run on macOS + +To start the service after installation on macOS, double-click the /applications/TDengine to start the program, or in a terminal, use: ```bash sudo launchctl start com.tdengine.taosd @@ -326,64 +349,63 @@ taos If TDengine CLI connects the server successfully, welcome messages and version info are printed. Otherwise, an error message is shown. -## 4.4 Quick Run +
-If you don't want to run TDengine as a service, you can run it in current shell. For example, to quickly start a TDengine server after building, run the command below in terminal: (We take Linux as an example, command on Windows will be `taosd.exe`) -```bash -./build/bin/taosd -c test/cfg +## 7.3 Run TDengine on Windows + +
+ +Detailed steps to run on windows + +You can start TDengine server on Windows platform with below commands: + +```cmd +.\build\bin\taosd.exe -c test\cfg ``` In another terminal, use the TDengine CLI to connect the server: -```bash -./build/bin/taos -c test/cfg +```cmd +.\build\bin\taos.exe -c test\cfg ``` option "-c test/cfg" specifies the system configuration file directory. -# 5. Try TDengine +
-It is easy to run SQL commands from TDengine CLI which is the same as other SQL databases. +# 8. Testing -```sql -CREATE DATABASE demo; -USE demo; -CREATE TABLE t (ts TIMESTAMP, speed INT); -INSERT INTO t VALUES('2019-07-15 00:00:00', 10); -INSERT INTO t VALUES('2019-07-15 01:00:00', 20); -SELECT * FROM t; - ts | speed | -=================================== - 19-07-15 00:00:00.000| 10| - 19-07-15 01:00:00.000| 20| -Query OK, 2 row(s) in set (0.001700s) +For how to run different types of tests on TDengine, please see [Testing TDengine](./tests/README.md). + +# 9. Releasing + +For the complete list of TDengine Releases, please see [Releases](https://github.com/taosdata/TDengine/releases). + +# 10. Workflow + +TDengine build check workflow can be found in this [Github Action](https://github.com/taosdata/TDengine/actions/workflows/taosd-ci-build.yml). More workflows will be available soon. + +# 11. Coverage + +Latest TDengine test coverage report can be found on [coveralls.io](https://coveralls.io/github/taosdata/TDengine) + +
+ +How to run the coverage report locally? +To create the test coverage report (in HTML format) locally, please run following commands: + +```bash +cd tests +bash setup-lcov.sh -v 1.16 && ./run_local_coverage.sh -b main -c task +# on main branch and run cases in longtimeruning_cases.task +# for more infomation about options please refer to ./run_local_coverage.sh -h ``` +> **NOTE:** +> Please note that the -b and -i options will recompile TDengine with the -DCOVER=true option, which may take a amount of time. -# 6. Developing with TDengine +
-## Official Connectors +# 12. Contributing -TDengine provides abundant developing tools for users to develop on TDengine. Follow the links below to find your desired connectors and relevant documentation. - -- [Java](https://docs.tdengine.com/reference/connectors/java/) -- [C/C++](https://docs.tdengine.com/reference/connectors/cpp/) -- [Python](https://docs.tdengine.com/reference/connectors/python/) -- [Go](https://docs.tdengine.com/reference/connectors/go/) -- [Node.js](https://docs.tdengine.com/reference/connectors/node/) -- [Rust](https://docs.tdengine.com/reference/connectors/rust/) -- [C#](https://docs.tdengine.com/reference/connectors/csharp/) -- [RESTful API](https://docs.tdengine.com/reference/connectors/rest-api/) - -# 7. Contribute to TDengine - -Please follow the [contribution guidelines](CONTRIBUTING.md) to contribute to the project. - -# 8. Join the TDengine Community - -For more information about TDengine, you can follow us on social media and join our Discord server: - -- [Discord](https://discord.com/invite/VZdSuUg4pS) -- [Twitter](https://twitter.com/TDengineDB) -- [LinkedIn](https://www.linkedin.com/company/tdengine/) -- [YouTube](https://www.youtube.com/@tdengine) +Please follow the [contribution guidelines](CONTRIBUTING.md) to contribute to TDengine. diff --git a/cmake/cmake.options b/cmake/cmake.options index e3b5782d85..3e655b1796 100644 --- a/cmake/cmake.options +++ b/cmake/cmake.options @@ -166,6 +166,10 @@ IF(${BUILD_WITH_ANALYSIS}) set(BUILD_WITH_S3 ON) ENDIF() +IF(${TD_LINUX}) + set(BUILD_WITH_ANALYSIS ON) +ENDIF() + IF(${BUILD_S3}) IF(${BUILD_WITH_S3}) @@ -205,13 +209,6 @@ option( off ) - -option( - BUILD_WITH_NURAFT - "If build with NuRaft" - OFF -) - option( BUILD_WITH_UV "If build with libuv" diff --git a/cmake/cmake.version b/cmake/cmake.version index 13fac68e3a..ad78dbbc1e 100644 --- a/cmake/cmake.version +++ b/cmake/cmake.version @@ -2,7 +2,7 @@ IF (DEFINED VERNUMBER) SET(TD_VER_NUMBER ${VERNUMBER}) ELSE () - SET(TD_VER_NUMBER "3.3.5.0.alpha") + SET(TD_VER_NUMBER "3.3.5.2.alpha") ENDIF () IF (DEFINED VERCOMPATIBLE) diff --git a/contrib/CMakeLists.txt b/contrib/CMakeLists.txt index 78eded3928..9c719eb68d 100644 --- a/contrib/CMakeLists.txt +++ b/contrib/CMakeLists.txt @@ -17,7 +17,6 @@ elseif(${BUILD_WITH_COS}) file(MAKE_DIRECTORY $ENV{HOME}/.cos-local.1/) cat("${TD_SUPPORT_DIR}/mxml_CMakeLists.txt.in" ${CONTRIB_TMP_FILE3}) cat("${TD_SUPPORT_DIR}/apr_CMakeLists.txt.in" ${CONTRIB_TMP_FILE3}) - cat("${TD_SUPPORT_DIR}/curl_CMakeLists.txt.in" ${CONTRIB_TMP_FILE3}) endif(${BUILD_WITH_COS}) configure_file(${CONTRIB_TMP_FILE3} "${TD_CONTRIB_DIR}/deps-download/CMakeLists.txt") @@ -148,9 +147,7 @@ endif(${BUILD_WITH_SQLITE}) # s3 if(${BUILD_WITH_S3}) - cat("${TD_SUPPORT_DIR}/ssl_CMakeLists.txt.in" ${CONTRIB_TMP_FILE}) cat("${TD_SUPPORT_DIR}/xml2_CMakeLists.txt.in" ${CONTRIB_TMP_FILE}) - cat("${TD_SUPPORT_DIR}/curl_CMakeLists.txt.in" ${CONTRIB_TMP_FILE}) cat("${TD_SUPPORT_DIR}/libs3_CMakeLists.txt.in" ${CONTRIB_TMP_FILE}) cat("${TD_SUPPORT_DIR}/azure_CMakeLists.txt.in" ${CONTRIB_TMP_FILE}) add_definitions(-DUSE_S3) @@ -183,6 +180,11 @@ if(${BUILD_GEOS}) cat("${TD_SUPPORT_DIR}/geos_CMakeLists.txt.in" ${CONTRIB_TMP_FILE}) endif() +if (${BUILD_WITH_ANALYSIS}) + cat("${TD_SUPPORT_DIR}/ssl_CMakeLists.txt.in" ${CONTRIB_TMP_FILE}) + cat("${TD_SUPPORT_DIR}/curl_CMakeLists.txt.in" ${CONTRIB_TMP_FILE}) +endif() + # if(${BUILD_PCRE2}) cat("${TD_SUPPORT_DIR}/pcre2_CMakeLists.txt.in" ${CONTRIB_TMP_FILE}) diff --git a/contrib/test/CMakeLists.txt b/contrib/test/CMakeLists.txt index 5d613dfed2..318d00b92c 100644 --- a/contrib/test/CMakeLists.txt +++ b/contrib/test/CMakeLists.txt @@ -20,14 +20,6 @@ if(${BUILD_WITH_SQLITE}) add_subdirectory(sqlite) endif(${BUILD_WITH_SQLITE}) -if(${BUILD_WITH_CRAFT}) - add_subdirectory(craft) -endif(${BUILD_WITH_CRAFT}) - -if(${BUILD_WITH_TRAFT}) - # add_subdirectory(traft) -endif(${BUILD_WITH_TRAFT}) - if(${BUILD_S3}) add_subdirectory(azure) endif() diff --git a/docs/en/07-develop/01-connect.md b/docs/en/07-develop/01-connect.md index c14eed311a..b6725ed7a4 100644 --- a/docs/en/07-develop/01-connect.md +++ b/docs/en/07-develop/01-connect.md @@ -109,7 +109,7 @@ If you are using Maven to manage your project, simply add the following dependen com.taosdata.jdbc taos-jdbcdriver - 3.5.1 + 3.5.2 ``` diff --git a/docs/en/10-third-party/01-collection/flink.md b/docs/en/10-third-party/01-collection/flink.md index 12468b4f6c..19a767f1f6 100644 --- a/docs/en/10-third-party/01-collection/flink.md +++ b/docs/en/10-third-party/01-collection/flink.md @@ -26,7 +26,8 @@ Flink Connector supports all platforms that can run Flink 1.19 and above version | Flink Connector Version | Major Changes | TDengine Version| |-------------------------| ------------------------------------ | ---------------- | -| 2.0.0 | 1.Support SQL queries on data in TDengine database.
2. Support CDC subscription to data in TDengine database.
3. Supports reading and writing to TDengine database using Table SQL. | 3.3.5.0 and higher| +| 2.0.1 | Sink supports writing types from Rowdata implementations.| - | +| 2.0.0 | 1.Support SQL queries on data in TDengine database.
2. Support CDC subscription to data in TDengine database.
3. Supports reading and writing to TDengine database using Table SQL. | 3.3.5.1 and higher| | 1.0.0 | Support Sink function to write data from other sources to TDengine in the future.| 3.3.2.0 and higher| ## Exception and error codes @@ -114,7 +115,7 @@ If using Maven to manage a project, simply add the following dependencies in pom com.taosdata.flink flink-connector-tdengine - 2.0.0 + 2.0.1 ``` diff --git a/docs/en/14-reference/05-connector/14-java.md b/docs/en/14-reference/05-connector/14-java.md index c28702440a..43b219bf4e 100644 --- a/docs/en/14-reference/05-connector/14-java.md +++ b/docs/en/14-reference/05-connector/14-java.md @@ -33,6 +33,8 @@ The JDBC driver implementation for TDengine strives to be consistent with relati | taos-jdbcdriver Version | Major Changes | TDengine Version | | ----------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------ | +| 3.5.3 | Support unsigned data types in WebSocket connections. | - | +| 3.5.2 | Fixed WebSocket result set free bug. | - | | 3.5.1 | Fixed the getObject issue in data subscription. | - | | 3.5.0 | 1. Optimized the performance of WebSocket connection parameter binding, supporting parameter binding queries using binary data.
2. Optimized the performance of small queries in WebSocket connection.
3. Added support for setting time zone and app info on WebSocket connection. | 3.3.5.0 and higher | | 3.4.0 | 1. Replaced fastjson library with jackson.
2. WebSocket uses a separate protocol identifier.
3. Optimized background thread usage to avoid user misuse leading to timeouts. | - | @@ -127,24 +129,27 @@ Please refer to the specific error codes: TDengine currently supports timestamp, numeric, character, boolean types, and the corresponding Java type conversions are as follows: -| TDengine DataType | JDBCType | -| ----------------- | ------------------ | -| TIMESTAMP | java.sql.Timestamp | -| INT | java.lang.Integer | -| BIGINT | java.lang.Long | -| FLOAT | java.lang.Float | -| DOUBLE | java.lang.Double | -| SMALLINT | java.lang.Short | -| TINYINT | java.lang.Byte | -| BOOL | java.lang.Boolean | -| BINARY | byte array | -| NCHAR | java.lang.String | -| JSON | java.lang.String | -| VARBINARY | byte[] | -| GEOMETRY | byte[] | +| TDengine DataType | JDBCType | Remark | +| ----------------- | -------------------- | --------------------------------------- | +| TIMESTAMP | java.sql.Timestamp | | +| BOOL | java.lang.Boolean | | +| TINYINT | java.lang.Byte | | +| TINYINT UNSIGNED | java.lang.Short | only supported in WebSocket connections | +| SMALLINT | java.lang.Short | | +| SMALLINT UNSIGNED | java.lang.Integer | only supported in WebSocket connections | +| INT | java.lang.Integer | | +| INT UNSIGNED | java.lang.Long | only supported in WebSocket connections | +| BIGINT | java.lang.Long | | +| BIGINT UNSIGNED | java.math.BigInteger | only supported in WebSocket connections | +| FLOAT | java.lang.Float | | +| DOUBLE | java.lang.Double | | +| BINARY | byte array | | +| NCHAR | java.lang.String | | +| JSON | java.lang.String | only supported in tags | +| VARBINARY | byte[] | | +| GEOMETRY | byte[] | | -**Note**: JSON type is only supported in tags. -Due to historical reasons, the BINARY type in TDengine is not truly binary data and is no longer recommended. Please use VARBINARY type instead. +**Note**: Due to historical reasons, the BINARY type in TDengine is not truly binary data and is no longer recommended. Please use VARBINARY type instead. GEOMETRY type is binary data in little endian byte order, complying with the WKB standard. For more details, please refer to [Data Types](../../sql-manual/data-types/) For the WKB standard, please refer to [Well-Known Binary (WKB)](https://libgeos.org/specifications/wkb/) For the Java connector, you can use the jts library to conveniently create GEOMETRY type objects, serialize them, and write to TDengine. Here is an example [Geometry Example](https://github.com/taosdata/TDengine/blob/main/docs/examples/java/src/main/java/com/taos/example/GeometryDemo.java) diff --git a/docs/en/28-releases/01-tdengine.md b/docs/en/28-releases/01-tdengine.md index 12cf5484d4..9f4246c7a0 100644 --- a/docs/en/28-releases/01-tdengine.md +++ b/docs/en/28-releases/01-tdengine.md @@ -25,6 +25,10 @@ Download links for TDengine 3.x version installation packages are as follows: import Release from "/components/ReleaseV3"; +## 3.3.5.2 + + + ## 3.3.5.0 diff --git a/docs/en/28-releases/03-notes/3.3.5.2.md b/docs/en/28-releases/03-notes/3.3.5.2.md new file mode 100755 index 0000000000..ce0141cccf --- /dev/null +++ b/docs/en/28-releases/03-notes/3.3.5.2.md @@ -0,0 +1,43 @@ +--- +title: TDengine 3.3.5.2 Release Notes +sidebar_label: 3.3.5.2 +description: Version 3.3.5.2 Notes +slug: /release-history/release-notes/3.3.5.2 +--- + +## Features + 1. feat: taosX now support multiple stables with template for MQTT + +## Enhancements + 1. enh: improve taosX error message if database is invalid + 2. enh: use poetry group depencencies and reduce dep when install [#251](https://github.com/taosdata/taos-connector-python/issues/251) + 3. enh: improve backup restore using taosX + 4. enh: during the multi-level storage data migration, if the migration time is too long, it may cause the Vnode to switch leader + 5. enh: adjust the systemctl strategy for managing the taosd process, if three consecutive restarts fail within 60 seconds, the next restart will be delayed until 900 seconds later + +## Fixes + 1. fix: the maxRetryWaitTime parameter is used to control the maximum reconnection timeout time for the client when the cluster is unable to provide services, but it does not take effect when encountering a Sync timeout error + 2. fix: supports immediate subscription to the new tag value after modifying the tag value of the sub-table + 3. fix: the tmq_consumer_poll function for data subscription does not return an error code when the call fails + 4. fix: taosd may crash when more than 100 views are created and the show views command is executed + 5. fix: when using stmt2 to insert data, if not all data columns are bound, the insertion operation will fail + 6. fix: when using stmt2 to insert data, if the database name or table name is enclosed in backticks, the insertion operation will fail + 7. fix: when closing a vnode, if there are ongoing file merge tasks, taosd may crash + 8. fix: frequent execution of the “drop table with tb_uid” statement may lead to a deadlock in taosd + 9. fix: the potential deadlock during the switching of log files + 10. fix: prohibit the creation of databases with the same names as system databases (information_schema, performance_schema) + 11. fix: when the inner query of a nested query come from a super table, the sorting information cannot be pushed up + 12. fix: incorrect error reporting when attempting to write Geometry data types that do not conform to topological specifications through the STMT interface + 13. fix: when using the percentile function and session window in a query statement, if an error occurs, taosd may crash + 14. fix: the issue of being unable to dynamically modify system parameters + 15. fix: random error of tranlict transaction in replication + 16. fix: the same consumer executes the unsubscribe operation and immediately attempts to subscribe to other different topics, the subscription API will return an error + 17. fix: fix CVE-2022-28948 security issue in go connector + 18. fix: when a subquery in a view contains an ORDER BY clause with an alias, and the query function itself also has an alias, querying the view will result in an error + 19. fix: when changing the database from a single replica to a mulit replica, if there are some metadata generated by earlier versions that are no longer used in the new version, the modification operation will fail + 20. fix: column names were not correctly copied when using SELECT * FROM subqueries + 21. fix: when performing max/min function on string type data, the results are inaccurate and taosd will crash + 22. fix: stream computing does not support the use of the HAVING clause, but no error is reported during creation + 23. fix: the version information displayed by taos shell for the server is inaccurate, such as being unable to correctly distinguish between the community edition and the enterprise edition + 24. fix: in certain specific query scenarios, when JOIN and CAST are used together, taosd may crash + diff --git a/docs/en/28-releases/03-notes/index.md b/docs/en/28-releases/03-notes/index.md index e54862e105..5ff7350e6c 100644 --- a/docs/en/28-releases/03-notes/index.md +++ b/docs/en/28-releases/03-notes/index.md @@ -5,6 +5,7 @@ slug: /release-history/release-notes [3.3.5.0](./3-3-5-0/) +[3.3.5.2](./3.3.5.2) [3.3.4.8](./3-3-4-8/) [3.3.4.3](./3-3-4-3/) diff --git a/docs/examples/JDBC/JDBCDemo/pom.xml b/docs/examples/JDBC/JDBCDemo/pom.xml index 78262712e9..7b8a64e2c7 100644 --- a/docs/examples/JDBC/JDBCDemo/pom.xml +++ b/docs/examples/JDBC/JDBCDemo/pom.xml @@ -19,7 +19,7 @@ com.taosdata.jdbc taos-jdbcdriver - 3.5.1 + 3.5.3 org.locationtech.jts diff --git a/docs/examples/JDBC/SpringJdbcTemplate/pom.xml b/docs/examples/JDBC/SpringJdbcTemplate/pom.xml index 7ff4a72f5e..12e1721112 100644 --- a/docs/examples/JDBC/SpringJdbcTemplate/pom.xml +++ b/docs/examples/JDBC/SpringJdbcTemplate/pom.xml @@ -47,7 +47,7 @@ com.taosdata.jdbc taos-jdbcdriver - 3.5.1 + 3.5.3 diff --git a/docs/examples/JDBC/connectionPools/pom.xml b/docs/examples/JDBC/connectionPools/pom.xml index 70be6ed527..f30d7c7f94 100644 --- a/docs/examples/JDBC/connectionPools/pom.xml +++ b/docs/examples/JDBC/connectionPools/pom.xml @@ -18,7 +18,7 @@ com.taosdata.jdbc taos-jdbcdriver - 3.5.1 + 3.5.3 diff --git a/docs/examples/JDBC/consumer-demo/pom.xml b/docs/examples/JDBC/consumer-demo/pom.xml index c9537a93bf..fa1b4b93a3 100644 --- a/docs/examples/JDBC/consumer-demo/pom.xml +++ b/docs/examples/JDBC/consumer-demo/pom.xml @@ -17,7 +17,7 @@ com.taosdata.jdbc taos-jdbcdriver - 3.5.1 + 3.5.3 com.google.guava diff --git a/docs/examples/JDBC/mybatisplus-demo/pom.xml b/docs/examples/JDBC/mybatisplus-demo/pom.xml index effb13cfe8..322842ea3e 100644 --- a/docs/examples/JDBC/mybatisplus-demo/pom.xml +++ b/docs/examples/JDBC/mybatisplus-demo/pom.xml @@ -47,7 +47,7 @@ com.taosdata.jdbc taos-jdbcdriver - 3.5.1 + 3.5.3 diff --git a/docs/examples/JDBC/springbootdemo/pom.xml b/docs/examples/JDBC/springbootdemo/pom.xml index 25b503b0e6..ed8c66544a 100644 --- a/docs/examples/JDBC/springbootdemo/pom.xml +++ b/docs/examples/JDBC/springbootdemo/pom.xml @@ -70,7 +70,7 @@ com.taosdata.jdbc taos-jdbcdriver - 3.5.1 + 3.5.3 diff --git a/docs/examples/JDBC/taosdemo/pom.xml b/docs/examples/JDBC/taosdemo/pom.xml index a80deeff94..0435a8736b 100644 --- a/docs/examples/JDBC/taosdemo/pom.xml +++ b/docs/examples/JDBC/taosdemo/pom.xml @@ -67,7 +67,7 @@ com.taosdata.jdbc taos-jdbcdriver - 3.5.1 + 3.5.3 diff --git a/docs/examples/flink/Main.java b/docs/examples/flink/Main.java index 12d79126cf..50a507d1de 100644 --- a/docs/examples/flink/Main.java +++ b/docs/examples/flink/Main.java @@ -263,7 +263,7 @@ splitSql.setSelect("ts, current, voltage, phase, groupid, location") Class> typeClass = (Class>) (Class) SourceRecords.class; SourceSplitSql sql = new SourceSplitSql("select ts, `current`, voltage, phase, tbname from meters"); TDengineSource> source = new TDengineSource<>(connProps, sql, typeClass); - DataStreamSource> input = env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source"); + DataStreamSource> input = env.fromSource(source, WatermarkStrategy.noWatermarks(), "tdengine-source"); DataStream resultStream = input.map((MapFunction, String>) records -> { StringBuilder sb = new StringBuilder(); Iterator iterator = records.iterator(); @@ -304,7 +304,7 @@ splitSql.setSelect("ts, current, voltage, phase, groupid, location") config.setProperty(TDengineCdcParams.VALUE_DESERIALIZER, "RowData"); config.setProperty(TDengineCdcParams.VALUE_DESERIALIZER_ENCODING, "UTF-8"); TDengineCdcSource tdengineSource = new TDengineCdcSource<>("topic_meters", config, RowData.class); - DataStreamSource input = env.fromSource(tdengineSource, WatermarkStrategy.noWatermarks(), "kafka-source"); + DataStreamSource input = env.fromSource(tdengineSource, WatermarkStrategy.noWatermarks(), "tdengine-source"); DataStream resultStream = input.map((MapFunction) rowData -> { StringBuilder sb = new StringBuilder(); sb.append("tsxx: " + rowData.getTimestamp(0, 0) + @@ -343,7 +343,7 @@ splitSql.setSelect("ts, current, voltage, phase, groupid, location") Class> typeClass = (Class>) (Class) ConsumerRecords.class; TDengineCdcSource> tdengineSource = new TDengineCdcSource<>("topic_meters", config, typeClass); - DataStreamSource> input = env.fromSource(tdengineSource, WatermarkStrategy.noWatermarks(), "kafka-source"); + DataStreamSource> input = env.fromSource(tdengineSource, WatermarkStrategy.noWatermarks(), "tdengine-source"); DataStream resultStream = input.map((MapFunction, String>) records -> { Iterator> iterator = records.iterator(); StringBuilder sb = new StringBuilder(); @@ -388,7 +388,7 @@ splitSql.setSelect("ts, current, voltage, phase, groupid, location") config.setProperty(TDengineCdcParams.VALUE_DESERIALIZER, "com.taosdata.flink.entity.ResultDeserializer"); config.setProperty(TDengineCdcParams.VALUE_DESERIALIZER_ENCODING, "UTF-8"); TDengineCdcSource tdengineSource = new TDengineCdcSource<>("topic_meters", config, ResultBean.class); - DataStreamSource input = env.fromSource(tdengineSource, WatermarkStrategy.noWatermarks(), "kafka-source"); + DataStreamSource input = env.fromSource(tdengineSource, WatermarkStrategy.noWatermarks(), "tdengine-source"); DataStream resultStream = input.map((MapFunction) rowData -> { StringBuilder sb = new StringBuilder(); sb.append("ts: " + rowData.getTs() + diff --git a/docs/examples/java/pom.xml b/docs/examples/java/pom.xml index 63ce3159e6..2f156b5eb7 100644 --- a/docs/examples/java/pom.xml +++ b/docs/examples/java/pom.xml @@ -22,7 +22,7 @@ com.taosdata.jdbc taos-jdbcdriver - 3.5.1 + 3.5.3 diff --git a/docs/examples/java/src/main/java/com/taos/example/WSParameterBindingFullDemo.java b/docs/examples/java/src/main/java/com/taos/example/WSParameterBindingFullDemo.java index 7eaccb3db2..e463ecd760 100644 --- a/docs/examples/java/src/main/java/com/taos/example/WSParameterBindingFullDemo.java +++ b/docs/examples/java/src/main/java/com/taos/example/WSParameterBindingFullDemo.java @@ -2,6 +2,7 @@ package com.taos.example; import com.taosdata.jdbc.ws.TSWSPreparedStatement; +import java.math.BigInteger; import java.sql.*; import java.util.Random; @@ -26,7 +27,12 @@ public class WSParameterBindingFullDemo { "binary_col BINARY(100), " + "nchar_col NCHAR(100), " + "varbinary_col VARBINARY(100), " + - "geometry_col GEOMETRY(100)) " + + "geometry_col GEOMETRY(100)," + + "utinyint_col tinyint unsigned," + + "usmallint_col smallint unsigned," + + "uint_col int unsigned," + + "ubigint_col bigint unsigned" + + ") " + "tags (" + "int_tag INT, " + "double_tag DOUBLE, " + @@ -34,7 +40,12 @@ public class WSParameterBindingFullDemo { "binary_tag BINARY(100), " + "nchar_tag NCHAR(100), " + "varbinary_tag VARBINARY(100), " + - "geometry_tag GEOMETRY(100))" + "geometry_tag GEOMETRY(100)," + + "utinyint_tag tinyint unsigned," + + "usmallint_tag smallint unsigned," + + "uint_tag int unsigned," + + "ubigint_tag bigint unsigned" + + ")" }; private static final int numOfSubTable = 10, numOfRow = 10; @@ -79,7 +90,7 @@ public class WSParameterBindingFullDemo { // set table name pstmt.setTableName("ntb_json_" + i); // set tags - pstmt.setTagJson(1, "{\"device\":\"device_" + i + "\"}"); + pstmt.setTagJson(0, "{\"device\":\"device_" + i + "\"}"); // set columns long current = System.currentTimeMillis(); for (int j = 0; j < numOfRow; j++) { @@ -94,25 +105,29 @@ public class WSParameterBindingFullDemo { } private static void stmtAll(Connection conn) throws SQLException { - String sql = "INSERT INTO ? using stb tags(?,?,?,?,?,?,?) VALUES (?,?,?,?,?,?,?,?)"; + String sql = "INSERT INTO ? using stb tags(?,?,?,?,?,?,?,?,?,?,?) VALUES (?,?,?,?,?,?,?,?,?,?,?,?)"; try (TSWSPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSWSPreparedStatement.class)) { // set table name pstmt.setTableName("ntb"); // set tags - pstmt.setTagInt(1, 1); - pstmt.setTagDouble(2, 1.1); - pstmt.setTagBoolean(3, true); - pstmt.setTagString(4, "binary_value"); - pstmt.setTagNString(5, "nchar_value"); - pstmt.setTagVarbinary(6, new byte[] { (byte) 0x98, (byte) 0xf4, 0x6e }); - pstmt.setTagGeometry(7, new byte[] { + pstmt.setTagInt(0, 1); + pstmt.setTagDouble(1, 1.1); + pstmt.setTagBoolean(2, true); + pstmt.setTagString(3, "binary_value"); + pstmt.setTagNString(4, "nchar_value"); + pstmt.setTagVarbinary(5, new byte[] { (byte) 0x98, (byte) 0xf4, 0x6e }); + pstmt.setTagGeometry(6, new byte[] { 0x01, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x59, 0x40, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x59, 0x40 }); + pstmt.setTagShort(7, (short)255); + pstmt.setTagInt(8, 65535); + pstmt.setTagLong(9, 4294967295L); + pstmt.setTagBigInteger(10, new BigInteger("18446744073709551615")); long current = System.currentTimeMillis(); @@ -129,6 +144,10 @@ public class WSParameterBindingFullDemo { 0x00, 0x00, 0x00, 0x59, 0x40, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x59, 0x40 }); + pstmt.setShort(9, (short)255); + pstmt.setInt(10, 65535); + pstmt.setLong(11, 4294967295L); + pstmt.setObject(12, new BigInteger("18446744073709551615")); pstmt.addBatch(); pstmt.executeBatch(); System.out.println("Successfully inserted rows to example_all_type_stmt.ntb"); diff --git a/docs/zh/07-develop/01-connect/index.md b/docs/zh/07-develop/01-connect/index.md index fa22f750f5..494e93f6ef 100644 --- a/docs/zh/07-develop/01-connect/index.md +++ b/docs/zh/07-develop/01-connect/index.md @@ -89,7 +89,7 @@ TDengine 提供了丰富的应用程序开发接口,为了便于用户快速 com.taosdata.jdbc taos-jdbcdriver - 3.5.1 + 3.5.3 ``` diff --git a/docs/zh/08-operation/04-maintenance.md b/docs/zh/08-operation/04-maintenance.md index 9ef165179d..a82d8c2c17 100644 --- a/docs/zh/08-operation/04-maintenance.md +++ b/docs/zh/08-operation/04-maintenance.md @@ -19,7 +19,8 @@ TDengine 面向多种写入场景,而很多写入场景下,TDengine 的存 ```SQL COMPACT DATABASE db_name [start with 'XXXX'] [end with 'YYYY']; COMPACT [db_name.]VGROUPS IN (vgroup_id1, vgroup_id2, ...) [start with 'XXXX'] [end with 'YYYY']; -SHOW COMPACTS [compact_id]; +SHOW COMPACTS; +SHOW COMPACT compact_id; KILL COMPACT compact_id; ``` diff --git a/docs/zh/08-operation/09-backup.md b/docs/zh/08-operation/09-backup.md index aa4f9f61a0..5b02b4fa55 100644 --- a/docs/zh/08-operation/09-backup.md +++ b/docs/zh/08-operation/09-backup.md @@ -69,7 +69,7 @@ taosExplorer 服务页面中,进入“系统管理 - 备份”页面,在“ 1. 数据库:需要备份的数据库名称。一个备份计划只能备份一个数据库/超级表。 2. 超级表:需要备份的超级表名称。如果不填写,则备份整个数据库。 3. 下次执行时间:首次执行备份任务的日期时间。 -4. 备份周期:备份点之间的时间间隔。注意:备份周期必须大于数据库的 WAL_RETENTION_PERIOD 参数值。 +4. 备份周期:备份点之间的时间间隔。注意:备份周期必须小于数据库的 WAL_RETENTION_PERIOD 参数值。 5. 错误重试次数:对于可通过重试解决的错误,系统会按照此次数进行重试。 6. 错误重试间隔:每次重试之间的时间间隔。 7. 目录:存储备份文件的目录。 @@ -152,4 +152,4 @@ Caused by: ```sql alter database test wal_retention_period 3600; -``` \ No newline at end of file +``` diff --git a/docs/zh/10-third-party/01-collection/12-flink.md b/docs/zh/10-third-party/01-collection/12-flink.md index e085d2fd53..0f8bde5260 100644 --- a/docs/zh/10-third-party/01-collection/12-flink.md +++ b/docs/zh/10-third-party/01-collection/12-flink.md @@ -24,7 +24,8 @@ Flink Connector 支持所有能运行 Flink 1.19 及以上版本的平台。 ## 版本历史 | Flink Connector 版本 | 主要变化 | TDengine 版本 | | ------------------| ------------------------------------ | ---------------- | -| 2.0.0 | 1. 支持 SQL 查询 TDengine 数据库中的数据
2. 支持 CDC 订阅 TDengine 数据库中的数据
3. 支持 Table SQL 方式读取和写入 TDengine 数据库| 3.3.5.0 及以上版本 | +| 2.0.1 | Sink 支持对所有继承自 RowData 并已实现的类型进行数据写入| - | +| 2.0.0 | 1. 支持 SQL 查询 TDengine 数据库中的数据
2. 支持 CDC 订阅 TDengine 数据库中的数据
3. 支持 Table SQL 方式读取和写入 TDengine 数据库| 3.3.5.1 及以上版本 | | 1.0.0 | 支持 Sink 功能,将来着其他数据源的数据写入到 TDengine| 3.3.2.0 及以上版本| ## 异常和错误码 @@ -111,7 +112,7 @@ env.getCheckpointConfig().setCheckpointingMode(CheckpointingMode.AT_LEAST_ONCE); com.taosdata.flink flink-connector-tdengine - 2.0.0 + 2.0.1 ``` diff --git a/docs/zh/10-third-party/05-bi/12-tableau.md b/docs/zh/10-third-party/05-bi/12-tableau.md new file mode 100644 index 0000000000..f14e8c1594 --- /dev/null +++ b/docs/zh/10-third-party/05-bi/12-tableau.md @@ -0,0 +1,35 @@ +--- +sidebar_label: Tableau +title: 与 Tableau 集成 +--- + +Tableau 是一款知名的商业智能工具,它支持多种数据源,可方便地连接、导入和整合数据。并且可以通过直观的操作界面,让用户创建丰富多样的可视化图表,并具备强大的分析和筛选功能,为数据决策提供有力支持。 + +## 前置条件 + +准备以下环境: +- TDengine 3.3.5.4以上版本集群已部署并正常运行(企业及社区版均可) +- taosAdapter 能够正常运行。详细参考 [taosAdapter 使用手册](../../../reference/components/taosadapter) +- Tableau 桌面版安装并运行(如未安装,请下载并安装 Windows 操作系统 32/64 位 [Tableau 桌面版](https://www.tableau.com/products/desktop/download) )。安装 Tableau 桌面版请参考 [官方文档](https://www.tableau.com)。 +- ODBC 驱动安装成功。详细参考[安装 ODBC 驱动](../../../reference/connector/odbc/#安装) +- ODBC 数据源配置成功。详细参考[配置ODBC数据源](../../../reference/connector/odbc/#配置数据源) + +## 加载和分析 TDengine 数据 + +**第 1 步**,在 Windows 系统环境下启动 Tableau,之后在其连接页面中搜索 “ODBC”,并选择 “其他数据库 (ODBC)”。 + +**第 2 步**,点击 DNS 单选框,接着选择已配置好的数据源(MyTDengine),然后点击连接按钮。待连接成功后,删除字符串附加部分的内容,最后点击登录按钮即可。 + +![tableau-odbc](./tableau/tableau-odbc.jpg) + +**第 3 步**,在弹出的工作簿页面中,会显示已连接的数据源。点击数据库的下拉列表,会显示需要进行数据分析的数据库。在此基础上,点击表选项中的查找按钮,即可将该数据库下的所有表显示出来。然后,拖动需要分析的表到右侧区域,即可显示出表结构。 + +![tableau-workbook](./tableau/tableau-table.jpg) + +**第 4 步**,点击下方的"立即更新"按钮,即可将表中的数据展示出来。 + +![tableau-workbook](./tableau/tableau-data.jpg) + +**第 5 步**,点击窗口下方的"工作表",弹出数据分析窗口, 并展示分析表的所有字段,将字段拖动到行列即可展示出图表。 + +![tableau-workbook](./tableau/tableau-analysis.jpg) \ No newline at end of file diff --git a/docs/zh/10-third-party/05-bi/tableau/tableau-analysis.jpg b/docs/zh/10-third-party/05-bi/tableau/tableau-analysis.jpg new file mode 100644 index 0000000000..7408804a54 Binary files /dev/null and b/docs/zh/10-third-party/05-bi/tableau/tableau-analysis.jpg differ diff --git a/docs/zh/10-third-party/05-bi/tableau/tableau-data.jpg b/docs/zh/10-third-party/05-bi/tableau/tableau-data.jpg new file mode 100644 index 0000000000..fe6b8a38e4 Binary files /dev/null and b/docs/zh/10-third-party/05-bi/tableau/tableau-data.jpg differ diff --git a/docs/zh/10-third-party/05-bi/tableau/tableau-odbc.jpg b/docs/zh/10-third-party/05-bi/tableau/tableau-odbc.jpg new file mode 100644 index 0000000000..e02ba8ee53 Binary files /dev/null and b/docs/zh/10-third-party/05-bi/tableau/tableau-odbc.jpg differ diff --git a/docs/zh/10-third-party/05-bi/tableau/tableau-table.jpg b/docs/zh/10-third-party/05-bi/tableau/tableau-table.jpg new file mode 100644 index 0000000000..75dd059bf9 Binary files /dev/null and b/docs/zh/10-third-party/05-bi/tableau/tableau-table.jpg differ diff --git a/docs/zh/14-reference/01-components/01-taosd.md b/docs/zh/14-reference/01-components/01-taosd.md index 0b7189897c..bb0c8b7fb7 100644 --- a/docs/zh/14-reference/01-components/01-taosd.md +++ b/docs/zh/14-reference/01-components/01-taosd.md @@ -306,7 +306,7 @@ charset 的有效值是 UTF-8。 |compressor | |支持动态修改 重启生效 |内部参数,用于有损压缩设置| **补充说明** -1. 在 3.4.0.0 之后,所有配置参数都将被持久化到本地存储,重启数据库服务后,将默认使用持久化的配置参数列表;如果您希望继续使用 config 文件中配置的参数,需设置 forceReadConfig 为 1。 +1. 在 3.3.5.0 之后,所有配置参数都将被持久化到本地存储,重启数据库服务后,将默认使用持久化的配置参数列表;如果您希望继续使用 config 文件中配置的参数,需设置 forceReadConfig 为 1。 2. 在 3.2.0.0 ~ 3.3.0.0(不包含)版本生效,启用该参数后不能回退到升级前的版本 3. TSZ 压缩算法是通过数据预测技术完成的压缩,所以更适合有规律变化的数据 4. TSZ 压缩时间会更长一些,如果您的服务器 CPU 空闲多,存储空间小的情况下适合选用 diff --git a/docs/zh/14-reference/03-taos-sql/02-database.md b/docs/zh/14-reference/03-taos-sql/02-database.md index 9f60b51efd..53d52a3e96 100644 --- a/docs/zh/14-reference/03-taos-sql/02-database.md +++ b/docs/zh/14-reference/03-taos-sql/02-database.md @@ -205,7 +205,7 @@ REDISTRIBUTE VGROUP vgroup_no DNODE dnode_id1 [DNODE dnode_id2] [DNODE dnode_id3 BALANCE VGROUP LEADER ``` -触发集群所有 vgroup 中的 leader 重新选主,对集群各节点进行负载再均衡操作。 +触发集群所有 vgroup 中的 leader 重新选主,对集群各节点进行负载再均衡操作。(企业版功能) ## 查看数据库工作状态 diff --git a/docs/zh/14-reference/05-connector/14-java.mdx b/docs/zh/14-reference/05-connector/14-java.mdx index 7d5096bb66..c76faddd57 100644 --- a/docs/zh/14-reference/05-connector/14-java.mdx +++ b/docs/zh/14-reference/05-connector/14-java.mdx @@ -33,6 +33,8 @@ TDengine 的 JDBC 驱动实现尽可能与关系型数据库驱动保持一致 | taos-jdbcdriver 版本 | 主要变化 | TDengine 版本 | | ------------------| ---------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------- | +| 3.5.3 | 在 WebSocket 连接上支持无符号数据类型 | - | +| 3.5.2 | 解决了 WebSocket 查询结果集释放 bug | - | | 3.5.1 | 解决了数据订阅获取时间戳对象类型问题 | - | | 3.5.0 | 1. 优化了 WebSocket 连接参数绑定性能,支持参数绑定查询使用二进制数据
2. 优化了 WebSocket 连接在小查询上的性能
3. WebSocket 连接上支持设置时区和应用信息 | 3.3.5.0 及更高版本 | | 3.4.0 | 1. 使用 jackson 库替换 fastjson 库
2. WebSocket 采用独立协议标识
3. 优化后台拉取线程使用,避免用户误用导致超时 | - | @@ -127,24 +129,27 @@ JDBC 连接器可能报错的错误码包括 4 种: TDengine 目前支持时间戳、数字、字符、布尔类型,与 Java 对应类型转换如下: -| TDengine DataType | JDBCType | -| ----------------- | ------------------ | -| TIMESTAMP | java.sql.Timestamp | -| INT | java.lang.Integer | -| BIGINT | java.lang.Long | -| FLOAT | java.lang.Float | -| DOUBLE | java.lang.Double | -| SMALLINT | java.lang.Short | -| TINYINT | java.lang.Byte | -| BOOL | java.lang.Boolean | -| BINARY | byte array | -| NCHAR | java.lang.String | -| JSON | java.lang.String | -| VARBINARY | byte[] | -| GEOMETRY | byte[] | +| TDengine DataType | JDBCType | 备注| +| ----------------- | -------------------- |-------------------- | +| TIMESTAMP | java.sql.Timestamp || +| BOOL | java.lang.Boolean || +| TINYINT | java.lang.Byte || +| TINYINT UNSIGNED | java.lang.Short |仅在 WebSocket 连接方式支持| +| SMALLINT | java.lang.Short || +| SMALLINT UNSIGNED | java.lang.Integer |仅在 WebSocket 连接方式支持| +| INT | java.lang.Integer || +| INT UNSIGNED | java.lang.Long |仅在 WebSocket 连接方式支持| +| BIGINT | java.lang.Long || +| BIGINT UNSIGNED | java.math.BigInteger |仅在 WebSocket 连接方式支持| +| FLOAT | java.lang.Float || +| DOUBLE | java.lang.Double || +| BINARY | byte array || +| NCHAR | java.lang.String || +| JSON | java.lang.String |仅在 tag 中支持| +| VARBINARY | byte[] || +| GEOMETRY | byte[] || -**注意**:JSON 类型仅在 tag 中支持。 -由于历史原因,TDengine中的BINARY底层不是真正的二进制数据,已不建议使用。请用VARBINARY类型代替。 +**注意**:由于历史原因,TDengine中的BINARY底层不是真正的二进制数据,已不建议使用。请用VARBINARY类型代替。 GEOMETRY类型是little endian字节序的二进制数据,符合WKB规范。详细信息请参考 [数据类型](../../taos-sql/data-type/#数据类型) WKB规范请参考[Well-Known Binary (WKB)](https://libgeos.org/specifications/wkb/) 对于java连接器,可以使用jts库来方便的创建GEOMETRY类型对象,序列化后写入TDengine,这里有一个样例[Geometry示例](https://github.com/taosdata/TDengine/blob/main/docs/examples/java/src/main/java/com/taos/example/GeometryDemo.java) diff --git a/docs/zh/26-tdinternal/01-arch.md b/docs/zh/26-tdinternal/01-arch.md index 9cc1ef6f02..615eadd508 100644 --- a/docs/zh/26-tdinternal/01-arch.md +++ b/docs/zh/26-tdinternal/01-arch.md @@ -191,7 +191,7 @@ TDengine 存储的数据包括采集的时序数据以及库、表相关的元 在进行海量数据管理时,为了实现水平扩展,通常需要采用数据分片(sharding)和数据分区(partitioning)策略。TDengine 通过 vnode 来实现数据分片,并通过按时间段划分数据文件来实现时序数据的分区。 -vnode 不仅负责处理时序数据的写入、查询和计算任务,还承担着负载均衡、数据恢复以及支持异构环境的重要角色。为了实现这些目标,TDengine 将一个 dnode 根据其计算和存储资源切分为多个 vnode。这些 vnode 的管理过程对应用程序是完全透明的,由TDengine 自动完成。。 +vnode 不仅负责处理时序数据的写入、查询和计算任务,还承担着负载均衡、数据恢复以及支持异构环境的重要角色。为了实现这些目标,TDengine 将一个 dnode 根据其计算和存储资源切分为多个 vnode。这些 vnode 的管理过程对应用程序是完全透明的,由TDengine 自动完成。 对于单个数据采集点,无论其数据量有多大,一个 vnode 都拥有足够的计算资源和存储资源来应对(例如,如果每秒生成一条 16B 的记录,一年产生的原始数据量也不到 0.5GB)。因此,TDengine 将一张表(即一个数据采集点)的所有数据都存储在一个vnode 中,避免将同一数据采集点的数据分散到两个或多个 dnode 上。同时,一个 vnode 可以存储多个数据采集点(表)的数据,最大可容纳的表数目上限为 100 万。设计上,一个 vnode 中的所有表都属于同一个数据库。 @@ -371,4 +371,4 @@ alter dnode 1 "/mnt/disk2/taos 1"; 3. 多级存储目前不支持删除已经挂载的硬盘的功能。 4. 0 级存储至少存在一个 disable_create_new_file 为 0 的挂载点,1 级 和 2 级存储没有该限制。 -::: \ No newline at end of file +::: diff --git a/docs/zh/28-releases/01-tdengine.md b/docs/zh/28-releases/01-tdengine.md index 356777acdc..88c07a89f4 100644 --- a/docs/zh/28-releases/01-tdengine.md +++ b/docs/zh/28-releases/01-tdengine.md @@ -24,6 +24,10 @@ TDengine 3.x 各版本安装包下载链接如下: import Release from "/components/ReleaseV3"; +## 3.3.5.2 + + + ## 3.3.5.0 diff --git a/docs/zh/28-releases/03-notes/3.3.5.2.md b/docs/zh/28-releases/03-notes/3.3.5.2.md new file mode 100755 index 0000000000..dc2734c50b --- /dev/null +++ b/docs/zh/28-releases/03-notes/3.3.5.2.md @@ -0,0 +1,42 @@ +--- +title: 3.3.5.2 版本说明 +sidebar_label: 3.3.5.2 +description: 3.3.5.2 版本说明 +--- + +## 特性 + 1. 特性:taosX MQTT 数据源支持根据模板创建多个超级表 + +## 优化 + 1. 优化:改进 taosX 数据库不可用时的错误信息 + 2. 优化:使用 Poetry 标准管理依赖项并减少 Python 连接器安装依赖项 [#251](https://github.com/taosdata/taos-connector-python/issues/251) + 3. 优化:taosX 增量备份和恢复优化 + 4. 优化:在多级存储数据迁移过程中,如果迁移时间过长,可能会导致 Vnode 切主 + 5. 优化:调整 systemctl 守护 taosd 进程的策略,如果 60 秒内连续三次重启失败,下次重启将推迟至 900 秒后 + +## 修复 + 1. 修复:maxRetryWaitTime 参数用于控制当集群无法提供服务时客户端的最大重连超时时间,但在遇到 Sync timeout 错误时,该参数不生效 + 2. 修复:支持在修改子表的 tag 值后,即时订阅到更新后的 tag 值 + 3. 修复:数据订阅的 tmq_consumer_poll 函数调用失败时没有返回错误码 + 4. 修复:当创建超过 100 个视图并执行 show views 命令时,taosd 可能会发生崩溃 + 5. 修复:当使用 stmt2 写入数据时,如果未绑定所有的数据列,写入操作将会失败 + 6. 修复:当使用 stmt2 写入数据时,如果数据库名或表名使用了反引号,写入操作将会失败 + 7. 修复:关闭 vnode 时如果有正在进行的文件合并任务,taosd 可能会崩溃 + 8. 修复:频繁执行 drop table with `tb_uid` 语句可能导致 taosd 死锁 + 9. 修复:日志文件切换过程中可能出现的死锁问题 + 10. 修复:禁止创建与系统库(information_schema, performance_schema)同名的数据库 + 11. 修复:当嵌套查询的内层查询来源于超级表时,排序信息无法被上推 + 12. 修复:通过 STMT 接口尝试写入不符合拓扑规范的 Geometry 数据类型时误报错误 + 13. 修复:在查询语句中使用 percentile 函数和会话窗口时,如果出现错误,taosd 可能会崩溃 + 14. 修复:无法动态修改系统参数的问题 + 15. 修复:订阅同步偶发 Translict transaction 错误 + 16. 修复:同一消费者在执行取消订阅操作后,立即尝试订阅其他不同的主题时,会返回错误 + 17. 修复:Go 连接器安全修复 CVE-2022-28948 + 18. 修复:当视图中的子查询包含带别名的 ORDER BY 子句,并且查询函数自身也带有别名时,查询该视图会引发错误 + 19. 修复:在将数据库从单副本修改为多副本时,如果存在一些由较早版本生成且在新版本中已不再使用的元数据,会导致修改操作失败 + 20. 修复:在使用 SELECT * FROM 子查询时,列名未能正确复制到外层查询 + 21. 修复:对字符串类型数据执行 max/min 函数时,结果不准确且 taosd 可能会崩溃 + 22. 修复:流式计算不支持使用 HAVING 语句,但在创建时未报告错误 + 23. 修复:taos shell 显示的服务端版本信息不准确,例如无法正确区分社区版和企业版 + 24. 修复:在某些特定的查询场景下,当 JOIN 和 CAST 联合使用时,taosd 可能会崩溃 + diff --git a/docs/zh/28-releases/03-notes/index.md b/docs/zh/28-releases/03-notes/index.md index 27898aa2df..420ab4a54d 100644 --- a/docs/zh/28-releases/03-notes/index.md +++ b/docs/zh/28-releases/03-notes/index.md @@ -4,6 +4,7 @@ sidebar_label: 版本说明 description: 各版本版本说明 --- +[3.3.5.2](./3.3.5.2) [3.3.5.0](./3.3.5.0) [3.3.4.8](./3.3.4.8) [3.3.4.3](./3.3.4.3) diff --git a/include/common/tanalytics.h b/include/common/tanalytics.h index 6ebdb38fa6..344093245b 100644 --- a/include/common/tanalytics.h +++ b/include/common/tanalytics.h @@ -86,7 +86,7 @@ int32_t taosAnalBufWriteDataEnd(SAnalyticBuf *pBuf); int32_t taosAnalBufClose(SAnalyticBuf *pBuf); void taosAnalBufDestroy(SAnalyticBuf *pBuf); -const char *taosAnalAlgoStr(EAnalAlgoType algoType); +const char *taosAnalysisAlgoType(EAnalAlgoType algoType); EAnalAlgoType taosAnalAlgoInt(const char *algoName); const char *taosAnalAlgoUrlStr(EAnalAlgoType algoType); diff --git a/include/common/tglobal.h b/include/common/tglobal.h index 6beb7c8860..4e9a9bd801 100644 --- a/include/common/tglobal.h +++ b/include/common/tglobal.h @@ -34,6 +34,9 @@ extern "C" { #define GLOBAL_CONFIG_FILE_VERSION 1 #define LOCAL_CONFIG_FILE_VERSION 1 +#define RPC_MEMORY_USAGE_RATIO 0.1 +#define QUEUE_MEMORY_USAGE_RATIO 0.6 + typedef enum { DND_CA_SM4 = 1, } EEncryptAlgor; @@ -110,6 +113,7 @@ extern int32_t tsNumOfQnodeFetchThreads; extern int32_t tsNumOfSnodeStreamThreads; extern int32_t tsNumOfSnodeWriteThreads; extern int64_t tsQueueMemoryAllowed; +extern int64_t tsApplyMemoryAllowed; extern int32_t tsRetentionSpeedLimitMB; extern int32_t tsNumOfCompactThreads; diff --git a/include/libs/executor/storageapi.h b/include/libs/executor/storageapi.h index 7a4401827c..3cc2acf30f 100644 --- a/include/libs/executor/storageapi.h +++ b/include/libs/executor/storageapi.h @@ -268,7 +268,7 @@ typedef struct SStoreMeta { const void* (*extractTagVal)(const void* tag, int16_t type, STagVal* tagVal); // todo remove it int32_t (*getTableUidByName)(void* pVnode, char* tbName, uint64_t* uid); - int32_t (*getTableTypeByName)(void* pVnode, char* tbName, ETableType* tbType); + int32_t (*getTableTypeSuidByName)(void* pVnode, char* tbName, ETableType* tbType, uint64_t* suid); int32_t (*getTableNameByUid)(void* pVnode, uint64_t uid, char* tbName); bool (*isTableExisted)(void* pVnode, tb_uid_t uid); diff --git a/include/libs/nodes/querynodes.h b/include/libs/nodes/querynodes.h index 5e4e8b6292..ec3e37b7e5 100644 --- a/include/libs/nodes/querynodes.h +++ b/include/libs/nodes/querynodes.h @@ -313,9 +313,9 @@ typedef struct SOrderByExprNode { } SOrderByExprNode; typedef struct SLimitNode { - ENodeType type; // QUERY_NODE_LIMIT - int64_t limit; - int64_t offset; + ENodeType type; // QUERY_NODE_LIMIT + SValueNode* limit; + SValueNode* offset; } SLimitNode; typedef struct SStateWindowNode { @@ -681,6 +681,7 @@ int32_t nodesValueNodeToVariant(const SValueNode* pNode, SVariant* pVal); int32_t nodesMakeValueNodeFromString(char* literal, SValueNode** ppValNode); int32_t nodesMakeValueNodeFromBool(bool b, SValueNode** ppValNode); int32_t nodesMakeValueNodeFromInt32(int32_t value, SNode** ppNode); +int32_t nodesMakeValueNodeFromInt64(int64_t value, SNode** ppNode); char* nodesGetFillModeString(EFillMode mode); int32_t nodesMergeConds(SNode** pDst, SNodeList** pSrc); diff --git a/include/util/tqueue.h b/include/util/tqueue.h index 5ae642b69f..1d634ce742 100644 --- a/include/util/tqueue.h +++ b/include/util/tqueue.h @@ -55,6 +55,7 @@ typedef struct { typedef enum { DEF_QITEM = 0, RPC_QITEM = 1, + APPLY_QITEM = 2, } EQItype; typedef void (*FItem)(SQueueInfo *pInfo, void *pItem); diff --git a/packaging/smokeTest/test_smoking_selfhost.sh b/packaging/smokeTest/test_smoking_selfhost.sh index a25c5a6d90..6ed0b9c715 100755 --- a/packaging/smokeTest/test_smoking_selfhost.sh +++ b/packaging/smokeTest/test_smoking_selfhost.sh @@ -6,12 +6,6 @@ SUCCESS_FILE="success.txt" FAILED_FILE="failed.txt" REPORT_FILE="report.txt" -# Initialize/clear result files -> "$SUCCESS_FILE" -> "$FAILED_FILE" -> "$LOG_FILE" -> "$REPORT_FILE" - # Switch to the target directory TARGET_DIR="../../tests/system-test/" @@ -24,6 +18,12 @@ else exit 1 fi +# Initialize/clear result files +> "$SUCCESS_FILE" +> "$FAILED_FILE" +> "$LOG_FILE" +> "$REPORT_FILE" + # Define the Python commands to execute commands=( "python3 ./test.py -f 2-query/join.py" @@ -102,4 +102,4 @@ fi echo "Detailed logs can be found in: $(realpath "$LOG_FILE")" echo "Successful commands can be found in: $(realpath "$SUCCESS_FILE")" echo "Failed commands can be found in: $(realpath "$FAILED_FILE")" -echo "Test report can be found in: $(realpath "$REPORT_FILE")" \ No newline at end of file +echo "Test report can be found in: $(realpath "$REPORT_FILE")" diff --git a/source/client/inc/clientStmt.h b/source/client/inc/clientStmt.h index 3540dc5c68..35bfa66f72 100644 --- a/source/client/inc/clientStmt.h +++ b/source/client/inc/clientStmt.h @@ -131,6 +131,8 @@ typedef struct SStmtQueue { SStmtQNode* head; SStmtQNode* tail; uint64_t qRemainNum; + TdThreadMutex mutex; + TdThreadCond waitCond; } SStmtQueue; typedef struct STscStmt { diff --git a/source/client/src/clientMain.c b/source/client/src/clientMain.c index 83aff351dd..190a724151 100644 --- a/source/client/src/clientMain.c +++ b/source/client/src/clientMain.c @@ -253,7 +253,7 @@ void taos_cleanup(void) { taosCloseRef(id); nodesDestroyAllocatorSet(); - // cleanupAppInfo(); + cleanupAppInfo(); rpcCleanup(); tscDebug("rpc cleanup"); diff --git a/source/client/src/clientStmt.c b/source/client/src/clientStmt.c index 4b993ccc1e..4f912ec077 100644 --- a/source/client/src/clientStmt.c +++ b/source/client/src/clientStmt.c @@ -39,31 +39,39 @@ static FORCE_INLINE int32_t stmtAllocQNodeFromBuf(STableBufInfo* pTblBuf, void** } bool stmtDequeue(STscStmt* pStmt, SStmtQNode** param) { - while (0 == atomic_load_64(&pStmt->queue.qRemainNum)) { - taosUsleep(1); - return false; + (void)taosThreadMutexLock(&pStmt->queue.mutex); + while (0 == atomic_load_64((int64_t*)&pStmt->queue.qRemainNum)) { + (void)taosThreadCondWait(&pStmt->queue.waitCond, &pStmt->queue.mutex); + if (atomic_load_8((int8_t*)&pStmt->queue.stopQueue)) { + (void)taosThreadMutexUnlock(&pStmt->queue.mutex); + return false; + } } - SStmtQNode* orig = pStmt->queue.head; - SStmtQNode* node = pStmt->queue.head->next; pStmt->queue.head = pStmt->queue.head->next; - - // taosMemoryFreeClear(orig); - *param = node; - (void)atomic_sub_fetch_64(&pStmt->queue.qRemainNum, 1); + (void)atomic_sub_fetch_64((int64_t*)&pStmt->queue.qRemainNum, 1); + (void)taosThreadMutexUnlock(&pStmt->queue.mutex); + + + *param = node; return true; } void stmtEnqueue(STscStmt* pStmt, SStmtQNode* param) { + (void)taosThreadMutexLock(&pStmt->queue.mutex); + pStmt->queue.tail->next = param; pStmt->queue.tail = param; pStmt->stat.bindDataNum++; (void)atomic_add_fetch_64(&pStmt->queue.qRemainNum, 1); + (void)taosThreadCondSignal(&(pStmt->queue.waitCond)); + + (void)taosThreadMutexUnlock(&pStmt->queue.mutex); } static int32_t stmtCreateRequest(STscStmt* pStmt) { @@ -415,9 +423,11 @@ void stmtResetQueueTableBuf(STableBufInfo* pTblBuf, SStmtQueue* pQueue) { pTblBuf->buffIdx = 1; pTblBuf->buffOffset = sizeof(*pQueue->head); + (void)taosThreadMutexLock(&pQueue->mutex); pQueue->head = pQueue->tail = pTblBuf->pCurBuff; pQueue->qRemainNum = 0; pQueue->head->next = NULL; + (void)taosThreadMutexUnlock(&pQueue->mutex); } int32_t stmtCleanExecInfo(STscStmt* pStmt, bool keepTable, bool deepClean) { @@ -809,6 +819,8 @@ int32_t stmtStartBindThread(STscStmt* pStmt) { } int32_t stmtInitQueue(STscStmt* pStmt) { + (void)taosThreadCondInit(&pStmt->queue.waitCond, NULL); + (void)taosThreadMutexInit(&pStmt->queue.mutex, NULL); STMT_ERR_RET(stmtAllocQNodeFromBuf(&pStmt->sql.siInfo.tbBuf, (void**)&pStmt->queue.head)); pStmt->queue.tail = pStmt->queue.head; @@ -1619,11 +1631,18 @@ int stmtClose(TAOS_STMT* stmt) { pStmt->queue.stopQueue = true; + (void)taosThreadMutexLock(&pStmt->queue.mutex); + (void)taosThreadCondSignal(&(pStmt->queue.waitCond)); + (void)taosThreadMutexUnlock(&pStmt->queue.mutex); + if (pStmt->bindThreadInUse) { (void)taosThreadJoin(pStmt->bindThread, NULL); pStmt->bindThreadInUse = false; } + (void)taosThreadCondDestroy(&pStmt->queue.waitCond); + (void)taosThreadMutexDestroy(&pStmt->queue.mutex); + STMT_DLOG("stmt %p closed, stbInterlaceMode: %d, statInfo: ctgGetTbMetaNum=>%" PRId64 ", getCacheTbInfo=>%" PRId64 ", parseSqlNum=>%" PRId64 ", pStmt->stat.bindDataNum=>%" PRId64 ", settbnameAPI:%u, bindAPI:%u, addbatchAPI:%u, execAPI:%u" @@ -1757,7 +1776,9 @@ _return: } int stmtGetParamNum(TAOS_STMT* stmt, int* nums) { + int code = 0; STscStmt* pStmt = (STscStmt*)stmt; + int32_t preCode = pStmt->errCode; STMT_DLOG_E("start to get param num"); @@ -1765,7 +1786,7 @@ int stmtGetParamNum(TAOS_STMT* stmt, int* nums) { return pStmt->errCode; } - STMT_ERR_RET(stmtSwitchStatus(pStmt, STMT_FETCH_FIELDS)); + STMT_ERRI_JRET(stmtSwitchStatus(pStmt, STMT_FETCH_FIELDS)); if (pStmt->bInfo.needParse && pStmt->sql.runTimes && pStmt->sql.type > 0 && STMT_TYPE_MULTI_INSERT != pStmt->sql.type) { @@ -1777,23 +1798,29 @@ int stmtGetParamNum(TAOS_STMT* stmt, int* nums) { pStmt->exec.pRequest = NULL; } - STMT_ERR_RET(stmtCreateRequest(pStmt)); + STMT_ERRI_JRET(stmtCreateRequest(pStmt)); if (pStmt->bInfo.needParse) { - STMT_ERR_RET(stmtParseSql(pStmt)); + STMT_ERRI_JRET(stmtParseSql(pStmt)); } if (STMT_TYPE_QUERY == pStmt->sql.type) { *nums = taosArrayGetSize(pStmt->sql.pQuery->pPlaceholderValues); } else { - STMT_ERR_RET(stmtFetchColFields(stmt, nums, NULL)); + STMT_ERRI_JRET(stmtFetchColFields(stmt, nums, NULL)); } - return TSDB_CODE_SUCCESS; +_return: + + pStmt->errCode = preCode; + + return code; } int stmtGetParam(TAOS_STMT* stmt, int idx, int* type, int* bytes) { + int code = 0; STscStmt* pStmt = (STscStmt*)stmt; + int32_t preCode = pStmt->errCode; STMT_DLOG_E("start to get param"); @@ -1802,10 +1829,10 @@ int stmtGetParam(TAOS_STMT* stmt, int idx, int* type, int* bytes) { } if (STMT_TYPE_QUERY == pStmt->sql.type) { - STMT_RET(TSDB_CODE_TSC_STMT_API_ERROR); + STMT_ERRI_JRET(TSDB_CODE_TSC_STMT_API_ERROR); } - STMT_ERR_RET(stmtSwitchStatus(pStmt, STMT_FETCH_FIELDS)); + STMT_ERRI_JRET(stmtSwitchStatus(pStmt, STMT_FETCH_FIELDS)); if (pStmt->bInfo.needParse && pStmt->sql.runTimes && pStmt->sql.type > 0 && STMT_TYPE_MULTI_INSERT != pStmt->sql.type) { @@ -1817,27 +1844,29 @@ int stmtGetParam(TAOS_STMT* stmt, int idx, int* type, int* bytes) { pStmt->exec.pRequest = NULL; } - STMT_ERR_RET(stmtCreateRequest(pStmt)); + STMT_ERRI_JRET(stmtCreateRequest(pStmt)); if (pStmt->bInfo.needParse) { - STMT_ERR_RET(stmtParseSql(pStmt)); + STMT_ERRI_JRET(stmtParseSql(pStmt)); } int32_t nums = 0; TAOS_FIELD_E* pField = NULL; - STMT_ERR_RET(stmtFetchColFields(stmt, &nums, &pField)); + STMT_ERRI_JRET(stmtFetchColFields(stmt, &nums, &pField)); if (idx >= nums) { tscError("idx %d is too big", idx); - taosMemoryFree(pField); - STMT_ERR_RET(TSDB_CODE_INVALID_PARA); + STMT_ERRI_JRET(TSDB_CODE_INVALID_PARA); } *type = pField[idx].type; *bytes = pField[idx].bytes; - taosMemoryFree(pField); +_return: - return TSDB_CODE_SUCCESS; + taosMemoryFree(pField); + pStmt->errCode = preCode; + + return code; } TAOS_RES* stmtUseResult(TAOS_STMT* stmt) { diff --git a/source/client/src/clientStmt2.c b/source/client/src/clientStmt2.c index 8edd60e4b5..8e517eb5f2 100644 --- a/source/client/src/clientStmt2.c +++ b/source/client/src/clientStmt2.c @@ -39,31 +39,35 @@ static FORCE_INLINE int32_t stmtAllocQNodeFromBuf(STableBufInfo* pTblBuf, void** } static bool stmtDequeue(STscStmt2* pStmt, SStmtQNode** param) { + (void)taosThreadMutexLock(&pStmt->queue.mutex); while (0 == atomic_load_64((int64_t*)&pStmt->queue.qRemainNum)) { - taosUsleep(1); - return false; + (void)taosThreadCondWait(&pStmt->queue.waitCond, &pStmt->queue.mutex); + if (atomic_load_8((int8_t*)&pStmt->queue.stopQueue)) { + (void)taosThreadMutexUnlock(&pStmt->queue.mutex); + return false; + } } - SStmtQNode* orig = pStmt->queue.head; - SStmtQNode* node = pStmt->queue.head->next; pStmt->queue.head = pStmt->queue.head->next; - - // taosMemoryFreeClear(orig); - *param = node; (void)atomic_sub_fetch_64((int64_t*)&pStmt->queue.qRemainNum, 1); + (void)taosThreadMutexUnlock(&pStmt->queue.mutex); return true; } static void stmtEnqueue(STscStmt2* pStmt, SStmtQNode* param) { + (void)taosThreadMutexLock(&pStmt->queue.mutex); + pStmt->queue.tail->next = param; pStmt->queue.tail = param; - pStmt->stat.bindDataNum++; (void)atomic_add_fetch_64((int64_t*)&pStmt->queue.qRemainNum, 1); + (void)taosThreadCondSignal(&(pStmt->queue.waitCond)); + + (void)taosThreadMutexUnlock(&pStmt->queue.mutex); } static int32_t stmtCreateRequest(STscStmt2* pStmt) { @@ -339,9 +343,11 @@ static void stmtResetQueueTableBuf(STableBufInfo* pTblBuf, SStmtQueue* pQueue) { pTblBuf->buffIdx = 1; pTblBuf->buffOffset = sizeof(*pQueue->head); + (void)taosThreadMutexLock(&pQueue->mutex); pQueue->head = pQueue->tail = pTblBuf->pCurBuff; pQueue->qRemainNum = 0; pQueue->head->next = NULL; + (void)taosThreadMutexUnlock(&pQueue->mutex); } static int32_t stmtCleanExecInfo(STscStmt2* pStmt, bool keepTable, bool deepClean) { @@ -735,6 +741,8 @@ static int32_t stmtStartBindThread(STscStmt2* pStmt) { } static int32_t stmtInitQueue(STscStmt2* pStmt) { + (void)taosThreadCondInit(&pStmt->queue.waitCond, NULL); + (void)taosThreadMutexInit(&pStmt->queue.mutex, NULL); STMT_ERR_RET(stmtAllocQNodeFromBuf(&pStmt->sql.siInfo.tbBuf, (void**)&pStmt->queue.head)); pStmt->queue.tail = pStmt->queue.head; @@ -1066,13 +1074,16 @@ static int stmtFetchColFields2(STscStmt2* pStmt, int32_t* fieldNum, TAOS_FIELD_E } static int stmtFetchStbColFields2(STscStmt2* pStmt, int32_t* fieldNum, TAOS_FIELD_ALL** fields) { + int32_t code = 0; + int32_t preCode = pStmt->errCode; + if (pStmt->errCode != TSDB_CODE_SUCCESS) { return pStmt->errCode; } if (STMT_TYPE_QUERY == pStmt->sql.type) { tscError("invalid operation to get query column fileds"); - STMT_ERR_RET(TSDB_CODE_TSC_STMT_API_ERROR); + STMT_ERRI_JRET(TSDB_CODE_TSC_STMT_API_ERROR); } STableDataCxt** pDataBlock = NULL; @@ -1084,21 +1095,25 @@ static int stmtFetchStbColFields2(STscStmt2* pStmt, int32_t* fieldNum, TAOS_FIEL (STableDataCxt**)taosHashGet(pStmt->exec.pBlockHash, pStmt->bInfo.tbFName, strlen(pStmt->bInfo.tbFName)); if (NULL == pDataBlock) { tscError("table %s not found in exec blockHash", pStmt->bInfo.tbFName); - STMT_ERR_RET(TSDB_CODE_APP_ERROR); + STMT_ERRI_JRET(TSDB_CODE_APP_ERROR); } } - STMT_ERR_RET(qBuildStmtStbColFields(*pDataBlock, pStmt->bInfo.boundTags, pStmt->bInfo.preCtbname, fieldNum, fields)); + STMT_ERRI_JRET(qBuildStmtStbColFields(*pDataBlock, pStmt->bInfo.boundTags, pStmt->bInfo.preCtbname, fieldNum, fields)); if (pStmt->bInfo.tbType == TSDB_SUPER_TABLE) { pStmt->bInfo.needParse = true; qDestroyStmtDataBlock(*pDataBlock); if (taosHashRemove(pStmt->exec.pBlockHash, pStmt->bInfo.tbFName, strlen(pStmt->bInfo.tbFName)) != 0) { tscError("get fileds %s remove exec blockHash fail", pStmt->bInfo.tbFName); - STMT_ERR_RET(TSDB_CODE_APP_ERROR); + STMT_ERRI_JRET(TSDB_CODE_APP_ERROR); } } - return TSDB_CODE_SUCCESS; +_return: + + pStmt->errCode = preCode; + + return code; } /* SArray* stmtGetFreeCol(STscStmt2* pStmt, int32_t* idx) { @@ -1748,11 +1763,18 @@ int stmtClose2(TAOS_STMT2* stmt) { pStmt->queue.stopQueue = true; + (void)taosThreadMutexLock(&pStmt->queue.mutex); + (void)taosThreadCondSignal(&(pStmt->queue.waitCond)); + (void)taosThreadMutexUnlock(&pStmt->queue.mutex); + if (pStmt->bindThreadInUse) { (void)taosThreadJoin(pStmt->bindThread, NULL); pStmt->bindThreadInUse = false; } + (void)taosThreadCondDestroy(&pStmt->queue.waitCond); + (void)taosThreadMutexDestroy(&pStmt->queue.mutex); + if (pStmt->options.asyncExecFn && !pStmt->semWaited) { if (tsem_wait(&pStmt->asyncQuerySem) != 0) { tscError("failed to wait asyncQuerySem"); @@ -1824,7 +1846,7 @@ int stmtParseColFields2(TAOS_STMT2* stmt) { if (pStmt->exec.pRequest && STMT_TYPE_QUERY == pStmt->sql.type && pStmt->sql.runTimes) { taos_free_result(pStmt->exec.pRequest); pStmt->exec.pRequest = NULL; - STMT_ERR_RET(stmtCreateRequest(pStmt)); + STMT_ERRI_JRET(stmtCreateRequest(pStmt)); } STMT_ERRI_JRET(stmtCreateRequest(pStmt)); @@ -1850,7 +1872,9 @@ int stmtGetStbColFields2(TAOS_STMT2* stmt, int* nums, TAOS_FIELD_ALL** fields) { } int stmtGetParamNum2(TAOS_STMT2* stmt, int* nums) { + int32_t code = 0; STscStmt2* pStmt = (STscStmt2*)stmt; + int32_t preCode = pStmt->errCode; STMT_DLOG_E("start to get param num"); @@ -1858,7 +1882,7 @@ int stmtGetParamNum2(TAOS_STMT2* stmt, int* nums) { return pStmt->errCode; } - STMT_ERR_RET(stmtSwitchStatus(pStmt, STMT_FETCH_FIELDS)); + STMT_ERRI_JRET(stmtSwitchStatus(pStmt, STMT_FETCH_FIELDS)); if (pStmt->bInfo.needParse && pStmt->sql.runTimes && pStmt->sql.type > 0 && STMT_TYPE_MULTI_INSERT != pStmt->sql.type) { @@ -1870,19 +1894,23 @@ int stmtGetParamNum2(TAOS_STMT2* stmt, int* nums) { pStmt->exec.pRequest = NULL; } - STMT_ERR_RET(stmtCreateRequest(pStmt)); + STMT_ERRI_JRET(stmtCreateRequest(pStmt)); if (pStmt->bInfo.needParse) { - STMT_ERR_RET(stmtParseSql(pStmt)); + STMT_ERRI_JRET(stmtParseSql(pStmt)); } if (STMT_TYPE_QUERY == pStmt->sql.type) { *nums = taosArrayGetSize(pStmt->sql.pQuery->pPlaceholderValues); } else { - STMT_ERR_RET(stmtFetchColFields2(stmt, nums, NULL)); + STMT_ERRI_JRET(stmtFetchColFields2(stmt, nums, NULL)); } - return TSDB_CODE_SUCCESS; +_return: + + pStmt->errCode = preCode; + + return code; } TAOS_RES* stmtUseResult2(TAOS_STMT2* stmt) { diff --git a/source/client/src/clientTmq.c b/source/client/src/clientTmq.c index 78bceb4ef8..fcd88ed8d7 100644 --- a/source/client/src/clientTmq.c +++ b/source/client/src/clientTmq.c @@ -74,8 +74,9 @@ enum { }; typedef struct { - tmr_h timer; - int32_t rsetId; + tmr_h timer; + int32_t rsetId; + TdThreadMutex lock; } SMqMgmt; struct tmq_list_t { @@ -1600,16 +1601,25 @@ void tmqFreeImpl(void* handle) { static void tmqMgmtInit(void) { tmqInitRes = 0; + + if (taosThreadMutexInit(&tmqMgmt.lock, NULL) != 0){ + goto END; + } + tmqMgmt.timer = taosTmrInit(1000, 100, 360000, "TMQ"); if (tmqMgmt.timer == NULL) { - tmqInitRes = terrno; + goto END; } tmqMgmt.rsetId = taosOpenRef(10000, tmqFreeImpl); if (tmqMgmt.rsetId < 0) { - tmqInitRes = terrno; + goto END; } + + return; +END: + tmqInitRes = terrno; } void tmqMgmtClose(void) { @@ -1618,9 +1628,27 @@ void tmqMgmtClose(void) { tmqMgmt.timer = NULL; } - if (tmqMgmt.rsetId >= 0) { + if (tmqMgmt.rsetId > 0) { + (void) taosThreadMutexLock(&tmqMgmt.lock); + tmq_t *tmq = taosIterateRef(tmqMgmt.rsetId, 0); + int64_t refId = 0; + + while (tmq) { + refId = tmq->refId; + if (refId == 0) { + break; + } + atomic_store_8(&tmq->status, TMQ_CONSUMER_STATUS__CLOSED); + + if (taosRemoveRef(tmqMgmt.rsetId, tmq->refId) != 0) { + qWarn("taosRemoveRef tmq refId:%" PRId64 " failed, error:%s", refId, tstrerror(terrno)); + } + + tmq = taosIterateRef(tmqMgmt.rsetId, refId); + } taosCloseRef(tmqMgmt.rsetId); tmqMgmt.rsetId = -1; + (void)taosThreadMutexUnlock(&tmqMgmt.lock); } } @@ -2617,8 +2645,13 @@ int32_t tmq_unsubscribe(tmq_t* tmq) { int32_t tmq_consumer_close(tmq_t* tmq) { if (tmq == NULL) return TSDB_CODE_INVALID_PARA; + int32_t code = 0; + code = taosThreadMutexLock(&tmqMgmt.lock); + if (atomic_load_8(&tmq->status) == TMQ_CONSUMER_STATUS__CLOSED){ + goto end; + } tqInfoC("consumer:0x%" PRIx64 " start to close consumer, status:%d", tmq->consumerId, tmq->status); - int32_t code = tmq_unsubscribe(tmq); + code = tmq_unsubscribe(tmq); if (code == 0) { atomic_store_8(&tmq->status, TMQ_CONSUMER_STATUS__CLOSED); code = taosRemoveRef(tmqMgmt.rsetId, tmq->refId); @@ -2626,6 +2659,9 @@ int32_t tmq_consumer_close(tmq_t* tmq) { tqErrorC("tmq close failed to remove ref:%" PRId64 ", code:%d", tmq->refId, code); } } + + end: + code = taosThreadMutexUnlock(&tmqMgmt.lock); return code; } diff --git a/source/client/test/stmt2Test.cpp b/source/client/test/stmt2Test.cpp index 52c89e97ab..0f721d6a6b 100644 --- a/source/client/test/stmt2Test.cpp +++ b/source/client/test/stmt2Test.cpp @@ -237,6 +237,63 @@ int main(int argc, char** argv) { return RUN_ALL_TESTS(); } +TEST(stmt2Case, stmt2_test_limit) { + TAOS* taos = taos_connect("localhost", "root", "taosdata", "", 0); + ASSERT_NE(taos, nullptr); + do_query(taos, "drop database if exists stmt2_testdb_7"); + do_query(taos, "create database IF NOT EXISTS stmt2_testdb_7"); + do_query(taos, "create stable stmt2_testdb_7.stb (ts timestamp, b binary(10)) tags(t1 int, t2 binary(10))"); + do_query(taos, + "insert into stmt2_testdb_7.tb2 using stmt2_testdb_7.stb tags(2,'xyz') values(1591060628000, " + "'abc'),(1591060628001,'def'),(1591060628004, 'hij')"); + do_query(taos, "use stmt2_testdb_7"); + + + TAOS_STMT2_OPTION option = {0, true, true, NULL, NULL}; + + + TAOS_STMT2* stmt = taos_stmt2_init(taos, &option); + ASSERT_NE(stmt, nullptr); + + + const char* sql = "select * from stmt2_testdb_7.tb2 where ts > ? and ts < ? limit ?"; + int code = taos_stmt2_prepare(stmt, sql, 0); + checkError(stmt, code); + + + int t64_len[1] = {sizeof(int64_t)}; + int b_len[1] = {3}; + int x = 2; + int x_len = sizeof(int); + int64_t ts[2] = {1591060627000, 1591060628005}; + TAOS_STMT2_BIND params[3] = {{TSDB_DATA_TYPE_TIMESTAMP, &ts[0], t64_len, NULL, 1}, + {TSDB_DATA_TYPE_TIMESTAMP, &ts[1], t64_len, NULL, 1}, + {TSDB_DATA_TYPE_INT, &x, &x_len, NULL, 1}}; + TAOS_STMT2_BIND* paramv = ¶ms[0]; + TAOS_STMT2_BINDV bindv = {1, NULL, NULL, ¶mv}; + code = taos_stmt2_bind_param(stmt, &bindv, -1); + checkError(stmt, code); + + + taos_stmt2_exec(stmt, NULL); + checkError(stmt, code); + + + TAOS_RES* pRes = taos_stmt2_result(stmt); + ASSERT_NE(pRes, nullptr); + + + int getRecordCounts = 0; + while ((taos_fetch_row(pRes))) { + getRecordCounts++; + } + ASSERT_EQ(getRecordCounts, 2); + taos_stmt2_close(stmt); + do_query(taos, "drop database if exists stmt2_testdb_7"); + taos_close(taos); +} + + TEST(stmt2Case, insert_stb_get_fields_Test) { TAOS* taos = taos_connect("localhost", "root", "taosdata", NULL, 0); ASSERT_NE(taos, nullptr); @@ -735,7 +792,7 @@ TEST(stmt2Case, insert_ntb_get_fields_Test) { { const char* sql = "insert into stmt2_testdb_4.? values(?,?)"; printf("case 2 : %s\n", sql); - getFieldsError(taos, sql, TSDB_CODE_PAR_TABLE_NOT_EXIST); + getFieldsError(taos, sql, TSDB_CODE_TSC_STMT_TBNAME_ERROR); } // case 3 : wrong para nums @@ -1496,8 +1553,51 @@ TEST(stmt2Case, geometry) { checkError(stmt, code); ASSERT_EQ(affected_rows, 3); + // test wrong wkb input + unsigned char wkb2[3][61] = { + { + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0xF0, 0x3F, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40, + }, + {0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xf0, 0x3f}, + {0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40}}; + params[1].buffer = wkb2; + code = taos_stmt2_bind_param(stmt, &bindv, -1); + ASSERT_EQ(code, TSDB_CODE_FUNC_FUNTION_PARA_VALUE); + taos_stmt2_close(stmt); do_query(taos, "DROP DATABASE IF EXISTS stmt2_testdb_13"); taos_close(taos); } + +// TD-33582 +TEST(stmt2Case, errcode) { + TAOS* taos = taos_connect("localhost", "root", "taosdata", NULL, 0); + ASSERT_NE(taos, nullptr); + do_query(taos, "DROP DATABASE IF EXISTS stmt2_testdb_14"); + do_query(taos, "CREATE DATABASE IF NOT EXISTS stmt2_testdb_14"); + do_query(taos, "use stmt2_testdb_14"); + + TAOS_STMT2_OPTION option = {0}; + TAOS_STMT2* stmt = taos_stmt2_init(taos, &option); + ASSERT_NE(stmt, nullptr); + char* sql = "select * from t where ts > ? and name = ? foo = ?"; + int code = taos_stmt2_prepare(stmt, sql, 0); + checkError(stmt, code); + + int fieldNum = 0; + TAOS_FIELD_ALL* pFields = NULL; + code = taos_stmt2_get_fields(stmt, &fieldNum, &pFields); + ASSERT_EQ(code, TSDB_CODE_PAR_SYNTAX_ERROR); + + // get fail dont influence the next stmt prepare + sql = "nsert into ? (ts, name) values (?, ?)"; + code = taos_stmt_prepare(stmt, sql, 0); + checkError(stmt, code); +} #pragma GCC diagnostic pop diff --git a/source/client/test/stmtTest.cpp b/source/client/test/stmtTest.cpp index 77130e41db..9a716d7f19 100644 --- a/source/client/test/stmtTest.cpp +++ b/source/client/test/stmtTest.cpp @@ -212,15 +212,6 @@ void insertData(TAOS *taos, TAOS_STMT_OPTIONS *option, const char *sql, int CTB_ void getFields(TAOS *taos, const char *sql, int expectedALLFieldNum, TAOS_FIELD_E *expectedTagFields, int expectedTagFieldNum, TAOS_FIELD_E *expectedColFields, int expectedColFieldNum) { - // create database and table - do_query(taos, "DROP DATABASE IF EXISTS stmt_testdb_3"); - do_query(taos, "CREATE DATABASE IF NOT EXISTS stmt_testdb_3"); - do_query(taos, "USE stmt_testdb_3"); - do_query( - taos, - "CREATE STABLE IF NOT EXISTS stmt_testdb_3.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS " - "(groupId INT, location BINARY(24))"); - TAOS_STMT *stmt = taos_stmt_init(taos); ASSERT_NE(stmt, nullptr); int code = taos_stmt_prepare(stmt, sql, 0); @@ -267,6 +258,24 @@ void getFields(TAOS *taos, const char *sql, int expectedALLFieldNum, TAOS_FIELD_ taos_stmt_close(stmt); } +void getFieldsError(TAOS *taos, const char *sql, int expectedErrocode) { + TAOS_STMT *stmt = taos_stmt_init(taos); + ASSERT_NE(stmt, nullptr); + STscStmt *pStmt = (STscStmt *)stmt; + + int code = taos_stmt_prepare(stmt, sql, 0); + + int fieldNum = 0; + TAOS_FIELD_E *pFields = NULL; + code = taos_stmt_get_tag_fields(stmt, &fieldNum, &pFields); + ASSERT_EQ(code, expectedErrocode); + ASSERT_EQ(pStmt->errCode, TSDB_CODE_SUCCESS); + + taosMemoryFree(pFields); + + taos_stmt_close(stmt); +} + } // namespace int main(int argc, char **argv) { @@ -298,6 +307,15 @@ TEST(stmtCase, get_fields) { TAOS *taos = taos_connect("localhost", "root", "taosdata", NULL, 0); ASSERT_NE(taos, nullptr); + // create database and table + do_query(taos, "DROP DATABASE IF EXISTS stmt_testdb_3"); + do_query(taos, "CREATE DATABASE IF NOT EXISTS stmt_testdb_3"); + do_query(taos, "USE stmt_testdb_3"); + do_query( + taos, + "CREATE STABLE IF NOT EXISTS stmt_testdb_3.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS " + "(groupId INT, location BINARY(24))"); + // nomarl test { TAOS_FIELD_E tagFields[2] = {{"groupid", TSDB_DATA_TYPE_INT, 0, 0, sizeof(int)}, {"location", TSDB_DATA_TYPE_BINARY, 0, 0, 24}}; @@ -307,6 +325,12 @@ TEST(stmtCase, get_fields) { {"phase", TSDB_DATA_TYPE_FLOAT, 0, 0, sizeof(float)}}; getFields(taos, "INSERT INTO ? USING meters TAGS(?,?) VALUES (?,?,?,?)", 7, &tagFields[0], 2, &colFields[0], 4); } + // error case [TD-33570] + { getFieldsError(taos, "INSERT INTO ? VALUES (?,?,?,?)", TSDB_CODE_TSC_STMT_TBNAME_ERROR); } + + { getFieldsError(taos, "INSERT INTO ? USING meters TAGS(?,?) VALUES (?,?,?,?)", TSDB_CODE_TSC_STMT_TBNAME_ERROR); } + + do_query(taos, "DROP DATABASE IF EXISTS stmt_testdb_3"); taos_close(taos); } @@ -520,9 +544,6 @@ TEST(stmtCase, geometry) { int code = taos_stmt_prepare(stmt, stmt_sql, 0); checkError(stmt, code); - // code = taos_stmt_set_tbname(stmt, "tb1"); - // checkError(stmt, code); - code = taos_stmt_bind_param_batch(stmt, params); checkError(stmt, code); @@ -532,11 +553,58 @@ TEST(stmtCase, geometry) { code = taos_stmt_execute(stmt); checkError(stmt, code); + //test wrong wkb input + unsigned char wkb2[3][61] = { + { + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0xF0, 0x3F, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40, + }, + {0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xf0, 0x3f}, + {0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40}}; + params[1].buffer = wkb2; + code = taos_stmt_bind_param_batch(stmt, params); + ASSERT_EQ(code, TSDB_CODE_FUNC_FUNTION_PARA_VALUE); + taosMemoryFree(t64_len); taosMemoryFree(wkb_len); taos_stmt_close(stmt); do_query(taos, "DROP DATABASE IF EXISTS stmt_testdb_5"); taos_close(taos); } +//TD-33582 +TEST(stmtCase, errcode) { + TAOS *taos = taos_connect("localhost", "root", "taosdata", NULL, 0); + ASSERT_NE(taos, nullptr); + do_query(taos, "DROP DATABASE IF EXISTS stmt_testdb_4"); + do_query(taos, "CREATE DATABASE IF NOT EXISTS stmt_testdb_4"); + do_query(taos, "USE stmt_testdb_4"); + do_query( + taos, + "CREATE STABLE IF NOT EXISTS stmt_testdb_4.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS " + "(groupId INT, location BINARY(24))"); + + TAOS_STMT *stmt = taos_stmt_init(taos); + ASSERT_NE(stmt, nullptr); + char *sql = "select * from t where ts > ? and name = ? foo = ?"; + int code = taos_stmt_prepare(stmt, sql, 0); + checkError(stmt, code); + + int fieldNum = 0; + TAOS_FIELD_E *pFields = NULL; + code = stmtGetParamNum(stmt, &fieldNum); + ASSERT_EQ(code, TSDB_CODE_PAR_SYNTAX_ERROR); + + code = taos_stmt_get_tag_fields(stmt, &fieldNum, &pFields); + ASSERT_EQ(code, TSDB_CODE_PAR_SYNTAX_ERROR); + // get fail dont influence the next stmt prepare + sql = "nsert into ? (ts, name) values (?, ?)"; + code = taos_stmt_prepare(stmt, sql, 0); + checkError(stmt, code); +} #pragma GCC diagnostic pop \ No newline at end of file diff --git a/source/common/CMakeLists.txt b/source/common/CMakeLists.txt index 39380a0644..ac8fea90e5 100644 --- a/source/common/CMakeLists.txt +++ b/source/common/CMakeLists.txt @@ -15,6 +15,10 @@ if(DEFINED GRANT_CFG_INCLUDE_DIR) add_definitions(-DGRANTS_CFG) endif() +if(${BUILD_WITH_ANALYSIS}) + add_definitions(-DUSE_ANALYTICS) +endif() + if(TD_GRANT) ADD_DEFINITIONS(-D_GRANT) endif() @@ -34,7 +38,9 @@ endif() target_include_directories( common + PUBLIC "$ENV{HOME}/.cos-local.2/include" PUBLIC "${TD_SOURCE_DIR}/include/common" + PRIVATE "${CMAKE_CURRENT_SOURCE_DIR}/inc" PRIVATE "${GRANT_CFG_INCLUDE_DIR}" ) @@ -45,30 +51,39 @@ if(${TD_WINDOWS}) PRIVATE "${TD_SOURCE_DIR}/contrib/pthread" PRIVATE "${TD_SOURCE_DIR}/contrib/msvcregex" ) -endif() -target_link_libraries( - common - PUBLIC os - PUBLIC util - INTERFACE api -) + target_link_libraries( + common + + PUBLIC os + PUBLIC util + INTERFACE api + ) + +else() + find_library(CURL_LIBRARY curl $ENV{HOME}/.cos-local.2/lib NO_DEFAULT_PATH) + find_library(SSL_LIBRARY ssl $ENV{HOME}/.cos-local.2/lib64 $ENV{HOME}/.cos-local.2/lib NO_DEFAULT_PATH) + find_library(CRYPTO_LIBRARY crypto $ENV{HOME}/.cos-local.2/lib64 $ENV{HOME}/.cos-local.2/lib NO_DEFAULT_PATH) + + target_link_libraries( + common + + PUBLIC ${CURL_LIBRARY} + PUBLIC ${SSL_LIBRARY} + PUBLIC ${CRYPTO_LIBRARY} + + PUBLIC os + PUBLIC util + INTERFACE api + ) +endif() if(${BUILD_S3}) if(${BUILD_WITH_S3}) - target_include_directories( - common - - PUBLIC "$ENV{HOME}/.cos-local.2/include" - ) - set(CMAKE_FIND_LIBRARY_SUFFIXES ".a") set(CMAKE_PREFIX_PATH $ENV{HOME}/.cos-local.2) find_library(S3_LIBRARY s3) - find_library(CURL_LIBRARY curl $ENV{HOME}/.cos-local.2/lib NO_DEFAULT_PATH) find_library(XML2_LIBRARY xml2) - find_library(SSL_LIBRARY ssl $ENV{HOME}/.cos-local.2/lib64 $ENV{HOME}/.cos-local.2/lib NO_DEFAULT_PATH) - find_library(CRYPTO_LIBRARY crypto $ENV{HOME}/.cos-local.2/lib64 $ENV{HOME}/.cos-local.2/lib NO_DEFAULT_PATH) target_link_libraries( common diff --git a/source/util/src/tanalytics.c b/source/common/src/tanalytics.c similarity index 96% rename from source/util/src/tanalytics.c rename to source/common/src/tanalytics.c index bf2cb4fd07..0ed67eed0a 100644 --- a/source/util/src/tanalytics.c +++ b/source/common/src/tanalytics.c @@ -36,7 +36,7 @@ typedef struct { static SAlgoMgmt tsAlgos = {0}; static int32_t taosAnalBufGetCont(SAnalyticBuf *pBuf, char **ppCont, int64_t *pContLen); -const char *taosAnalAlgoStr(EAnalAlgoType type) { +const char *taosAnalysisAlgoType(EAnalAlgoType type) { switch (type) { case ANAL_ALGO_TYPE_ANOMALY_DETECT: return "anomaly-detection"; @@ -60,7 +60,7 @@ const char *taosAnalAlgoUrlStr(EAnalAlgoType type) { EAnalAlgoType taosAnalAlgoInt(const char *name) { for (EAnalAlgoType i = 0; i < ANAL_ALGO_TYPE_END; ++i) { - if (strcasecmp(name, taosAnalAlgoStr(i)) == 0) { + if (strcasecmp(name, taosAnalysisAlgoType(i)) == 0) { return i; } } @@ -188,12 +188,12 @@ int32_t taosAnalGetAlgoUrl(const char *algoName, EAnalAlgoType type, char *url, SAnalyticsUrl *pUrl = taosHashAcquire(tsAlgos.hash, name, nameLen); if (pUrl != NULL) { tstrncpy(url, pUrl->url, urlLen); - uDebug("algo:%s, type:%s, url:%s", algoName, taosAnalAlgoStr(type), url); + uDebug("algo:%s, type:%s, url:%s", algoName, taosAnalysisAlgoType(type), url); } else { url[0] = 0; terrno = TSDB_CODE_ANA_ALGO_NOT_FOUND; code = terrno; - uError("algo:%s, type:%s, url not found", algoName, taosAnalAlgoStr(type)); + uError("algo:%s, type:%s, url not found", algoName, taosAnalysisAlgoType(type)); } if (taosThreadMutexUnlock(&tsAlgos.lock) != 0) { @@ -216,20 +216,20 @@ static size_t taosCurlWriteData(char *pCont, size_t contLen, size_t nmemb, void return 0; } - int64_t newDataSize = (int64_t) contLen * nmemb; + int64_t newDataSize = (int64_t)contLen * nmemb; int64_t size = pRsp->dataLen + newDataSize; if (pRsp->data == NULL) { pRsp->data = taosMemoryMalloc(size + 1); if (pRsp->data == NULL) { - uError("failed to prepare recv buffer for post rsp, len:%d, code:%s", (int32_t) size + 1, tstrerror(terrno)); - return 0; // return the recv length, if failed, return 0 + uError("failed to prepare recv buffer for post rsp, len:%d, code:%s", (int32_t)size + 1, tstrerror(terrno)); + return 0; // return the recv length, if failed, return 0 } } else { - char* p = taosMemoryRealloc(pRsp->data, size + 1); + char *p = taosMemoryRealloc(pRsp->data, size + 1); if (p == NULL) { - uError("failed to prepare recv buffer for post rsp, len:%d, code:%s", (int32_t) size + 1, tstrerror(terrno)); - return 0; // return the recv length, if failed, return 0 + uError("failed to prepare recv buffer for post rsp, len:%d, code:%s", (int32_t)size + 1, tstrerror(terrno)); + return 0; // return the recv length, if failed, return 0 } pRsp->data = p; @@ -473,7 +473,7 @@ static int32_t taosAnalJsonBufWriteColMeta(SAnalyticBuf *pBuf, int32_t colIndex, } int32_t bufLen = tsnprintf(buf, sizeof(buf), " [\"%s\", \"%s\", %d]%s\n", colName, tDataTypes[colType].name, - tDataTypes[colType].bytes, last ? "" : ","); + tDataTypes[colType].bytes, last ? "" : ","); if (taosWriteFile(pBuf->filePtr, buf, bufLen) != bufLen) { return terrno; } @@ -779,7 +779,9 @@ int32_t tsosAnalBufOpen(SAnalyticBuf *pBuf, int32_t numOfCols) { return 0; } int32_t taosAnalBufWriteOptStr(SAnalyticBuf *pBuf, const char *optName, const char *optVal) { return 0; } int32_t taosAnalBufWriteOptInt(SAnalyticBuf *pBuf, const char *optName, int64_t optVal) { return 0; } int32_t taosAnalBufWriteOptFloat(SAnalyticBuf *pBuf, const char *optName, float optVal) { return 0; } -int32_t taosAnalBufWriteColMeta(SAnalyticBuf *pBuf, int32_t colIndex, int32_t colType, const char *colName) { return 0; } +int32_t taosAnalBufWriteColMeta(SAnalyticBuf *pBuf, int32_t colIndex, int32_t colType, const char *colName) { + return 0; +} int32_t taosAnalBufWriteDataBegin(SAnalyticBuf *pBuf) { return 0; } int32_t taosAnalBufWriteColBegin(SAnalyticBuf *pBuf, int32_t colIndex) { return 0; } int32_t taosAnalBufWriteColData(SAnalyticBuf *pBuf, int32_t colIndex, int32_t colType, void *colValue) { return 0; } @@ -788,7 +790,7 @@ int32_t taosAnalBufWriteDataEnd(SAnalyticBuf *pBuf) { return 0; } int32_t taosAnalBufClose(SAnalyticBuf *pBuf) { return 0; } void taosAnalBufDestroy(SAnalyticBuf *pBuf) {} -const char *taosAnalAlgoStr(EAnalAlgoType algoType) { return 0; } +const char *taosAnalysisAlgoType(EAnalAlgoType algoType) { return 0; } EAnalAlgoType taosAnalAlgoInt(const char *algoName) { return 0; } const char *taosAnalAlgoUrlStr(EAnalAlgoType algoType) { return 0; } diff --git a/source/common/src/tglobal.c b/source/common/src/tglobal.c index 1e4d3f8c3c..84e0ffb313 100644 --- a/source/common/src/tglobal.c +++ b/source/common/src/tglobal.c @@ -14,12 +14,12 @@ */ #define _DEFAULT_SOURCE -#include "tglobal.h" #include "cJSON.h" #include "defines.h" #include "os.h" #include "osString.h" #include "tconfig.h" +#include "tglobal.h" #include "tgrant.h" #include "tjson.h" #include "tlog.h" @@ -500,7 +500,9 @@ int32_t taosSetS3Cfg(SConfig *pCfg) { TAOS_RETURN(TSDB_CODE_SUCCESS); } -struct SConfig *taosGetCfg() { return tsCfg; } +struct SConfig *taosGetCfg() { + return tsCfg; +} static int32_t taosLoadCfg(SConfig *pCfg, const char **envCmd, const char *inputCfgDir, const char *envFile, char *apolloUrl) { @@ -818,8 +820,13 @@ static int32_t taosAddServerCfg(SConfig *pCfg) { tsNumOfSnodeWriteThreads = tsNumOfCores / 4; tsNumOfSnodeWriteThreads = TRANGE(tsNumOfSnodeWriteThreads, 2, 4); - tsQueueMemoryAllowed = tsTotalMemoryKB * 1024 * 0.1; - tsQueueMemoryAllowed = TRANGE(tsQueueMemoryAllowed, TSDB_MAX_MSG_SIZE * 10LL, TSDB_MAX_MSG_SIZE * 10000LL); + tsQueueMemoryAllowed = tsTotalMemoryKB * 1024 * RPC_MEMORY_USAGE_RATIO * QUEUE_MEMORY_USAGE_RATIO; + tsQueueMemoryAllowed = TRANGE(tsQueueMemoryAllowed, TSDB_MAX_MSG_SIZE * QUEUE_MEMORY_USAGE_RATIO * 10LL, + TSDB_MAX_MSG_SIZE * QUEUE_MEMORY_USAGE_RATIO * 10000LL); + + tsApplyMemoryAllowed = tsTotalMemoryKB * 1024 * RPC_MEMORY_USAGE_RATIO * (1 - QUEUE_MEMORY_USAGE_RATIO); + tsApplyMemoryAllowed = TRANGE(tsApplyMemoryAllowed, TSDB_MAX_MSG_SIZE * (1 - QUEUE_MEMORY_USAGE_RATIO) * 10LL, + TSDB_MAX_MSG_SIZE * (1 - QUEUE_MEMORY_USAGE_RATIO) * 10000LL); tsLogBufferMemoryAllowed = tsTotalMemoryKB * 1024 * 0.1; tsLogBufferMemoryAllowed = TRANGE(tsLogBufferMemoryAllowed, TSDB_MAX_MSG_SIZE * 10LL, TSDB_MAX_MSG_SIZE * 10000LL); @@ -857,7 +864,7 @@ static int32_t taosAddServerCfg(SConfig *pCfg) { TAOS_CHECK_RETURN(cfgAddInt32(pCfg, "numOfSnodeSharedThreads", tsNumOfSnodeStreamThreads, 2, 1024, CFG_SCOPE_SERVER, CFG_DYN_SERVER_LAZY,CFG_CATEGORY_LOCAL)); TAOS_CHECK_RETURN(cfgAddInt32(pCfg, "numOfSnodeUniqueThreads", tsNumOfSnodeWriteThreads, 2, 1024, CFG_SCOPE_SERVER, CFG_DYN_SERVER_LAZY,CFG_CATEGORY_LOCAL)); - TAOS_CHECK_RETURN(cfgAddInt64(pCfg, "rpcQueueMemoryAllowed", tsQueueMemoryAllowed, TSDB_MAX_MSG_SIZE * 10L, INT64_MAX, CFG_SCOPE_SERVER, CFG_DYN_SERVER,CFG_CATEGORY_GLOBAL)); + TAOS_CHECK_RETURN(cfgAddInt64(pCfg, "rpcQueueMemoryAllowed", tsQueueMemoryAllowed, TSDB_MAX_MSG_SIZE * RPC_MEMORY_USAGE_RATIO * 10L, INT64_MAX, CFG_SCOPE_SERVER, CFG_DYN_SERVER,CFG_CATEGORY_GLOBAL)); TAOS_CHECK_RETURN(cfgAddInt32(pCfg, "syncElectInterval", tsElectInterval, 10, 1000 * 60 * 24 * 2, CFG_SCOPE_SERVER, CFG_DYN_NONE,CFG_CATEGORY_GLOBAL)); TAOS_CHECK_RETURN(cfgAddInt32(pCfg, "syncHeartbeatInterval", tsHeartbeatInterval, 10, 1000 * 60 * 24 * 2, CFG_SCOPE_SERVER, CFG_DYN_NONE,CFG_CATEGORY_GLOBAL)); TAOS_CHECK_RETURN(cfgAddInt32(pCfg, "syncHeartbeatTimeout", tsHeartbeatTimeout, 10, 1000 * 60 * 24 * 2, CFG_SCOPE_SERVER, CFG_DYN_NONE,CFG_CATEGORY_GLOBAL)); @@ -1572,7 +1579,8 @@ static int32_t taosSetServerCfg(SConfig *pCfg) { tsNumOfSnodeWriteThreads = pItem->i32; TAOS_CHECK_GET_CFG_ITEM(pCfg, pItem, "rpcQueueMemoryAllowed"); - tsQueueMemoryAllowed = pItem->i64; + tsQueueMemoryAllowed = cfgGetItem(pCfg, "rpcQueueMemoryAllowed")->i64 * QUEUE_MEMORY_USAGE_RATIO; + tsApplyMemoryAllowed = cfgGetItem(pCfg, "rpcQueueMemoryAllowed")->i64 * (1 - QUEUE_MEMORY_USAGE_RATIO); TAOS_CHECK_GET_CFG_ITEM(pCfg, pItem, "simdEnable"); tsSIMDEnable = (bool)pItem->bval; @@ -2395,6 +2403,12 @@ static int32_t taosCfgDynamicOptionsForServer(SConfig *pCfg, const char *name) { code = TSDB_CODE_SUCCESS; goto _exit; } + if (strcasecmp("rpcQueueMemoryAllowed", name) == 0) { + tsQueueMemoryAllowed = cfgGetItem(pCfg, "rpcQueueMemoryAllowed")->i64 * QUEUE_MEMORY_USAGE_RATIO; + tsApplyMemoryAllowed = cfgGetItem(pCfg, "rpcQueueMemoryAllowed")->i64 * (1 - QUEUE_MEMORY_USAGE_RATIO); + code = TSDB_CODE_SUCCESS; + goto _exit; + } if (strcasecmp(name, "numOfCompactThreads") == 0) { #ifdef TD_ENTERPRISE @@ -2500,7 +2514,6 @@ static int32_t taosCfgDynamicOptionsForServer(SConfig *pCfg, const char *name) { {"experimental", &tsExperimental}, {"numOfRpcSessions", &tsNumOfRpcSessions}, - {"rpcQueueMemoryAllowed", &tsQueueMemoryAllowed}, {"shellActivityTimer", &tsShellActivityTimer}, {"readTimeout", &tsReadTimeout}, {"safetyCheckLevel", &tsSafetyCheckLevel}, diff --git a/source/dnode/mgmt/mgmt_dnode/src/dmHandle.c b/source/dnode/mgmt/mgmt_dnode/src/dmHandle.c index 9ed4ee83c4..637713d2f9 100644 --- a/source/dnode/mgmt/mgmt_dnode/src/dmHandle.c +++ b/source/dnode/mgmt/mgmt_dnode/src/dmHandle.c @@ -181,7 +181,7 @@ void dmSendStatusReq(SDnodeMgmt *pMgmt) { req.numOfSupportVnodes = tsNumOfSupportVnodes; req.numOfDiskCfg = tsDiskCfgNum; req.memTotal = tsTotalMemoryKB * 1024; - req.memAvail = req.memTotal - tsQueueMemoryAllowed - 16 * 1024 * 1024; + req.memAvail = req.memTotal - tsQueueMemoryAllowed - tsApplyMemoryAllowed - 16 * 1024 * 1024; tstrncpy(req.dnodeEp, tsLocalEp, TSDB_EP_LEN); tstrncpy(req.machineId, pMgmt->pData->machineId, TSDB_MACHINE_ID_LEN + 1); diff --git a/source/dnode/mgmt/mgmt_vnode/src/vmWorker.c b/source/dnode/mgmt/mgmt_vnode/src/vmWorker.c index c22adec9b4..334c213945 100644 --- a/source/dnode/mgmt/mgmt_vnode/src/vmWorker.c +++ b/source/dnode/mgmt/mgmt_vnode/src/vmWorker.c @@ -323,7 +323,7 @@ int32_t vmPutRpcMsgToQueue(SVnodeMgmt *pMgmt, EQueueType qtype, SRpcMsg *pRpc) { return TSDB_CODE_INVALID_MSG; } - EQItype itype = APPLY_QUEUE == qtype ? DEF_QITEM : RPC_QITEM; + EQItype itype = APPLY_QUEUE == qtype ? APPLY_QITEM : RPC_QITEM; SRpcMsg *pMsg; code = taosAllocateQitem(sizeof(SRpcMsg), itype, pRpc->contLen, (void **)&pMsg); if (code) { diff --git a/source/dnode/mgmt/test/sut/src/sut.cpp b/source/dnode/mgmt/test/sut/src/sut.cpp index 13c8c73f44..a1fdebb636 100644 --- a/source/dnode/mgmt/test/sut/src/sut.cpp +++ b/source/dnode/mgmt/test/sut/src/sut.cpp @@ -36,7 +36,8 @@ void Testbase::InitLog(const char* path) { tstrncpy(tsLogDir, path, PATH_MAX); taosGetSystemInfo(); - tsQueueMemoryAllowed = tsTotalMemoryKB * 0.1; + tsQueueMemoryAllowed = tsTotalMemoryKB * 0.06; + tsApplyMemoryAllowed = tsTotalMemoryKB * 0.04; if (taosInitLog("taosdlog", 1, false) != 0) { printf("failed to init log file\n"); } diff --git a/source/dnode/mnode/impl/CMakeLists.txt b/source/dnode/mnode/impl/CMakeLists.txt index ad36d8c8ae..e4e184eee0 100644 --- a/source/dnode/mnode/impl/CMakeLists.txt +++ b/source/dnode/mnode/impl/CMakeLists.txt @@ -16,10 +16,10 @@ if(TD_ENTERPRISE) ELSEIF(${BUILD_WITH_COS}) add_definitions(-DUSE_COS) endif() +endif() - if(${BUILD_WITH_ANALYSIS}) - add_definitions(-DUSE_ANALYTICS) - endif() +if(${BUILD_WITH_ANALYSIS}) + add_definitions(-DUSE_ANALYTICS) endif() add_library(mnode STATIC ${MNODE_SRC}) diff --git a/source/dnode/mnode/impl/inc/mndAnode.h b/source/dnode/mnode/impl/inc/mndAnode.h index 63e8f9090e..d92d35a0fc 100644 --- a/source/dnode/mnode/impl/inc/mndAnode.h +++ b/source/dnode/mnode/impl/inc/mndAnode.h @@ -22,8 +22,9 @@ extern "C" { #endif -int32_t mndInitAnode(SMnode *pMnode); -void mndCleanupAnode(SMnode *pMnode); +int32_t mndInitAnode(SMnode* pMnode); +void mndCleanupAnode(SMnode* pMnode); +void mndRetrieveAlgoList(SMnode* pMnode, SArray* pFc, SArray* pAd); #ifdef __cplusplus } diff --git a/source/dnode/mnode/impl/src/mndAnode.c b/source/dnode/mnode/impl/src/mndAnode.c index c64208600a..9f5635a74b 100644 --- a/source/dnode/mnode/impl/src/mndAnode.c +++ b/source/dnode/mnode/impl/src/mndAnode.c @@ -637,6 +637,32 @@ static void mndCancelGetNextAnode(SMnode *pMnode, void *pIter) { sdbCancelFetchByType(pSdb, pIter, SDB_ANODE); } +// todo handle multiple anode case, remove the duplicate algos +void mndRetrieveAlgoList(SMnode* pMnode, SArray* pFc, SArray* pAd) { + SSdb *pSdb = pMnode->pSdb; + void *pIter = NULL; + SAnodeObj *pObj = NULL; + + while (1) { + pIter = sdbFetch(pSdb, SDB_ANODE, pIter, (void **)&pObj); + if (pIter == NULL) { + break; + } + + if (pObj->numOfAlgos >= ANAL_ALGO_TYPE_END) { + if (pObj->algos[ANAL_ALGO_TYPE_ANOMALY_DETECT] != NULL) { + taosArrayAddAll(pAd, pObj->algos[ANAL_ALGO_TYPE_ANOMALY_DETECT]); + } + + if (pObj->algos[ANAL_ALGO_TYPE_FORECAST] != NULL) { + taosArrayAddAll(pFc, pObj->algos[ANAL_ALGO_TYPE_FORECAST]); + } + } + + sdbRelease(pSdb, pObj); + } +} + static int32_t mndRetrieveAnodesFull(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pBlock, int32_t rows) { SMnode *pMnode = pReq->info.node; SSdb *pSdb = pMnode->pSdb; @@ -661,7 +687,7 @@ static int32_t mndRetrieveAnodesFull(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock code = colDataSetVal(pColInfo, numOfRows, (const char *)&pObj->id, false); if (code != 0) goto _end; - STR_TO_VARSTR(buf, taosAnalAlgoStr(t)); + STR_TO_VARSTR(buf, taosAnalysisAlgoType(t)); pColInfo = taosArrayGet(pBlock->pDataBlock, cols++); code = colDataSetVal(pColInfo, numOfRows, buf, false); if (code != 0) goto _end; @@ -900,5 +926,6 @@ int32_t mndInitAnode(SMnode *pMnode) { } void mndCleanupAnode(SMnode *pMnode) {} +void mndRetrieveAlgoList(SMnode *pMnode, SArray *pFc, SArray *pAd) {} #endif \ No newline at end of file diff --git a/source/dnode/mnode/impl/src/mndTelem.c b/source/dnode/mnode/impl/src/mndTelem.c index bd613d7e69..5eee1ed3c4 100644 --- a/source/dnode/mnode/impl/src/mndTelem.c +++ b/source/dnode/mnode/impl/src/mndTelem.c @@ -19,6 +19,7 @@ #include "mndSync.h" #include "thttp.h" #include "tjson.h" +#include "mndAnode.h" typedef struct { int64_t numOfDnode; @@ -32,6 +33,7 @@ typedef struct { int64_t totalPoints; int64_t totalStorage; int64_t compStorage; + int32_t numOfAnalysisAlgos; } SMnodeStat; static void mndGetStat(SMnode* pMnode, SMnodeStat* pStat) { @@ -58,18 +60,21 @@ static void mndGetStat(SMnode* pMnode, SMnodeStat* pStat) { sdbRelease(pSdb, pVgroup); } +} - pStat->numOfChildTable = 100; - pStat->numOfColumn = 200; - pStat->totalPoints = 300; - pStat->totalStorage = 400; - pStat->compStorage = 500; +static int32_t algoToJson(const void* pObj, SJson* pJson) { + const SAnodeAlgo* pNode = (const SAnodeAlgo*)pObj; + int32_t code = tjsonAddStringToObject(pJson, "name", pNode->name); + return code; } static void mndBuildRuntimeInfo(SMnode* pMnode, SJson* pJson) { SMnodeStat mstat = {0}; int32_t code = 0; int32_t lino = 0; + SArray* pFcList = NULL; + SArray* pAdList = NULL; + mndGetStat(pMnode, &mstat); TAOS_CHECK_GOTO(tjsonAddDoubleToObject(pJson, "numOfDnode", mstat.numOfDnode), &lino, _OVER); @@ -82,8 +87,55 @@ static void mndBuildRuntimeInfo(SMnode* pMnode, SJson* pJson) { TAOS_CHECK_GOTO(tjsonAddDoubleToObject(pJson, "numOfPoint", mstat.totalPoints), &lino, _OVER); TAOS_CHECK_GOTO(tjsonAddDoubleToObject(pJson, "totalStorage", mstat.totalStorage), &lino, _OVER); TAOS_CHECK_GOTO(tjsonAddDoubleToObject(pJson, "compStorage", mstat.compStorage), &lino, _OVER); + + pFcList = taosArrayInit(4, sizeof(SAnodeAlgo)); + pAdList = taosArrayInit(4, sizeof(SAnodeAlgo)); + if (pFcList == NULL || pAdList == NULL) { + lino = __LINE__; + goto _OVER; + } + + mndRetrieveAlgoList(pMnode, pFcList, pAdList); + + if (taosArrayGetSize(pFcList) > 0) { + SJson* items = tjsonAddArrayToObject(pJson, "forecast"); + TSDB_CHECK_NULL(items, code, lino, _OVER, terrno); + + for (int32_t i = 0; i < taosArrayGetSize(pFcList); ++i) { + SJson* item = tjsonCreateObject(); + + TSDB_CHECK_NULL(item, code, lino, _OVER, terrno); + TAOS_CHECK_GOTO(tjsonAddItemToArray(items, item), &lino, _OVER); + + SAnodeAlgo* p = taosArrayGet(pFcList, i); + TSDB_CHECK_NULL(p, code, lino, _OVER, terrno); + TAOS_CHECK_GOTO(tjsonAddStringToObject(item, "name", p->name), &lino, _OVER); + } + } + + if (taosArrayGetSize(pAdList) > 0) { + SJson* items1 = tjsonAddArrayToObject(pJson, "anomaly_detection"); + TSDB_CHECK_NULL(items1, code, lino, _OVER, terrno); + + for (int32_t i = 0; i < taosArrayGetSize(pAdList); ++i) { + SJson* item = tjsonCreateObject(); + + TSDB_CHECK_NULL(item, code, lino, _OVER, terrno); + TAOS_CHECK_GOTO(tjsonAddItemToArray(items1, item), &lino, _OVER); + + SAnodeAlgo* p = taosArrayGet(pAdList, i); + TSDB_CHECK_NULL(p, code, lino, _OVER, terrno); + TAOS_CHECK_GOTO(tjsonAddStringToObject(item, "name", p->name), &lino, _OVER); + } + } + _OVER: - if (code != 0) mError("failed to mndBuildRuntimeInfo at line:%d since %s", lino, tstrerror(code)); + taosArrayDestroy(pFcList); + taosArrayDestroy(pAdList); + + if (code != 0) { + mError("failed to mndBuildRuntimeInfo at line:%d since %s", lino, tstrerror(code)); + } } static char* mndBuildTelemetryReport(SMnode* pMnode) { @@ -136,21 +188,24 @@ static int32_t mndProcessTelemTimer(SRpcMsg* pReq) { int32_t line = 0; SMnode* pMnode = pReq->info.node; STelemMgmt* pMgmt = &pMnode->telemMgmt; - if (!tsEnableTelem) return 0; + + if (!tsEnableTelem) { + return 0; + } (void)taosThreadMutexLock(&pMgmt->lock); char* pCont = mndBuildTelemetryReport(pMnode); (void)taosThreadMutexUnlock(&pMgmt->lock); - if (pCont == NULL) { - return 0; - } + TSDB_CHECK_NULL(pCont, code, line, _end, terrno); + code = taosSendTelemReport(&pMgmt->addrMgt, tsTelemUri, tsTelemPort, pCont, strlen(pCont), HTTP_FLAT); taosMemoryFree(pCont); return code; + _end: if (code != 0) { - mError("%s failed to send at line %d since %s", __func__, line, tstrerror(code)); + mError("%s failed to send telemetry report, line %d since %s", __func__, line, tstrerror(code)); } taosMemoryFree(pCont); return code; @@ -161,15 +216,17 @@ int32_t mndInitTelem(SMnode* pMnode) { STelemMgmt* pMgmt = &pMnode->telemMgmt; (void)taosThreadMutexInit(&pMgmt->lock, NULL); - if ((code = taosGetEmail(pMgmt->email, sizeof(pMgmt->email))) != 0) + if ((code = taosGetEmail(pMgmt->email, sizeof(pMgmt->email))) != 0) { mWarn("failed to get email since %s", tstrerror(code)); + } + code = taosTelemetryMgtInit(&pMgmt->addrMgt, tsTelemServer); if (code != 0) { mError("failed to init telemetry management since %s", tstrerror(code)); return code; } - mndSetMsgHandle(pMnode, TDMT_MND_TELEM_TIMER, mndProcessTelemTimer); + mndSetMsgHandle(pMnode, TDMT_MND_TELEM_TIMER, mndProcessTelemTimer); return 0; } diff --git a/source/dnode/vnode/inc/vnode.h b/source/dnode/vnode/inc/vnode.h index b33bdb0976..475f8c9814 100644 --- a/source/dnode/vnode/inc/vnode.h +++ b/source/dnode/vnode/inc/vnode.h @@ -130,7 +130,7 @@ int32_t metaGetTableNameByUid(void *pVnode, uint64_t uid, char *tbName); int metaGetTableSzNameByUid(void *meta, uint64_t uid, char *tbName); int metaGetTableUidByName(void *pVnode, char *tbName, uint64_t *uid); -int metaGetTableTypeByName(void *meta, char *tbName, ETableType *tbType); +int metaGetTableTypeSuidByName(void *meta, char *tbName, ETableType *tbType, uint64_t* suid); int metaGetTableTtlByUid(void *meta, uint64_t uid, int64_t *ttlDays); bool metaIsTableExist(void *pVnode, tb_uid_t uid); int32_t metaGetCachedTableUidList(void *pVnode, tb_uid_t suid, const uint8_t *key, int32_t keyLen, SArray *pList, diff --git a/source/dnode/vnode/src/meta/metaQuery.c b/source/dnode/vnode/src/meta/metaQuery.c index c19a2e3ce2..b367232e2d 100644 --- a/source/dnode/vnode/src/meta/metaQuery.c +++ b/source/dnode/vnode/src/meta/metaQuery.c @@ -190,13 +190,20 @@ int metaGetTableUidByName(void *pVnode, char *tbName, uint64_t *uid) { return 0; } -int metaGetTableTypeByName(void *pVnode, char *tbName, ETableType *tbType) { +int metaGetTableTypeSuidByName(void *pVnode, char *tbName, ETableType *tbType, uint64_t* suid) { int code = 0; SMetaReader mr = {0}; metaReaderDoInit(&mr, ((SVnode *)pVnode)->pMeta, META_READER_LOCK); code = metaGetTableEntryByName(&mr, tbName); if (code == 0) *tbType = mr.me.type; + if (TSDB_CHILD_TABLE == mr.me.type) { + *suid = mr.me.ctbEntry.suid; + } else if (TSDB_SUPER_TABLE == mr.me.type) { + *suid = mr.me.uid; + } else { + *suid = 0; + } metaReaderClear(&mr); return code; diff --git a/source/dnode/vnode/src/tsdb/tsdbSnapshot.c b/source/dnode/vnode/src/tsdb/tsdbSnapshot.c index e8740a0650..95cf1f9449 100644 --- a/source/dnode/vnode/src/tsdb/tsdbSnapshot.c +++ b/source/dnode/vnode/src/tsdb/tsdbSnapshot.c @@ -559,6 +559,7 @@ struct STsdbSnapWriter { SIterMerger* tombIterMerger; // writer + bool toSttOnly; SFSetWriter* fsetWriter; } ctx[1]; }; @@ -622,6 +623,7 @@ static int32_t tsdbSnapWriteFileSetOpenReader(STsdbSnapWriter* writer) { int32_t code = 0; int32_t lino = 0; + writer->ctx->toSttOnly = false; if (writer->ctx->fset) { #if 0 // open data reader @@ -656,6 +658,14 @@ static int32_t tsdbSnapWriteFileSetOpenReader(STsdbSnapWriter* writer) { // open stt reader array SSttLvl* lvl; TARRAY2_FOREACH(writer->ctx->fset->lvlArr, lvl) { + if (lvl->level != 0) { + if (TARRAY2_SIZE(lvl->fobjArr) > 0) { + writer->ctx->toSttOnly = true; + } + + continue; // Only merge level 0 + } + STFileObj* fobj; TARRAY2_FOREACH(lvl->fobjArr, fobj) { SSttFileReader* reader; @@ -782,7 +792,7 @@ static int32_t tsdbSnapWriteFileSetOpenWriter(STsdbSnapWriter* writer) { SFSetWriterConfig config = { .tsdb = writer->tsdb, - .toSttOnly = false, + .toSttOnly = writer->ctx->toSttOnly, .compactVersion = writer->compactVersion, .minRow = writer->minRow, .maxRow = writer->maxRow, @@ -791,7 +801,7 @@ static int32_t tsdbSnapWriteFileSetOpenWriter(STsdbSnapWriter* writer) { .fid = writer->ctx->fid, .cid = writer->commitID, .did = writer->ctx->did, - .level = 0, + .level = writer->ctx->toSttOnly ? 1 : 0, }; // merge stt files to either data or a new stt file if (writer->ctx->fset) { diff --git a/source/dnode/vnode/src/vnd/vnodeInitApi.c b/source/dnode/vnode/src/vnd/vnodeInitApi.c index 41e6c6c2c5..b8682028cf 100644 --- a/source/dnode/vnode/src/vnd/vnodeInitApi.c +++ b/source/dnode/vnode/src/vnd/vnodeInitApi.c @@ -94,7 +94,7 @@ void initMetadataAPI(SStoreMeta* pMeta) { pMeta->getTableTagsByUid = metaGetTableTagsByUids; pMeta->getTableUidByName = metaGetTableUidByName; - pMeta->getTableTypeByName = metaGetTableTypeByName; + pMeta->getTableTypeSuidByName = metaGetTableTypeSuidByName; pMeta->getTableNameByUid = metaGetTableNameByUid; pMeta->getTableSchema = vnodeGetTableSchema; diff --git a/source/dnode/vnode/src/vnd/vnodeQuery.c b/source/dnode/vnode/src/vnd/vnodeQuery.c index 2b07de916c..c7b1a816cd 100644 --- a/source/dnode/vnode/src/vnd/vnodeQuery.c +++ b/source/dnode/vnode/src/vnd/vnodeQuery.c @@ -49,6 +49,35 @@ int32_t fillTableColCmpr(SMetaReader *reader, SSchemaExt *pExt, int32_t numOfCol return 0; } +void vnodePrintTableMeta(STableMetaRsp* pMeta) { + if (!(qDebugFlag & DEBUG_DEBUG)) { + return; + } + + qDebug("tbName:%s", pMeta->tbName); + qDebug("stbName:%s", pMeta->stbName); + qDebug("dbFName:%s", pMeta->dbFName); + qDebug("dbId:%" PRId64, pMeta->dbId); + qDebug("numOfTags:%d", pMeta->numOfTags); + qDebug("numOfColumns:%d", pMeta->numOfColumns); + qDebug("precision:%d", pMeta->precision); + qDebug("tableType:%d", pMeta->tableType); + qDebug("sversion:%d", pMeta->sversion); + qDebug("tversion:%d", pMeta->tversion); + qDebug("suid:%" PRIu64, pMeta->suid); + qDebug("tuid:%" PRIu64, pMeta->tuid); + qDebug("vgId:%d", pMeta->vgId); + qDebug("sysInfo:%d", pMeta->sysInfo); + if (pMeta->pSchemas) { + for (int32_t i = 0; i < (pMeta->numOfColumns + pMeta->numOfTags); ++i) { + SSchema* pSchema = pMeta->pSchemas + i; + qDebug("%d col/tag: type:%d, flags:%d, colId:%d, bytes:%d, name:%s", i, pSchema->type, pSchema->flags, pSchema->colId, pSchema->bytes, pSchema->name); + } + } + +} + + int32_t vnodeGetTableMeta(SVnode *pVnode, SRpcMsg *pMsg, bool direct) { STableInfoReq infoReq = {0}; STableMetaRsp metaRsp = {0}; @@ -155,6 +184,8 @@ int32_t vnodeGetTableMeta(SVnode *pVnode, SRpcMsg *pMsg, bool direct) { goto _exit; } + vnodePrintTableMeta(&metaRsp); + // encode and send response rspLen = tSerializeSTableMetaRsp(NULL, 0, &metaRsp); if (rspLen < 0) { diff --git a/source/libs/command/inc/commandInt.h b/source/libs/command/inc/commandInt.h index feb1b3cc19..7d6a170ab3 100644 --- a/source/libs/command/inc/commandInt.h +++ b/source/libs/command/inc/commandInt.h @@ -202,13 +202,13 @@ do { \ #define EXPLAIN_SUM_ROW_END() do { varDataSetLen(tbuf, tlen); tlen += VARSTR_HEADER_SIZE; } while (0) #define EXPLAIN_ROW_APPEND_LIMIT_IMPL(_pLimit, sl) do { \ - if (_pLimit) { \ + if (_pLimit && ((SLimitNode*)_pLimit)->limit) { \ EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT); \ SLimitNode* pLimit = (SLimitNode*)_pLimit; \ - EXPLAIN_ROW_APPEND(((sl) ? EXPLAIN_SLIMIT_FORMAT : EXPLAIN_LIMIT_FORMAT), pLimit->limit); \ + EXPLAIN_ROW_APPEND(((sl) ? EXPLAIN_SLIMIT_FORMAT : EXPLAIN_LIMIT_FORMAT), pLimit->limit->datum.i); \ if (pLimit->offset) { \ EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT); \ - EXPLAIN_ROW_APPEND(((sl) ? EXPLAIN_SOFFSET_FORMAT : EXPLAIN_OFFSET_FORMAT), pLimit->offset);\ + EXPLAIN_ROW_APPEND(((sl) ? EXPLAIN_SOFFSET_FORMAT : EXPLAIN_OFFSET_FORMAT), pLimit->offset->datum.i);\ } \ } \ } while (0) diff --git a/source/libs/command/src/explain.c b/source/libs/command/src/explain.c index 0778d5d5f8..f863ff1455 100644 --- a/source/libs/command/src/explain.c +++ b/source/libs/command/src/explain.c @@ -676,9 +676,9 @@ static int32_t qExplainResNodeToRowsImpl(SExplainResNode *pResNode, SExplainCtx EXPLAIN_ROW_APPEND(EXPLAIN_WIN_OFFSET_FORMAT, pStart->literal, pEnd->literal); EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT); } - if (NULL != pJoinNode->pJLimit) { + if (NULL != pJoinNode->pJLimit && NULL != ((SLimitNode*)pJoinNode->pJLimit)->limit) { SLimitNode* pJLimit = (SLimitNode*)pJoinNode->pJLimit; - EXPLAIN_ROW_APPEND(EXPLAIN_JLIMIT_FORMAT, pJLimit->limit); + EXPLAIN_ROW_APPEND(EXPLAIN_JLIMIT_FORMAT, pJLimit->limit->datum.i); EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT); } if (IS_WINDOW_JOIN(pJoinNode->subType)) { diff --git a/source/libs/executor/src/anomalywindowoperator.c b/source/libs/executor/src/anomalywindowoperator.c index eb72edb964..3124fa0b57 100644 --- a/source/libs/executor/src/anomalywindowoperator.c +++ b/source/libs/executor/src/anomalywindowoperator.c @@ -668,4 +668,4 @@ int32_t createAnomalywindowOperatorInfo(SOperatorInfo* downstream, SPhysiNode* p } void destroyForecastInfo(void* param) {} -#endif +#endif \ No newline at end of file diff --git a/source/libs/executor/src/executil.c b/source/libs/executor/src/executil.c index cce754a8c8..147d62d245 100644 --- a/source/libs/executor/src/executil.c +++ b/source/libs/executor/src/executil.c @@ -49,13 +49,13 @@ typedef enum { static FilterCondType checkTagCond(SNode* cond); static int32_t optimizeTbnameInCond(void* metaHandle, int64_t suid, SArray* list, SNode* pTagCond, SStorageAPI* pAPI); -static int32_t optimizeTbnameInCondImpl(void* metaHandle, SArray* list, SNode* pTagCond, SStorageAPI* pStoreAPI); +static int32_t optimizeTbnameInCondImpl(void* metaHandle, SArray* list, SNode* pTagCond, SStorageAPI* pStoreAPI, uint64_t suid); static int32_t getTableList(void* pVnode, SScanPhysiNode* pScanNode, SNode* pTagCond, SNode* pTagIndexCond, STableListInfo* pListInfo, uint8_t* digest, const char* idstr, SStorageAPI* pStorageAPI); -static int64_t getLimit(const SNode* pLimit) { return NULL == pLimit ? -1 : ((SLimitNode*)pLimit)->limit; } -static int64_t getOffset(const SNode* pLimit) { return NULL == pLimit ? -1 : ((SLimitNode*)pLimit)->offset; } +static int64_t getLimit(const SNode* pLimit) { return (NULL == pLimit || NULL == ((SLimitNode*)pLimit)->limit) ? -1 : ((SLimitNode*)pLimit)->limit->datum.i; } +static int64_t getOffset(const SNode* pLimit) { return (NULL == pLimit || NULL == ((SLimitNode*)pLimit)->offset) ? -1 : ((SLimitNode*)pLimit)->offset->datum.i; } static void releaseColInfoData(void* pCol); void initResultRowInfo(SResultRowInfo* pResultRowInfo) { @@ -1061,7 +1061,7 @@ static int32_t optimizeTbnameInCond(void* pVnode, int64_t suid, SArray* list, SN int32_t ntype = nodeType(cond); if (ntype == QUERY_NODE_OPERATOR) { - ret = optimizeTbnameInCondImpl(pVnode, list, cond, pAPI); + ret = optimizeTbnameInCondImpl(pVnode, list, cond, pAPI, suid); } if (ntype != QUERY_NODE_LOGIC_CONDITION || ((SLogicConditionNode*)cond)->condType != LOGIC_COND_TYPE_AND) { @@ -1080,7 +1080,7 @@ static int32_t optimizeTbnameInCond(void* pVnode, int64_t suid, SArray* list, SN SListCell* cell = pList->pHead; for (int i = 0; i < len; i++) { if (cell == NULL) break; - if (optimizeTbnameInCondImpl(pVnode, list, cell->pNode, pAPI) == 0) { + if (optimizeTbnameInCondImpl(pVnode, list, cell->pNode, pAPI, suid) == 0) { hasTbnameCond = true; break; } @@ -1099,7 +1099,7 @@ static int32_t optimizeTbnameInCond(void* pVnode, int64_t suid, SArray* list, SN // only return uid that does not contained in pExistedUidList static int32_t optimizeTbnameInCondImpl(void* pVnode, SArray* pExistedUidList, SNode* pTagCond, - SStorageAPI* pStoreAPI) { + SStorageAPI* pStoreAPI, uint64_t suid) { if (nodeType(pTagCond) != QUERY_NODE_OPERATOR) { return -1; } @@ -1148,10 +1148,13 @@ static int32_t optimizeTbnameInCondImpl(void* pVnode, SArray* pExistedUidList, S for (int i = 0; i < numOfTables; i++) { char* name = taosArrayGetP(pTbList, i); - uint64_t uid = 0; + uint64_t uid = 0, csuid = 0; if (pStoreAPI->metaFn.getTableUidByName(pVnode, name, &uid) == 0) { ETableType tbType = TSDB_TABLE_MAX; - if (pStoreAPI->metaFn.getTableTypeByName(pVnode, name, &tbType) == 0 && tbType == TSDB_CHILD_TABLE) { + if (pStoreAPI->metaFn.getTableTypeSuidByName(pVnode, name, &tbType, &csuid) == 0 && tbType == TSDB_CHILD_TABLE) { + if (suid != csuid) { + continue; + } if (NULL == uHash || taosHashGet(uHash, &uid, sizeof(uid)) == NULL) { STUidTagInfo s = {.uid = uid, .name = name, .pTagVal = NULL}; void* tmp = taosArrayPush(pExistedUidList, &s); diff --git a/source/libs/executor/src/forecastoperator.c b/source/libs/executor/src/forecastoperator.c index 2985e5e000..02b122830c 100644 --- a/source/libs/executor/src/forecastoperator.c +++ b/source/libs/executor/src/forecastoperator.c @@ -12,13 +12,12 @@ * You should have received a copy of the GNU Affero General Public License * along with this program. If not, see . */ + #include "executorInt.h" #include "filter.h" -#include "function.h" #include "functionMgt.h" #include "operator.h" #include "querytask.h" -#include "storageapi.h" #include "tanalytics.h" #include "tcommon.h" #include "tcompare.h" @@ -29,24 +28,24 @@ #ifdef USE_ANALYTICS typedef struct { - char algoName[TSDB_ANALYTIC_ALGO_NAME_LEN]; - char algoUrl[TSDB_ANALYTIC_ALGO_URL_LEN]; - char algoOpt[TSDB_ANALYTIC_ALGO_OPTION_LEN]; - int64_t maxTs; - int64_t minTs; - int64_t numOfRows; - uint64_t groupId; - int64_t optRows; - int64_t cachedRows; - int32_t numOfBlocks; - int16_t resTsSlot; - int16_t resValSlot; - int16_t resLowSlot; - int16_t resHighSlot; - int16_t inputTsSlot; - int16_t inputValSlot; - int8_t inputValType; - int8_t inputPrecision; + char algoName[TSDB_ANALYTIC_ALGO_NAME_LEN]; + char algoUrl[TSDB_ANALYTIC_ALGO_URL_LEN]; + char algoOpt[TSDB_ANALYTIC_ALGO_OPTION_LEN]; + int64_t maxTs; + int64_t minTs; + int64_t numOfRows; + uint64_t groupId; + int64_t optRows; + int64_t cachedRows; + int32_t numOfBlocks; + int16_t resTsSlot; + int16_t resValSlot; + int16_t resLowSlot; + int16_t resHighSlot; + int16_t inputTsSlot; + int16_t inputValSlot; + int8_t inputValType; + int8_t inputPrecision; SAnalyticBuf analBuf; } SForecastSupp; @@ -118,7 +117,7 @@ static int32_t forecastCacheBlock(SForecastSupp* pSupp, SSDataBlock* pBlock, con static int32_t forecastCloseBuf(SForecastSupp* pSupp) { SAnalyticBuf* pBuf = &pSupp->analBuf; - int32_t code = 0; + int32_t code = 0; for (int32_t i = 0; i < 2; ++i) { code = taosAnalBufWriteColEnd(pBuf, i); @@ -178,7 +177,6 @@ static int32_t forecastCloseBuf(SForecastSupp* pSupp) { code = taosAnalBufWriteOptInt(pBuf, "start", start); if (code != 0) return code; - bool hasEvery = taosAnalGetOptInt(pSupp->algoOpt, "every", &every); if (!hasEvery) { qDebug("forecast every not found from %s, use %" PRId64, pSupp->algoOpt, every); @@ -192,14 +190,14 @@ static int32_t forecastCloseBuf(SForecastSupp* pSupp) { static int32_t forecastAnalysis(SForecastSupp* pSupp, SSDataBlock* pBlock, const char* pId) { SAnalyticBuf* pBuf = &pSupp->analBuf; - int32_t resCurRow = pBlock->info.rows; - int8_t tmpI8; - int16_t tmpI16; - int32_t tmpI32; - int64_t tmpI64; - float tmpFloat; - double tmpDouble; - int32_t code = 0; + int32_t resCurRow = pBlock->info.rows; + int8_t tmpI8; + int16_t tmpI16; + int32_t tmpI32; + int64_t tmpI64; + float tmpFloat; + double tmpDouble; + int32_t code = 0; SColumnInfoData* pResValCol = taosArrayGet(pBlock->pDataBlock, pSupp->resValSlot); if (NULL == pResValCol) { @@ -356,8 +354,8 @@ _OVER: } static int32_t forecastAggregateBlocks(SForecastSupp* pSupp, SSDataBlock* pResBlock, const char* pId) { - int32_t code = TSDB_CODE_SUCCESS; - int32_t lino = 0; + int32_t code = TSDB_CODE_SUCCESS; + int32_t lino = 0; SAnalyticBuf* pBuf = &pSupp->analBuf; code = forecastCloseBuf(pSupp); @@ -542,7 +540,7 @@ static int32_t forecastParseAlgo(SForecastSupp* pSupp) { static int32_t forecastCreateBuf(SForecastSupp* pSupp) { SAnalyticBuf* pBuf = &pSupp->analBuf; - int64_t ts = 0; // taosGetTimestampMs(); + int64_t ts = 0; // taosGetTimestampMs(); pBuf->bufType = ANALYTICS_BUF_TYPE_JSON_COL; snprintf(pBuf->fileName, sizeof(pBuf->fileName), "%s/tdengine-forecast-%" PRId64, tsTempDir, ts); diff --git a/source/libs/executor/src/hashjoinoperator.c b/source/libs/executor/src/hashjoinoperator.c index 73a5139e43..42e99e5fef 100644 --- a/source/libs/executor/src/hashjoinoperator.c +++ b/source/libs/executor/src/hashjoinoperator.c @@ -1185,7 +1185,7 @@ int32_t createHashJoinOperatorInfo(SOperatorInfo** pDownstream, int32_t numOfDow pInfo->tblTimeRange.skey = pJoinNode->timeRange.skey; pInfo->tblTimeRange.ekey = pJoinNode->timeRange.ekey; - pInfo->ctx.limit = pJoinNode->node.pLimit ? ((SLimitNode*)pJoinNode->node.pLimit)->limit : INT64_MAX; + pInfo->ctx.limit = (pJoinNode->node.pLimit && ((SLimitNode*)pJoinNode->node.pLimit)->limit) ? ((SLimitNode*)pJoinNode->node.pLimit)->limit->datum.i : INT64_MAX; setOperatorInfo(pOperator, "HashJoinOperator", QUERY_NODE_PHYSICAL_PLAN_HASH_JOIN, false, OP_NOT_OPENED, pInfo, pTaskInfo); diff --git a/source/libs/executor/src/mergejoin.c b/source/libs/executor/src/mergejoin.c index adf1b4f0d1..f133c68410 100755 --- a/source/libs/executor/src/mergejoin.c +++ b/source/libs/executor/src/mergejoin.c @@ -3592,7 +3592,7 @@ int32_t mJoinInitWindowCtx(SMJoinOperatorInfo* pJoin, SSortMergeJoinPhysiNode* p switch (pJoinNode->subType) { case JOIN_STYPE_ASOF: pCtx->asofOpType = pJoinNode->asofOpType; - pCtx->jLimit = pJoinNode->pJLimit ? ((SLimitNode*)pJoinNode->pJLimit)->limit : 1; + pCtx->jLimit = (pJoinNode->pJLimit && ((SLimitNode*)pJoinNode->pJLimit)->limit) ? ((SLimitNode*)pJoinNode->pJLimit)->limit->datum.i : 1; pCtx->eqRowsAcq = ASOF_EQ_ROW_INCLUDED(pCtx->asofOpType); pCtx->lowerRowsAcq = (JOIN_TYPE_RIGHT != pJoin->joinType) ? ASOF_LOWER_ROW_INCLUDED(pCtx->asofOpType) : ASOF_GREATER_ROW_INCLUDED(pCtx->asofOpType); pCtx->greaterRowsAcq = (JOIN_TYPE_RIGHT != pJoin->joinType) ? ASOF_GREATER_ROW_INCLUDED(pCtx->asofOpType) : ASOF_LOWER_ROW_INCLUDED(pCtx->asofOpType); @@ -3609,7 +3609,7 @@ int32_t mJoinInitWindowCtx(SMJoinOperatorInfo* pJoin, SSortMergeJoinPhysiNode* p SWindowOffsetNode* pOffsetNode = (SWindowOffsetNode*)pJoinNode->pWindowOffset; SValueNode* pWinBegin = (SValueNode*)pOffsetNode->pStartOffset; SValueNode* pWinEnd = (SValueNode*)pOffsetNode->pEndOffset; - pCtx->jLimit = pJoinNode->pJLimit ? ((SLimitNode*)pJoinNode->pJLimit)->limit : INT64_MAX; + pCtx->jLimit = (pJoinNode->pJLimit && ((SLimitNode*)pJoinNode->pJLimit)->limit) ? ((SLimitNode*)pJoinNode->pJLimit)->limit->datum.i : INT64_MAX; pCtx->winBeginOffset = pWinBegin->datum.i; pCtx->winEndOffset = pWinEnd->datum.i; pCtx->eqRowsAcq = (pCtx->winBeginOffset <= 0 && pCtx->winEndOffset >= 0); @@ -3662,7 +3662,7 @@ int32_t mJoinInitMergeCtx(SMJoinOperatorInfo* pJoin, SSortMergeJoinPhysiNode* pJ pCtx->hashCan = pJoin->probe->keyNum > 0; if (JOIN_STYPE_ASOF == pJoinNode->subType || JOIN_STYPE_WIN == pJoinNode->subType) { - pCtx->jLimit = pJoinNode->pJLimit ? ((SLimitNode*)pJoinNode->pJLimit)->limit : 1; + pCtx->jLimit = (pJoinNode->pJLimit && ((SLimitNode*)pJoinNode->pJLimit)->limit) ? ((SLimitNode*)pJoinNode->pJLimit)->limit->datum.i : 1; pJoin->subType = JOIN_STYPE_OUTER; pJoin->build->eqRowLimit = pCtx->jLimit; pJoin->grpResetFp = mLeftJoinGroupReset; diff --git a/source/libs/executor/src/mergejoinoperator.c b/source/libs/executor/src/mergejoinoperator.c index e007504ffb..3edef48ed1 100644 --- a/source/libs/executor/src/mergejoinoperator.c +++ b/source/libs/executor/src/mergejoinoperator.c @@ -986,7 +986,7 @@ static int32_t mJoinInitTableInfo(SMJoinOperatorInfo* pJoin, SSortMergeJoinPhysi pTable->multiEqGrpRows = !((JOIN_STYPE_SEMI == pJoin->subType || JOIN_STYPE_ANTI == pJoin->subType) && NULL == pJoin->pFPreFilter); pTable->multiRowsGrp = !((JOIN_STYPE_SEMI == pJoin->subType || JOIN_STYPE_ANTI == pJoin->subType) && NULL == pJoin->pPreFilter); if (JOIN_STYPE_ASOF == pJoinNode->subType) { - pTable->eqRowLimit = pJoinNode->pJLimit ? ((SLimitNode*)pJoinNode->pJLimit)->limit : 1; + pTable->eqRowLimit = (pJoinNode->pJLimit && ((SLimitNode*)pJoinNode->pJLimit)->limit) ? ((SLimitNode*)pJoinNode->pJLimit)->limit->datum.i : 1; } } else { pTable->multiEqGrpRows = true; @@ -1169,7 +1169,7 @@ static FORCE_INLINE SSDataBlock* mJoinRetrieveImpl(SMJoinOperatorInfo* pJoin, SM static int32_t mJoinInitCtx(SMJoinOperatorInfo* pJoin, SSortMergeJoinPhysiNode* pJoinNode) { pJoin->ctx.mergeCtx.groupJoin = pJoinNode->grpJoin; - pJoin->ctx.mergeCtx.limit = pJoinNode->node.pLimit ? ((SLimitNode*)pJoinNode->node.pLimit)->limit : INT64_MAX; + pJoin->ctx.mergeCtx.limit = (pJoinNode->node.pLimit && ((SLimitNode*)pJoinNode->node.pLimit)->limit) ? ((SLimitNode*)pJoinNode->node.pLimit)->limit->datum.i : INT64_MAX; pJoin->retrieveFp = pJoinNode->grpJoin ? mJoinGrpRetrieveImpl : mJoinRetrieveImpl; pJoin->outBlkId = pJoinNode->node.pOutputDataBlockDesc->dataBlockId; diff --git a/source/libs/executor/src/sortoperator.c b/source/libs/executor/src/sortoperator.c index ed073d21a0..2bb8c4403e 100644 --- a/source/libs/executor/src/sortoperator.c +++ b/source/libs/executor/src/sortoperator.c @@ -84,9 +84,11 @@ int32_t createSortOperatorInfo(SOperatorInfo* downstream, SSortPhysiNode* pSortN calcSortOperMaxTupleLength(pInfo, pSortNode->pSortKeys); pInfo->maxRows = -1; - if (pSortNode->node.pLimit) { + if (pSortNode->node.pLimit && ((SLimitNode*)pSortNode->node.pLimit)->limit) { SLimitNode* pLimit = (SLimitNode*)pSortNode->node.pLimit; - if (pLimit->limit > 0) pInfo->maxRows = pLimit->limit + pLimit->offset; + if (pLimit->limit->datum.i > 0) { + pInfo->maxRows = pLimit->limit->datum.i + (pLimit->offset ? pLimit->offset->datum.i : 0); + } } pOperator->exprSupp.pCtx = diff --git a/source/libs/executor/src/timesliceoperator.c b/source/libs/executor/src/timesliceoperator.c index 3a639772c8..49fd557fe3 100644 --- a/source/libs/executor/src/timesliceoperator.c +++ b/source/libs/executor/src/timesliceoperator.c @@ -63,7 +63,7 @@ static void doKeepPrevRows(STimeSliceOperatorInfo* pSliceInfo, const SSDataBlock pkey->isNull = false; char* val = colDataGetData(pColInfoData, rowIndex); if (IS_VAR_DATA_TYPE(pkey->type)) { - memcpy(pkey->pData, val, varDataLen(val)); + memcpy(pkey->pData, val, varDataTLen(val)); } else { memcpy(pkey->pData, val, pkey->bytes); } @@ -87,7 +87,7 @@ static void doKeepNextRows(STimeSliceOperatorInfo* pSliceInfo, const SSDataBlock if (!IS_VAR_DATA_TYPE(pkey->type)) { memcpy(pkey->pData, val, pkey->bytes); } else { - memcpy(pkey->pData, val, varDataLen(val)); + memcpy(pkey->pData, val, varDataTLen(val)); } } else { pkey->isNull = true; diff --git a/source/libs/executor/src/timewindowoperator.c b/source/libs/executor/src/timewindowoperator.c index 25716be0f8..71c71a547e 100644 --- a/source/libs/executor/src/timewindowoperator.c +++ b/source/libs/executor/src/timewindowoperator.c @@ -1417,15 +1417,15 @@ int32_t createIntervalOperatorInfo(SOperatorInfo* downstream, SIntervalPhysiNode pInfo->interval = interval; pInfo->twAggSup = as; pInfo->binfo.mergeResultBlock = pPhyNode->window.mergeDataBlock; - if (pPhyNode->window.node.pLimit) { + if (pPhyNode->window.node.pLimit && ((SLimitNode*)pPhyNode->window.node.pLimit)->limit) { SLimitNode* pLimit = (SLimitNode*)pPhyNode->window.node.pLimit; pInfo->limited = true; - pInfo->limit = pLimit->limit + pLimit->offset; + pInfo->limit = pLimit->limit->datum.i + (pLimit->offset ? pLimit->offset->datum.i : 0); } - if (pPhyNode->window.node.pSlimit) { + if (pPhyNode->window.node.pSlimit && ((SLimitNode*)pPhyNode->window.node.pSlimit)->limit) { SLimitNode* pLimit = (SLimitNode*)pPhyNode->window.node.pSlimit; pInfo->slimited = true; - pInfo->slimit = pLimit->limit + pLimit->offset; + pInfo->slimit = pLimit->limit->datum.i + (pLimit->offset ? pLimit->offset->datum.i : 0); pInfo->curGroupId = UINT64_MAX; } diff --git a/source/libs/executor/test/joinTests.cpp b/source/libs/executor/test/joinTests.cpp index efbe1fcc83..09d5753f78 100755 --- a/source/libs/executor/test/joinTests.cpp +++ b/source/libs/executor/test/joinTests.cpp @@ -864,7 +864,11 @@ SSortMergeJoinPhysiNode* createDummySortMergeJoinPhysiNode(SJoinTestParam* param SLimitNode* limitNode = NULL; code = nodesMakeNode(QUERY_NODE_LIMIT, (SNode**)&limitNode); assert(limitNode); - limitNode->limit = param->jLimit; + code = nodesMakeNode(QUERY_NODE_VALUE, (SNode**)&limitNode->limit); + assert(limitNode->limit); + limitNode->limit->node.resType.type = TSDB_DATA_TYPE_BIGINT; + limitNode->limit->node.resType.bytes = tDataTypes[TSDB_DATA_TYPE_BIGINT].bytes; + limitNode->limit->datum.i = param->jLimit; p->pJLimit = (SNode*)limitNode; } diff --git a/source/libs/executor/test/queryPlanTests.cpp b/source/libs/executor/test/queryPlanTests.cpp index 8126e53bd6..3815dab444 100755 --- a/source/libs/executor/test/queryPlanTests.cpp +++ b/source/libs/executor/test/queryPlanTests.cpp @@ -1418,6 +1418,7 @@ SNode* qptMakeExprNode(SNode** ppNode) { SNode* qptMakeLimitNode(SNode** ppNode) { SNode* pNode = NULL; + int32_t code = 0; if (QPT_NCORRECT_LOW_PROB()) { return qptMakeRandNode(&pNode); } @@ -1429,15 +1430,27 @@ SNode* qptMakeLimitNode(SNode** ppNode) { if (!qptCtx.param.correctExpected) { if (taosRand() % 2) { - pLimit->limit = taosRand() * ((taosRand() % 2) ? 1 : -1); + code = nodesMakeNode(QUERY_NODE_VALUE, (SNode**)&pLimit->limit); + assert(pLimit->limit); + pLimit->limit->node.resType.type = TSDB_DATA_TYPE_BIGINT; + pLimit->limit->node.resType.bytes = tDataTypes[TSDB_DATA_TYPE_BIGINT].bytes; + pLimit->limit->datum.i = taosRand() * ((taosRand() % 2) ? 1 : -1); } if (taosRand() % 2) { - pLimit->offset = taosRand() * ((taosRand() % 2) ? 1 : -1); + code = nodesMakeNode(QUERY_NODE_VALUE, (SNode**)&pLimit->offset); + assert(pLimit->offset); + pLimit->offset->node.resType.type = TSDB_DATA_TYPE_BIGINT; + pLimit->offset->node.resType.bytes = tDataTypes[TSDB_DATA_TYPE_BIGINT].bytes; + pLimit->offset->datum.i = taosRand() * ((taosRand() % 2) ? 1 : -1); } } else { - pLimit->limit = taosRand(); + pLimit->limit->datum.i = taosRand(); if (taosRand() % 2) { - pLimit->offset = taosRand(); + code = nodesMakeNode(QUERY_NODE_VALUE, (SNode**)&pLimit->offset); + assert(pLimit->offset); + pLimit->offset->node.resType.type = TSDB_DATA_TYPE_BIGINT; + pLimit->offset->node.resType.bytes = tDataTypes[TSDB_DATA_TYPE_BIGINT].bytes; + pLimit->offset->datum.i = taosRand(); } } diff --git a/source/libs/function/src/builtinsimpl.c b/source/libs/function/src/builtinsimpl.c index 0f6db39cc8..efe16ce662 100644 --- a/source/libs/function/src/builtinsimpl.c +++ b/source/libs/function/src/builtinsimpl.c @@ -784,6 +784,7 @@ static bool funcNotSupportStringSma(SFunctionNode* pFunc) { case FUNCTION_TYPE_SPREAD_PARTIAL: case FUNCTION_TYPE_SPREAD_MERGE: case FUNCTION_TYPE_TWA: + case FUNCTION_TYPE_ELAPSED: pParam = nodesListGetNode(pFunc->pParameterList, 0); if (pParam && nodesIsExprNode(pParam) && (IS_VAR_DATA_TYPE(((SExprNode*)pParam)->resType.type))) { return true; diff --git a/source/libs/nodes/src/nodesCloneFuncs.c b/source/libs/nodes/src/nodesCloneFuncs.c index 6d245ebd61..161c5f7ca7 100644 --- a/source/libs/nodes/src/nodesCloneFuncs.c +++ b/source/libs/nodes/src/nodesCloneFuncs.c @@ -52,7 +52,7 @@ if (NULL == (pSrc)->fldname) { \ break; \ } \ - int32_t code = nodesCloneNode((pSrc)->fldname, &((pDst)->fldname)); \ + int32_t code = nodesCloneNode((SNode*)(pSrc)->fldname, (SNode**)&((pDst)->fldname)); \ if (NULL == (pDst)->fldname) { \ return code; \ } \ @@ -346,8 +346,8 @@ static int32_t orderByExprNodeCopy(const SOrderByExprNode* pSrc, SOrderByExprNod } static int32_t limitNodeCopy(const SLimitNode* pSrc, SLimitNode* pDst) { - COPY_SCALAR_FIELD(limit); - COPY_SCALAR_FIELD(offset); + CLONE_NODE_FIELD(limit); + CLONE_NODE_FIELD(offset); return TSDB_CODE_SUCCESS; } diff --git a/source/libs/nodes/src/nodesCodeFuncs.c b/source/libs/nodes/src/nodesCodeFuncs.c index 6d4d89607f..9dcb2e67d4 100644 --- a/source/libs/nodes/src/nodesCodeFuncs.c +++ b/source/libs/nodes/src/nodesCodeFuncs.c @@ -4933,9 +4933,9 @@ static const char* jkLimitOffset = "Offset"; static int32_t limitNodeToJson(const void* pObj, SJson* pJson) { const SLimitNode* pNode = (const SLimitNode*)pObj; - int32_t code = tjsonAddIntegerToObject(pJson, jkLimitLimit, pNode->limit); - if (TSDB_CODE_SUCCESS == code) { - code = tjsonAddIntegerToObject(pJson, jkLimitOffset, pNode->offset); + int32_t code = tjsonAddObject(pJson, jkLimitLimit, nodeToJson, pNode->limit); + if (TSDB_CODE_SUCCESS == code && pNode->offset) { + code = tjsonAddObject(pJson, jkLimitOffset, nodeToJson, pNode->offset); } return code; @@ -4944,9 +4944,9 @@ static int32_t limitNodeToJson(const void* pObj, SJson* pJson) { static int32_t jsonToLimitNode(const SJson* pJson, void* pObj) { SLimitNode* pNode = (SLimitNode*)pObj; - int32_t code = tjsonGetBigIntValue(pJson, jkLimitLimit, &pNode->limit); + int32_t code = jsonToNodeObject(pJson, jkLimitLimit, (SNode**)&pNode->limit); if (TSDB_CODE_SUCCESS == code) { - code = tjsonGetBigIntValue(pJson, jkLimitOffset, &pNode->offset); + code = jsonToNodeObject(pJson, jkLimitOffset, (SNode**)&pNode->offset); } return code; diff --git a/source/libs/nodes/src/nodesMsgFuncs.c b/source/libs/nodes/src/nodesMsgFuncs.c index 930a88aea0..1becd07aba 100644 --- a/source/libs/nodes/src/nodesMsgFuncs.c +++ b/source/libs/nodes/src/nodesMsgFuncs.c @@ -1246,9 +1246,9 @@ enum { LIMIT_CODE_LIMIT = 1, LIMIT_CODE_OFFSET }; static int32_t limitNodeToMsg(const void* pObj, STlvEncoder* pEncoder) { const SLimitNode* pNode = (const SLimitNode*)pObj; - int32_t code = tlvEncodeI64(pEncoder, LIMIT_CODE_LIMIT, pNode->limit); - if (TSDB_CODE_SUCCESS == code) { - code = tlvEncodeI64(pEncoder, LIMIT_CODE_OFFSET, pNode->offset); + int32_t code = tlvEncodeObj(pEncoder, LIMIT_CODE_LIMIT, nodeToMsg, pNode->limit); + if (TSDB_CODE_SUCCESS == code && pNode->offset) { + code = tlvEncodeObj(pEncoder, LIMIT_CODE_OFFSET, nodeToMsg, pNode->offset); } return code; @@ -1262,10 +1262,10 @@ static int32_t msgToLimitNode(STlvDecoder* pDecoder, void* pObj) { tlvForEach(pDecoder, pTlv, code) { switch (pTlv->type) { case LIMIT_CODE_LIMIT: - code = tlvDecodeI64(pTlv, &pNode->limit); + code = msgToNodeFromTlv(pTlv, (void**)&pNode->limit); break; case LIMIT_CODE_OFFSET: - code = tlvDecodeI64(pTlv, &pNode->offset); + code = msgToNodeFromTlv(pTlv, (void**)&pNode->offset); break; default: break; diff --git a/source/libs/nodes/src/nodesUtilFuncs.c b/source/libs/nodes/src/nodesUtilFuncs.c index 7beaeaa46c..47c6292a9a 100644 --- a/source/libs/nodes/src/nodesUtilFuncs.c +++ b/source/libs/nodes/src/nodesUtilFuncs.c @@ -1106,8 +1106,12 @@ void nodesDestroyNode(SNode* pNode) { case QUERY_NODE_ORDER_BY_EXPR: nodesDestroyNode(((SOrderByExprNode*)pNode)->pExpr); break; - case QUERY_NODE_LIMIT: // no pointer field + case QUERY_NODE_LIMIT: { + SLimitNode* pLimit = (SLimitNode*)pNode; + nodesDestroyNode((SNode*)pLimit->limit); + nodesDestroyNode((SNode*)pLimit->offset); break; + } case QUERY_NODE_STATE_WINDOW: { SStateWindowNode* pState = (SStateWindowNode*)pNode; nodesDestroyNode(pState->pCol); @@ -3097,6 +3101,25 @@ int32_t nodesMakeValueNodeFromInt32(int32_t value, SNode** ppNode) { return code; } +int32_t nodesMakeValueNodeFromInt64(int64_t value, SNode** ppNode) { + SValueNode* pValNode = NULL; + int32_t code = nodesMakeNode(QUERY_NODE_VALUE, (SNode**)&pValNode); + if (TSDB_CODE_SUCCESS == code) { + pValNode->node.resType.type = TSDB_DATA_TYPE_BIGINT; + pValNode->node.resType.bytes = tDataTypes[TSDB_DATA_TYPE_BIGINT].bytes; + code = nodesSetValueNodeValue(pValNode, &value); + if (TSDB_CODE_SUCCESS == code) { + pValNode->translate = true; + pValNode->isNull = false; + *ppNode = (SNode*)pValNode; + } else { + nodesDestroyNode((SNode*)pValNode); + } + } + return code; +} + + bool nodesIsStar(SNode* pNode) { return (QUERY_NODE_COLUMN == nodeType(pNode)) && ('\0' == ((SColumnNode*)pNode)->tableAlias[0]) && (0 == strcmp(((SColumnNode*)pNode)->colName, "*")); diff --git a/source/libs/parser/inc/parAst.h b/source/libs/parser/inc/parAst.h index e69a3da4a9..293649e06e 100644 --- a/source/libs/parser/inc/parAst.h +++ b/source/libs/parser/inc/parAst.h @@ -152,7 +152,7 @@ SNode* createTempTableNode(SAstCreateContext* pCxt, SNode* pSubquery, SToken SNode* createJoinTableNode(SAstCreateContext* pCxt, EJoinType type, EJoinSubType stype, SNode* pLeft, SNode* pRight, SNode* pJoinCond); SNode* createViewNode(SAstCreateContext* pCxt, SToken* pDbName, SToken* pViewName); -SNode* createLimitNode(SAstCreateContext* pCxt, const SToken* pLimit, const SToken* pOffset); +SNode* createLimitNode(SAstCreateContext* pCxt, SNode* pLimit, SNode* pOffset); SNode* createOrderByExprNode(SAstCreateContext* pCxt, SNode* pExpr, EOrder order, ENullOrder nullOrder); SNode* createSessionWindowNode(SAstCreateContext* pCxt, SNode* pCol, SNode* pGap); SNode* createStateWindowNode(SAstCreateContext* pCxt, SNode* pExpr); diff --git a/source/libs/parser/inc/sql.y b/source/libs/parser/inc/sql.y old mode 100644 new mode 100755 index fda49e7ee2..5c16da8665 --- a/source/libs/parser/inc/sql.y +++ b/source/libs/parser/inc/sql.y @@ -1078,6 +1078,10 @@ signed_integer(A) ::= NK_MINUS(B) NK_INTEGER(C). A = createValueNode(pCxt, TSDB_DATA_TYPE_BIGINT, &t); } + +unsigned_integer(A) ::= NK_INTEGER(B). { A = createValueNode(pCxt, TSDB_DATA_TYPE_BIGINT, &B); } +unsigned_integer(A) ::= NK_QUESTION(B). { A = releaseRawExprNode(pCxt, createRawExprNode(pCxt, &B, createPlaceholderValueNode(pCxt, &B))); } + signed_float(A) ::= NK_FLOAT(B). { A = createValueNode(pCxt, TSDB_DATA_TYPE_DOUBLE, &B); } signed_float(A) ::= NK_PLUS NK_FLOAT(B). { A = createValueNode(pCxt, TSDB_DATA_TYPE_DOUBLE, &B); } signed_float(A) ::= NK_MINUS(B) NK_FLOAT(C). { @@ -1098,6 +1102,7 @@ signed_literal(A) ::= NULL(B). signed_literal(A) ::= literal_func(B). { A = releaseRawExprNode(pCxt, B); } signed_literal(A) ::= NK_QUESTION(B). { A = createPlaceholderValueNode(pCxt, &B); } + %type literal_list { SNodeList* } %destructor literal_list { nodesDestroyList($$); } literal_list(A) ::= signed_literal(B). { A = createNodeList(pCxt, B); } @@ -1480,7 +1485,7 @@ window_offset_literal(A) ::= NK_MINUS(B) NK_VARIABLE(C). } jlimit_clause_opt(A) ::= . { A = NULL; } -jlimit_clause_opt(A) ::= JLIMIT NK_INTEGER(B). { A = createLimitNode(pCxt, &B, NULL); } +jlimit_clause_opt(A) ::= JLIMIT unsigned_integer(B). { A = createLimitNode(pCxt, B, NULL); } /************************************************ query_specification *************************************************/ query_specification(A) ::= @@ -1660,14 +1665,14 @@ order_by_clause_opt(A) ::= . order_by_clause_opt(A) ::= ORDER BY sort_specification_list(B). { A = B; } slimit_clause_opt(A) ::= . { A = NULL; } -slimit_clause_opt(A) ::= SLIMIT NK_INTEGER(B). { A = createLimitNode(pCxt, &B, NULL); } -slimit_clause_opt(A) ::= SLIMIT NK_INTEGER(B) SOFFSET NK_INTEGER(C). { A = createLimitNode(pCxt, &B, &C); } -slimit_clause_opt(A) ::= SLIMIT NK_INTEGER(C) NK_COMMA NK_INTEGER(B). { A = createLimitNode(pCxt, &B, &C); } +slimit_clause_opt(A) ::= SLIMIT unsigned_integer(B). { A = createLimitNode(pCxt, B, NULL); } +slimit_clause_opt(A) ::= SLIMIT unsigned_integer(B) SOFFSET unsigned_integer(C). { A = createLimitNode(pCxt, B, C); } +slimit_clause_opt(A) ::= SLIMIT unsigned_integer(C) NK_COMMA unsigned_integer(B). { A = createLimitNode(pCxt, B, C); } limit_clause_opt(A) ::= . { A = NULL; } -limit_clause_opt(A) ::= LIMIT NK_INTEGER(B). { A = createLimitNode(pCxt, &B, NULL); } -limit_clause_opt(A) ::= LIMIT NK_INTEGER(B) OFFSET NK_INTEGER(C). { A = createLimitNode(pCxt, &B, &C); } -limit_clause_opt(A) ::= LIMIT NK_INTEGER(C) NK_COMMA NK_INTEGER(B). { A = createLimitNode(pCxt, &B, &C); } +limit_clause_opt(A) ::= LIMIT unsigned_integer(B). { A = createLimitNode(pCxt, B, NULL); } +limit_clause_opt(A) ::= LIMIT unsigned_integer(B) OFFSET unsigned_integer(C). { A = createLimitNode(pCxt, B, C); } +limit_clause_opt(A) ::= LIMIT unsigned_integer(C) NK_COMMA unsigned_integer(B). { A = createLimitNode(pCxt, B, C); } /************************************************ subquery ************************************************************/ subquery(A) ::= NK_LP(B) query_expression(C) NK_RP(D). { A = createRawExprNodeExt(pCxt, &B, &D, C); } diff --git a/source/libs/parser/src/parAstCreater.c b/source/libs/parser/src/parAstCreater.c index fa656667af..708c8aa6eb 100644 --- a/source/libs/parser/src/parAstCreater.c +++ b/source/libs/parser/src/parAstCreater.c @@ -1287,14 +1287,14 @@ _err: return NULL; } -SNode* createLimitNode(SAstCreateContext* pCxt, const SToken* pLimit, const SToken* pOffset) { +SNode* createLimitNode(SAstCreateContext* pCxt, SNode* pLimit, SNode* pOffset) { CHECK_PARSER_STATUS(pCxt); SLimitNode* limitNode = NULL; pCxt->errCode = nodesMakeNode(QUERY_NODE_LIMIT, (SNode**)&limitNode); CHECK_MAKE_NODE(limitNode); - limitNode->limit = taosStr2Int64(pLimit->z, NULL, 10); + limitNode->limit = (SValueNode*)pLimit; if (NULL != pOffset) { - limitNode->offset = taosStr2Int64(pOffset->z, NULL, 10); + limitNode->offset = (SValueNode*)pOffset; } return (SNode*)limitNode; _err: diff --git a/source/libs/parser/src/parInsertSql.c b/source/libs/parser/src/parInsertSql.c index 67ad874b15..5ff6e4f555 100644 --- a/source/libs/parser/src/parInsertSql.c +++ b/source/libs/parser/src/parInsertSql.c @@ -2751,6 +2751,9 @@ static int32_t parseInsertBody(SInsertParseContext* pCxt, SVnodeModifyOpStmt* pS if (TSDB_CODE_SUCCESS == code && hasData) { code = parseInsertTableClause(pCxt, pStmt, &token); } + if (TSDB_CODE_PAR_TABLE_NOT_EXIST == code && pCxt->preCtbname) { + code = TSDB_CODE_TSC_STMT_TBNAME_ERROR; + } } if (TSDB_CODE_SUCCESS == code && !pCxt->missCache) { diff --git a/source/libs/parser/src/parTranslater.c b/source/libs/parser/src/parTranslater.c index 86490ce4c8..2a4fdb0136 100755 --- a/source/libs/parser/src/parTranslater.c +++ b/source/libs/parser/src/parTranslater.c @@ -4729,16 +4729,20 @@ static int32_t translateJoinTable(STranslateContext* pCxt, SJoinTableNode* pJoin return buildInvalidOperationMsg(&pCxt->msgBuf, "WINDOW_OFFSET required for WINDOW join"); } - if (TSDB_CODE_SUCCESS == code && NULL != pJoinTable->pJLimit) { + if (TSDB_CODE_SUCCESS == code && NULL != pJoinTable->pJLimit && NULL != ((SLimitNode*)pJoinTable->pJLimit)->limit) { if (*pSType != JOIN_STYPE_ASOF && *pSType != JOIN_STYPE_WIN) { return buildInvalidOperationMsgExt(&pCxt->msgBuf, "JLIMIT not supported for %s join", getFullJoinTypeString(type, *pSType)); } SLimitNode* pJLimit = (SLimitNode*)pJoinTable->pJLimit; - if (pJLimit->limit > JOIN_JLIMIT_MAX_VALUE || pJLimit->limit < 0) { + code = translateExpr(pCxt, (SNode**)&pJLimit->limit); + if (TSDB_CODE_SUCCESS != code) { + return code; + } + if (pJLimit->limit->datum.i > JOIN_JLIMIT_MAX_VALUE || pJLimit->limit->datum.i < 0) { return buildInvalidOperationMsg(&pCxt->msgBuf, "JLIMIT value is out of valid range [0, 1024]"); } - if (0 == pJLimit->limit) { + if (0 == pJLimit->limit->datum.i) { pCurrSmt->isEmptyResult = true; } } @@ -6994,16 +6998,32 @@ static int32_t translateFrom(STranslateContext* pCxt, SNode** pTable) { } static int32_t checkLimit(STranslateContext* pCxt, SSelectStmt* pSelect) { - if ((NULL != pSelect->pLimit && pSelect->pLimit->offset < 0) || - (NULL != pSelect->pSlimit && pSelect->pSlimit->offset < 0)) { + int32_t code = 0; + + if (pSelect->pLimit && pSelect->pLimit->limit) { + code = translateExpr(pCxt, (SNode**)&pSelect->pLimit->limit); + } + if (TSDB_CODE_SUCCESS == code && pSelect->pLimit && pSelect->pLimit->offset) { + code = translateExpr(pCxt, (SNode**)&pSelect->pLimit->offset); + } + if (TSDB_CODE_SUCCESS == code && pSelect->pSlimit && pSelect->pSlimit->limit) { + code = translateExpr(pCxt, (SNode**)&pSelect->pSlimit->limit); + } + if (TSDB_CODE_SUCCESS == code && pSelect->pSlimit && pSelect->pSlimit->offset) { + code = translateExpr(pCxt, (SNode**)&pSelect->pSlimit->offset); + } + + if ((TSDB_CODE_SUCCESS == code) && + ((NULL != pSelect->pLimit && pSelect->pLimit->offset && pSelect->pLimit->offset->datum.i < 0) || + (NULL != pSelect->pSlimit && pSelect->pSlimit->offset && pSelect->pSlimit->offset->datum.i < 0))) { return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_OFFSET_LESS_ZERO); } - if (NULL != pSelect->pSlimit && (NULL == pSelect->pPartitionByList && NULL == pSelect->pGroupByList)) { + if ((TSDB_CODE_SUCCESS == code) && NULL != pSelect->pSlimit && (NULL == pSelect->pPartitionByList && NULL == pSelect->pGroupByList)) { return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_SLIMIT_LEAK_PARTITION_GROUP_BY); } - return TSDB_CODE_SUCCESS; + return code; } static int32_t createPrimaryKeyColByTable(STranslateContext* pCxt, STableNode* pTable, SNode** pPrimaryKey) { @@ -7482,7 +7502,14 @@ static int32_t translateSetOperOrderBy(STranslateContext* pCxt, SSetOperator* pS } static int32_t checkSetOperLimit(STranslateContext* pCxt, SLimitNode* pLimit) { - if ((NULL != pLimit && pLimit->offset < 0)) { + int32_t code = 0; + if (pLimit && pLimit->limit) { + code = translateExpr(pCxt, (SNode**)&pLimit->limit); + } + if (TSDB_CODE_SUCCESS == code && pLimit && pLimit->offset) { + code = translateExpr(pCxt, (SNode**)&pLimit->offset); + } + if (TSDB_CODE_SUCCESS == code && (NULL != pLimit && NULL != pLimit->offset && pLimit->offset->datum.i < 0)) { return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_OFFSET_LESS_ZERO); } return TSDB_CODE_SUCCESS; diff --git a/source/libs/parser/test/parAlterToBalanceTest.cpp b/source/libs/parser/test/parAlterToBalanceTest.cpp index a81076557e..172c729f34 100644 --- a/source/libs/parser/test/parAlterToBalanceTest.cpp +++ b/source/libs/parser/test/parAlterToBalanceTest.cpp @@ -196,23 +196,16 @@ TEST_F(ParserInitialATest, alterDatabase) { setAlterDbFsync(200); setAlterDbWal(1); setAlterDbCacheModel(TSDB_CACHE_MODEL_LAST_ROW); -#ifndef _STORAGE - setAlterDbSttTrigger(-1); -#else setAlterDbSttTrigger(16); -#endif setAlterDbBuffer(16); setAlterDbPages(128); setAlterDbReplica(3); setAlterDbWalRetentionPeriod(10); setAlterDbWalRetentionSize(20); -#ifndef _STORAGE run("ALTER DATABASE test BUFFER 16 CACHEMODEL 'last_row' CACHESIZE 32 WAL_FSYNC_PERIOD 200 KEEP 10 PAGES 128 " - "REPLICA 3 WAL_LEVEL 1 WAL_RETENTION_PERIOD 10 WAL_RETENTION_SIZE 20"); -#else - run("ALTER DATABASE test BUFFER 16 CACHEMODEL 'last_row' CACHESIZE 32 WAL_FSYNC_PERIOD 200 KEEP 10 PAGES 128 " - "REPLICA 3 WAL_LEVEL 1 STT_TRIGGER 16 WAL_RETENTION_PERIOD 10 WAL_RETENTION_SIZE 20"); -#endif + "REPLICA 3 WAL_LEVEL 1 " + "STT_TRIGGER 16 " + "WAL_RETENTION_PERIOD 10 WAL_RETENTION_SIZE 20"); clearAlterDbReq(); initAlterDb("test"); diff --git a/source/libs/parser/test/parInitialCTest.cpp b/source/libs/parser/test/parInitialCTest.cpp index 7d778d9c0b..2412bf4e78 100644 --- a/source/libs/parser/test/parInitialCTest.cpp +++ b/source/libs/parser/test/parInitialCTest.cpp @@ -292,11 +292,7 @@ TEST_F(ParserInitialCTest, createDatabase) { setDbWalRetentionSize(-1); setDbWalRollPeriod(10); setDbWalSegmentSize(20); -#ifndef _STORAGE setDbSstTrigger(1); -#else - setDbSstTrigger(16); -#endif setDbHashPrefix(3); setDbHashSuffix(4); setDbTsdbPageSize(32); @@ -354,7 +350,7 @@ TEST_F(ParserInitialCTest, createDatabase) { "WAL_RETENTION_SIZE -1 " "WAL_ROLL_PERIOD 10 " "WAL_SEGMENT_SIZE 20 " - "STT_TRIGGER 16 " + "STT_TRIGGER 1 " "TABLE_PREFIX 3 " "TABLE_SUFFIX 4 " "TSDB_PAGESIZE 32"); diff --git a/source/libs/planner/src/planOptimizer.c b/source/libs/planner/src/planOptimizer.c index b9f5d42604..e7ea028e5a 100644 --- a/source/libs/planner/src/planOptimizer.c +++ b/source/libs/planner/src/planOptimizer.c @@ -3705,8 +3705,14 @@ static int32_t rewriteTailOptCreateLimit(SNode* pLimit, SNode* pOffset, SNode** if (NULL == pLimitNode) { return code; } - pLimitNode->limit = NULL == pLimit ? -1 : ((SValueNode*)pLimit)->datum.i; - pLimitNode->offset = NULL == pOffset ? 0 : ((SValueNode*)pOffset)->datum.i; + code = nodesMakeValueNodeFromInt64(NULL == pLimit ? -1 : ((SValueNode*)pLimit)->datum.i, (SNode**)&pLimitNode->limit); + if (TSDB_CODE_SUCCESS != code) { + return code; + } + code = nodesMakeValueNodeFromInt64(NULL == pOffset ? 0 : ((SValueNode*)pOffset)->datum.i, (SNode**)&pLimitNode->offset); + if (TSDB_CODE_SUCCESS != code) { + return code; + } *pOutput = (SNode*)pLimitNode; return TSDB_CODE_SUCCESS; } diff --git a/source/libs/planner/src/planPhysiCreater.c b/source/libs/planner/src/planPhysiCreater.c index 31d51fad9b..9513e90c50 100644 --- a/source/libs/planner/src/planPhysiCreater.c +++ b/source/libs/planner/src/planPhysiCreater.c @@ -1823,9 +1823,9 @@ static int32_t createAggPhysiNode(SPhysiPlanContext* pCxt, SNodeList* pChildren, if (NULL == pAgg) { return terrno; } - if (pAgg->node.pSlimit) { + if (pAgg->node.pSlimit && ((SLimitNode*)pAgg->node.pSlimit)->limit) { pSubPlan->dynamicRowThreshold = true; - pSubPlan->rowsThreshold = ((SLimitNode*)pAgg->node.pSlimit)->limit; + pSubPlan->rowsThreshold = ((SLimitNode*)pAgg->node.pSlimit)->limit->datum.i; } pAgg->mergeDataBlock = (GROUP_ACTION_KEEP == pAggLogicNode->node.groupAction ? false : true); diff --git a/source/libs/planner/src/planSpliter.c b/source/libs/planner/src/planSpliter.c index fe66023332..03de345936 100644 --- a/source/libs/planner/src/planSpliter.c +++ b/source/libs/planner/src/planSpliter.c @@ -133,8 +133,12 @@ static int32_t splCreateExchangeNode(SSplitContext* pCxt, SLogicNode* pChild, SE nodesDestroyNode((SNode*)pExchange); return code; } - ((SLimitNode*)pChild->pLimit)->limit += ((SLimitNode*)pChild->pLimit)->offset; - ((SLimitNode*)pChild->pLimit)->offset = 0; + if (((SLimitNode*)pChild->pLimit)->limit && ((SLimitNode*)pChild->pLimit)->offset) { + ((SLimitNode*)pChild->pLimit)->limit->datum.i += ((SLimitNode*)pChild->pLimit)->offset->datum.i; + } + if (((SLimitNode*)pChild->pLimit)->offset) { + ((SLimitNode*)pChild->pLimit)->offset->datum.i = 0; + } } *pOutput = pExchange; @@ -679,8 +683,12 @@ static int32_t stbSplCreateMergeNode(SSplitContext* pCxt, SLogicSubplan* pSubpla if (TSDB_CODE_SUCCESS == code && NULL != pSplitNode->pLimit) { pMerge->node.pLimit = NULL; code = nodesCloneNode(pSplitNode->pLimit, &pMerge->node.pLimit); - ((SLimitNode*)pSplitNode->pLimit)->limit += ((SLimitNode*)pSplitNode->pLimit)->offset; - ((SLimitNode*)pSplitNode->pLimit)->offset = 0; + if (((SLimitNode*)pSplitNode->pLimit)->limit && ((SLimitNode*)pSplitNode->pLimit)->offset) { + ((SLimitNode*)pSplitNode->pLimit)->limit->datum.i += ((SLimitNode*)pSplitNode->pLimit)->offset->datum.i; + } + if (((SLimitNode*)pSplitNode->pLimit)->offset) { + ((SLimitNode*)pSplitNode->pLimit)->offset->datum.i = 0; + } } if (TSDB_CODE_SUCCESS == code) { code = stbSplRewriteFromMergeNode(pMerge, pSplitNode); @@ -1427,8 +1435,12 @@ static int32_t stbSplGetSplitNodeForScan(SStableSplitInfo* pInfo, SLogicNode** p if (NULL == (*pSplitNode)->pLimit) { return code; } - ((SLimitNode*)pInfo->pSplitNode->pLimit)->limit += ((SLimitNode*)pInfo->pSplitNode->pLimit)->offset; - ((SLimitNode*)pInfo->pSplitNode->pLimit)->offset = 0; + if (((SLimitNode*)pInfo->pSplitNode->pLimit)->limit && ((SLimitNode*)pInfo->pSplitNode->pLimit)->offset) { + ((SLimitNode*)pInfo->pSplitNode->pLimit)->limit->datum.i += ((SLimitNode*)pInfo->pSplitNode->pLimit)->offset->datum.i; + } + if (((SLimitNode*)pInfo->pSplitNode->pLimit)->offset) { + ((SLimitNode*)pInfo->pSplitNode->pLimit)->offset->datum.i = 0; + } } } return TSDB_CODE_SUCCESS; @@ -1579,8 +1591,12 @@ static int32_t stbSplSplitMergeScanNode(SSplitContext* pCxt, SLogicSubplan* pSub int32_t code = stbSplCreateMergeScanNode(pScan, &pMergeScan, &pMergeKeys); if (TSDB_CODE_SUCCESS == code) { if (NULL != pMergeScan->pLimit) { - ((SLimitNode*)pMergeScan->pLimit)->limit += ((SLimitNode*)pMergeScan->pLimit)->offset; - ((SLimitNode*)pMergeScan->pLimit)->offset = 0; + if (((SLimitNode*)pMergeScan->pLimit)->limit && ((SLimitNode*)pMergeScan->pLimit)->offset) { + ((SLimitNode*)pMergeScan->pLimit)->limit->datum.i += ((SLimitNode*)pMergeScan->pLimit)->offset->datum.i; + } + if (((SLimitNode*)pMergeScan->pLimit)->offset) { + ((SLimitNode*)pMergeScan->pLimit)->offset->datum.i = 0; + } } code = stbSplCreateMergeNode(pCxt, pSubplan, (SLogicNode*)pScan, pMergeKeys, pMergeScan, groupSort, true); } diff --git a/source/libs/planner/src/planUtil.c b/source/libs/planner/src/planUtil.c index f03e2d8ab0..1cc8c93d29 100644 --- a/source/libs/planner/src/planUtil.c +++ b/source/libs/planner/src/planUtil.c @@ -592,8 +592,12 @@ int32_t cloneLimit(SLogicNode* pParent, SLogicNode* pChild, uint8_t cloneWhat, b if (pParent->pLimit && (cloneWhat & CLONE_LIMIT)) { code = nodesCloneNode(pParent->pLimit, (SNode**)&pLimit); if (TSDB_CODE_SUCCESS == code) { - pLimit->limit += pLimit->offset; - pLimit->offset = 0; + if (pLimit->limit && pLimit->offset) { + pLimit->limit->datum.i += pLimit->offset->datum.i; + } + if (pLimit->offset) { + pLimit->offset->datum.i = 0; + } cloned = true; } } @@ -601,8 +605,12 @@ int32_t cloneLimit(SLogicNode* pParent, SLogicNode* pChild, uint8_t cloneWhat, b if (pParent->pSlimit && (cloneWhat & CLONE_SLIMIT)) { code = nodesCloneNode(pParent->pSlimit, (SNode**)&pSlimit); if (TSDB_CODE_SUCCESS == code) { - pSlimit->limit += pSlimit->offset; - pSlimit->offset = 0; + if (pSlimit->limit && pSlimit->offset) { + pSlimit->limit->datum.i += pSlimit->offset->datum.i; + } + if (pSlimit->offset) { + pSlimit->offset->datum.i = 0; + } cloned = true; } } diff --git a/source/libs/qcom/src/queryUtil.c b/source/libs/qcom/src/queryUtil.c index 6d637bee98..85b1c543c0 100644 --- a/source/libs/qcom/src/queryUtil.c +++ b/source/libs/qcom/src/queryUtil.c @@ -67,24 +67,29 @@ static bool doValidateSchema(SSchema* pSchema, int32_t numOfCols, int32_t maxLen for (int32_t i = 0; i < numOfCols; ++i) { // 1. valid types if (!isValidDataType(pSchema[i].type)) { + qError("The %d col/tag data type error, type:%d", i, pSchema[i].type); return false; } // 2. valid length for each type if (pSchema[i].type == TSDB_DATA_TYPE_BINARY || pSchema[i].type == TSDB_DATA_TYPE_VARBINARY) { if (pSchema[i].bytes > TSDB_MAX_BINARY_LEN) { + qError("The %d col/tag var data len error, type:%d, len:%d", i, pSchema[i].type, pSchema[i].bytes); return false; } } else if (pSchema[i].type == TSDB_DATA_TYPE_NCHAR) { if (pSchema[i].bytes > TSDB_MAX_NCHAR_LEN) { + qError("The %d col/tag nchar data len error, len:%d", i, pSchema[i].bytes); return false; } } else if (pSchema[i].type == TSDB_DATA_TYPE_GEOMETRY) { if (pSchema[i].bytes > TSDB_MAX_GEOMETRY_LEN) { + qError("The %d col/tag geometry data len error, len:%d", i, pSchema[i].bytes); return false; } } else { if (pSchema[i].bytes != tDataTypes[pSchema[i].type].bytes) { + qError("The %d col/tag data len error, type:%d, len:%d", i, pSchema[i].type, pSchema[i].bytes); return false; } } @@ -92,6 +97,7 @@ static bool doValidateSchema(SSchema* pSchema, int32_t numOfCols, int32_t maxLen // 3. valid column names for (int32_t j = i + 1; j < numOfCols; ++j) { if (strncmp(pSchema[i].name, pSchema[j].name, sizeof(pSchema[i].name) - 1) == 0) { + qError("The %d col/tag name %s is same with %d col/tag name %s", i, pSchema[i].name, j, pSchema[j].name); return false; } } @@ -104,23 +110,28 @@ static bool doValidateSchema(SSchema* pSchema, int32_t numOfCols, int32_t maxLen bool tIsValidSchema(struct SSchema* pSchema, int32_t numOfCols, int32_t numOfTags) { if (!pSchema || !VALIDNUMOFCOLS(numOfCols)) { + qError("invalid numOfCols: %d", numOfCols); return false; } if (!VALIDNUMOFTAGS(numOfTags)) { + qError("invalid numOfTags: %d", numOfTags); return false; } /* first column must be the timestamp, which is a primary key */ if (pSchema[0].type != TSDB_DATA_TYPE_TIMESTAMP) { + qError("invalid first column type: %d", pSchema[0].type); return false; } if (!doValidateSchema(pSchema, numOfCols, TSDB_MAX_BYTES_PER_ROW)) { + qError("validate schema columns failed"); return false; } if (!doValidateSchema(&pSchema[numOfCols], numOfTags, TSDB_MAX_TAGS_LEN)) { + qError("validate schema tags failed"); return false; } diff --git a/source/libs/qworker/src/qworker.c b/source/libs/qworker/src/qworker.c index 5dd43ca064..1df8dcda95 100644 --- a/source/libs/qworker/src/qworker.c +++ b/source/libs/qworker/src/qworker.c @@ -559,13 +559,13 @@ int32_t qwHandlePrePhaseEvents(QW_FPARAMS_DEF, int8_t phase, SQWPhaseInput *inpu QW_ERR_JRET(ctx->pJobInfo->errCode); } - if (atomic_load_8((int8_t *)&ctx->queryEnd) && !ctx->dynamicTask) { - QW_TASK_ELOG("query already end, phase:%d", phase); - QW_ERR_JRET(TSDB_CODE_QW_MSG_ERROR); - } - switch (phase) { case QW_PHASE_PRE_QUERY: { + if (atomic_load_8((int8_t *)&ctx->queryEnd) && !ctx->dynamicTask) { + QW_TASK_ELOG("query already end, phase:%d", phase); + QW_ERR_JRET(TSDB_CODE_QW_MSG_ERROR); + } + if (QW_EVENT_PROCESSED(ctx, QW_EVENT_DROP)) { QW_TASK_ELOG("task already dropped at phase %s", qwPhaseStr(phase)); QW_ERR_JRET(TSDB_CODE_QRY_TASK_STATUS_ERROR); @@ -592,6 +592,11 @@ int32_t qwHandlePrePhaseEvents(QW_FPARAMS_DEF, int8_t phase, SQWPhaseInput *inpu break; } case QW_PHASE_PRE_FETCH: { + if (atomic_load_8((int8_t *)&ctx->queryEnd) && !ctx->dynamicTask) { + QW_TASK_ELOG("query already end, phase:%d", phase); + QW_ERR_JRET(TSDB_CODE_QW_MSG_ERROR); + } + if (QW_EVENT_PROCESSED(ctx, QW_EVENT_DROP) || QW_EVENT_RECEIVED(ctx, QW_EVENT_DROP)) { QW_TASK_WLOG("task dropping or already dropped, phase:%s", qwPhaseStr(phase)); QW_ERR_JRET(ctx->rspCode); @@ -614,6 +619,12 @@ int32_t qwHandlePrePhaseEvents(QW_FPARAMS_DEF, int8_t phase, SQWPhaseInput *inpu break; } case QW_PHASE_PRE_CQUERY: { + if (atomic_load_8((int8_t *)&ctx->queryEnd) && !ctx->dynamicTask) { + QW_TASK_ELOG("query already end, phase:%d", phase); + code = ctx->rspCode; + goto _return; + } + if (QW_EVENT_PROCESSED(ctx, QW_EVENT_DROP)) { QW_TASK_WLOG("task already dropped, phase:%s", qwPhaseStr(phase)); QW_ERR_JRET(ctx->rspCode); diff --git a/source/libs/sync/src/syncMain.c b/source/libs/sync/src/syncMain.c index 4862a4b963..0933fd48c7 100644 --- a/source/libs/sync/src/syncMain.c +++ b/source/libs/sync/src/syncMain.c @@ -3428,7 +3428,8 @@ _out:; ths->pLogBuf->matchIndex, ths->pLogBuf->endIndex); if (code == 0 && ths->state == TAOS_SYNC_STATE_ASSIGNED_LEADER) { - TAOS_CHECK_RETURN(syncNodeUpdateAssignedCommitIndex(ths, matchIndex)); + int64_t index = syncNodeUpdateAssignedCommitIndex(ths, matchIndex); + sTrace("vgId:%d, update assigned commit index %" PRId64 "", ths->vgId, index); if (ths->fsmState != SYNC_FSM_STATE_INCOMPLETE && syncLogBufferCommit(ths->pLogBuf, ths, ths->assignedCommitIndex) < 0) { diff --git a/source/libs/sync/src/syncPipeline.c b/source/libs/sync/src/syncPipeline.c index 3022a1f8ac..18252db9ee 100644 --- a/source/libs/sync/src/syncPipeline.c +++ b/source/libs/sync/src/syncPipeline.c @@ -732,7 +732,11 @@ int32_t syncFsmExecute(SSyncNode* pNode, SSyncFSM* pFsm, ESyncState role, SyncTe pEntry->index, pEntry->term, TMSG_INFO(pEntry->originalRpcType), code, retry); if (retry) { taosMsleep(10); - sError("vgId:%d, retry on fsm commit since %s. index:%" PRId64, pNode->vgId, tstrerror(code), pEntry->index); + if (code == TSDB_CODE_OUT_OF_RPC_MEMORY_QUEUE) { + sError("vgId:%d, failed to execute fsm since %s. index:%" PRId64, pNode->vgId, terrstr(), pEntry->index); + } else { + sDebug("vgId:%d, retry on fsm commit since %s. index:%" PRId64, pNode->vgId, terrstr(), pEntry->index); + } } } while (retry); diff --git a/source/libs/tcs/test/tcsTest.cpp b/source/libs/tcs/test/tcsTest.cpp index 40d9eac7a0..86f2b70486 100644 --- a/source/libs/tcs/test/tcsTest.cpp +++ b/source/libs/tcs/test/tcsTest.cpp @@ -234,6 +234,13 @@ TEST(TcsTest, InterfaceTest) { // TEST(TcsTest, DISABLED_InterfaceNonBlobTest) { TEST(TcsTest, InterfaceNonBlobTest) { +#ifndef TD_ENTERPRISE + // NOTE: this test case will coredump for community edition of taos + // thus we bypass this test case for the moment + // code = tcsGetObjectBlock(object_name, 0, size, check, &pBlock); + // tcsGetObjectBlock succeeded but pBlock is nullptr + // which results in nullptr-access-coredump shortly after +#else int code = 0; bool check = false; bool withcp = false; @@ -348,4 +355,5 @@ TEST(TcsTest, InterfaceNonBlobTest) { GTEST_ASSERT_EQ(code, 0); tcsUninit(); +#endif } diff --git a/source/util/CMakeLists.txt b/source/util/CMakeLists.txt index 2633bb3268..d606d83712 100644 --- a/source/util/CMakeLists.txt +++ b/source/util/CMakeLists.txt @@ -17,10 +17,6 @@ else() MESSAGE(STATUS "enable assert core") endif(${ASSERT_NOT_CORE}) -if(${BUILD_WITH_ANALYSIS}) - add_definitions(-DUSE_ANALYTICS) -endif() - target_include_directories( util PUBLIC "${TD_SOURCE_DIR}/include/util" diff --git a/source/util/src/tqueue.c b/source/util/src/tqueue.c index f531d9ad61..0b4ed6dbc2 100644 --- a/source/util/src/tqueue.c +++ b/source/util/src/tqueue.c @@ -14,14 +14,16 @@ */ #define _DEFAULT_SOURCE -#include "tqueue.h" #include "taoserror.h" #include "tlog.h" +#include "tqueue.h" #include "tutil.h" int64_t tsQueueMemoryAllowed = 0; int64_t tsQueueMemoryUsed = 0; +int64_t tsApplyMemoryAllowed = 0; +int64_t tsApplyMemoryUsed = 0; struct STaosQueue { STaosQnode *head; STaosQnode *tail; @@ -148,20 +150,34 @@ int64_t taosQueueMemorySize(STaosQueue *queue) { } int32_t taosAllocateQitem(int32_t size, EQItype itype, int64_t dataSize, void **item) { - int64_t alloced = atomic_add_fetch_64(&tsQueueMemoryUsed, size + dataSize); - if (alloced > tsQueueMemoryAllowed) { - if (itype == RPC_QITEM) { + int64_t alloced = -1; + + if (itype == RPC_QITEM) { + alloced = atomic_add_fetch_64(&tsQueueMemoryUsed, size + dataSize); + if (alloced > tsQueueMemoryAllowed) { uError("failed to alloc qitem, size:%" PRId64 " alloc:%" PRId64 " allowed:%" PRId64, size + dataSize, alloced, tsQueueMemoryAllowed); (void)atomic_sub_fetch_64(&tsQueueMemoryUsed, size + dataSize); return (terrno = TSDB_CODE_OUT_OF_RPC_MEMORY_QUEUE); } + } else if (itype == APPLY_QITEM) { + alloced = atomic_add_fetch_64(&tsApplyMemoryUsed, size + dataSize); + if (alloced > tsApplyMemoryAllowed) { + uDebug("failed to alloc qitem, size:%" PRId64 " alloc:%" PRId64 " allowed:%" PRId64, size + dataSize, alloced, + tsApplyMemoryAllowed); + (void)atomic_sub_fetch_64(&tsApplyMemoryUsed, size + dataSize); + return (terrno = TSDB_CODE_OUT_OF_RPC_MEMORY_QUEUE); + } } *item = NULL; STaosQnode *pNode = taosMemoryCalloc(1, sizeof(STaosQnode) + size); if (pNode == NULL) { - (void)atomic_sub_fetch_64(&tsQueueMemoryUsed, size + dataSize); + if (itype == RPC_QITEM) { + (void)atomic_sub_fetch_64(&tsQueueMemoryUsed, size + dataSize); + } else if (itype == APPLY_QITEM) { + (void)atomic_sub_fetch_64(&tsApplyMemoryUsed, size + dataSize); + } return terrno; } @@ -178,7 +194,12 @@ void taosFreeQitem(void *pItem) { if (pItem == NULL) return; STaosQnode *pNode = (STaosQnode *)((char *)pItem - sizeof(STaosQnode)); - int64_t alloced = atomic_sub_fetch_64(&tsQueueMemoryUsed, pNode->size + pNode->dataSize); + int64_t alloced = -1; + if (pNode->itype == RPC_QITEM) { + alloced = atomic_sub_fetch_64(&tsQueueMemoryUsed, pNode->size + pNode->dataSize); + } else if (pNode->itype == APPLY_QITEM) { + alloced = atomic_sub_fetch_64(&tsApplyMemoryUsed, pNode->size + pNode->dataSize); + } uTrace("item:%p, node:%p is freed, alloc:%" PRId64, pItem, pNode, alloced); taosMemoryFree(pNode); diff --git a/source/util/test/CMakeLists.txt b/source/util/test/CMakeLists.txt index ec05a4e415..768e465fea 100644 --- a/source/util/test/CMakeLists.txt +++ b/source/util/test/CMakeLists.txt @@ -142,10 +142,6 @@ target_include_directories( PRIVATE "${CMAKE_CURRENT_SOURCE_DIR}/../inc" ) -IF(COMPILER_SUPPORT_AVX2) - MESSAGE(STATUS "AVX2 instructions is ACTIVATED") - set_source_files_properties(decompressTest.cpp PROPERTIES COMPILE_FLAGS -mavx2) -ENDIF() add_executable(decompressTest "decompressTest.cpp") target_link_libraries(decompressTest os util common gtest_main) add_test( diff --git a/source/util/test/decompressTest.cpp b/source/util/test/decompressTest.cpp index e508c489df..b1f7f7e85c 100644 --- a/source/util/test/decompressTest.cpp +++ b/source/util/test/decompressTest.cpp @@ -524,23 +524,20 @@ static void decompressBasicTest(size_t dataSize, const CompF& compress, const De decltype(origData) decompData(origData.size()); // test simple implementation without SIMD instructions - tsSIMDEnable = 0; + tsAVX2Supported = 0; cnt = decompress(compData.data(), compData.size(), decompData.size(), decompData.data(), decompData.size(), ONE_STAGE_COMP, nullptr, 0); ASSERT_EQ(cnt, compData.size() - 1); EXPECT_EQ(origData, decompData); -#ifdef __AVX2__ - if (DataTypeSupportAvx::value) { + taosGetSystemInfo(); + if (DataTypeSupportAvx::value && tsAVX2Supported) { // test AVX2 implementation - tsSIMDEnable = 1; - tsAVX2Supported = 1; cnt = decompress(compData.data(), compData.size(), decompData.size(), decompData.data(), decompData.size(), ONE_STAGE_COMP, nullptr, 0); ASSERT_EQ(cnt, compData.size() - 1); EXPECT_EQ(origData, decompData); } -#endif } template @@ -557,7 +554,7 @@ static void decompressPerfTest(const char* typname, const CompF& compress, const << "; Compression ratio: " << 1.0 * (compData.size() - 1) / cnt << "\n"; decltype(origData) decompData(origData.size()); - tsSIMDEnable = 0; + tsAVX2Supported = 0; auto ms = measureRunTime( [&]() { decompress(compData.data(), compData.size(), decompData.size(), decompData.data(), decompData.size(), @@ -567,10 +564,8 @@ static void decompressPerfTest(const char* typname, const CompF& compress, const std::cout << "Decompression of " << NROUND * DATA_SIZE << " " << typname << " without SIMD costs " << ms << " ms, avg speed: " << NROUND * DATA_SIZE * 1000 / ms << " tuples/s\n"; -#ifdef __AVX2__ - if (DataTypeSupportAvx::value) { - tsSIMDEnable = 1; - tsAVX2Supported = 1; + taosGetSystemInfo(); + if (DataTypeSupportAvx::value && tsAVX2Supported) { ms = measureRunTime( [&]() { decompress(compData.data(), compData.size(), decompData.size(), decompData.data(), decompData.size(), @@ -580,7 +575,6 @@ static void decompressPerfTest(const char* typname, const CompF& compress, const std::cout << "Decompression of " << NROUND * DATA_SIZE << " " << typname << " using AVX2 costs " << ms << " ms, avg speed: " << NROUND * DATA_SIZE * 1000 / ms << " tuples/s\n"; } -#endif } #define RUN_PERF_TEST(typname, comp, decomp, min, max) \ diff --git a/tests/README-CN.md b/tests/README-CN.md new file mode 100644 index 0000000000..ea08e2c3e2 --- /dev/null +++ b/tests/README-CN.md @@ -0,0 +1,232 @@ +# 目录 + +1. [简介](#1-简介) +2. [必备工具](#2-必备工具) +3. [测试指南](#3-测试指南) + - [3.1 单元测试](#31-单元测试) + - [3.2 系统测试](#32-系统测试) + - [3.3 TSIM测试](#33-tsim测试) + - [3.4 冒烟测试](#34-冒烟测试) + - [3.5 混沌测试](#35-混沌测试) + - [3.6 CI测试](#36-ci测试) + +# 1. 简介 + +本手册旨在为开发人员提供有效测试TDengine的全面指导。它分为三个主要部分:简介,必备工具和测试指南。 + +> [!NOTE] +> - 本文档所有的命令和脚本在Linux(Ubuntu 18.04/20.04/22.04)上进行了验证。 +> - 本文档所有的命令和脚本用于在单个主机上运行测试。 + +# 2. 必备工具 + +- 安装Python3 + +```bash +apt install python3 +apt install python3-pip +``` + +- 安装Python依赖工具包 + +```bash +pip3 install pandas psutil fabric2 requests faker simplejson \ + toml pexpect tzlocal distro decorator loguru hyperloglog +``` + +- 安装TDengine的Python连接器 + +```bash +pip3 install taospy taos-ws-py +``` + +- 构建 + +在测试之前,请确保选项“-DBUILD_TOOLS=true -DBUILD_TEST=true -DBUILD_CONTRIB=true”的构建操作已经完成,如果没有,请执行如下命令: + +```bash +cd debug +cmake .. -DBUILD_TOOLS=true -DBUILD_TEST=true -DBUILD_CONTRIB=true +make && make install +``` + +# 3. 测试指南 + +在 `tests` 目录中,TDengine有不同类型的测试。下面是关于如何运行它们以及如何添加新测试用例的简要介绍。 + +### 3.1 单元测试 + +单元测试是最小的可测试单元,用于测试TDengine代码中的函数、方法或类。 + +### 3.1.1 如何运行单个测试用例? + +```bash +cd debug/build/bin +./osTimeTests +``` + +### 3.1.2 如何运行所有测试用例? + +```bash +cd tests/unit-test/ +bash test.sh -e 0 +``` + +### 3.1.3 如何添加测试用例? + +
+ +添加新单元测试用例的详细步骤 + +Google测试框架用于对特定功能模块进行单元测试,请参考以下步骤添加新的测试用例: + +##### a. 创建测试用例文件并开发测试脚本 + +在目标功能模块对应的测试目录下,创建CPP格式的测试文件,编写相应的测试用例。 + +##### b. 更新构建配置 + +修改此目录中的CMakeLists.txt文件, 以确保新的测试文件被包含在编译过程中。配置示例可参考 `source/os/test/CMakeLists.txt` + +##### c. 编译测试代码 + +在项目的根目录下,创建一个编译目录 (例如 debug), 切换到该目录并运行cmake命令 (如 `cmake .. -DBUILD_TEST=1` ) 生成编译文件, +然后运行make命令(如 make)来完成测试代码的编译。 + +##### d. 执行测试 + +在编译目录中找到可执行文件并运行它 (如:`TDengine/debug/build/bin/`)。 + +##### e. 集成用例到CI测试 + +使用add_test命令将新编译的测试用例添加到CI测试集合中,确保新添加的测试用例可以在每次构建运行。 + +
+ +## 3.2 系统测试 + +系统测试是用Python编写的端到端测试用例。其中一些特性仅在企业版中支持和测试,因此在社区版上运行时,它们可能会失败。我们将逐渐通过将用例分成不同的组来解决这个问题。 + +### 3.2.1 如何运行单个测试用例? + +以测试文件 `system-test/2-query/avg.py` 举例,可以使用如下命令运行单个测试用例: + +```bash +cd tests/system-test +python3 ./test.py -f 2-query/avg.py +``` + +### 3.2.2 如何运行所有测试用例? + +```bash +cd tests +./run_all_ci_cases.sh -t python # all python cases +``` + +### 3.2.3 如何添加测试用例? + +
+ +添加新系统测试用例的详细步骤 + +Python测试框架由TDengine团队开发, test.py是测试用例执行和监控的入口程序,使用 `python3 ./test.py -h` 查看更多功能。 + +请参考下面的步骤来添加一个新的测试用例: + +##### a. 创建一个测试用例文件并开发测试用例 + +在目录 `tests/system-test` 下的某个功能目录创建一个测试用例文件, 并参考用例模板 `tests/system-test/0-others/test_case_template.py` 来添加一个新的测试用例。 + +##### b. 执行测试用例 + +使用如下命令执行测试用例, 并确保用例执行成功。 + +``` bash +cd tests/system-test && python3 ./test.py -f 0-others/test_case_template.py +``` + +##### c. 集成用例到CI测试 + +编辑 `tests/parallel_test/cases.task`, 以指定的格式添加测试用例路径。文件的第三列表示是否使用 Address Sanitizer 模式进行测试。 + +```bash +#caseID,rerunTimes,Run with Sanitizer,casePath,caseCommand +,,n,system-test, python3 ./test.py -f 0-others/test_case_template.py +``` + +
+ +## 3.3 TSIM测试 + +在TDengine开发的早期阶段, TDengine团队用C++开发的内部测试框架 TSIM。 + +### 3.3.1 如何运行单个测试用例? + +要运行TSIM测试用例,请执行如下命令: + +```bash +cd tests/script +./test.sh -f tsim/db/basic1.sim +``` + +### 3.3.2 如何运行所有TSIM测试用例? + +```bash +cd tests +./run_all_ci_cases.sh -t legacy # all legacy cases +``` + +### 3.3.3 如何添加TSIM测试用例? + +> [!NOTE] +> TSIM测试框架现已被系统测试弃用,建议在系统测试中增加新的测试用例,请参考 [系统测试](#32-系统测试)。 + +## 3.4 冒烟测试 + +冒烟测试是从系统测试中选择的一组测试用例,也称为基本功能测试,以确保TDengine的关键功能。 + +### 3.4.1 如何运行冒烟测试? + +```bash +cd /root/TDengine/packaging/smokeTest +./test_smoking_selfhost.sh +``` + +### 3.4.2 如何添加冒烟测试用例? + +可以通过更新 `test_smoking_selfhost.sh` 中的 `commands` 变量的值来添加新的case。 + +## 3.5 混沌测试 + +一个简单的工具,以随机的方式执行系统的各种功能测试,期望在没有预定义测试场景的情况下暴露潜在的问题。 + +### 3.5.1 如何运行混沌测试? + +```bash +cd tests/pytest +python3 auto_crash_gen.py +``` + +### 3.5.2 如何增加混沌测试用例? + +1. 添加一个函数,如 `pytest/crash_gen/crash_gen_main.py` 中的 `TaskCreateNewFunction`。 +2. 将 `TaskCreateNewFunction` 集成到 `crash_gen_main.py` 中的 `balance_pickTaskType` 函数中。 + +## 3.6 CI测试 + +CI测试(持续集成测试)是软件开发中的一项重要实践,旨在将代码频繁地自动集成到共享代码库的过程中,构建和测试它以确保代码的质量和稳定性。 + +TDengine CI测试将运行以下三种测试类型中的所有测试用例:单元测试、系统测试和TSIM测试。 + +### 3.6.1 如何运行所有CI测试用例? + +如果这是第一次运行所有CI测试用例,建议添加测试分支,使用如下命令运行: + +```bash +cd tests +./run_all_ci_cases.sh -b main # on main branch +``` + +### 3.6.2 如何添加新的CI测试用例? + +请参考[单元测试](#31-单元测试)、[系统测试](#32-系统测试)和[TSIM测试](#33-tsim测试)部分,了解添加新测试用例的详细步骤,当在上述测试中添加新用例时,它们将在CI测试自动运行。 diff --git a/tests/README.md b/tests/README.md new file mode 100644 index 0000000000..58747d93f7 --- /dev/null +++ b/tests/README.md @@ -0,0 +1,233 @@ +# Table of Contents + +1. [Introduction](#1-introduction) +1. [Prerequisites](#2-prerequisites) +1. [Testing Guide](#3-testing-guide) + - [3.1 Unit Test](#31-unit-test) + - [3.2 System Test](#32-system-test) + - [3.3 Legacy Test](#33-legacy-test) + - [3.4 Smoke Test](#34-smoke-test) + - [3.5 Chaos Test](#35-chaos-test) + - [3.6 CI Test](#36-ci-test) + +# 1. Introduction + +This manual is intended to give developers a comprehensive guidance to test TDengine efficiently. It is divided into three main sections: introduction, prerequisites and testing guide. + +> [!NOTE] +> - The commands and scripts below are verified on Linux (Ubuntu 18.04/20.04/22.04). +> - The commands and steps described below are to run the tests on a single host. + +# 2. Prerequisites + +- Install Python3 + +```bash +apt install python3 +apt install python3-pip +``` + +- Install Python dependencies + +```bash +pip3 install pandas psutil fabric2 requests faker simplejson \ + toml pexpect tzlocal distro decorator loguru hyperloglog +``` + +- Install Python connector for TDengine + +```bash +pip3 install taospy taos-ws-py +``` + +- Building + +Before testing, please make sure the building operation with option `-DBUILD_TOOLS=true -DBUILD_TEST=true -DBUILD_CONTRIB=true` has been done, otherwise execute commands below: + +```bash +cd debug +cmake .. -DBUILD_TOOLS=true -DBUILD_TEST=true -DBUILD_CONTRIB=true +make && make install +``` + +# 3. Testing Guide + +In `tests` directory, there are different types of tests for TDengine. Below is a brief introduction about how to run them and how to add new cases. + +### 3.1 Unit Test + +Unit tests are the smallest testable units, which are used to test functions, methods or classes in TDengine code. + +### 3.1.1 How to run single test case? + +```bash +cd debug/build/bin +./osTimeTests +``` + +### 3.1.2 How to run all unit test cases? + +```bash +cd tests/unit-test/ +bash test.sh -e 0 +``` + +### 3.1.3 How to add new cases? + +
+ +Detailed steps to add new unit test case + +The Google test framwork is used for unit testing to specific function module, please refer to steps below to add a new test case: + +##### a. Create test case file and develop the test scripts + +In the test directory corresponding to the target function module, create test files in CPP format and write corresponding test cases. + +##### b. Update build configuration + +Modify the CMakeLists.txt file in this directory to ensure that the new test files are properly included in the compilation process. See the `source/os/test/CMakeLists.txt` file for configuration examples. + +##### c. Compile test code + +In the root directory of the project, create a compilation directory (e.g., debug), switch to the directory and run CMake commands (e.g., `cmake .. -DBUILD_TEST=1`) to generate a compilation file, + +and then run a compilation command (e.g. make) to complete the compilation of the test code. + +##### d. Execute the test program + +Find the executable file in the compiled directory(e.g. `TDengine/debug/build/bin/`) and run it. + +##### e. Integrate into CI tests + +Use the add_test command to add new compiled test cases into CI test collection, ensure that the new added test cases can be run for every build. + +
+ +## 3.2 System Test + +System tests are end-to-end test cases written in Python from a system point of view. Some of them are designed to test features only in enterprise ediiton, so when running on community edition, they may fail. We'll fix this issue by separating the cases into different gruops in the future. + +### 3.2.1 How to run a single test case? + +Take test file `system-test/2-query/avg.py` for example: + +```bash +cd tests/system-test +python3 ./test.py -f 2-query/avg.py +``` + +### 3.2.2 How to run all system test cases? + +```bash +cd tests +./run_all_ci_cases.sh -t python # all python cases +``` + +### 3.2.3 How to add new case? + +
+ +Detailed steps to add new system test case + +The Python test framework is developed by TDengine team, and test.py is the test case execution and monitoring of the entry program, Use `python3 ./test.py -h` to view more features. + +Please refer to steps below for how to add a new test case: + +##### a. Create a test case file and develop the test cases + +Create a file in `tests/system-test` containing each functional directory and refer to the use case template `tests/system-test/0-others/test_case_template.py` to add a new test case. + +##### b. Execute the test case + +Ensure the test case execution is successful. + +``` bash +cd tests/system-test && python3 ./test.py -f 0-others/test_case_template.py +``` + +##### c. Integrate into CI tests + +Edit `tests/parallel_test/cases.task` and add the testcase path and executions in the specified format. The third column indicates whether to use Address Sanitizer mode for testing. + +```bash +#caseID,rerunTimes,Run with Sanitizer,casePath,caseCommand +,,n,system-test, python3 ./test.py -f 0-others/test_case_template.py +``` + +
+ +## 3.3 Legacy Test + +In the early stage of TDengine development, test cases are run by an internal test framework called TSIM, which is developed in C++. + +### 3.3.1 How to run single test case? + +To run the legacy test cases, please execute the following commands: + +```bash +cd tests/script +./test.sh -f tsim/db/basic1.sim +``` + +### 3.3.2 How to run all legacy test cases? + +```bash +cd tests +./run_all_ci_cases.sh -t legacy # all legacy cases +``` + +### 3.3.3 How to add new cases? + +> [!NOTE] +> TSIM test framwork is deprecated by system test now, it is encouraged to add new test cases in system test, please refer to [System Test](#32-system-test) for details. + +## 3.4 Smoke Test + +Smoke test is a group of test cases selected from system test, which is also known as sanity test to ensure the critical functionalities of TDengine. + +### 3.4.1 How to run test? + +```bash +cd /root/TDengine/packaging/smokeTest +./test_smoking_selfhost.sh +``` + +### 3.4.2 How to add new cases? + +New cases can be added by updating the value of `commands` variable in `test_smoking_selfhost.sh`. + +## 3.5 Chaos Test + +A simple tool to execute various functions of the system in a randomized way, hoping to expose potential problems without a pre-defined test scenario. + +### 3.5.1 How to run test? + +```bash +cd tests/pytest +python3 auto_crash_gen.py +``` + +### 3.5.2 How to add new cases? + +1. Add a function, such as `TaskCreateNewFunction` in `pytest/crash_gen/crash_gen_main.py`. +2. Integrate `TaskCreateNewFunction` into the `balance_pickTaskType` function in `crash_gen_main.py`. + +## 3.6 CI Test + +CI testing (Continuous Integration testing), is an important practice in software development that aims to automate frequent integration of code into a shared codebase, build and test it to ensure code quality and stability. + +TDengine CI testing will run all the test cases from the following three types of tests: unit test, system test and legacy test. + +### 3.6.1 How to run all CI test cases? + +If this is the first time to run all the CI test cases, it is recommended to add the test branch, please run it with following commands: + +```bash +cd tests +./run_all_ci_cases.sh -b main # on main branch +``` + +### 3.6.2 How to add new cases? + +Please refer to the [Unit Test](#31-unit-test)、[System Test](#32-system-test) and [Legacy Test](#33-legacy-test) sections for detailed steps to add new test cases, when new cases are added in aboved tests, they will be run automatically by CI test. diff --git a/tests/army/cluster/arbitrator.py b/tests/army/cluster/arbitrator.py index 9fd8e7b1f3..385358e5cc 100644 --- a/tests/army/cluster/arbitrator.py +++ b/tests/army/cluster/arbitrator.py @@ -35,6 +35,12 @@ class TDTestCase(TBase): time.sleep(1) + tdSql.execute("use db;") + + tdSql.execute("CREATE STABLE meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int);") + + tdSql.execute("CREATE TABLE d0 USING meters TAGS (\"California.SanFrancisco\", 2);"); + count = 0 while count < 100: @@ -72,6 +78,8 @@ class TDTestCase(TBase): count += 1 + tdSql.execute("INSERT INTO d0 VALUES (NOW, 10.3, 219, 0.31);") + def stop(self): tdSql.close() tdLog.success(f"{__file__} successfully executed") diff --git a/tests/army/query/function/ans/interp.csv b/tests/army/query/function/ans/interp.csv index 3eaccd887a..1d4e2b0a38 100644 --- a/tests/army/query/function/ans/interp.csv +++ b/tests/army/query/function/ans/interp.csv @@ -1015,3 +1015,108 @@ taos> select _irowts, _isfilled, interp(c1) from test.td32861 where ts between ' taos> select _irowts, _isfilled, interp(c1) from test.td32861 where ts between '2020-01-01 00:00:22' and '2020-01-01 00:00:30' range('2020-01-01 00:00:00', '2020-01-01 00:00:21') every(1s) fill(linear); +taos> select _irowts, interp(c1), t1 from test.ts5941_child range('2020-02-01 00:00:00', '2020-02-01 00:00:20') every(1s) fill(prev); + _irowts | interp(c1) | t1 | +=========================================================================== + 2020-02-01 00:00:05.000 | 5 | testts5941 | + 2020-02-01 00:00:06.000 | 5 | testts5941 | + 2020-02-01 00:00:07.000 | 5 | testts5941 | + 2020-02-01 00:00:08.000 | 5 | testts5941 | + 2020-02-01 00:00:09.000 | 5 | testts5941 | + 2020-02-01 00:00:10.000 | 10 | testts5941 | + 2020-02-01 00:00:11.000 | 10 | testts5941 | + 2020-02-01 00:00:12.000 | 10 | testts5941 | + 2020-02-01 00:00:13.000 | 10 | testts5941 | + 2020-02-01 00:00:14.000 | 10 | testts5941 | + 2020-02-01 00:00:15.000 | 15 | testts5941 | + 2020-02-01 00:00:16.000 | 15 | testts5941 | + 2020-02-01 00:00:17.000 | 15 | testts5941 | + 2020-02-01 00:00:18.000 | 15 | testts5941 | + 2020-02-01 00:00:19.000 | 15 | testts5941 | + 2020-02-01 00:00:20.000 | 15 | testts5941 | + +taos> select _irowts, interp(c1), t1 from test.ts5941_child range('2020-02-01 00:00:00', '2020-02-01 00:00:20') every(1s) fill(next); + _irowts | interp(c1) | t1 | +=========================================================================== + 2020-02-01 00:00:00.000 | 5 | testts5941 | + 2020-02-01 00:00:01.000 | 5 | testts5941 | + 2020-02-01 00:00:02.000 | 5 | testts5941 | + 2020-02-01 00:00:03.000 | 5 | testts5941 | + 2020-02-01 00:00:04.000 | 5 | testts5941 | + 2020-02-01 00:00:05.000 | 5 | testts5941 | + 2020-02-01 00:00:06.000 | 10 | testts5941 | + 2020-02-01 00:00:07.000 | 10 | testts5941 | + 2020-02-01 00:00:08.000 | 10 | testts5941 | + 2020-02-01 00:00:09.000 | 10 | testts5941 | + 2020-02-01 00:00:10.000 | 10 | testts5941 | + 2020-02-01 00:00:11.000 | 15 | testts5941 | + 2020-02-01 00:00:12.000 | 15 | testts5941 | + 2020-02-01 00:00:13.000 | 15 | testts5941 | + 2020-02-01 00:00:14.000 | 15 | testts5941 | + 2020-02-01 00:00:15.000 | 15 | testts5941 | + +taos> select _irowts, interp(c1), t1 from test.ts5941_child range('2020-02-01 00:00:00', '2020-02-01 00:00:20') every(1s) fill(linear); + _irowts | interp(c1) | t1 | +=========================================================================== + 2020-02-01 00:00:05.000 | 5 | testts5941 | + 2020-02-01 00:00:06.000 | 6 | testts5941 | + 2020-02-01 00:00:07.000 | 7 | testts5941 | + 2020-02-01 00:00:08.000 | 8 | testts5941 | + 2020-02-01 00:00:09.000 | 9 | testts5941 | + 2020-02-01 00:00:10.000 | 10 | testts5941 | + 2020-02-01 00:00:11.000 | 11 | testts5941 | + 2020-02-01 00:00:12.000 | 12 | testts5941 | + 2020-02-01 00:00:13.000 | 13 | testts5941 | + 2020-02-01 00:00:14.000 | 14 | testts5941 | + 2020-02-01 00:00:15.000 | 15 | testts5941 | + +taos> select _irowts, interp(c1), t1 from test.ts5941_child range('2020-02-01 00:00:00', '2020-02-01 00:00:20') every(1s) fill(null); + _irowts | interp(c1) | t1 | +=========================================================================== + 2020-02-01 00:00:00.000 | NULL | testts5941 | + 2020-02-01 00:00:01.000 | NULL | testts5941 | + 2020-02-01 00:00:02.000 | NULL | testts5941 | + 2020-02-01 00:00:03.000 | NULL | testts5941 | + 2020-02-01 00:00:04.000 | NULL | testts5941 | + 2020-02-01 00:00:05.000 | 5 | testts5941 | + 2020-02-01 00:00:06.000 | NULL | testts5941 | + 2020-02-01 00:00:07.000 | NULL | testts5941 | + 2020-02-01 00:00:08.000 | NULL | testts5941 | + 2020-02-01 00:00:09.000 | NULL | testts5941 | + 2020-02-01 00:00:10.000 | 10 | testts5941 | + 2020-02-01 00:00:11.000 | NULL | testts5941 | + 2020-02-01 00:00:12.000 | NULL | testts5941 | + 2020-02-01 00:00:13.000 | NULL | testts5941 | + 2020-02-01 00:00:14.000 | NULL | testts5941 | + 2020-02-01 00:00:15.000 | 15 | testts5941 | + 2020-02-01 00:00:16.000 | NULL | testts5941 | + 2020-02-01 00:00:17.000 | NULL | testts5941 | + 2020-02-01 00:00:18.000 | NULL | testts5941 | + 2020-02-01 00:00:19.000 | NULL | testts5941 | + 2020-02-01 00:00:20.000 | NULL | testts5941 | + +taos> select _irowts, interp(c1), t1 from test.ts5941_child range('2020-02-01 00:00:00', '2020-02-01 00:00:20') every(1s) fill(value, 1); + _irowts | interp(c1) | t1 | +=========================================================================== + 2020-02-01 00:00:00.000 | 1 | testts5941 | + 2020-02-01 00:00:01.000 | 1 | testts5941 | + 2020-02-01 00:00:02.000 | 1 | testts5941 | + 2020-02-01 00:00:03.000 | 1 | testts5941 | + 2020-02-01 00:00:04.000 | 1 | testts5941 | + 2020-02-01 00:00:05.000 | 5 | testts5941 | + 2020-02-01 00:00:06.000 | 1 | testts5941 | + 2020-02-01 00:00:07.000 | 1 | testts5941 | + 2020-02-01 00:00:08.000 | 1 | testts5941 | + 2020-02-01 00:00:09.000 | 1 | testts5941 | + 2020-02-01 00:00:10.000 | 10 | testts5941 | + 2020-02-01 00:00:11.000 | 1 | testts5941 | + 2020-02-01 00:00:12.000 | 1 | testts5941 | + 2020-02-01 00:00:13.000 | 1 | testts5941 | + 2020-02-01 00:00:14.000 | 1 | testts5941 | + 2020-02-01 00:00:15.000 | 15 | testts5941 | + 2020-02-01 00:00:16.000 | 1 | testts5941 | + 2020-02-01 00:00:17.000 | 1 | testts5941 | + 2020-02-01 00:00:18.000 | 1 | testts5941 | + 2020-02-01 00:00:19.000 | 1 | testts5941 | + 2020-02-01 00:00:20.000 | 1 | testts5941 | + diff --git a/tests/army/query/function/in/interp.in b/tests/army/query/function/in/interp.in index 97a9936b8d..1ba768e6e3 100644 --- a/tests/army/query/function/in/interp.in +++ b/tests/army/query/function/in/interp.in @@ -63,3 +63,8 @@ select _irowts, _isfilled, interp(c1) from test.td32861 where ts between '2020-0 select _irowts, _isfilled, interp(c1) from test.td32861 where ts between '2020-01-01 00:00:22' and '2020-01-01 00:00:30' range('2020-01-01 00:00:00', '2020-01-01 00:00:21') every(1s) fill(prev); select _irowts, _isfilled, interp(c1) from test.td32861 where ts between '2020-01-01 00:00:22' and '2020-01-01 00:00:30' range('2020-01-01 00:00:00', '2020-01-01 00:00:21') every(1s) fill(next); select _irowts, _isfilled, interp(c1) from test.td32861 where ts between '2020-01-01 00:00:22' and '2020-01-01 00:00:30' range('2020-01-01 00:00:00', '2020-01-01 00:00:21') every(1s) fill(linear); +select _irowts, interp(c1), t1 from test.ts5941_child range('2020-02-01 00:00:00', '2020-02-01 00:00:20') every(1s) fill(prev); +select _irowts, interp(c1), t1 from test.ts5941_child range('2020-02-01 00:00:00', '2020-02-01 00:00:20') every(1s) fill(next); +select _irowts, interp(c1), t1 from test.ts5941_child range('2020-02-01 00:00:00', '2020-02-01 00:00:20') every(1s) fill(linear); +select _irowts, interp(c1), t1 from test.ts5941_child range('2020-02-01 00:00:00', '2020-02-01 00:00:20') every(1s) fill(null); +select _irowts, interp(c1), t1 from test.ts5941_child range('2020-02-01 00:00:00', '2020-02-01 00:00:20') every(1s) fill(value, 1); diff --git a/tests/army/query/function/test_interp.py b/tests/army/query/function/test_interp.py index 106ef1e58e..e543d81363 100644 --- a/tests/army/query/function/test_interp.py +++ b/tests/army/query/function/test_interp.py @@ -40,6 +40,9 @@ class TDTestCase(TBase): ) tdSql.execute("create table if not exists test.td32861(ts timestamp, c1 int);") + tdSql.execute("create stable if not exists test.ts5941(ts timestamp, c1 int, c2 int) tags (t1 varchar(30));") + tdSql.execute("create table if not exists test.ts5941_child using test.ts5941 tags ('testts5941');") + tdLog.printNoPrefix("==========step2:insert data") tdSql.execute(f"insert into test.td32727 values ('2020-02-01 00:00:05', 5, 5, 5, 5, 5.0, 5.0, true, 'varchar', 'nchar', 5, 5, 5, 5)") @@ -56,6 +59,9 @@ class TDTestCase(TBase): ('2020-01-01 00:00:15', 15), ('2020-01-01 00:00:21', 21);""" ) + tdSql.execute(f"insert into test.ts5941_child values ('2020-02-01 00:00:05', 5, 5)") + tdSql.execute(f"insert into test.ts5941_child values ('2020-02-01 00:00:10', 10, 10)") + tdSql.execute(f"insert into test.ts5941_child values ('2020-02-01 00:00:15', 15, 15)") def test_normal_query_new(self, testCase): # read sql from .sql file and execute diff --git a/tests/docs-examples-test/jdbc.sh b/tests/docs-examples-test/jdbc.sh index 4fcc5404b6..20147bf91c 100644 --- a/tests/docs-examples-test/jdbc.sh +++ b/tests/docs-examples-test/jdbc.sh @@ -1,4 +1,5 @@ #!/bin/bash +set -e pgrep taosd || taosd >> /dev/null 2>&1 & pgrep taosadapter || taosadapter >> /dev/null 2>&1 & @@ -6,11 +7,12 @@ cd ../../docs/examples/java mvn clean test > jdbc-out.log 2>&1 tail -n 20 jdbc-out.log + totalJDBCCases=`grep 'Tests run' jdbc-out.log | awk -F"[:,]" 'END{ print $2 }'` failed=`grep 'Tests run' jdbc-out.log | awk -F"[:,]" 'END{ print $4 }'` error=`grep 'Tests run' jdbc-out.log | awk -F"[:,]" 'END{ print $6 }'` -totalJDBCFailed=`expr $failed + $error` -totalJDBCSuccess=`expr $totalJDBCCases - $totalJDBCFailed` +totalJDBCFailed=$((failed + error)) +totalJDBCSuccess=$((totalJDBCCases - totalJDBCFailed)) if [ "$totalJDBCSuccess" -gt "0" ]; then echo -e "\n${GREEN} ### Total $totalJDBCSuccess JDBC case(s) succeed! ### ${NC}" @@ -19,4 +21,4 @@ fi if [ "$totalJDBCFailed" -ne "0" ]; then echo -e "\n${RED} ### Total $totalJDBCFailed JDBC case(s) failed! ### ${NC}" exit 8 -fi \ No newline at end of file +fi diff --git a/tests/parallel_test/cases.task b/tests/parallel_test/cases.task index 77efa823be..ebec0ad38e 100644 --- a/tests/parallel_test/cases.task +++ b/tests/parallel_test/cases.task @@ -176,6 +176,11 @@ ,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tsma2.py -Q 2 ,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tsma2.py -Q 3 ,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tsma2.py -Q 4 +,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tbnameIn.py +,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tbnameIn.py -R +,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tbnameIn.py -Q 2 +,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tbnameIn.py -Q 3 +,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tbnameIn.py -Q 4 ,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/nestedQuery2.py ,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/interp_extension.py ,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/interp_extension.py -R diff --git a/tests/pytest/auto_crash_gen.py b/tests/pytest/auto_crash_gen.py index 316f2ead0f..a35beb3395 100755 --- a/tests/pytest/auto_crash_gen.py +++ b/tests/pytest/auto_crash_gen.py @@ -244,7 +244,7 @@ def start_taosd(): else: pass - start_cmd = 'cd %s && python3 test.py >>/dev/null '%(start_path) + start_cmd = 'cd %s && python3 test.py -G >>/dev/null '%(start_path) os.system(start_cmd) def get_cmds(args_list): @@ -371,7 +371,7 @@ Result: {msg_dict[status]} Details Owner: Jayden Jia Start time: {starttime} -End time: {endtime} +End time: {endtime} Hostname: {hostname} Commit: {git_commit} Cmd: {cmd} @@ -380,14 +380,13 @@ Core dir: {core_dir} ''' text_result=text.split("Result: ")[1].split("Details")[0].strip() print(text_result) - if text_result == "success": - send_msg(notification_robot_url, get_msg(text)) + send_msg(notification_robot_url, get_msg(text)) else: - send_msg(alert_robot_url, get_msg(text)) - send_msg(notification_robot_url, get_msg(text)) - - #send_msg(get_msg(text)) + send_msg(alert_robot_url, get_msg(text)) + send_msg(notification_robot_url, get_msg(text)) + + #send_msg(get_msg(text)) except Exception as e: print("exception:", e) exit(status) diff --git a/tests/pytest/auto_crash_gen_valgrind.py b/tests/pytest/auto_crash_gen_valgrind.py index b7af68cd2f..0bd70ebf3f 100755 --- a/tests/pytest/auto_crash_gen_valgrind.py +++ b/tests/pytest/auto_crash_gen_valgrind.py @@ -245,7 +245,7 @@ def start_taosd(): else: pass - start_cmd = 'cd %s && python3 test.py '%(start_path) + start_cmd = 'cd %s && python3 test.py -G'%(start_path) os.system(start_cmd +">>/dev/null") def get_cmds(args_list): @@ -404,24 +404,24 @@ Result: {msg_dict[status]} Details Owner: Jayden Jia Start time: {starttime} -End time: {endtime} +End time: {endtime} Hostname: {hostname} Commit: {git_commit} Cmd: {cmd} Log dir: {log_dir} Core dir: {core_dir} ''' - + text_result=text.split("Result: ")[1].split("Details")[0].strip() print(text_result) - + if text_result == "success": send_msg(notification_robot_url, get_msg(text)) else: - send_msg(alert_robot_url, get_msg(text)) + send_msg(alert_robot_url, get_msg(text)) send_msg(notification_robot_url, get_msg(text)) - - #send_msg(get_msg(text)) + + #send_msg(get_msg(text)) except Exception as e: print("exception:", e) exit(status) diff --git a/tests/pytest/auto_crash_gen_valgrind_cluster.py b/tests/pytest/auto_crash_gen_valgrind_cluster.py index df40b60967..b4b90e1f5e 100755 --- a/tests/pytest/auto_crash_gen_valgrind_cluster.py +++ b/tests/pytest/auto_crash_gen_valgrind_cluster.py @@ -236,7 +236,7 @@ def start_taosd(): else: pass - start_cmd = 'cd %s && python3 test.py -N 4 -M 1 '%(start_path) + start_cmd = 'cd %s && python3 test.py -N 4 -M 1 -G '%(start_path) os.system(start_cmd +">>/dev/null") def get_cmds(args_list): @@ -388,28 +388,28 @@ def main(): text = f''' Result: {msg_dict[status]} - + Details Owner: Jayden Jia Start time: {starttime} -End time: {endtime} +End time: {endtime} Hostname: {hostname} Commit: {git_commit} Cmd: {cmd} Log dir: {log_dir} Core dir: {core_dir} ''' - + text_result=text.split("Result: ")[1].split("Details")[0].strip() print(text_result) - + if text_result == "success": send_msg(notification_robot_url, get_msg(text)) else: - send_msg(alert_robot_url, get_msg(text)) - send_msg(notification_robot_url, get_msg(text)) - - #send_msg(get_msg(text)) + send_msg(alert_robot_url, get_msg(text)) + send_msg(notification_robot_url, get_msg(text)) + + #send_msg(get_msg(text)) except Exception as e: print("exception:", e) exit(status) diff --git a/tests/run_all_ci_cases.sh b/tests/run_all_ci_cases.sh index 41040f3c43..15d3e9f6a9 100755 --- a/tests/run_all_ci_cases.sh +++ b/tests/run_all_ci_cases.sh @@ -23,40 +23,16 @@ function printHelp() { echo " -b [Build test branch] Build test branch (default: null)" echo " Options: " echo " e.g., -b main (pull main branch, build and install)" + echo " -t [Run test cases] Run test cases type(default: all)" + echo " Options: " + echo " e.g., -t all/python/legacy" echo " -s [Save cases log] Save cases log(default: notsave)" echo " Options:" - echo " e.g., -c notsave : do not save the log " - echo " -c save : default save ci case log in Project dir/tests/ci_bak" + echo " e.g., -s notsave : do not save the log " + echo " -s save : default save ci case log in Project dir/tests/ci_bak" exit 0 } -# Initialization parameter -PROJECT_DIR="" -BRANCH="" -SAVE_LOG="notsave" - -# Parse command line parameters -while getopts "hb:d:s:" arg; do - case $arg in - d) - PROJECT_DIR=$OPTARG - ;; - b) - BRANCH=$OPTARG - ;; - s) - SAVE_LOG=$OPTARG - ;; - h) - printHelp - ;; - ?) - echo "Usage: ./$(basename $0) -h" - exit 1 - ;; - esac -done - function get_DIR() { today=`date +"%Y%m%d"` if [ -z "$PROJECT_DIR" ]; then @@ -95,13 +71,6 @@ function get_DIR() { fi } -get_DIR -echo "PROJECT_DIR = $PROJECT_DIR" -echo "TDENGINE_DIR = $TDENGINE_DIR" -echo "BUILD_DIR = $BUILD_DIR" -echo "BACKUP_DIR = $BACKUP_DIR" - - function buildTDengine() { print_color "$GREEN" "TDengine build start" @@ -111,14 +80,14 @@ function buildTDengine() { # pull tdinternal code cd "$TDENGINE_DIR/../" print_color "$GREEN" "Git pull TDinternal code..." - git remote prune origin > /dev/null - git remote update > /dev/null + # git remote prune origin > /dev/null + # git remote update > /dev/null # pull tdengine code cd $TDENGINE_DIR print_color "$GREEN" "Git pull TDengine code..." - git remote prune origin > /dev/null - git remote update > /dev/null + # git remote prune origin > /dev/null + # git remote update > /dev/null REMOTE_COMMIT=`git rev-parse --short remotes/origin/$branch` LOCAL_COMMIT=`git rev-parse --short @` print_color "$GREEN" " LOCAL: $LOCAL_COMMIT" @@ -130,12 +99,12 @@ function buildTDengine() { print_color "$GREEN" "Repo need to pull" fi - git reset --hard - git checkout -- . + # git reset --hard + # git checkout -- . git checkout $branch - git checkout -- . - git clean -f - git pull + # git checkout -- . + # git clean -f + # git pull [ -d $TDENGINE_DIR/../debug ] || mkdir $TDENGINE_DIR/../debug cd $TDENGINE_DIR/../debug @@ -148,15 +117,15 @@ function buildTDengine() { print_color "$GREEN" "$makecmd" $makecmd - make -j 8 install + make -j $(nproc) install else TDENGINE_DIR="$PROJECT_DIR" # pull tdengine code cd $TDENGINE_DIR print_color "$GREEN" "Git pull TDengine code..." - git remote prune origin > /dev/null - git remote update > /dev/null + # git remote prune origin > /dev/null + # git remote update > /dev/null REMOTE_COMMIT=`git rev-parse --short remotes/origin/$branch` LOCAL_COMMIT=`git rev-parse --short @` print_color "$GREEN" " LOCAL: $LOCAL_COMMIT" @@ -168,12 +137,12 @@ function buildTDengine() { print_color "$GREEN" "Repo need to pull" fi - git reset --hard - git checkout -- . + # git reset --hard + # git checkout -- . git checkout $branch - git checkout -- . - git clean -f - git pull + # git checkout -- . + # git clean -f + # git pull [ -d $TDENGINE_DIR/debug ] || mkdir $TDENGINE_DIR/debug cd $TDENGINE_DIR/debug @@ -186,24 +155,12 @@ function buildTDengine() { print_color "$GREEN" "$makecmd" $makecmd - make -j 8 install + make -j $(nproc) install fi print_color "$GREEN" "TDengine build end" } - -# Check and get the branch name -if [ -n "$BRANCH" ] ; then - branch="$BRANCH" - print_color "$GREEN" "Testing branch: $branch " - print_color "$GREEN" "Build is required for this test!" - buildTDengine -else - print_color "$GREEN" "Build is not required for this test!" -fi - - function runCasesOneByOne () { while read -r line; do if [[ "$line" != "#"* ]]; then @@ -257,7 +214,7 @@ function runUnitTest() { cd $BUILD_DIR pgrep taosd || taosd >> /dev/null 2>&1 & sleep 10 - ctest -E "cunit_test" -j8 + ctest -E "cunit_test" -j4 print_color "$GREEN" "3.0 unit test done" } @@ -307,7 +264,6 @@ function runPythonCases() { fi } - function runTest() { print_color "$GREEN" "run Test" @@ -315,9 +271,9 @@ function runTest() { [ -d sim ] && rm -rf sim [ -f $TDENGINE_ALLCI_REPORT ] && rm $TDENGINE_ALLCI_REPORT - runUnitTest runSimCases runPythonCases + runUnitTest stopTaosd cd $TDENGINE_DIR/tests/script @@ -337,7 +293,7 @@ function stopTaosd { sleep 1 PID=`ps -ef|grep -w taosd | grep -v grep | awk '{print $2}'` done - print_color "$GREEN" "Stop tasod end" + print_color "$GREEN" "Stop taosd end" } function stopTaosadapter { @@ -350,10 +306,52 @@ function stopTaosadapter { sleep 1 PID=`ps -ef|grep -w taosd | grep -v grep | awk '{print $2}'` done - print_color "$GREEN" "Stop tasoadapter end" + print_color "$GREEN" "Stop taosadapter end" } +###################### +# main entry +###################### + +# Initialization parameter +PROJECT_DIR="" +BRANCH="" +TEST_TYPE="" +SAVE_LOG="notsave" + +# Parse command line parameters +while getopts "hb:d:t:s:" arg; do + case $arg in + d) + PROJECT_DIR=$OPTARG + ;; + b) + BRANCH=$OPTARG + ;; + t) + TEST_TYPE=$OPTARG + ;; + s) + SAVE_LOG=$OPTARG + ;; + h) + printHelp + ;; + ?) + echo "Usage: ./$(basename $0) -h" + exit 1 + ;; + esac +done + +get_DIR +echo "PROJECT_DIR = $PROJECT_DIR" +echo "TDENGINE_DIR = $TDENGINE_DIR" +echo "BUILD_DIR = $BUILD_DIR" +echo "BACKUP_DIR = $BACKUP_DIR" + +# Run all ci case WORK_DIR=$TDENGINE_DIR date >> $WORK_DIR/date.log @@ -361,7 +359,24 @@ print_color "$GREEN" "Run all ci test cases" | tee -a $WORK_DIR/date.log stopTaosd -runTest +# Check and get the branch name +if [ -n "$BRANCH" ] ; then + branch="$BRANCH" + print_color "$GREEN" "Testing branch: $branch " + print_color "$GREEN" "Build is required for this test!" + buildTDengine +else + print_color "$GREEN" "Build is not required for this test!" +fi + +# Run different types of case +if [ -z "$TEST_TYPE" -o "$TEST_TYPE" = "all" -o "$TEST_TYPE" = "ALL" ]; then + runTest +elif [ "$TEST_TYPE" = "python" -o "$TEST_TYPE" = "PYTHON" ]; then + runPythonCases +elif [ "$TEST_TYPE" = "legacy" -o "$TEST_TYPE" = "LEGACY" ]; then + runSimCases +fi date >> $WORK_DIR/date.log print_color "$GREEN" "End of ci test cases" | tee -a $WORK_DIR/date.log \ No newline at end of file diff --git a/tests/run_local_coverage.sh b/tests/run_local_coverage.sh index f82d37faa4..ca3175a051 100755 --- a/tests/run_local_coverage.sh +++ b/tests/run_local_coverage.sh @@ -41,49 +41,6 @@ function printHelp() { } -PROJECT_DIR="" -CAPTURE_GCDA_DIR="" -TEST_CASE="task" -UNIT_TEST_CASE="" -BRANCH="" -BRANCH_BUILD="" -LCOV_DIR="/usr/local/bin" - -# Parse command line parameters -while getopts "hd:b:f:c:u:i:l:" arg; do - case $arg in - d) - PROJECT_DIR=$OPTARG - ;; - b) - BRANCH=$OPTARG - ;; - f) - CAPTURE_GCDA_DIR=$OPTARG - ;; - c) - TEST_CASE=$OPTARG - ;; - u) - UNIT_TEST_CASE=$OPTARG - ;; - i) - BRANCH_BUILD=$OPTARG - ;; - l) - LCOV_DIR=$OPTARG - ;; - h) - printHelp - ;; - ?) - echo "Usage: ./$(basename $0) -h" - exit 1 - ;; - esac -done - - # Find the project/tdengine/build/capture directory function get_DIR() { today=`date +"%Y%m%d"` @@ -118,18 +75,6 @@ function get_DIR() { } -# Show all parameters -get_DIR -echo "PROJECT_DIR = $PROJECT_DIR" -echo "TDENGINE_DIR = $TDENGINE_DIR" -echo "BUILD_DIR = $BUILD_DIR" -echo "CAPTURE_GCDA_DIR = $CAPTURE_GCDA_DIR" -echo "TEST_CASE = $TEST_CASE" -echo "UNIT_TEST_CASE = $UNIT_TEST_CASE" -echo "BRANCH_BUILD = $BRANCH_BUILD" -echo "LCOV_DIR = $LCOV_DIR" - - function buildTDengine() { print_color "$GREEN" "TDengine build start" @@ -139,14 +84,14 @@ function buildTDengine() { # pull tdinternal code cd "$TDENGINE_DIR/../" print_color "$GREEN" "Git pull TDinternal code..." - git remote prune origin > /dev/null - git remote update > /dev/null + # git remote prune origin > /dev/null + # git remote update > /dev/null # pull tdengine code cd $TDENGINE_DIR print_color "$GREEN" "Git pull TDengine code..." - git remote prune origin > /dev/null - git remote update > /dev/null + # git remote prune origin > /dev/null + # git remote update > /dev/null REMOTE_COMMIT=`git rev-parse --short remotes/origin/$branch` LOCAL_COMMIT=`git rev-parse --short @` print_color "$GREEN" " LOCAL: $LOCAL_COMMIT" @@ -158,12 +103,12 @@ function buildTDengine() { print_color "$GREEN" "Repo need to pull" fi - git reset --hard - git checkout -- . + # git reset --hard + # git checkout -- . git checkout $branch - git checkout -- . - git clean -f - git pull + # git checkout -- . + # git clean -f + # git pull [ -d $TDENGINE_DIR/../debug ] || mkdir $TDENGINE_DIR/../debug cd $TDENGINE_DIR/../debug @@ -183,8 +128,8 @@ function buildTDengine() { # pull tdengine code cd $TDENGINE_DIR print_color "$GREEN" "Git pull TDengine code..." - git remote prune origin > /dev/null - git remote update > /dev/null + # git remote prune origin > /dev/null + # git remote update > /dev/null REMOTE_COMMIT=`git rev-parse --short remotes/origin/$branch` LOCAL_COMMIT=`git rev-parse --short @` print_color "$GREEN" " LOCAL: $LOCAL_COMMIT" @@ -196,12 +141,12 @@ function buildTDengine() { print_color "$GREEN" "Repo need to pull" fi - git reset --hard - git checkout -- . + # git reset --hard + # git checkout -- . git checkout $branch - git checkout -- . - git clean -f - git pull + # git checkout -- . + # git clean -f + # git pull [ -d $TDENGINE_DIR/debug ] || mkdir $TDENGINE_DIR/debug cd $TDENGINE_DIR/debug @@ -220,44 +165,6 @@ function buildTDengine() { print_color "$GREEN" "TDengine build end" } -# Check and get the branch name and build branch -if [ -n "$BRANCH" ] && [ -z "$BRANCH_BUILD" ] ; then - branch="$BRANCH" - print_color "$GREEN" "Testing branch: $branch " - print_color "$GREEN" "Build is required for this test!" - buildTDengine -elif [ -n "$BRANCH_BUILD" ] && [ "$BRANCH_BUILD" = "YES" -o "$BRANCH_BUILD" = "yes" ] ; then - CURRENT_DIR=$(pwd) - echo "CURRENT_DIR: $CURRENT_DIR" - if [ -d .git ]; then - CURRENT_BRANCH=$(git rev-parse --abbrev-ref HEAD) - echo "CURRENT_BRANCH: $CURRENT_BRANCH" - else - echo "The current directory is not a Git repository" - fi - branch="$CURRENT_BRANCH" - print_color "$GREEN" "Testing branch: $branch " - print_color "$GREEN" "Build is required for this test!" - buildTDengine -elif [ -n "$BRANCH_BUILD" ] && [ "$BRANCH_BUILD" = "ONLY_INSTALL" -o "$BRANCH_BUILD" = "only_install" ] ; then - CURRENT_DIR=$(pwd) - echo "CURRENT_DIR: $CURRENT_DIR" - if [ -d .git ]; then - CURRENT_BRANCH=$(git rev-parse --abbrev-ref HEAD) - echo "CURRENT_BRANCH: $CURRENT_BRANCH" - else - echo "The current directory is not a Git repository" - fi - branch="$CURRENT_BRANCH" - print_color "$GREEN" "Testing branch: $branch " - print_color "$GREEN" "not build,only install!" - cd $TDENGINE_DIR/debug - make -j $(nproc) install -elif [ -z "$BRANCH" ] && [ -z "$BRANCH_BUILD" ] ; then - print_color "$GREEN" "Build is not required for this test!" -fi - - function runCasesOneByOne () { while read -r line; do if [[ "$line" != "#"* ]]; then @@ -481,10 +388,108 @@ function stopTaosadapter { } +###################### +# main entry +###################### + +# Initialization parameter +PROJECT_DIR="" +CAPTURE_GCDA_DIR="" +TEST_CASE="task" +UNIT_TEST_CASE="" +BRANCH="" +BRANCH_BUILD="" +LCOV_DIR="/usr/local/bin" + +# Parse command line parameters +while getopts "hd:b:f:c:u:i:l:" arg; do + case $arg in + d) + PROJECT_DIR=$OPTARG + ;; + b) + BRANCH=$OPTARG + ;; + f) + CAPTURE_GCDA_DIR=$OPTARG + ;; + c) + TEST_CASE=$OPTARG + ;; + u) + UNIT_TEST_CASE=$OPTARG + ;; + i) + BRANCH_BUILD=$OPTARG + ;; + l) + LCOV_DIR=$OPTARG + ;; + h) + printHelp + ;; + ?) + echo "Usage: ./$(basename $0) -h" + exit 1 + ;; + esac +done + + +# Show all parameters +get_DIR +echo "PROJECT_DIR = $PROJECT_DIR" +echo "TDENGINE_DIR = $TDENGINE_DIR" +echo "BUILD_DIR = $BUILD_DIR" +echo "CAPTURE_GCDA_DIR = $CAPTURE_GCDA_DIR" +echo "TEST_CASE = $TEST_CASE" +echo "UNIT_TEST_CASE = $UNIT_TEST_CASE" +echo "BRANCH_BUILD = $BRANCH_BUILD" +echo "LCOV_DIR = $LCOV_DIR" + date >> $TDENGINE_DIR/date.log print_color "$GREEN" "Run local coverage test cases" | tee -a $TDENGINE_DIR/date.log + +# Check and get the branch name and build branch +if [ -n "$BRANCH" ] && [ -z "$BRANCH_BUILD" ] ; then + branch="$BRANCH" + print_color "$GREEN" "Testing branch: $branch " + print_color "$GREEN" "Build is required for this test!" + buildTDengine +elif [ -n "$BRANCH_BUILD" ] && [ "$BRANCH_BUILD" = "YES" -o "$BRANCH_BUILD" = "yes" ] ; then + CURRENT_DIR=$(pwd) + echo "CURRENT_DIR: $CURRENT_DIR" + if [ -d .git ]; then + CURRENT_BRANCH=$(git rev-parse --abbrev-ref HEAD) + echo "CURRENT_BRANCH: $CURRENT_BRANCH" + else + echo "The current directory is not a Git repository" + fi + branch="$CURRENT_BRANCH" + print_color "$GREEN" "Testing branch: $branch " + print_color "$GREEN" "Build is required for this test!" + buildTDengine +elif [ -n "$BRANCH_BUILD" ] && [ "$BRANCH_BUILD" = "ONLY_INSTALL" -o "$BRANCH_BUILD" = "only_install" ] ; then + CURRENT_DIR=$(pwd) + echo "CURRENT_DIR: $CURRENT_DIR" + if [ -d .git ]; then + CURRENT_BRANCH=$(git rev-parse --abbrev-ref HEAD) + echo "CURRENT_BRANCH: $CURRENT_BRANCH" + else + echo "The current directory is not a Git repository" + fi + branch="$CURRENT_BRANCH" + print_color "$GREEN" "Testing branch: $branch " + print_color "$GREEN" "not build,only install!" + cd $TDENGINE_DIR/debug + make -j $(nproc) install +elif [ -z "$BRANCH" ] && [ -z "$BRANCH_BUILD" ] ; then + print_color "$GREEN" "Build is not required for this test!" +fi + + stopTaosd runTest diff --git a/tests/script/telemetry/crash-report/.env.example b/tests/script/telemetry/crash-report/.env.example new file mode 100644 index 0000000000..f7d50f40c9 --- /dev/null +++ b/tests/script/telemetry/crash-report/.env.example @@ -0,0 +1,6 @@ +EXCLUDE_IP="192.168.1.10" +SERVER_IP="192.168.1.11" +HTTP_SERV_IP="192.168.1.12" +HTTP_SERV_PORT=8080 +FEISHU_MSG_URL="https://open.feishu.cn/open-apis/bot/v2/hook/*******" +OWNER="Jayden Jia" diff --git a/tests/script/telemetry/crash-report/CrashCounter.py b/tests/script/telemetry/crash-report/CrashCounter.py new file mode 100644 index 0000000000..a89567da3d --- /dev/null +++ b/tests/script/telemetry/crash-report/CrashCounter.py @@ -0,0 +1,308 @@ +from datetime import date +from datetime import timedelta +import os +import json +import re +import requests +import subprocess +from dotenv import load_dotenv + +# load .env +# You should have a .env file in the same directory as this script +# You can exec: cp .env.example .env +load_dotenv() + +# define version +version = "3.3.2.*" +version_pattern_str = version.replace('.', r'\.').replace('*', r'\d+') +version_pattern = re.compile(rf'^{version_pattern_str}$') +version_stack_list = list() + +# define ip + +ip = os.getenv("EXCLUDE_IP") +server_ip = os.getenv("SERVER_IP") +http_serv_ip = os.getenv("HTTP_SERV_IP") +http_serv_port = os.getenv("HTTP_SERV_PORT") +owner = os.getenv("OWNER") + +# feishu-msg url +feishu_msg_url = os.getenv("FEISHU_MSG_URL") + +# get today +today = date.today() + +# Define the file and parameters +path="/data/telemetry/crash-report/" +trace_report_path = path + "trace_report" +os.makedirs(path, exist_ok=True) +os.makedirs(trace_report_path, exist_ok=True) + +assert_script_path = path + "filter_assert.sh" +nassert_script_path = path + "filter_nassert.sh" + +# get files for the past 7 days +def get_files(): + files = "" + for i in range(1,8): + #print ((today - timedelta(days=i)).strftime("%Y%m%d")) + files = files + path + (today - timedelta(days=i)).strftime("%Y%m%d") + ".txt " + return files.strip().split(" ") + +# Define the AWK script as a string with proper escaping +def get_res(file_path): + # Execute the script + command = ['bash', file_path, version, ip] + get_files() + process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True) + + # Capture the output and errors + output, errors = process.communicate() + + # Check for errors + if process.returncode != 0: + return errors + else: + return output.rstrip() + +def get_sum(output): + # Split the output into lines + lines = output.strip().split('\n') + + # Initialize the sum + total_sum = 0 + + # Iterate over each line + for line in lines: + # Split each line by space to separate the columns + parts = line.split() + + # The first part of the line is the number, convert it to integer + if parts: # Check if there are any elements in the parts list + number = int(parts[0]) + total_sum += number + + return total_sum + +def convert_html(data): + # convert data to json + start_time = get_files()[6].split("/")[-1].split(".")[0] + end_time = get_files()[0].split("/")[-1].split(".")[0] + html_report_file = f'{start_time}_{end_time}.html' + json_data = json.dumps(data) + + # Create HTML content + html_content = f''' + + + + + + Stack Trace Report + + + +

Stack Trace Report From {start_time} To {end_time}

+ + + + + + + + + + + + +
Key Stack InfoVersionsNum Of CrashesFull Stack Info
+ + + + +''' + # Write the HTML content to a file + + with open(f'{trace_report_path}/{html_report_file}', 'w') as f: + f.write(html_content) + return html_report_file + +def get_version_stack_list(res): + for line in res.strip().split('\n'): + version_list = list() + version_stack_dict = dict() + count = line.split()[0] + key_stack_info = line.split()[1] + for file in get_files(): + with open(file, 'r') as infile: + for line in infile: + line = line.strip() + data = json.loads(line) + # print(line) + if ip not in line and version_pattern.search(data["version"]) and key_stack_info in line: + if data["version"] not in version_list: + version_list.append(data["version"]) + full_stack_info = data["stackInfo"] + version_stack_dict["key_stack_info"] = key_stack_info + version_stack_dict["full_stack_info"] = full_stack_info + version_stack_dict["version_list"] = version_list + version_stack_dict["count"] = count + # print(version_stack_dict) + version_stack_list.append(version_stack_dict) + return version_stack_list + +# get msg info +def get_msg(text): + return { + "msg_type": "post", + "content": { + "post": { + "zh_cn": { + "title": "Telemetry Statistics", + "content": [ + [{ + "tag": "text", + "text": text + } + ]] + } + } + } + } + +# post msg +def send_msg(json): + headers = { + 'Content-Type': 'application/json' + } + + req = requests.post(url=feishu_msg_url, headers=headers, json=json) + inf = req.json() + if "StatusCode" in inf and inf["StatusCode"] == 0: + pass + else: + print(inf) + + +def format_results(results): + # Split the results into lines + lines = results.strip().split('\n') + + # Parse lines into a list of tuples (number, rest_of_line) + parsed_lines = [] + for line in lines: + parts = line.split(maxsplit=1) + if len(parts) == 2: + number = int(parts[0]) # Convert the number part to an integer + parsed_lines.append((number, parts[1])) + + # Sort the parsed lines by the first element (number) in descending order + parsed_lines.sort(reverse=True, key=lambda x: x[0]) + + # Determine the maximum width of the first column for alignment + # max_width = max(len(str(item[0])) for item in parsed_lines) + if parsed_lines: + max_width = max(len(str(item[0])) for item in parsed_lines) + else: + max_width = 0 + + # Format each line to align the numbers and function names with indentation + formatted_lines = [] + for number, text in parsed_lines: + formatted_line = f" {str(number).rjust(max_width)} {text}" + formatted_lines.append(formatted_line) + + # Join the formatted lines into a single string + return '\n'.join(formatted_lines) + +# # send report to feishu +def send_report(res, sum, html_report_file): + content = f''' + version: v{version} + from: {get_files()[6].split("/")[-1].split(".")[0]} + to: {get_files()[0].split("/")[-1].split(".")[0]} + ip: {server_ip} + owner: {owner} + result: \n{format_results(res)}\n + total crashes: {sum}\n + details: http://{http_serv_ip}:{http_serv_port}/{html_report_file} + ''' + print(get_msg(content)) + send_msg(get_msg(content)) + # print(content) + +# for none-taosAssertDebug +nassert_res = get_res(nassert_script_path) +# print(nassert_res) + +# for taosAssertDebug +assert_res = get_res(assert_script_path) +# print(assert_res) + +# combine the results +res = nassert_res + assert_res + +# get version stack list +version_stack_list = get_version_stack_list(res) if len(res) > 0 else list() + +# convert to html +html_report_file = convert_html(version_stack_list) + +# get sum +sum = get_sum(res) + +# send report +send_report(res, sum, html_report_file) + diff --git a/tests/script/telemetry/crash-report/CrashCounter.py.old b/tests/script/telemetry/crash-report/CrashCounter.py.old new file mode 100644 index 0000000000..66edc8d63e --- /dev/null +++ b/tests/script/telemetry/crash-report/CrashCounter.py.old @@ -0,0 +1,128 @@ +from datetime import date +from datetime import timedelta +import os +import re +import requests +from dotenv import load_dotenv + +# load .env +load_dotenv() + +# define version +version = "3.3.*" + +ip = os.getenv("EXCLUDE_IP") +server_ip = os.getenv("SERVER_IP") +owner = os.getenv("OWNER") + +# feishu-msg url +feishu_msg_url = os.getenv("FEISHU_MSG_URL") + +today = date.today() +#today = date(2023,8,7) +path="/data/telemetry/crash-report/" + +# get files for the past 7 days +def get_files(): + files = "" + for i in range(1,8): + #print ((today - timedelta(days=i)).strftime("%Y%m%d")) + files = files + path + (today - timedelta(days=i)).strftime("%Y%m%d") + ".txt " + + return files + +# for none-taosAssertDebug +filter1_cmd = '''grep '"version":"%s"' %s \ +| grep "taosd(" \ +| awk -F "stackInfo" '{print $2}' \ +| grep -v "taosAssertDebug" \ +| grep -v %s \ +| awk -F "taosd" '{print $3}' \ +| cut -d")" -f 1 \ +| cut -d"(" -f 2 \ +| sort | uniq -c ''' % (version, get_files(), ip) + +# for taosAssertDebug +filter2_cmd = '''grep '"version":"%s"' %s \ +| grep "taosd(" \ +| awk -F "stackInfo" '{print $2}' \ +| grep "taosAssertDebug" \ +| grep -v %s \ +| awk -F "taosd" '{print $3}' \ +| cut -d")" -f 1 \ +| cut -d"(" -f 2 \ +| sort | uniq -c ''' % (version, get_files(), ip) + +# get msg info +def get_msg(text): + return { + "msg_type": "post", + "content": { + "post": { + "zh_cn": { + "title": "Telemetry Statistics", + "content": [ + [{ + "tag": "text", + "text": text + } + ]] + } + } + } + } + +# post msg +def send_msg(json): + headers = { + 'Content-Type': 'application/json' + } + + req = requests.post(url=group_url, headers=headers, json=json) + inf = req.json() + if "StatusCode" in inf and inf["StatusCode"] == 0: + pass + else: + print(inf) + +# exec cmd and return res +def get_output(cmd): + text = os.popen(cmd) + lines = text.read() + text.close() + return lines + +# get sum +def get_count(output): + res = re.findall(" \d+ ", output) + sum1 = 0 + for r in res: + sum1 = sum1 + int(r.strip()) + return sum1 + +# print total crash count +def print_result(): + #print(f"Files for statistics: {get_files()}\n") + sum1 = get_count(get_output(filter1_cmd)) + sum2 = get_count(get_output(filter2_cmd)) + total = sum1 + sum2 + #print(f"total crashes: {total}") + return total + +# send report to feishu +def send_report(): + content = f''' + test scope: Telemetry Statistics + owner: {owner} + ip: {server_ip} + from: {get_files().split(" ")[6].split("/")[4].split(".")[0]} + to: {get_files().split(" ")[0].split("/")[4].split(".")[0]} + filter1 result: {get_output(filter1_cmd)} + filter2 result: {get_output(filter2_cmd)} + total crashes: {print_result()} + ''' + #send_msg(get_msg(content)) + print(content) + +print_result() +send_report() diff --git a/tests/script/telemetry/crash-report/README-CN.md b/tests/script/telemetry/crash-report/README-CN.md new file mode 100644 index 0000000000..e0deab9f5b --- /dev/null +++ b/tests/script/telemetry/crash-report/README-CN.md @@ -0,0 +1,61 @@ +# 目录 + +1. [介绍](#1-介绍) +1. [前置条件](#2-前置条件) +1. [运行](#3-运行) + +# 1. 介绍 + +本手册旨在为开发人员提供全面的指导,以收集过去7天的崩溃信息并将其报告到飞书通知群。 + +> [!NOTE] +> - 下面的命令和脚本已在 Linux(CentOS 7.9.2009)上验证. + +# 2. 前置条件 + +- 安装 Python3 + +```bash +yum install python3 +yum install python3-pip +``` + +- 安装 Python 依赖 + +```bash +pip3 install requests python-dotenv +``` + +- 调整 .env 文件 + +```bash +cd $DIR/telemetry/crash-report +cp .env.example .env +vim .env +... +``` + +- .env 样例 + +```bash +# 过滤器排除 IP(公司网络出口 IP) +EXCLUDE_IP="192.168.1.10" +# 英文官网服务器 IP +SERVER_IP="192.168.1.11" +# 内网提供 HTTP 服务的 IP 及端口,用于提供 HTML 报告浏览 +HTTP_SERV_IP="192.168.1.12" +HTTP_SERV_PORT=8080 +# 飞书群机器人 webhook 地址 +FEISHU_MSG_URL="https://open.feishu.cn/open-apis/bot/v2/hook/*******" +# 负责人 +OWNER="Jayden Jia" +``` + +# 3. 运行 + +在 $DIR/telemetry/crash-report 目录中,有类似文件名为 202501**.txt 的一些文件。Python 脚本会将从这些文本文件中收集崩溃信息,并将报告发送到您的飞书机器人群组中。 + +```bash +cd $DIR/telemetry/crash-report +python3 CrashCounter.py +``` diff --git a/tests/script/telemetry/crash-report/README.md b/tests/script/telemetry/crash-report/README.md new file mode 100644 index 0000000000..a47c9bc8bb --- /dev/null +++ b/tests/script/telemetry/crash-report/README.md @@ -0,0 +1,61 @@ +# Table of Contents + +1. [Introduction](#1-introduction) +1. [Prerequisites](#2-prerequisites) +1. [Running](#3-running) + +# 1. Introduction + +This manual is intended to give developers comprehensive guidance to collect crash information from the past 7 days and report it to the FeiShu notification group. + +> [!NOTE] +> - The commands and scripts below are verified on Linux (CentOs 7.9.2009). + +# 2. Prerequisites + +- Install Python3 + +```bash +yum install python3 +yum install python3-pip +``` + +- Install Python dependencies + +```bash +pip3 install requests python-dotenv +``` + +- Adjust .env file + +```bash +cd $DIR/telemetry/crash-report +cp .env.example .env +vim .env +... +``` + +- Example for .env + +```bash +# Filter to exclude IP (Company network export IP) +EXCLUDE_IP="192.168.1.10" +# Official website server IP +SERVER_IP="192.168.1.11" +# Internal network providing HTTP service IP and port, used for HTML report browsing +HTTP_SERV_IP="192.168.1.12" +HTTP_SERV_PORT=8080 +# Webhook address for feiShu group bot +FEISHU_MSG_URL="https://open.feishu.cn/open-apis/bot/v2/hook/*******" +# Owner +OWNER="Jayden Jia" +``` + +# 3. Running + +In `$DIR/telemetry/crash-report` directory, there are several files with names like 202501**.txt. The python script will collect crash information from these text files and send report to your Feishu bot group. + +```bash +cd $DIR/telemetry/crash-report +python3 CrashCounter.py +``` diff --git a/tests/script/telemetry/crash-report/filter1.sh b/tests/script/telemetry/crash-report/filter1.sh new file mode 100755 index 0000000000..3cb36a18ad --- /dev/null +++ b/tests/script/telemetry/crash-report/filter1.sh @@ -0,0 +1,15 @@ +#!/bin/bash +source .env +filesPath="/data/telemetry/crash-report" +version="3.0.4.1" +taosdataIp=$EXCLUDE_IP +grep "\"version\":\"${version}\"" ${filesPath}/*.txt \ +| grep "taosd(" \ +| awk -F "stackInfo" '{print $2}' \ +| grep -v "taosAssertDebug" \ +| grep -v ${taosdataIp} \ +| awk -F "taosd" '{print $2}' \ +| cut -d")" -f 1 \ +| cut -d"(" -f 2 \ +| sort | uniq -c + diff --git a/tests/script/telemetry/crash-report/filter2.sh b/tests/script/telemetry/crash-report/filter2.sh new file mode 100755 index 0000000000..4ad545345e --- /dev/null +++ b/tests/script/telemetry/crash-report/filter2.sh @@ -0,0 +1,14 @@ +#!/bin/bash +source .env +filesPath="/data/telemetry/crash-report" +version="3.0.4.1" +taosdataIp=$EXCLUDE_IP +grep "\"version\":\"${version}\"" ${filesPath}/*.txt \ +| grep "taosd(" \ +| awk -F "stackInfo" '{print $2}' \ +| grep "taosAssertDebug" \ +| grep -v ${taosdataIp} \ +| awk -F "taosd" '{print $3}' \ +| cut -d")" -f 1 \ +| cut -d"(" -f 2 \ +| sort | uniq -c diff --git a/tests/script/telemetry/crash-report/filter_assert.sh b/tests/script/telemetry/crash-report/filter_assert.sh new file mode 100755 index 0000000000..2d56287fc9 --- /dev/null +++ b/tests/script/telemetry/crash-report/filter_assert.sh @@ -0,0 +1,67 @@ +#!/bin/bash + +# Extract version and IP from the first two arguments +version="$1" +ip="$2" +shift 2 # Remove the first two arguments, leaving only file paths + +# All remaining arguments are considered as file paths +file_paths="$@" + +# Execute the awk script and capture the output +readarray -t output < <(awk -v version="$version" -v ip="$ip" ' +BEGIN { + RS = "\\n"; # Set the record separator to newline + FS = ","; # Set the field separator to comma + total = 0; # Initialize total count + version_regex = version; # Use the passed version pattern + ip_regex = ip; # Use the passed IP pattern +} +{ + start_collecting = 0; + version_matched = 0; + ip_excluded = 0; + + # Check each field within a record + for (i = 1; i <= NF; i++) { + if ($i ~ /"ip":"[^"]*"/ && $i ~ ip_regex) { + ip_excluded = 1; + } + if ($i ~ /"version":"[^"]*"/ && $i ~ version_regex) { + version_matched = 1; + } + } + + if (!ip_excluded && version_matched) { + for (i = 1; i <= NF; i++) { + if ($i ~ /taosAssertDebug/ && start_collecting == 0) { + start_collecting = 1; + continue; + } + if (start_collecting == 1 && $i ~ /taosd\(([^)]+)\)/) { + match($i, /taosd\(([^)]+)\)/, arr); + if (arr[1] != "") { + count[arr[1]]++; + total++; + break; + } + } + } + } +} +END { + for (c in count) { + printf "%d %s\n", count[c], c; + } + print "Total count:", total; +}' $file_paths) + +# Capture the function details and total count into separate variables +function_details=$(printf "%s\n" "${output[@]::${#output[@]}-1}") +total_count="${output[-1]}" + +# Output or use the variables as needed +#echo "Function Details:" +echo "$function_details" +#echo "Total Count:" +#echo "$total_count" diff --git a/tests/script/telemetry/crash-report/filter_nassert.sh b/tests/script/telemetry/crash-report/filter_nassert.sh new file mode 100755 index 0000000000..2a5acdfbf1 --- /dev/null +++ b/tests/script/telemetry/crash-report/filter_nassert.sh @@ -0,0 +1,74 @@ +#!/bin/bash + +# Pass version, ip, and file paths as arguments +version="$1" +ip="$2" +shift 2 # Shift the first two arguments to get file paths +file_paths="$@" + +# Execute awk and capture the output +readarray -t output < <(awk -v version="$version" -v ip="$ip" ' +BEGIN { + RS = "\\n"; # Set the record separator to newline + total = 0; # Initialize total count + version_regex = "\"version\":\"" version; # Construct the regex for version + ip_regex = "\"ip\":\"" ip "\""; # Construct the regex for IP +} +{ + found = 0; # Initialize the found flag to false + start_collecting = 1; # Start collecting by default, unless taosAssertDebug is encountered + split($0, parts, "\\n"); # Split each record by newline + + # Check for version and IP in each part + version_matched = 0; + ip_excluded = 0; + for (i in parts) { + if (parts[i] ~ version_regex) { + version_matched = 1; # Set flag if version is matched + } + if (parts[i] ~ ip_regex) { + ip_excluded = 1; # Set flag if IP is matched + break; # No need to continue if IP is excluded + } + } + + # Process only if version is matched and IP is not excluded + if (version_matched && !ip_excluded) { + for (i in parts) { + if (parts[i] ~ /taosAssertDebug/) { + start_collecting = 0; # Skip this record if taosAssertDebug is encountered + break; # Exit the loop + } + } + if (start_collecting == 1) { # Continue processing if taosAssertDebug is not found + for (i in parts) { + if (found == 0 && parts[i] ~ /frame:.*taosd\([^)]+\)/) { + # Match the first frame that meets the condition + match(parts[i], /taosd\(([^)]+)\)/, a); # Extract the function name + if (a[1] != "") { + count[a[1]]++; # Increment the count for this function name + total++; # Increment the total count + found = 1; # Set found flag to true + break; # Exit the loop once the function is found + } + } + } + } + } +} +END { + for (c in count) { + printf "%d %s\n", count[c], c; # Print the count and function name formatted + } + print total; # Print the total count alone +}' $file_paths) # Note the removal of quotes around "$file_paths" to handle multiple paths + +# Capture the function details and total count into separate variables +function_details=$(printf "%s\n" "${output[@]::${#output[@]}-1}") # Join array elements with newlines +total_count="${output[-1]}" # The last element + +# Output or use the variables as needed +#echo "Function Details:" +echo "$function_details" +#echo "Total Count:" +#echo "$total_count" diff --git a/tests/system-test/0-others/grant.py b/tests/system-test/0-others/grant.py index 9e54d9ca37..490541539f 100644 --- a/tests/system-test/0-others/grant.py +++ b/tests/system-test/0-others/grant.py @@ -158,9 +158,21 @@ class TDTestCase: tdSql.query(f'show grants;') tdSql.checkEqual(len(tdSql.queryResult), 1) infoFile.write(";".join(map(str,tdSql.queryResult[0])) + "\n") + tdLog.info(f"show grants: {tdSql.queryResult[0]}") + expireTimeStr=tdSql.queryResult[0][1] + serviceTimeStr=tdSql.queryResult[0][2] + tdLog.info(f"expireTimeStr: {expireTimeStr}, serviceTimeStr: {serviceTimeStr}") + expireTime = time.mktime(time.strptime(expireTimeStr, "%Y-%m-%d %H:%M:%S")) + serviceTime = time.mktime(time.strptime(serviceTimeStr, "%Y-%m-%d %H:%M:%S")) + tdLog.info(f"expireTime: {expireTime}, serviceTime: {serviceTime}") + tdSql.checkEqual(True, abs(expireTime - serviceTime - 864000) < 15) tdSql.query(f'show grants full;') - tdSql.checkEqual(len(tdSql.queryResult), 31) - + nGrantItems = 31 + tdSql.checkEqual(len(tdSql.queryResult), nGrantItems) + tdSql.checkEqual(tdSql.queryResult[0][2], serviceTimeStr) + for i in range(1, nGrantItems): + tdSql.checkEqual(tdSql.queryResult[i][2], expireTimeStr) + if infoFile: infoFile.flush() diff --git a/tests/system-test/0-others/test_case_template.py b/tests/system-test/0-others/test_case_template.py new file mode 100644 index 0000000000..fa1a9b5ade --- /dev/null +++ b/tests/system-test/0-others/test_case_template.py @@ -0,0 +1,55 @@ + +from util.log import tdLog +from util.cases import tdCases +from util.sql import tdSql +from util.dnodes import tdDnodes +from util.dnodes import * +from util.common import * + + +class TDTestCase: + + """ + Here is the class description for the whole file cases + """ + + # add the configuration of the client and server here + clientCfgDict = {'debugFlag': 131} + updatecfgDict = { + "debugFlag" : "131", + "queryBufferSize" : 10240, + 'clientCfg' : clientCfgDict + } + + def init(self, conn, logSql, replicaVar=1): + tdLog.debug(f"start to excute {__file__}") + tdSql.init(conn.cursor()) + self.replicaVar = int(replicaVar) + + + def test_function(self): # case function should be named start with test_ + """ + Here is the function description for single test: + Test case for custom function + """ + tdLog.info(f"Test case test custom function") + # excute the sql + tdSql.execute(f"create database db_test_function") + tdSql.execute(f"create table db_test_function.stb (ts timestamp, c1 int, c2 float, c3 double) tags (t1 int unsigned);") + # qury the result and return the result + tdSql.query(f"show databases") + # print result and check the result + database_info = tdLog.info(f"{tdSql.queryResult}") + tdSql.checkRows(3) + tdSql.checkData(2,0,"db_test_function") + + + def run(self): + self.test_function() + + def stop(self): + tdSql.close() + tdLog.success(f"{__file__} successfully executed") + +tdCases.addLinux(__file__, TDTestCase()) +tdCases.addWindows(__file__, TDTestCase()) diff --git a/tests/system-test/2-query/pk_varchar.py b/tests/system-test/2-query/pk_varchar.py index 167e1079d5..1bfc35147a 100644 --- a/tests/system-test/2-query/pk_varchar.py +++ b/tests/system-test/2-query/pk_varchar.py @@ -153,7 +153,7 @@ class TDTestCase: tdSql.checkData(9, 1, '8') tdSql.checkData(9, 2, 8) - tdSql.query('select * from d1.st order by ts limit 2;') + tdSql.query('select * from d1.st order by ts,pk limit 2;') tdSql.checkRows(2) tdSql.checkData(0, 0, datetime.datetime(2021, 4, 19, 0, 0)) tdSql.checkData(0, 1, '1') @@ -286,7 +286,7 @@ class TDTestCase: tdSql.checkData(9, 1, '8') tdSql.checkData(9, 2, 8) - tdSql.query('select * from d2.st order by ts limit 2;') + tdSql.query('select * from d2.st order by ts,pk limit 2;') tdSql.checkRows(2) tdSql.checkData(0, 0, datetime.datetime(2021, 4, 19, 0, 0)) tdSql.checkData(0, 1, '1') diff --git a/tests/system-test/2-query/tbnameIn.py b/tests/system-test/2-query/tbnameIn.py new file mode 100644 index 0000000000..91fdf9f73e --- /dev/null +++ b/tests/system-test/2-query/tbnameIn.py @@ -0,0 +1,145 @@ +################################################################### +# Copyright (c) 2016 by TAOS Technologies, Inc. +# All rights reserved. +# +# This file is proprietary and confidential to TAOS Technologies. +# No part of this file may be reproduced, stored, transmitted, +# disclosed or used in any form or by any means other than as +# expressly provided by the written permission from Jianhui Tao +# +################################################################### + +# -*- coding: utf-8 -*- + +import sys +import taos +from util.log import * +from util.cases import * +from util.sql import * + + +class TDTestCase: + def init(self, conn, logSql, replicaVar=1): + self.replicaVar = int(replicaVar) + tdLog.debug(f"start to excute {__file__}") + tdSql.init(conn.cursor()) + + def inTest(self, dbname="db"): + tdSql.execute(f'drop database if exists {dbname}') + tdSql.execute(f'create database {dbname}') + tdSql.execute(f'use {dbname}') + tdSql.execute(f'CREATE STABLE {dbname}.`st1` (`ts` TIMESTAMP, `v1` INT) TAGS (`t1` INT);') + tdSql.execute(f'CREATE STABLE {dbname}.`st2` (`ts` TIMESTAMP, `v1` INT) TAGS (`t1` INT);') + tdSql.execute(f'CREATE TABLE {dbname}.`t11` USING {dbname}.`st1` (`t1`) TAGS (11);') + tdSql.execute(f'CREATE TABLE {dbname}.`t12` USING {dbname}.`st1` (`t1`) TAGS (12);') + tdSql.execute(f'CREATE TABLE {dbname}.`t21` USING {dbname}.`st2` (`t1`) TAGS (21);') + tdSql.execute(f'CREATE TABLE {dbname}.`t22` USING {dbname}.`st2` (`t1`) TAGS (22);') + tdSql.execute(f'CREATE TABLE {dbname}.`ta` (`ts` TIMESTAMP, `v1` INT);') + + tdSql.execute(f"insert into {dbname}.t11 values ( '2025-01-21 00:11:01', 111 )") + tdSql.execute(f"insert into {dbname}.t11 values ( '2025-01-21 00:11:02', 112 )") + tdSql.execute(f"insert into {dbname}.t11 values ( '2025-01-21 00:11:03', 113 )") + tdSql.execute(f"insert into {dbname}.t12 values ( '2025-01-21 00:12:01', 121 )") + tdSql.execute(f"insert into {dbname}.t12 values ( '2025-01-21 00:12:02', 122 )") + tdSql.execute(f"insert into {dbname}.t12 values ( '2025-01-21 00:12:03', 123 )") + + tdSql.execute(f"insert into {dbname}.t21 values ( '2025-01-21 00:21:01', 211 )") + tdSql.execute(f"insert into {dbname}.t21 values ( '2025-01-21 00:21:02', 212 )") + tdSql.execute(f"insert into {dbname}.t21 values ( '2025-01-21 00:21:03', 213 )") + tdSql.execute(f"insert into {dbname}.t22 values ( '2025-01-21 00:22:01', 221 )") + tdSql.execute(f"insert into {dbname}.t22 values ( '2025-01-21 00:22:02', 222 )") + tdSql.execute(f"insert into {dbname}.t22 values ( '2025-01-21 00:22:03', 223 )") + + tdSql.execute(f"insert into {dbname}.ta values ( '2025-01-21 00:00:01', 1 )") + tdSql.execute(f"insert into {dbname}.ta values ( '2025-01-21 00:00:02', 2 )") + tdSql.execute(f"insert into {dbname}.ta values ( '2025-01-21 00:00:03', 3 )") + + tdLog.debug(f"-------------- step1: normal table test ------------------") + tdSql.query(f"select last(*) from {dbname}.ta where tbname in ('ta');") + tdSql.checkRows(1) + tdSql.checkData(0, 0, '2025-01-21 00:00:03') + tdSql.checkData(0, 1, 3) + + tdSql.query(f"select last(*) from {dbname}.ta where tbname in ('ta', 't21');") + tdSql.checkRows(1) + tdSql.checkData(0, 0, '2025-01-21 00:00:03') + tdSql.checkData(0, 1, 3) + + tdSql.query(f"select last(*) from {dbname}.ta where tbname in ('t21');") + tdSql.checkRows(0) + + tdSql.query(f"select last(*) from {dbname}.ta where tbname in ('ta') and tbname in ('ta');") + tdSql.checkRows(1) + tdSql.checkData(0, 0, '2025-01-21 00:00:03') + tdSql.checkData(0, 1, 3) + + tdSql.query(f"select last(*) from {dbname}.ta where tbname in ('ta') or tbname in ('ta');") + tdSql.checkRows(1) + tdSql.checkData(0, 0, '2025-01-21 00:00:03') + tdSql.checkData(0, 1, 3) + + tdSql.query(f"select last(*) from {dbname}.ta where tbname in ('ta') or tbname in ('tb');") + tdSql.checkRows(1) + tdSql.checkData(0, 0, '2025-01-21 00:00:03') + tdSql.checkData(0, 1, 3) + + tdSql.query(f"select last(*) from {dbname}.ta where tbname in ('ta', 't21') and tbname in ('ta');") + tdSql.checkRows(1) + tdSql.checkData(0, 0, '2025-01-21 00:00:03') + tdSql.checkData(0, 1, 3) + + tdSql.query(f"select last(*) from {dbname}.ta where tbname in ('ta', 't21') and tbname in ('t21');") + tdSql.checkRows(0) + + tdSql.query(f"select last(*) from {dbname}.ta where tbname in ('t21') or tbname in ('ta');") + tdSql.checkRows(1) + tdSql.checkData(0, 0, '2025-01-21 00:00:03') + tdSql.checkData(0, 1, 3) + + tdLog.debug(f"-------------- step2: super table test ------------------") + tdSql.query(f"select last(*) from {dbname}.st1 where tbname in ('t11');") + tdSql.checkRows(1) + tdSql.checkData(0, 0, '2025-01-21 00:11:03') + tdSql.checkData(0, 1, 113) + + tdSql.query(f"select last(*) from {dbname}.st1 where tbname in ('t21');") + tdSql.checkRows(0) + + tdSql.query(f"select last(*) from {dbname}.st1 where tbname in ('ta', 't21');") + tdSql.checkRows(0) + + tdSql.query(f"select last(*) from {dbname}.st1 where tbname in ('t21', 't12');") + tdSql.checkRows(1) + tdSql.checkData(0, 0, '2025-01-21 00:12:03') + tdSql.checkData(0, 1, 123) + + tdSql.query(f"select last(*) from {dbname}.st1 where tbname in ('ta') and tbname in ('t12');") + tdSql.checkRows(0) + + tdSql.query(f"select last(*) from {dbname}.st1 where tbname in ('t12') or tbname in ('t11');") + tdSql.checkRows(1) + tdSql.checkData(0, 0, '2025-01-21 00:12:03') + tdSql.checkData(0, 1, 123) + + tdSql.query(f"select last(*) from {dbname}.st1 where tbname in ('ta') or tbname in ('t21');") + tdSql.checkRows(0) + + tdSql.query(f"select last(*) from {dbname}.st1 where tbname in ('t12', 't21') and tbname in ('t21');") + tdSql.checkRows(0) + + tdSql.query(f"select last(*) from {dbname}.st1 where tbname in ('t12', 't11') and tbname in ('t11');") + tdSql.checkRows(1) + tdSql.checkData(0, 0, '2025-01-21 00:11:03') + tdSql.checkData(0, 1, 113) + + + def run(self): + self.inTest() + + def stop(self): + tdSql.close() + tdLog.success("%s successfully executed" % __file__) + + +tdCases.addWindows(__file__, TDTestCase()) +tdCases.addLinux(__file__, TDTestCase()) diff --git a/tests/system-test/test.py b/tests/system-test/test.py index 0d40544be8..ab1bdc21d3 100644 --- a/tests/system-test/test.py +++ b/tests/system-test/test.py @@ -58,12 +58,12 @@ def checkRunTimeError(): if hwnd: os.system("TASKKILL /F /IM taosd.exe") -# +# # run case on previous cluster # def runOnPreviousCluster(host, config, fileName): print("enter run on previeous") - + # load case module sep = "/" if platform.system().lower() == 'windows': @@ -113,8 +113,9 @@ if __name__ == "__main__": asan = False independentMnode = False previousCluster = False - opts, args = getopt.gnu_getopt(sys.argv[1:], 'f:p:m:l:scghrd:k:e:N:M:Q:C:RWD:n:i:aP', [ - 'file=', 'path=', 'master', 'logSql', 'stop', 'cluster', 'valgrind', 'help', 'restart', 'updateCfgDict', 'killv', 'execCmd','dnodeNums','mnodeNums','queryPolicy','createDnodeNums','restful','websocket','adaptercfgupdate','replicaVar','independentMnode','previous']) + crashGen = False + opts, args = getopt.gnu_getopt(sys.argv[1:], 'f:p:m:l:scghrd:k:e:N:M:Q:C:RWD:n:i:aP:G', [ + 'file=', 'path=', 'master', 'logSql', 'stop', 'cluster', 'valgrind', 'help', 'restart', 'updateCfgDict', 'killv', 'execCmd','dnodeNums','mnodeNums','queryPolicy','createDnodeNums','restful','websocket','adaptercfgupdate','replicaVar','independentMnode','previous',"crashGen"]) for key, value in opts: if key in ['-h', '--help']: tdLog.printNoPrefix( @@ -141,6 +142,7 @@ if __name__ == "__main__": tdLog.printNoPrefix('-i independentMnode Mnode') tdLog.printNoPrefix('-a address sanitizer mode') tdLog.printNoPrefix('-P run case with [P]revious cluster, do not create new cluster to run case.') + tdLog.printNoPrefix('-G crashGen mode') sys.exit(0) @@ -208,7 +210,7 @@ if __name__ == "__main__": if key in ['-R', '--restful']: restful = True - + if key in ['-W', '--websocket']: websocket = True @@ -228,6 +230,10 @@ if __name__ == "__main__": if key in ['-P', '--previous']: previousCluster = True + if key in ['-G', '--crashGen']: + crashGen = True + + # # do exeCmd command # @@ -405,7 +411,7 @@ if __name__ == "__main__": for dnode in tdDnodes.dnodes: tdDnodes.starttaosd(dnode.index) tdCases.logSql(logSql) - + if restful or websocket: tAdapter.deploy(adapter_cfg_dict) tAdapter.start() @@ -450,7 +456,7 @@ if __name__ == "__main__": else: tdLog.debug(res) tdLog.exit(f"alter queryPolicy to {queryPolicy} failed") - + if ucase is not None and hasattr(ucase, 'noConn') and ucase.noConn == True: conn = None else: @@ -640,7 +646,7 @@ if __name__ == "__main__": else: tdLog.debug(res) tdLog.exit(f"alter queryPolicy to {queryPolicy} failed") - + # run case if testCluster: @@ -692,6 +698,7 @@ if __name__ == "__main__": # tdDnodes.StopAllSigint() tdLog.info("Address sanitizer mode finished") else: - tdDnodes.stopAll() + if not crashGen: + tdDnodes.stopAll() tdLog.info("stop all td process finished") sys.exit(0) diff --git a/tests/unit-test/test.sh b/tests/unit-test/test.sh index 21461bc6a5..46fc0aedb3 100755 --- a/tests/unit-test/test.sh +++ b/tests/unit-test/test.sh @@ -7,10 +7,10 @@ function usage() { } ent=1 -while getopts "eh" opt; do +while getopts "e:h" opt; do case $opt in e) - ent=1 + ent="$OPTARG" ;; h) usage