Merge branch 'main' into fix/disp_lost

This commit is contained in:
Haojun Liao 2025-01-24 17:15:22 +08:00
commit 79284e7414
132 changed files with 3374 additions and 1011 deletions


@ -10,7 +10,36 @@
Simplified Chinese | [English](README.md) | [TDengine Cloud](https://cloud.taosdata.com/?utm_medium=cn&utm_source=github) | Many positions are open for hire, see [here](https://www.taosdata.com/careers/)
# Introduction to TDengine
# Table of Contents
1. [Introduction](#1-tdengine-简介)
1. [Documentation](#2-文档)
1. [Prerequisites](#3-必备工具)
- [3.1 Prerequisites on Linux](#31-linux系统)
- [3.2 Prerequisites on macOS](#32-macos系统)
- [3.3 Prerequisites on Windows](#33-windows系统)
- [3.4 Clone the repository](#34-克隆仓库)
1. [Building](#4-构建)
- [4.1 Build on Linux](#41-linux系统上构建)
- [4.2 Build on macOS](#42-macos系统上构建)
- [4.3 Build on Windows](#43-windows系统上构建)
1. [Packaging](#5-打包)
1. [Installation](#6-安装)
- [6.1 Install on Linux](#61-linux系统上安装)
- [6.2 Install on macOS](#62-macos系统上安装)
- [6.3 Install on Windows](#63-windows系统上安装)
1. [Running](#7-快速运行)
- [7.1 Run on Linux](#71-linux系统上运行)
- [7.2 Run on macOS](#72-macos系统上运行)
- [7.3 Run on Windows](#73-windows系统上运行)
1. [Testing](#8-测试)
1. [Releasing](#9-版本发布)
1. [Workflow](#10-工作流)
1. [Coverage](#11-覆盖率)
1. [Contributing](#12-成为社区贡献者)
# 1. Introduction
TDengine is an open-source, high-performance, cloud-native time-series database (TSDB). TDengine can be widely used in IoT, Industrial IoT, connected vehicles, IT operations, finance, and other fields. Beyond the core time-series database features, TDengine also provides caching, data subscription, stream processing, and more, making it a minimalist time-series data processing platform that minimizes system design complexity and lowers R&D and operating costs. Compared with other time-series databases, TDengine's main advantages are as follows:
@ -26,12 +55,82 @@ TDengine is an open-source, high-performance, cloud-native time-series database (Time-Series
- **Open Source Core**: TDengine's core code, including the cluster feature, is fully open source. As of August 1, 2022, there were more than 135.9k running instances worldwide, with 18.7k GitHub stars and 4.4k forks, and an active community.
# Documentation
For a complete list of TDengine's advanced features, please [click here](https://tdengine.com/tdengine/). The easiest way to experience TDengine is through [TDengine Cloud](https://cloud.tdengine.com).
For the complete user manual, system architecture, and more details, please refer to the [TDengine documentation](https://docs.taosdata.com) or the [TDengine Documentation](https://docs.tdengine.com).
# 2. Documentation
# Building
For the complete user manual, system architecture, and more details, please refer to [TDengine](https://www.taosdata.com/) or the [official TDengine documentation](https://docs.taosdata.com).
# 3. Prerequisites
## 3.1 On Linux
<details>
<summary>Install required tools on Linux</summary>
### Ubuntu 18.04, 20.04, 22.04
```bash
sudo apt-get update
sudo apt-get install -y gcc cmake build-essential git libjansson-dev \
libsnappy-dev liblzma-dev zlib1g-dev pkg-config
```
### CentOS 8
```bash
sudo yum update
yum install -y epel-release gcc gcc-c++ make cmake git perl dnf-plugins-core
yum config-manager --set-enabled powertools
yum install -y zlib-static xz-devel snappy-devel jansson-devel pkgconfig libatomic-static libstdc++-static
```
</details>
## 3.2 On macOS
<details>
<summary>Install required tools on macOS</summary>
Install the dependencies with [brew](https://brew.sh/) as prompted.
```bash
brew install argp-standalone gflags pkgconfig
```
</details>
## 3.3 On Windows
<details>
<summary>Install required tools on Windows</summary>
Work in progress.
</details>
## 3.4 Clone the repository
<details>
<summary>Clone the repository</summary>
Clone the TDengine repository to the target machine with the following commands:
```bash
git clone https://github.com/taosdata/TDengine.git
cd TDengine
```
> **NOTE:**
> TDengine connectors can be found in the following repositories: [JDBC Connector](https://github.com/taosdata/taos-connector-jdbc), [Go Connector](https://github.com/taosdata/driver-go), [Python Connector](https://github.com/taosdata/taos-connector-python), [Node.js Connector](https://github.com/taosdata/taos-connector-node), [C# Connector](https://github.com/taosdata/taos-connector-dotnet), [Rust Connector](https://github.com/taosdata/taos-connector-rust).
</details>
# 4. Building
At the moment, TDengine can be installed and run on Linux, Windows, and macOS. Applications on any OS can also choose the RESTful interface provided by taosAdapter to connect to the taosd service. TDengine supports X64/ARM64 CPUs, and it will support MIPS64, Alpha64, ARM32, RISC-V, and other CPU architectures in the future. Building with a cross-compiler is currently not supported.
You can choose to install through source code, a [container](https://docs.taosdata.com/get-started/docker/), an [installation package](https://docs.taosdata.com/get-started/package/), or [Kubernetes](https://docs.taosdata.com/deployment/k8s/). This quick guide only applies to installing from source.
@ -40,309 +139,257 @@ TDengine also provides a set of auxiliary tools, taosTools, which currently includes taosBench
To build TDengine, please use [CMake](https://cmake.org/) 3.13.0 or a higher version.
## Install build tools
## 4.1 Build on Linux
### Ubuntu 18.04 and above & Debian
<details>
```bash
sudo apt-get install -y gcc cmake build-essential git libssl-dev libgflags2.2 libgflags-dev
```
<summary>Detailed steps to build on Linux</summary>
#### Install build dependencies for taos-tools
To build [taos-tools](https://github.com/taosdata/taos-tools) on Ubuntu/Debian, the following packages need to be installed:
```bash
sudo apt install build-essential libjansson-dev libsnappy-dev liblzma-dev libz-dev zlib1g pkg-config
```
### CentOS 7.9
```bash
sudo yum install epel-release
sudo yum update
sudo yum install -y gcc gcc-c++ make cmake3 gflags git openssl-devel
sudo ln -sf /usr/bin/cmake3 /usr/bin/cmake
```
### CentOS 8/Fedora/Rocky Linux
```bash
sudo dnf install -y gcc gcc-c++ gflags make cmake epel-release git openssl-devel
```
#### Install build dependencies for taosTools on CentOS
#### CentOS 7.9
```
sudo yum install -y zlib-devel zlib-static xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libatomic-static libstdc++-static openssl-devel
```
#### CentOS 8/Fedora/Rocky Linux
```
sudo yum install -y epel-release
sudo yum install -y dnf-plugins-core
sudo yum config-manager --set-enabled powertools
sudo yum install -y zlib-devel zlib-static xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libatomic-static libstdc++-static openssl-devel
```
Note: Since snappy lacks pkg-config support (see [this link](https://github.com/google/snappy/pull/86)), cmake reports that libsnappy cannot be found, but it actually works fine.
If enabling powertools fails, you can try the following instead:
```
sudo yum config-manager --set-enabled powertools
```
#### CentOS + devtoolset
In addition to the build dependencies above, run the following commands:
```
sudo yum install centos-release-scl
sudo yum install devtoolset-9 devtoolset-9-libatomic-devel
scl enable devtoolset-9 -- bash
```
### macOS
```
brew install argp-standalone gflags pkgconfig
```
### Set up the golang environment
TDengine includes several components developed in Go, such as taosAdapter. Please refer to the official golang.org documentation to set up the Go development environment.
Please use Go 1.20 or above. For users in China, we recommend using a proxy to speed up package downloads.
```
go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct
```
taosAdapter is not built by default, but you can use the following command to build it as the service providing the RESTful interface.
```
cmake .. -DBUILD_HTTP=false
```
### Set up the rust environment
TDengine includes several components developed in Rust. Please refer to the official rust-lang.org documentation to set up the Rust development environment.
## Get the source code
First, clone the source code from GitHub:
```bash
git clone https://github.com/taosdata/TDengine.git
cd TDengine
```
If downloading over https is slow, you can add the following two lines to ~/.gitconfig to download over the ssh protocol instead. You need to upload your ssh public key to GitHub first; please refer to the official GitHub documentation for details.
```
[url "git@github.com:"]
insteadOf = https://github.com/
```
## Special note
The [JDBC Connector](https://github.com/taosdata/taos-connector-jdbc), [Go Connector](https://github.com/taosdata/driver-go), [Python Connector](https://github.com/taosdata/taos-connector-python), [Node.js Connector](https://github.com/taosdata/taos-connector-node), [C# Connector](https://github.com/taosdata/taos-connector-dotnet), [Rust Connector](https://github.com/taosdata/taos-connector-rust), and [Grafana plugin](https://github.com/taosdata/grafanaplugin) have been moved to standalone repositories.
## Build TDengine
### On Linux
You can run the `build.sh` script in the repository to build both TDengine and taosTools (including taosBenchmark and taosdump):
You can build TDengine and taosTools (including taosBenchmark and taosdump) with the `build.sh` script as follows:
```bash
./build.sh
```
This script is equivalent to executing the following commands:
You can also build with the following commands:
```bash
mkdir debug
cd debug
mkdir debug && cd debug
cmake .. -DBUILD_TOOLS=true -DBUILD_CONTRIB=true
make
```
You can also choose to use jemalloc as the memory allocator instead of the default glibc:
You can use jemalloc as the memory allocator instead of glibc:
```bash
apt install autoconf
cmake .. -DJEMALLOC_ENABLED=true
```
On X86-64, X86, and arm64 platforms, the TDengine build script can automatically detect the machine architecture. You can also set the CPUTYPE option manually to specify the CPU type, such as aarch64.
aarch64:
The TDengine build script can auto-detect the host architecture on x86, x86-64, and arm64 platforms.
You can also specify the architecture manually with the CPUTYPE option:
```bash
cmake .. -DCPUTYPE=aarch64 && cmake --build .
```
### On Windows
</details>
If you are using Visual Studio 2013:
## 4.2 Build on macOS
Open cmd.exe; when executing vcvarsall.bat, specify "x86_amd64" for a 64-bit OS or "x86" for a 32-bit OS.
<details>
```bash
<summary>Detailed steps to build on macOS</summary>
Please install the XCode command line tools and cmake. Verified with XCode 11.4+ on Catalina and Big Sur.
```shell
mkdir debug && cd debug
"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat" < x86_amd64 | x86 >
cmake .. && cmake --build .
```
</details>
## 4.3 Build on Windows
<details>
<summary>Detailed steps to build on Windows</summary>
If you are using Visual Studio 2013, open a command window by executing "cmd.exe" and run the commands below.
When executing vcvarsall.bat, specify "amd64" for 64-bit Windows or "x86" for 32-bit Windows.
```cmd
mkdir debug && cd debug
"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat" < amd64 | x86 >
cmake .. -G "NMake Makefiles"
nmake
```
If you are using Visual Studio 2019 or 2017:
If you use Visual Studio 2019 or 2017:
Open cmd.exe; when executing vcvarsall.bat, specify "x64" for a 64-bit OS or "x86" for a 32-bit OS.
Open a command window by executing "cmd.exe" and run the commands below.
When executing vcvarsall.bat, specify "x64" for 64-bit Windows or "x86" for 32-bit Windows.
```bash
```cmd
mkdir debug && cd debug
"c:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\Build\vcvarsall.bat" < x64 | x86 >
cmake .. -G "NMake Makefiles"
nmake
```
You can also find the "Visual Studio < 2019 | 2017 >" menu item in the Start menu, choose "x64 Native Tools Command Prompt for VS < 2019 | 2017 >" or "x86 Native Tools Command Prompt for VS < 2019 | 2017 >" depending on your system, open a command window, and execute:
Or you can open a command window from the Windows Start menu -> "Visual Studio < 2019 | 2017 >" folder -> "x64 Native Tools Command Prompt for VS < 2019 | 2017 >" or "x86 Native Tools Command Prompt for VS < 2019 | 2017 >" depending on your Windows architecture, and then execute the following commands:
```bash
```cmd
mkdir debug && cd debug
cmake .. -G "NMake Makefiles"
nmake
```
</details>
### On macOS
# 5. Packaging
Install the XCode command line tools and cmake. XCode 11.4+ is required on Catalina and Big Sur.
Due to some component dependencies, the TDengine community installer cannot be created from this repository alone. We are still working on this improvement.
# 6. Installation
## 6.1 Install on Linux
<details>
<summary>Detailed steps to install on Linux</summary>
After building successfully, TDengine can be installed with:
```bash
mkdir debug && cd debug
cmake .. && cmake --build .
sudo make install
```
Installing from source also configures service management for TDengine. Users can also install from the [TDengine installation package](https://docs.taosdata.com/get-started/package/).
# Installation
</details>
## On Linux
## 6.2 Install on macOS
After the build completes, install TDengine:
<details>
<summary>Detailed steps to install on macOS</summary>
After building successfully, TDengine can be installed with:
```bash
sudo make install
```
See [Directory Structure](https://docs.taosdata.com/reference/directory/) for more information about the directories and files created on your system.
</details>
Installing from source also configures service management for TDengine. Users can also choose to [install from packages](https://docs.taosdata.com/get-started/package/).
## 6.3 Install on Windows
After installation, start the TDengine service from a terminal:
<details>
```bash
sudo systemctl start taosd
```
<summary>Detailed steps to install on Windows</summary>
You can use the TDengine CLI to connect to the TDengine service; in a terminal, enter:
```bash
taos
```
If the TDengine CLI connects to the server successfully, a welcome message and version information are printed. Otherwise, an error message is shown.
## On Windows
After the build completes, install TDengine:
After building successfully, TDengine can be installed with:
```cmd
nmake install
```
## On macOS
</details>
After the build completes, install TDengine:
# 7. Running
## 7.1 Run TDengine on Linux
<details>
<summary>Detailed steps to run on Linux</summary>
After installing TDengine on Linux, run the following command in a terminal to start the service:
```bash
sudo make install
sudo systemctl start taosd
```
See [Directory Structure](https://docs.taosdata.com/reference/directory/) for more information about the directories and files created on your system.
Installing from source also configures service management for TDengine. Users can also choose to [install from packages](https://docs.taosdata.com/get-started/package/).
After installation, you can double-click the TDengine icon in Applications to start the service, or start it from a terminal:
```bash
sudo launchctl start com.tdengine.taosd
```
You can use the TDengine CLI to connect to the TDengine service; in a terminal, enter:
Then connect to the TDengine service with the TDengine CLI using the following command:
```bash
taos
```
If the TDengine CLI connects to the server successfully, a welcome message and version information are printed. Otherwise, an error message is shown.
If the TDengine CLI connects to the server successfully, a welcome message and version information are printed. Otherwise, a connection error message is shown.
## Quick run
If you don't want to run TDengine as a service, you can run it directly in a terminal: after the build completes, execute the following command (on Windows the generated executable has the .exe suffix, e.g. taosd.exe):
If you don't want to run TDengine as a service, you can run it in the current terminal. For example, to quickly start a TDengine server after building, run the following command in a terminal (we take Linux as an example; on Windows the command is `taosd.exe`):
```bash
./build/bin/taosd -c test/cfg
```
In another terminal, use the TDengine CLI to connect to the server:
In another terminal, connect to the server with the TDengine CLI:
```bash
./build/bin/taos -c test/cfg
```
"-c test/cfg"指定系统配置文件所在目录。
选项 `-c test/cfg` 指定系统配置文件的目录。
# 体验 TDengine
</details>
在 TDengine 终端中,用户可以通过 SQL 命令来创建/删除数据库、表等,并进行插入查询操作。
## 7.2 macOS系统上运行
```sql
CREATE DATABASE demo;
USE demo;
CREATE TABLE t (ts TIMESTAMP, speed INT);
INSERT INTO t VALUES('2019-07-15 00:00:00', 10);
INSERT INTO t VALUES('2019-07-15 01:00:00', 20);
SELECT * FROM t;
ts | speed |
===================================
19-07-15 00:00:00.000| 10|
19-07-15 01:00:00.000| 20|
Query OK, 2 row(s) in set (0.001700s)
<details>
<summary>Detailed steps to run on macOS</summary>
After installation on macOS, start the service by double-clicking /applications/TDengine, or by running the following command in a terminal:
```bash
sudo launchctl start com.tdengine.taosd
```
# Developing with TDengine
Then, in a terminal, connect to the TDengine server with the TDengine CLI:
## Official connectors
```bash
taos
```
TDengine provides a rich set of application development interfaces, including C/C++, Java, Python, Go, Node.js, C#, and RESTful, to help you develop applications quickly:
If the TDengine CLI connects to the server successfully, a welcome message and version information are printed. Otherwise, an error message is shown.
- [Java](https://docs.taosdata.com/reference/connector/java/)
- [C/C++](https://docs.taosdata.com/reference/connector/cpp/)
- [Python](https://docs.taosdata.com/reference/connector/python/)
- [Go](https://docs.taosdata.com/reference/connector/go/)
- [Node.js](https://docs.taosdata.com/reference/connector/node/)
- [Rust](https://docs.taosdata.com/reference/connector/rust/)
- [C#](https://docs.taosdata.com/reference/connector/csharp/)
- [RESTful API](https://docs.taosdata.com/reference/connector/rest-api/)
</details>
# Contributing
## 7.3 Run TDengine on Windows
<details>
<summary>Detailed steps to run on Windows</summary>
You can start the TDengine server on the Windows platform with the following commands:
```cmd
.\build\bin\taosd.exe -c test\cfg
```
In another terminal, connect to the server with the TDengine CLI:
```cmd
.\build\bin\taos.exe -c test\cfg
```
The option `-c test/cfg` specifies the system configuration file directory.
</details>
# 8. Testing
For how to run different types of tests on TDengine, please see [Testing TDengine](./tests/README-CN.md).
# 9. Releasing
For the complete list of TDengine releases, please see [Releases](https://github.com/taosdata/TDengine/releases).
# 10. Workflow
The TDengine build-check workflow can be found in this [GitHub Action](https://github.com/taosdata/TDengine/actions/workflows/taosd-ci-build.yml); more workflows are being created and will be available soon.
# 11. Coverage
The latest TDengine test coverage report can be found on [coveralls.io](https://coveralls.io/github/taosdata/TDengine).
<details>
<summary>How to run the coverage report locally?</summary>
To create the test coverage report (in HTML format) locally, run the following commands:
```bash
cd tests
bash setup-lcov.sh -v 1.16 && ./run_local_coverage.sh -b main -c task
# runs on the main branch and executes cases in longtimeruning_cases.task
# for more information about options, please refer to ./run_local_coverage.sh -h
```
> **NOTE:**
> The -b and -i options recompile TDengine with the -DCOVER=true option, which may take some time.
</details>
# 12. Contributing
Click [here](https://www.taosdata.com/contributor) to learn how to become a TDengine contributor.
# Join the TDengine Community
The official TDengine community group, the "IoT Big Data Group", is open to everyone. To join, search for the WeChat ID "tdengine" and add "Little T" as a friend.

README.md

@ -26,24 +26,33 @@ English | [简体中文](README-CN.md) | [TDengine Cloud](https://cloud.tdengine
# Table of Contents
1. [What is TDengine?](#1-what-is-tdengine)
2. [Documentation](#2-documentation)
3. [Building](#3-building)
1. [Install build tools](#31-install-build-tools)
1. [Get the source codes](#32-get-the-source-codes)
1. [Special Note](#33-special-note)
1. [Build TDengine](#34-build-tdengine)
4. [Installing](#4-installing)
1. [On Linux platform](#41-on-linux-platform)
1. [On Windows platform](#42-on-windows-platform)
1. [On macOS platform](#43-on-macos-platform)
1. [Quick Run](#44-quick-run)
5. [Try TDengine](#5-try-tdengine)
6. [Developing with TDengine](#6-developing-with-tdengine)
7. [Contribute to TDengine](#7-contribute-to-tdengine)
8. [Join the TDengine Community](#8-join-the-tdengine-community)
1. [Introduction](#1-introduction)
1. [Documentation](#2-documentation)
1. [Prerequisites](#3-prerequisites)
- [3.1 Prerequisites On Linux](#31-on-linux)
- [3.2 Prerequisites On macOS](#32-on-macos)
- [3.3 Prerequisites On Windows](#33-on-windows)
- [3.4 Clone the repo](#34-clone-the-repo)
1. [Building](#4-building)
- [4.1 Build on Linux](#41-build-on-linux)
- [4.2 Build on macOS](#42-build-on-macos)
- [4.3 Build On Windows](#43-build-on-windows)
1. [Packaging](#5-packaging)
1. [Installation](#6-installation)
- [6.1 Install on Linux](#61-install-on-linux)
- [6.2 Install on macOS](#62-install-on-macos)
- [6.3 Install on Windows](#63-install-on-windows)
1. [Running](#7-running)
- [7.1 Run TDengine on Linux](#71-run-tdengine-on-linux)
- [7.2 Run TDengine on macOS](#72-run-tdengine-on-macos)
- [7.3 Run TDengine on Windows](#73-run-tdengine-on-windows)
1. [Testing](#8-testing)
1. [Releasing](#9-releasing)
1. [Workflow](#10-workflow)
1. [Coverage](#11-coverage)
1. [Contributing](#12-contributing)
# 1. What is TDengine
# 1. Introduction
TDengine is an open source, high-performance, cloud native [time-series database](https://tdengine.com/tsdb/) optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. TDengine differentiates itself from other time-series databases with the following advantages:
@ -65,132 +74,91 @@ For a full list of TDengine competitive advantages, please [check here](https://
For user manual, system design and architecture, please refer to [TDengine Documentation](https://docs.tdengine.com) ([TDengine 文档](https://docs.taosdata.com))
# 3. Building
# 3. Prerequisites
At the moment, TDengine server supports running on Linux/Windows/macOS systems. Any application can also choose the RESTful interface provided by taosAdapter to connect to the taosd service. TDengine supports X64/ARM64 CPUs, and it will support MIPS64, Alpha64, ARM32, RISC-V, and other CPU architectures in the future. Right now we don't support building in a cross-compiling environment.
## 3.1 On Linux
You can choose to install through source code, a [container](https://docs.tdengine.com/get-started/docker/), an [installation package](https://docs.tdengine.com/get-started/package/), or [Kubernetes](https://docs.tdengine.com/deployment/k8s/). This quick guide only applies to installing from source.
<details>
TDengine provides a few useful tools, such as taosBenchmark (formerly named taosdemo) and taosdump. They were part of TDengine. By default, compiling TDengine does not include taosTools. You can use `cmake .. -DBUILD_TOOLS=true` to have them compiled with TDengine.
<summary>Install required tools on Linux</summary>
To build TDengine, use [CMake](https://cmake.org/) 3.13.0 or a higher version in the project directory.
## 3.1 Install build tools
### Ubuntu 18.04 and above or Debian
### For Ubuntu 18.04, 20.04, 22.04
```bash
sudo apt-get install -y gcc cmake build-essential git libssl-dev libgflags2.2 libgflags-dev
sudo apt-get update
sudo apt-get install -y gcc cmake build-essential git libjansson-dev \
libsnappy-dev liblzma-dev zlib1g-dev pkg-config
```
#### Install build dependencies for taosTools
To build the [taosTools](https://github.com/taosdata/taos-tools) on Ubuntu/Debian, the following packages need to be installed.
### For CentOS 8
```bash
sudo apt install build-essential libjansson-dev libsnappy-dev liblzma-dev libz-dev zlib1g pkg-config
```
### CentOS 7.9
```bash
sudo yum install epel-release
sudo yum update
sudo yum install -y gcc gcc-c++ make cmake3 gflags git openssl-devel
sudo ln -sf /usr/bin/cmake3 /usr/bin/cmake
yum install -y epel-release gcc gcc-c++ make cmake git perl dnf-plugins-core
yum config-manager --set-enabled powertools
yum install -y zlib-static xz-devel snappy-devel jansson-devel pkgconfig libatomic-static libstdc++-static
```
### CentOS 8/Fedora/Rocky Linux
</details>
## 3.2 On macOS
<details>
<summary>Install required tools on macOS</summary>
Please install the dependencies with [brew](https://brew.sh/).
```bash
sudo dnf install -y gcc gcc-c++ make cmake epel-release gflags git openssl-devel
```
#### Install build dependencies for taosTools on CentOS
#### CentOS 7.9
```
sudo yum install -y zlib-devel zlib-static xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libatomic-static libstdc++-static openssl-devel
```
#### CentOS 8/Fedora/Rocky Linux
```
sudo yum install -y epel-release
sudo yum install -y dnf-plugins-core
sudo yum config-manager --set-enabled powertools
sudo yum install -y zlib-devel zlib-static xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libatomic-static libstdc++-static openssl-devel
```
Note: Since snappy lacks pkg-config support (refer to [this link](https://github.com/google/snappy/pull/86)), cmake reports that libsnappy cannot be found, but snappy still works well.
If the PowerTools installation fails, you can try to use:
```
sudo yum config-manager --set-enabled powertools
```
#### For CentOS + devtoolset
Besides above dependencies, please run following commands:
```
sudo yum install centos-release-scl
sudo yum install devtoolset-9 devtoolset-9-libatomic-devel
scl enable devtoolset-9 -- bash
```
### macOS
```
brew install argp-standalone gflags pkgconfig
```
### Setup golang environment
</details>
TDengine includes a few components, such as taosAdapter, developed in Go. Please refer to the official golang.org documentation for golang environment setup.
## 3.3 On Windows
Please use version 1.20+. For users in China, we recommend using a proxy to accelerate package downloading.
<details>
```
go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct
```
<summary>Install required tools on Windows</summary>
By default taosAdapter is not built, but you can use the following command to build it as the service for the RESTful interface.
Work in Progress.
```
cmake .. -DBUILD_HTTP=false
```
</details>
### Setup rust environment
## 3.4 Clone the repo
TDengine includes a few components developed in Rust. Please refer to the official rust-lang.org documentation for Rust environment setup.
<details>
## 3.2 Get the source codes
<summary>Clone the repo</summary>
First of all, you may clone the source code from GitHub:
Clone the repository to the target machine:
```bash
git clone https://github.com/taosdata/TDengine.git
cd TDengine
```
You can modify the file ~/.gitconfig to use the ssh protocol instead of https for better download speed. You will need to upload your ssh public key to GitHub first. Please refer to the official GitHub documentation for details.
```
[url "git@github.com:"]
insteadOf = https://github.com/
```
> **NOTE:**
> TDengine Connectors can be found in following repositories: [JDBC Connector](https://github.com/taosdata/taos-connector-jdbc), [Go Connector](https://github.com/taosdata/driver-go), [Python Connector](https://github.com/taosdata/taos-connector-python), [Node.js Connector](https://github.com/taosdata/taos-connector-node), [C# Connector](https://github.com/taosdata/taos-connector-dotnet), [Rust Connector](https://github.com/taosdata/taos-connector-rust).
## 3.3 Special Note
</details>
The [JDBC Connector](https://github.com/taosdata/taos-connector-jdbc), [Go Connector](https://github.com/taosdata/driver-go), [Python Connector](https://github.com/taosdata/taos-connector-python), [Node.js Connector](https://github.com/taosdata/taos-connector-node), [C# Connector](https://github.com/taosdata/taos-connector-dotnet), [Rust Connector](https://github.com/taosdata/taos-connector-rust), and [Grafana plugin](https://github.com/taosdata/grafanaplugin) have been moved to standalone repositories.
# 4. Building
## 3.4 Build TDengine
At the moment, TDengine server supports running on Linux/Windows/macOS systems. Any application can also choose the RESTful interface provided by taosAdapter to connect to the taosd service. TDengine supports X64/ARM64 CPUs, and it will support MIPS64, Alpha64, ARM32, RISC-V, and other CPU architectures in the future. Right now we don't support building in a cross-compiling environment.
### On Linux platform
You can choose to install through source code, a [container](https://docs.tdengine.com/get-started/deploy-in-docker/), an [installation package](https://docs.tdengine.com/get-started/deploy-from-package/), or [Kubernetes](https://docs.tdengine.com/operations-and-maintenance/deploy-your-cluster/#kubernetes-deployment). This quick guide only applies to installing from source.
TDengine provides a few useful tools, such as taosBenchmark (formerly named taosdemo) and taosdump. They were part of TDengine. By default, compiling TDengine does not include taosTools. You can use `cmake .. -DBUILD_TOOLS=true` to have them compiled with TDengine.
To build TDengine, use [CMake](https://cmake.org/) 3.13.0 or a higher version in the project directory.
## 4.1 Build on Linux
<details>
<summary>Detailed steps to build on Linux</summary>
You can run the bash script `build.sh` to build both TDengine and taosTools, including taosBenchmark and taosdump, as below:
@ -201,29 +169,46 @@ You can run the bash script `build.sh` to build both TDengine and taosTools incl
It is equivalent to executing the following commands:
```bash
mkdir debug
cd debug
mkdir debug && cd debug
cmake .. -DBUILD_TOOLS=true -DBUILD_CONTRIB=true
make
```
You can use Jemalloc as memory allocator instead of glibc:
```
apt install autoconf
```bash
cmake .. -DJEMALLOC_ENABLED=true
```
The TDengine build script can detect the host machine's architecture on X86-64, X86, and arm64 platforms.
You can also specify the CPUTYPE option, such as aarch64, if the detection result is not correct:
aarch64:
The TDengine build script can auto-detect the host machine's architecture on x86, x86-64, and arm64 platforms.
You can also specify the architecture manually with the CPUTYPE option:
```bash
cmake .. -DCPUTYPE=aarch64 && cmake --build .
```
### On Windows platform
</details>
## 4.2 Build on macOS
<details>
<summary>Detailed steps to build on macOS</summary>
Please install XCode command line tools and cmake. Verified with XCode 11.4+ on Catalina and Big Sur.
```shell
mkdir debug && cd debug
cmake .. && cmake --build .
```
</details>
## 4.3 Build on Windows
<details>
<summary>Detailed steps to build on Windows</summary>
If you use Visual Studio 2013, please open a command window by executing "cmd.exe".
Please specify "amd64" for 64-bit Windows or "x86" for 32-bit Windows when you execute vcvarsall.bat.
@ -254,31 +239,67 @@ mkdir debug && cd debug
cmake .. -G "NMake Makefiles"
nmake
```
</details>
### On macOS platform
# 5. Packaging
Please install XCode command line tools and cmake. Verified with XCode 11.4+ on Catalina and Big Sur.
The TDengine community installer cannot be created from this repository alone, due to some component dependencies. We are still working on this improvement.
```shell
mkdir debug && cd debug
cmake .. && cmake --build .
```
# 6. Installation
# 4. Installing
## 6.1 Install on Linux
## 4.1 On Linux platform
<details>
After building successfully, TDengine can be installed by
<summary>Detailed steps to install on Linux</summary>
After building successfully, TDengine can be installed by:
```bash
sudo make install
```
Users can find more information about directories installed on the system in the [directory and files](https://docs.tdengine.com/reference/directory/) section.
Installing from source code will also configure service management for TDengine. Users can also choose to [install from packages](https://docs.tdengine.com/get-started/deploy-from-package/) for it.
Installing from source code will also configure service management for TDengine. Users can also choose to [install from packages](https://docs.tdengine.com/get-started/package/) for it.
</details>
To start the service after installation, in a terminal, use:
## 6.2 Install on macOS
<details>
<summary>Detailed steps to install on macOS</summary>
After building successfully, TDengine can be installed by:
```bash
sudo make install
```
</details>
## 6.3 Install on Windows
<details>
<summary>Detailed steps to install on Windows</summary>
After building successfully, TDengine can be installed by:
```cmd
nmake install
```
</details>
# 7. Running
## 7.1 Run TDengine on Linux
<details>
<summary>Detailed steps to run on Linux</summary>
To start the service after installation on Linux, in a terminal, use:
```bash
sudo systemctl start taosd
@ -292,27 +313,29 @@ taos
If TDengine CLI connects the server successfully, welcome messages and version info are printed. Otherwise, an error message is shown.
## 4.2 On Windows platform
After building successfully, TDengine can be installed by:
```cmd
nmake install
```
## 4.3 On macOS platform
After building successfully, TDengine can be installed by:
If you don't want to run TDengine as a service, you can run it in the current shell. For example, to quickly start a TDengine server after building, run the command below in a terminal (we take Linux as an example; the command on Windows will be `taosd.exe`):
```bash
sudo make install
./build/bin/taosd -c test/cfg
```
Users can find more information about directories installed on the system in the [directory and files](https://docs.tdengine.com/reference/directory/) section.
In another terminal, use the TDengine CLI to connect to the server:
Installing from source code will also configure service management for TDengine. Users can also choose to [install from packages](https://docs.tdengine.com/get-started/package/) for it.
```bash
./build/bin/taos -c test/cfg
```
To start the service after installation, double-click /applications/TDengine to start the program, or, in a terminal, use:
Option `-c test/cfg` specifies the system configuration file directory.
</details>
## 7.2 Run TDengine on macOS
<details>
<summary>Detailed steps to run on macOS</summary>
To start the service after installation on macOS, double-click /applications/TDengine to start the program, or, in a terminal, use:
```bash
sudo launchctl start com.tdengine.taosd
@ -326,64 +349,63 @@ taos
If TDengine CLI connects the server successfully, welcome messages and version info are printed. Otherwise, an error message is shown.
## 4.4 Quick Run
</details>
If you don't want to run TDengine as a service, you can run it in the current shell. For example, to quickly start a TDengine server after building, run the command below in a terminal (we take Linux as an example; the command on Windows will be `taosd.exe`):
```bash
./build/bin/taosd -c test/cfg
## 7.3 Run TDengine on Windows
<details>
<summary>Detailed steps to run on Windows</summary>
You can start the TDengine server on the Windows platform with the commands below:
```cmd
.\build\bin\taosd.exe -c test\cfg
```
In another terminal, use the TDengine CLI to connect to the server:
```bash
./build/bin/taos -c test/cfg
```cmd
.\build\bin\taos.exe -c test\cfg
```
option "-c test/cfg" specifies the system configuration file directory.
# 5. Try TDengine
</details>
It is easy to run SQL commands from the TDengine CLI, just as in other SQL databases.
# 8. Testing
```sql
CREATE DATABASE demo;
USE demo;
CREATE TABLE t (ts TIMESTAMP, speed INT);
INSERT INTO t VALUES('2019-07-15 00:00:00', 10);
INSERT INTO t VALUES('2019-07-15 01:00:00', 20);
SELECT * FROM t;
ts | speed |
===================================
19-07-15 00:00:00.000| 10|
19-07-15 01:00:00.000| 20|
Query OK, 2 row(s) in set (0.001700s)
For how to run different types of tests on TDengine, please see [Testing TDengine](./tests/README.md).
# 9. Releasing
For the complete list of TDengine Releases, please see [Releases](https://github.com/taosdata/TDengine/releases).
# 10. Workflow
The TDengine build check workflow can be found in this [GitHub Action](https://github.com/taosdata/TDengine/actions/workflows/taosd-ci-build.yml). More workflows will be available soon.
# 11. Coverage
Latest TDengine test coverage report can be found on [coveralls.io](https://coveralls.io/github/taosdata/TDengine)
<details>
<summary>How to run the coverage report locally?</summary>
To create the test coverage report (in HTML format) locally, please run the following commands:
```bash
cd tests
bash setup-lcov.sh -v 1.16 && ./run_local_coverage.sh -b main -c task
# runs on the main branch and executes cases in longtimeruning_cases.task
# for more information about options, please refer to ./run_local_coverage.sh -h
```
> **NOTE:**
> Please note that the -b and -i options will recompile TDengine with the -DCOVER=true option, which may take a significant amount of time.
# 6. Developing with TDengine
</details>
## Official Connectors
# 12. Contributing
TDengine provides abundant development tools for users to develop on TDengine. Follow the links below to find your desired connectors and relevant documentation.
- [Java](https://docs.tdengine.com/reference/connectors/java/)
- [C/C++](https://docs.tdengine.com/reference/connectors/cpp/)
- [Python](https://docs.tdengine.com/reference/connectors/python/)
- [Go](https://docs.tdengine.com/reference/connectors/go/)
- [Node.js](https://docs.tdengine.com/reference/connectors/node/)
- [Rust](https://docs.tdengine.com/reference/connectors/rust/)
- [C#](https://docs.tdengine.com/reference/connectors/csharp/)
- [RESTful API](https://docs.tdengine.com/reference/connectors/rest-api/)
# 7. Contribute to TDengine
Please follow the [contribution guidelines](CONTRIBUTING.md) to contribute to the project.
# 8. Join the TDengine Community
For more information about TDengine, you can follow us on social media and join our Discord server:
- [Discord](https://discord.com/invite/VZdSuUg4pS)
- [Twitter](https://twitter.com/TDengineDB)
- [LinkedIn](https://www.linkedin.com/company/tdengine/)
- [YouTube](https://www.youtube.com/@tdengine)
Please follow the [contribution guidelines](CONTRIBUTING.md) to contribute to TDengine.


@ -166,6 +166,10 @@ IF(${BUILD_WITH_ANALYSIS})
set(BUILD_WITH_S3 ON)
ENDIF()
IF(${TD_LINUX})
set(BUILD_WITH_ANALYSIS ON)
ENDIF()
IF(${BUILD_S3})
IF(${BUILD_WITH_S3})
@ -205,13 +209,6 @@ option(
off
)
option(
BUILD_WITH_NURAFT
"If build with NuRaft"
OFF
)
option(
BUILD_WITH_UV
"If build with libuv"


@ -2,7 +2,7 @@
IF (DEFINED VERNUMBER)
SET(TD_VER_NUMBER ${VERNUMBER})
ELSE ()
SET(TD_VER_NUMBER "3.3.5.0.alpha")
SET(TD_VER_NUMBER "3.3.5.2.alpha")
ENDIF ()
IF (DEFINED VERCOMPATIBLE)


@ -17,7 +17,6 @@ elseif(${BUILD_WITH_COS})
file(MAKE_DIRECTORY $ENV{HOME}/.cos-local.1/)
cat("${TD_SUPPORT_DIR}/mxml_CMakeLists.txt.in" ${CONTRIB_TMP_FILE3})
cat("${TD_SUPPORT_DIR}/apr_CMakeLists.txt.in" ${CONTRIB_TMP_FILE3})
cat("${TD_SUPPORT_DIR}/curl_CMakeLists.txt.in" ${CONTRIB_TMP_FILE3})
endif(${BUILD_WITH_COS})
configure_file(${CONTRIB_TMP_FILE3} "${TD_CONTRIB_DIR}/deps-download/CMakeLists.txt")
@ -148,9 +147,7 @@ endif(${BUILD_WITH_SQLITE})
# s3
if(${BUILD_WITH_S3})
cat("${TD_SUPPORT_DIR}/ssl_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
cat("${TD_SUPPORT_DIR}/xml2_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
cat("${TD_SUPPORT_DIR}/curl_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
cat("${TD_SUPPORT_DIR}/libs3_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
cat("${TD_SUPPORT_DIR}/azure_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
add_definitions(-DUSE_S3)
@ -183,6 +180,11 @@ if(${BUILD_GEOS})
cat("${TD_SUPPORT_DIR}/geos_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
endif()
if (${BUILD_WITH_ANALYSIS})
cat("${TD_SUPPORT_DIR}/ssl_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
cat("${TD_SUPPORT_DIR}/curl_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
endif()
#
if(${BUILD_PCRE2})
cat("${TD_SUPPORT_DIR}/pcre2_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})


@ -20,14 +20,6 @@ if(${BUILD_WITH_SQLITE})
add_subdirectory(sqlite)
endif(${BUILD_WITH_SQLITE})
if(${BUILD_WITH_CRAFT})
add_subdirectory(craft)
endif(${BUILD_WITH_CRAFT})
if(${BUILD_WITH_TRAFT})
# add_subdirectory(traft)
endif(${BUILD_WITH_TRAFT})
if(${BUILD_S3})
add_subdirectory(azure)
endif()


@ -109,7 +109,7 @@ If you are using Maven to manage your project, simply add the following dependen
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.1</version>
<version>3.5.2</version>
</dependency>
```


@ -26,7 +26,8 @@ Flink Connector supports all platforms that can run Flink 1.19 and above version
| Flink Connector Version | Major Changes | TDengine Version|
|-------------------------| ------------------------------------ | ---------------- |
| 2.0.0 | 1. Support SQL queries on data in TDengine databases. <br/> 2. Support CDC subscription to data in TDengine databases. <br/> 3. Support reading and writing to TDengine databases using Table SQL. | 3.3.5.0 and higher|
| 2.0.1 | Sink supports writing types from RowData implementations.| - |
| 2.0.0 | 1. Support SQL queries on data in TDengine databases. <br/> 2. Support CDC subscription to data in TDengine databases. <br/> 3. Support reading and writing to TDengine databases using Table SQL. | 3.3.5.1 and higher|
| 1.0.0 | Support the Sink function to write data from other sources to TDengine.| 3.3.2.0 and higher|
## Exception and error codes
@ -114,7 +115,7 @@ If using Maven to manage a project, simply add the following dependencies in pom
<dependency>
<groupId>com.taosdata.flink</groupId>
<artifactId>flink-connector-tdengine</artifactId>
<version>2.0.0</version>
<version>2.0.1</version>
</dependency>
```


@ -33,6 +33,8 @@ The JDBC driver implementation for TDengine strives to be consistent with relati
| taos-jdbcdriver Version | Major Changes | TDengine Version |
| ----------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------ |
| 3.5.3 | Support unsigned data types in WebSocket connections. | - |
| 3.5.2 | Fixed WebSocket result set free bug. | - |
| 3.5.1 | Fixed the getObject issue in data subscription. | - |
| 3.5.0 | 1. Optimized the performance of WebSocket connection parameter binding, supporting parameter binding queries using binary data. <br/> 2. Optimized the performance of small queries in WebSocket connection. <br/> 3. Added support for setting time zone and app info on WebSocket connection. | 3.3.5.0 and higher |
| 3.4.0 | 1. Replaced fastjson library with jackson. <br/> 2. WebSocket uses a separate protocol identifier. <br/> 3. Optimized background thread usage to avoid user misuse leading to timeouts. | - |
@ -127,24 +129,27 @@ Please refer to the specific error codes:
TDengine currently supports timestamp, numeric, character, boolean types, and the corresponding Java type conversions are as follows:
| TDengine DataType | JDBCType |
| ----------------- | ------------------ |
| TIMESTAMP | java.sql.Timestamp |
| INT | java.lang.Integer |
| BIGINT | java.lang.Long |
| FLOAT | java.lang.Float |
| DOUBLE | java.lang.Double |
| SMALLINT | java.lang.Short |
| TINYINT | java.lang.Byte |
| BOOL | java.lang.Boolean |
| BINARY | byte array |
| NCHAR | java.lang.String |
| JSON | java.lang.String |
| VARBINARY | byte[] |
| GEOMETRY | byte[] |
| TDengine DataType | JDBCType | Remark |
| ----------------- | -------------------- | --------------------------------------- |
| TIMESTAMP | java.sql.Timestamp | |
| BOOL | java.lang.Boolean | |
| TINYINT | java.lang.Byte | |
| TINYINT UNSIGNED | java.lang.Short | only supported in WebSocket connections |
| SMALLINT | java.lang.Short | |
| SMALLINT UNSIGNED | java.lang.Integer | only supported in WebSocket connections |
| INT | java.lang.Integer | |
| INT UNSIGNED | java.lang.Long | only supported in WebSocket connections |
| BIGINT | java.lang.Long | |
| BIGINT UNSIGNED | java.math.BigInteger | only supported in WebSocket connections |
| FLOAT | java.lang.Float | |
| DOUBLE | java.lang.Double | |
| BINARY | byte array | |
| NCHAR | java.lang.String | |
| JSON | java.lang.String | only supported in tags |
| VARBINARY | byte[] | |
| GEOMETRY | byte[] | |
**Note**: JSON type is only supported in tags.
Due to historical reasons, the BINARY type in TDengine is not truly binary data and is no longer recommended. Please use the VARBINARY type instead.
**Note**: Due to historical reasons, the BINARY type in TDengine is not truly binary data and is no longer recommended. Please use the VARBINARY type instead.
The GEOMETRY type is binary data in little-endian byte order, complying with the WKB standard. For more details, please refer to [Data Types](../../sql-manual/data-types/)
For the WKB standard, please refer to [Well-Known Binary (WKB)](https://libgeos.org/specifications/wkb/)
For the Java connector, you can use the jts library to conveniently create GEOMETRY type objects, serialize them, and write to TDengine. Here is an example [Geometry Example](https://github.com/taosdata/TDengine/blob/main/docs/examples/java/src/main/java/com/taos/example/GeometryDemo.java)
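As a minimal sketch of the jts approach described above (not part of this change), the following builds a little-endian WKB buffer for a point; the class and method names are illustrative:

```java
import org.locationtech.jts.geom.Coordinate;
import org.locationtech.jts.geom.GeometryFactory;
import org.locationtech.jts.geom.Point;
import org.locationtech.jts.io.ByteOrderValues;
import org.locationtech.jts.io.WKBWriter;

public class GeometryWkbSketch {
    // Builds a WKB buffer for POINT(lon lat) in little-endian byte order,
    // which is the byte order TDengine expects for GEOMETRY values.
    public static byte[] pointWkb(double lon, double lat) {
        Point point = new GeometryFactory().createPoint(new Coordinate(lon, lat));
        return new WKBWriter(2, ByteOrderValues.LITTLE_ENDIAN).write(point);
    }
}
```

The resulting bytes can then be bound to a GEOMETRY column or tag through the parameter-binding API shown in the linked example.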


@ -25,6 +25,10 @@ Download links for TDengine 3.x version installation packages are as follows:
import Release from "/components/ReleaseV3";
## 3.3.5.2
<Release type="tdengine" version="3.3.5.2" />
## 3.3.5.0
<Release type="tdengine" version="3.3.5.0" />


@ -0,0 +1,43 @@
---
title: TDengine 3.3.5.2 Release Notes
sidebar_label: 3.3.5.2
description: Version 3.3.5.2 Notes
slug: /release-history/release-notes/3.3.5.2
---
## Features
1. feat: taosX now supports multiple stables with templates for MQTT
## Enhancements
1. enh: improve the taosX error message if the database is invalid
2. enh: use Poetry dependency groups and reduce dependencies at install time [#251](https://github.com/taosdata/taos-connector-python/issues/251)
3. enh: improve backup and restore using taosX
4. enh: during multi-level storage data migration, if the migration takes too long, it may cause the vnode leader to switch
5. enh: adjust the systemctl strategy for managing the taosd process: if three consecutive restarts fail within 60 seconds, the next restart is delayed for 900 seconds
## Fixes
1. fix: the maxRetryWaitTime parameter is used to control the client's maximum reconnection timeout when the cluster is unable to provide services, but it did not take effect when encountering a Sync timeout error
2. fix: support immediate subscription to the new tag value after modifying the tag value of a sub-table
3. fix: the tmq_consumer_poll function for data subscription does not return an error code when the call fails
4. fix: taosd may crash when more than 100 views are created and the show views command is executed
5. fix: when using stmt2 to insert data, the insertion fails if not all data columns are bound
6. fix: when using stmt2 to insert data, the insertion fails if the database name or table name is enclosed in backticks
7. fix: taosd may crash when closing a vnode that has ongoing file merge tasks
8. fix: frequent execution of the "drop table with tb_uid" statement may lead to a deadlock in taosd
9. fix: a potential deadlock during the switching of log files
10. fix: prohibit the creation of databases with the same names as system databases (information_schema, performance_schema)
11. fix: when the inner query of a nested query comes from a super table, the sorting information cannot be pushed up
12. fix: incorrect error reporting when attempting to write Geometry data types that do not conform to topological specifications through the STMT interface
13. fix: taosd may crash if an error occurs when using the percentile function and a session window in a query statement
14. fix: the issue of being unable to dynamically modify system parameters
15. fix: occasional "Translict transaction" error in replication
16. fix: when the same consumer executes the unsubscribe operation and immediately attempts to subscribe to other, different topics, the subscription API returns an error
17. fix: CVE-2022-28948 security issue in the Go connector
18. fix: when a subquery in a view contains an ORDER BY clause with an alias, and the query function itself also has an alias, querying the view results in an error
19. fix: when changing a database from single replica to multi-replica, the modification fails if there is metadata generated by earlier versions that is no longer used in the new version
20. fix: column names were not correctly copied when using SELECT * FROM subqueries
21. fix: when performing the max/min function on string-type data, the results are inaccurate and taosd may crash
22. fix: stream computing does not support the use of the HAVING clause, but no error is reported during creation
23. fix: the server version information displayed by the taos shell is inaccurate, e.g. unable to correctly distinguish between the community edition and the enterprise edition
24. fix: in certain specific query scenarios, taosd may crash when JOIN and CAST are used together


@ -5,6 +5,7 @@ slug: /release-history/release-notes
[3.3.5.0](./3-3-5-0/)
[3.3.5.2](./3.3.5.2)
[3.3.4.8](./3-3-4-8/)
[3.3.4.3](./3-3-4-3/)


@ -19,7 +19,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.1</version>
<version>3.5.3</version>
</dependency>
<dependency>
<groupId>org.locationtech.jts</groupId>


@ -47,7 +47,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.1</version>
<version>3.5.3</version>
</dependency>
</dependencies>


@ -18,7 +18,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.1</version>
<version>3.5.3</version>
</dependency>
<!-- druid -->
<dependency>


@ -17,7 +17,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.1</version>
<version>3.5.3</version>
</dependency>
<dependency>
<groupId>com.google.guava</groupId>


@ -47,7 +47,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.1</version>
<version>3.5.3</version>
</dependency>
<dependency>


@ -70,7 +70,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.1</version>
<version>3.5.3</version>
</dependency>
<dependency>


@ -67,7 +67,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.1</version>
<version>3.5.3</version>
<!-- <scope>system</scope>-->
<!-- <systemPath>${project.basedir}/src/main/resources/lib/taos-jdbcdriver-2.0.15-dist.jar</systemPath>-->
</dependency>


@ -263,7 +263,7 @@ splitSql.setSelect("ts, current, voltage, phase, groupid, location")
Class<SourceRecords<RowData>> typeClass = (Class<SourceRecords<RowData>>) (Class<?>) SourceRecords.class;
SourceSplitSql sql = new SourceSplitSql("select ts, `current`, voltage, phase, tbname from meters");
TDengineSource<SourceRecords<RowData>> source = new TDengineSource<>(connProps, sql, typeClass);
DataStreamSource<SourceRecords<RowData>> input = env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source");
DataStreamSource<SourceRecords<RowData>> input = env.fromSource(source, WatermarkStrategy.noWatermarks(), "tdengine-source");
DataStream<String> resultStream = input.map((MapFunction<SourceRecords<RowData>, String>) records -> {
StringBuilder sb = new StringBuilder();
Iterator<RowData> iterator = records.iterator();
@ -304,7 +304,7 @@ splitSql.setSelect("ts, current, voltage, phase, groupid, location")
config.setProperty(TDengineCdcParams.VALUE_DESERIALIZER, "RowData");
config.setProperty(TDengineCdcParams.VALUE_DESERIALIZER_ENCODING, "UTF-8");
TDengineCdcSource<RowData> tdengineSource = new TDengineCdcSource<>("topic_meters", config, RowData.class);
DataStreamSource<RowData> input = env.fromSource(tdengineSource, WatermarkStrategy.noWatermarks(), "kafka-source");
DataStreamSource<RowData> input = env.fromSource(tdengineSource, WatermarkStrategy.noWatermarks(), "tdengine-source");
DataStream<String> resultStream = input.map((MapFunction<RowData, String>) rowData -> {
StringBuilder sb = new StringBuilder();
sb.append("tsxx: " + rowData.getTimestamp(0, 0) +
@ -343,7 +343,7 @@ splitSql.setSelect("ts, current, voltage, phase, groupid, location")
Class<ConsumerRecords<RowData>> typeClass = (Class<ConsumerRecords<RowData>>) (Class<?>) ConsumerRecords.class;
TDengineCdcSource<ConsumerRecords<RowData>> tdengineSource = new TDengineCdcSource<>("topic_meters", config, typeClass);
DataStreamSource<ConsumerRecords<RowData>> input = env.fromSource(tdengineSource, WatermarkStrategy.noWatermarks(), "kafka-source");
DataStreamSource<ConsumerRecords<RowData>> input = env.fromSource(tdengineSource, WatermarkStrategy.noWatermarks(), "tdengine-source");
DataStream<String> resultStream = input.map((MapFunction<ConsumerRecords<RowData>, String>) records -> {
Iterator<ConsumerRecord<RowData>> iterator = records.iterator();
StringBuilder sb = new StringBuilder();
@ -388,7 +388,7 @@ splitSql.setSelect("ts, current, voltage, phase, groupid, location")
config.setProperty(TDengineCdcParams.VALUE_DESERIALIZER, "com.taosdata.flink.entity.ResultDeserializer");
config.setProperty(TDengineCdcParams.VALUE_DESERIALIZER_ENCODING, "UTF-8");
TDengineCdcSource<ResultBean> tdengineSource = new TDengineCdcSource<>("topic_meters", config, ResultBean.class);
DataStreamSource<ResultBean> input = env.fromSource(tdengineSource, WatermarkStrategy.noWatermarks(), "kafka-source");
DataStreamSource<ResultBean> input = env.fromSource(tdengineSource, WatermarkStrategy.noWatermarks(), "tdengine-source");
DataStream<String> resultStream = input.map((MapFunction<ResultBean, String>) rowData -> {
StringBuilder sb = new StringBuilder();
sb.append("ts: " + rowData.getTs() +


@ -22,7 +22,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.1</version>
<version>3.5.3</version>
</dependency>
<!-- ANCHOR_END: dep-->


@ -2,6 +2,7 @@ package com.taos.example;
import com.taosdata.jdbc.ws.TSWSPreparedStatement;
import java.math.BigInteger;
import java.sql.*;
import java.util.Random;
@ -26,7 +27,12 @@ public class WSParameterBindingFullDemo {
"binary_col BINARY(100), " +
"nchar_col NCHAR(100), " +
"varbinary_col VARBINARY(100), " +
"geometry_col GEOMETRY(100)) " +
"geometry_col GEOMETRY(100)," +
"utinyint_col tinyint unsigned," +
"usmallint_col smallint unsigned," +
"uint_col int unsigned," +
"ubigint_col bigint unsigned" +
") " +
"tags (" +
"int_tag INT, " +
"double_tag DOUBLE, " +
@ -34,7 +40,12 @@ public class WSParameterBindingFullDemo {
"binary_tag BINARY(100), " +
"nchar_tag NCHAR(100), " +
"varbinary_tag VARBINARY(100), " +
"geometry_tag GEOMETRY(100))"
"geometry_tag GEOMETRY(100)," +
"utinyint_tag tinyint unsigned," +
"usmallint_tag smallint unsigned," +
"uint_tag int unsigned," +
"ubigint_tag bigint unsigned" +
")"
};
private static final int numOfSubTable = 10, numOfRow = 10;
@ -79,7 +90,7 @@ public class WSParameterBindingFullDemo {
// set table name
pstmt.setTableName("ntb_json_" + i);
// set tags
pstmt.setTagJson(1, "{\"device\":\"device_" + i + "\"}");
pstmt.setTagJson(0, "{\"device\":\"device_" + i + "\"}");
// set columns
long current = System.currentTimeMillis();
for (int j = 0; j < numOfRow; j++) {
@ -94,25 +105,29 @@ public class WSParameterBindingFullDemo {
}
private static void stmtAll(Connection conn) throws SQLException {
String sql = "INSERT INTO ? using stb tags(?,?,?,?,?,?,?) VALUES (?,?,?,?,?,?,?,?)";
String sql = "INSERT INTO ? using stb tags(?,?,?,?,?,?,?,?,?,?,?) VALUES (?,?,?,?,?,?,?,?,?,?,?,?)";
try (TSWSPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSWSPreparedStatement.class)) {
// set table name
pstmt.setTableName("ntb");
// set tags
pstmt.setTagInt(1, 1);
pstmt.setTagDouble(2, 1.1);
pstmt.setTagBoolean(3, true);
pstmt.setTagString(4, "binary_value");
pstmt.setTagNString(5, "nchar_value");
pstmt.setTagVarbinary(6, new byte[] { (byte) 0x98, (byte) 0xf4, 0x6e });
pstmt.setTagGeometry(7, new byte[] {
pstmt.setTagInt(0, 1);
pstmt.setTagDouble(1, 1.1);
pstmt.setTagBoolean(2, true);
pstmt.setTagString(3, "binary_value");
pstmt.setTagNString(4, "nchar_value");
pstmt.setTagVarbinary(5, new byte[] { (byte) 0x98, (byte) 0xf4, 0x6e });
pstmt.setTagGeometry(6, new byte[] {
0x01, 0x01, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x59,
0x40, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x59, 0x40 });
pstmt.setTagShort(7, (short)255);
pstmt.setTagInt(8, 65535);
pstmt.setTagLong(9, 4294967295L);
pstmt.setTagBigInteger(10, new BigInteger("18446744073709551615"));
long current = System.currentTimeMillis();
@ -129,6 +144,10 @@ public class WSParameterBindingFullDemo {
0x00, 0x00, 0x00, 0x59,
0x40, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x59, 0x40 });
pstmt.setShort(9, (short)255);
pstmt.setInt(10, 65535);
pstmt.setLong(11, 4294967295L);
pstmt.setObject(12, new BigInteger("18446744073709551615"));
pstmt.addBatch();
pstmt.executeBatch();
System.out.println("Successfully inserted rows to example_all_type_stmt.ntb");


@ -89,7 +89,7 @@ TDengine provides a rich set of application development interfaces; to help users quickly
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.1</version>
<version>3.5.3</version>
</dependency>
```


@ -19,7 +19,8 @@ TDengine targets a variety of write scenarios, and in many of them TDengine's storage
```SQL
COMPACT DATABASE db_name [start with 'XXXX'] [end with 'YYYY']
COMPACT [db_name.]VGROUPS IN (vgroup_id1, vgroup_id2, ...) [start with 'XXXX'] [end with 'YYYY']
SHOW COMPACTS [compact_id]
SHOW COMPACTS
SHOW COMPACT compact_id;
KILL COMPACT compact_id
```
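As a minimal, hypothetical sketch (not from this change), an application could drive these commands over JDBC; the connection URL and the position of the compact id column are assumptions:

```java
import java.sql.*;

public class CompactSketch {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:TAOS-WS://localhost:6041/?user=root&password=taosdata";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement()) {
            stmt.execute("COMPACT DATABASE db_name");            // trigger a compaction
            try (ResultSet rs = stmt.executeQuery("SHOW COMPACTS")) {
                while (rs.next()) {
                    String compactId = rs.getString(1);          // feed into SHOW COMPACT / KILL COMPACT
                    System.out.println("running compact: " + compactId);
                }
            }
        }
    }
}
```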


@ -69,7 +69,7 @@ On the taosExplorer service page, go to "System Management - Backup"; in the
1. Database: name of the database to back up. A backup plan can back up only one database/supertable.
2. Supertable: name of the supertable to back up. If left empty, the entire database is backed up.
3. Next execution time: date and time of the first run of the backup task.
4. Backup interval: time interval between backup points. Note: the backup interval must be less than the database's WAL_RETENTION_PERIOD parameter value.
4. Backup interval: time interval between backup points. Note: the backup interval must be less than the database's WAL_RETENTION_PERIOD parameter value.
5. Error retry count: for errors that can be resolved by retrying, the system retries this many times.
6. Error retry interval: time interval between retries.
7. Directory: directory in which to store the backup files.
@ -152,4 +152,4 @@ Caused by:
```sql
alter database test wal_retention_period 3600;
```
```


@ -24,7 +24,8 @@ The Flink Connector supports all platforms that can run Flink 1.19 and above.
## Version history
| Flink Connector Version | Major Changes | TDengine Version |
| ------------------| ------------------------------------ | ---------------- |
| 2.0.0 | 1. Support SQL queries on data in TDengine databases<br/> 2. Support CDC subscription to data in TDengine databases<br/> 3. Support reading and writing to TDengine databases via Table SQL | 3.3.5.0 and higher |
| 2.0.1 | Sink supports writing all implemented types that extend RowData | - |
| 2.0.0 | 1. Support SQL queries on data in TDengine databases<br/> 2. Support CDC subscription to data in TDengine databases<br/> 3. Support reading and writing to TDengine databases via Table SQL | 3.3.5.1 and higher |
| 1.0.0 | Support the Sink function to write data from other data sources into TDengine | 3.3.2.0 and higher |
## Exceptions and error codes
@ -111,7 +112,7 @@ env.getCheckpointConfig().setCheckpointingMode(CheckpointingMode.AT_LEAST_ONCE);
<dependency>
<groupId>com.taosdata.flink</groupId>
<artifactId>flink-connector-tdengine</artifactId>
<version>2.0.0</version>
<version>2.0.1</version>
</dependency>
```


@ -0,0 +1,35 @@
---
sidebar_label: Tableau
title: Integration with Tableau
---
Tableau is a well-known business intelligence tool that supports a variety of data sources and makes it easy to connect, import, and consolidate data. Through an intuitive interface, users can create rich and diverse visual charts, with powerful analysis and filtering capabilities to support data-driven decisions.
## Prerequisites
Prepare the following environment:
- A TDengine cluster running version 3.3.5.4 or above has been deployed and is working normally (both Enterprise and Community editions are fine)
- taosAdapter is running normally; see the [taosAdapter user manual](../../../reference/components/taosadapter) for details
- Tableau Desktop is installed and running (if not, please download and install the Windows 32/64-bit [Tableau Desktop](https://www.tableau.com/products/desktop/download)); see the [official documentation](https://www.tableau.com) for installation
- The ODBC driver has been installed successfully; see [Install the ODBC driver](../../../reference/connector/odbc/#安装) for details
- The ODBC data source has been configured successfully; see [Configure the ODBC data source](../../../reference/connector/odbc/#配置数据源) for details
## Load and analyze TDengine data
**Step 1**: Start Tableau on Windows, search for "ODBC" on its connection page, and select "Other Databases (ODBC)".
**Step 2**: Click the DSN radio button, select the configured data source (MyTDengine), and click the Connect button. After the connection succeeds, delete the content of the additional string section and click the Sign In button.
![tableau-odbc](./tableau/tableau-odbc.jpg)
**Step 3**: The connected data source is shown on the workbook page that opens. Clicking the database drop-down list shows the databases available for analysis. On that basis, click the search button in the table options to list all tables in the selected database. Then drag the table to analyze into the area on the right to display its schema.
![tableau-workbook](./tableau/tableau-table.jpg)
**Step 4**: Click the "Update Now" button below to display the data in the table.
![tableau-workbook](./tableau/tableau-data.jpg)
**Step 5**: Click "Worksheet" at the bottom of the window to open the data analysis view, which shows all fields of the table; drag fields onto rows and columns to display charts.
![tableau-workbook](./tableau/tableau-analysis.jpg)

Binary files not shown (four new screenshots added under ./tableau/, referenced above: tableau-odbc.jpg, tableau-table.jpg, tableau-data.jpg, tableau-analysis.jpg; 102-238 KiB each).


@ -306,7 +306,7 @@ The valid value of charset is UTF-8.
|compressor | |Supports dynamic modification; takes effect after restart |Internal parameter for lossy compression settings|
**Additional notes**
1. Since 3.4.0.0, all configuration parameters are persisted to local storage; after the database service restarts, the persisted list of configuration parameters is used by default. If you want to keep using the parameters configured in the config file, set forceReadConfig to 1.
1. Since 3.3.5.0, all configuration parameters are persisted to local storage; after the database service restarts, the persisted list of configuration parameters is used by default. If you want to keep using the parameters configured in the config file, set forceReadConfig to 1.
2. Effective in versions 3.2.0.0 through 3.3.0.0 (exclusive); once this parameter is enabled, you cannot roll back to the pre-upgrade version
3. The TSZ compression algorithm compresses via data-prediction techniques, so it is better suited to data that changes in regular patterns
4. TSZ compression takes longer; it is a good choice if your server has plenty of idle CPU and limited storage space


@ -205,7 +205,7 @@ REDISTRIBUTE VGROUP vgroup_no DNODE dnode_id1 [DNODE dnode_id2] [DNODE dnode_id3
BALANCE VGROUP LEADER
```
Triggers a new leader election for all vgroups in the cluster and rebalances the load across the cluster nodes.
Triggers a new leader election for all vgroups in the cluster and rebalances the load across the cluster nodes. (Enterprise edition feature)
## Check database working status


@ -33,6 +33,8 @@ TDengine's JDBC driver implementation stays as consistent as possible with relational database drivers
| taos-jdbcdriver Version | Major Changes | TDengine Version |
| ------------------| ---------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------- |
| 3.5.3 | Support unsigned data types in WebSocket connections | - |
| 3.5.2 | Fixed a WebSocket result set release bug | - |
| 3.5.1 | Fixed the timestamp object type issue in data subscription | - |
| 3.5.0 | 1. Optimized the performance of WebSocket connection parameter binding, supporting parameter-binding queries with binary data <br/> 2. Optimized the performance of small queries over WebSocket connections <br/> 3. Support setting time zone and app info on WebSocket connections | 3.3.5.0 and higher |
| 3.4.0 | 1. Replaced the fastjson library with jackson <br/> 2. WebSocket uses a separate protocol identifier <br/> 3. Optimized background pulling-thread usage to avoid timeouts caused by user misuse | - |
@ -127,24 +129,27 @@ The JDBC connector may report four kinds of error codes:
TDengine currently supports timestamp, numeric, character, and boolean types, with the corresponding Java type conversions as follows:
| TDengine DataType | JDBCType |
| ----------------- | ------------------ |
| TIMESTAMP | java.sql.Timestamp |
| INT | java.lang.Integer |
| BIGINT | java.lang.Long |
| FLOAT | java.lang.Float |
| DOUBLE | java.lang.Double |
| SMALLINT | java.lang.Short |
| TINYINT | java.lang.Byte |
| BOOL | java.lang.Boolean |
| BINARY | byte array |
| NCHAR | java.lang.String |
| JSON | java.lang.String |
| VARBINARY | byte[] |
| GEOMETRY | byte[] |
| TDengine DataType | JDBCType | Remark |
| ----------------- | -------------------- |-------------------- |
| TIMESTAMP | java.sql.Timestamp ||
| BOOL | java.lang.Boolean ||
| TINYINT | java.lang.Byte ||
| TINYINT UNSIGNED | java.lang.Short |only supported in WebSocket connections|
| SMALLINT | java.lang.Short ||
| SMALLINT UNSIGNED | java.lang.Integer |only supported in WebSocket connections|
| INT | java.lang.Integer ||
| INT UNSIGNED | java.lang.Long |only supported in WebSocket connections|
| BIGINT | java.lang.Long ||
| BIGINT UNSIGNED | java.math.BigInteger |only supported in WebSocket connections|
| FLOAT | java.lang.Float ||
| DOUBLE | java.lang.Double ||
| BINARY | byte array ||
| NCHAR | java.lang.String ||
| JSON | java.lang.String |only supported in tags|
| VARBINARY | byte[] ||
| GEOMETRY | byte[] ||
**注意**JSON 类型仅在 tag 中支持。
由于历史原因TDengine中的BINARY底层不是真正的二进制数据已不建议使用。请用VARBINARY类型代替。
**注意**由于历史原因TDengine中的BINARY底层不是真正的二进制数据已不建议使用。请用VARBINARY类型代替。
GEOMETRY类型是little endian字节序的二进制数据符合WKB规范。详细信息请参考 [数据类型](../../taos-sql/data-type/#数据类型)
WKB规范请参考[Well-Known Binary (WKB)](https://libgeos.org/specifications/wkb/)
对于java连接器可以使用jts库来方便的创建GEOMETRY类型对象序列化后写入TDengine这里有一个样例[Geometry示例](https://github.com/taosdata/TDengine/blob/main/docs/examples/java/src/main/java/com/taos/example/GeometryDemo.java)
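Where no WKB helper library is available, the point layout is simple enough to assemble by hand. The following is a minimal C sketch (illustrative only, not part of the connector docs) that builds the 21-byte little-endian WKB encoding of POINT(1.0 2.0) per the WKB specification:

```c
#include <stdint.h>
#include <string.h>

/* Little-endian WKB for POINT(x y): 1-byte order marker, 4-byte geometry
   type, then two 8-byte IEEE-754 doubles, 21 bytes in total. */
static size_t wkbPoint(uint8_t *buf, double x, double y) {
  uint32_t type = 1;          /* WKB geometry type 1 = Point */
  buf[0] = 0x01;              /* 0x01 marks little-endian byte order */
  memcpy(buf + 1, &type, 4);  /* assumes a little-endian host */
  memcpy(buf + 5, &x, 8);
  memcpy(buf + 13, &y, 8);
  return 21;
}
```

A buffer built this way can be bound to a GEOMETRY column through the parameter-binding APIs, much as the stmt tests later in this diff bind raw WKB byte arrays.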

View File

@ -191,7 +191,7 @@ TDengine 存储的数据包括采集的时序数据以及库、表相关的元
When managing massive data volumes, horizontal scaling typically requires data sharding and data partitioning strategies. TDengine shards data through vnodes and partitions time-series data by splitting data files into time ranges.
A vnode not only handles writing, querying, and computing time-series data, but also plays key roles in load balancing, data recovery, and supporting heterogeneous environments. To achieve this, TDengine splits a dnode into multiple vnodes according to its compute and storage resources. The management of these vnodes is fully transparent to applications and handled automatically by TDengine.
For a single data collection point, a vnode always has sufficient compute and storage resources, no matter how large that point's data volume (for example, at one 16 B record per second, a year of raw data is still under 0.5 GB). TDengine therefore stores all data of one table (one data collection point) in a single vnode, never spreading one collection point's data across two or more dnodes. Meanwhile, a single vnode can store data for multiple collection points (tables), holding up to 1 million tables. By design, all tables in a vnode belong to the same database.
@ -371,4 +371,4 @@ alter dnode 1 "/mnt/disk2/taos 1";
3. Multi-level storage does not currently support removing a disk that has already been mounted.
4. Level-0 storage requires at least one mount point with disable_create_new_file set to 0; level-1 and level-2 storage have no such restriction.
:::

View File

@ -24,6 +24,10 @@ TDengine 3.x 各版本安装包下载链接如下:
import Release from "/components/ReleaseV3";
## 3.3.5.2
<Release type="tdengine" version="3.3.5.2" />
## 3.3.5.0
<Release type="tdengine" version="3.3.5.0" />

View File

@ -0,0 +1,42 @@
---
title: 3.3.5.2 Release Notes
sidebar_label: 3.3.5.2
description: Release notes for version 3.3.5.2
---
## Features
1. Feature: the taosX MQTT data source supports creating multiple supertables from a template
## Optimizations
1. Optimization: improved the error message when the taosX database is unavailable
2. Optimization: manage dependencies with the Poetry standard and reduce the Python connector's installation dependencies [#251](https://github.com/taosdata/taos-connector-python/issues/251)
3. Optimization: improved taosX incremental backup and restore
4. Optimization: during multi-level storage data migration, an overly long migration could trigger a vnode leader switch
5. Optimization: adjusted the systemctl policy for supervising the taosd process; if three consecutive restarts fail within 60 seconds, the next restart is postponed to 900 seconds later
## Fixes
1. Fix: the maxRetryWaitTime parameter controls the client's maximum reconnection timeout when the cluster cannot provide service, but it did not take effect on Sync timeout errors
2. Fix: after a subtable's tag value is modified, subscriptions now immediately receive the updated tag value
3. Fix: the data subscription function tmq_consumer_poll returned no error code when the call failed
4. Fix: taosd could crash when more than 100 views were created and the show views command was executed
5. Fix: when writing data with stmt2, the write failed if not all data columns were bound
6. Fix: when writing data with stmt2, the write failed if the database or table name was wrapped in backquotes
7. Fix: taosd could crash if a file merge task was in progress while a vnode was being closed
8. Fix: frequently executing drop table with `tb_uid` could deadlock taosd
9. Fix: a possible deadlock during log file rotation
10. Fix: creating a database with the same name as a system database (information_schema, performance_schema) is now forbidden
11. Fix: sort information could not be pushed up when the inner query of a nested query read from a supertable
12. Fix: writing Geometry data that violates topology rules through the STMT interface raised a spurious error
13. Fix: taosd could crash when an error occurred in a query using the percentile function together with a session window
14. Fix: system parameters could not be modified dynamically
15. Fix: occasional "Translict transaction" errors during subscription synchronization
16. Fix: a consumer that unsubscribed and then immediately subscribed to a different topic received an error
17. Fix: Go connector security fix for CVE-2022-28948
18. Fix: querying a view raised an error when a subquery inside the view contained an ORDER BY clause with an alias and the query function itself also had an alias
19. Fix: changing a database from single replica to multiple replicas failed when metadata generated by an earlier version, and no longer used in the new version, was present
20. Fix: with SELECT * FROM subqueries, column names were not correctly copied to the outer query
21. Fix: running max/min on string-type data produced inaccurate results and could crash taosd
22. Fix: stream computing does not support the HAVING clause, but no error was reported at creation time
23. Fix: the server version shown by taos shell was inaccurate, e.g. it could not correctly distinguish the community edition from the enterprise edition
24. Fix: taosd could crash in certain query scenarios where JOIN and CAST were used together

View File

@ -4,6 +4,7 @@ sidebar_label: 版本说明
description: Release notes for each version
---
[3.3.5.2](./3.3.5.2)
[3.3.5.0](./3.3.5.0)
[3.3.4.8](./3.3.4.8)
[3.3.4.3](./3.3.4.3)

View File

@ -86,7 +86,7 @@ int32_t taosAnalBufWriteDataEnd(SAnalyticBuf *pBuf);
int32_t taosAnalBufClose(SAnalyticBuf *pBuf);
void taosAnalBufDestroy(SAnalyticBuf *pBuf);
const char *taosAnalAlgoStr(EAnalAlgoType algoType);
const char *taosAnalysisAlgoType(EAnalAlgoType algoType);
EAnalAlgoType taosAnalAlgoInt(const char *algoName);
const char *taosAnalAlgoUrlStr(EAnalAlgoType algoType);

View File

@ -34,6 +34,9 @@ extern "C" {
#define GLOBAL_CONFIG_FILE_VERSION 1
#define LOCAL_CONFIG_FILE_VERSION 1
#define RPC_MEMORY_USAGE_RATIO 0.1
#define QUEUE_MEMORY_USAGE_RATIO 0.6
typedef enum {
DND_CA_SM4 = 1,
} EEncryptAlgor;
@ -110,6 +113,7 @@ extern int32_t tsNumOfQnodeFetchThreads;
extern int32_t tsNumOfSnodeStreamThreads;
extern int32_t tsNumOfSnodeWriteThreads;
extern int64_t tsQueueMemoryAllowed;
extern int64_t tsApplyMemoryAllowed;
extern int32_t tsRetentionSpeedLimitMB;
extern int32_t tsNumOfCompactThreads;

View File

@ -268,7 +268,7 @@ typedef struct SStoreMeta {
const void* (*extractTagVal)(const void* tag, int16_t type, STagVal* tagVal); // todo remove it
int32_t (*getTableUidByName)(void* pVnode, char* tbName, uint64_t* uid);
int32_t (*getTableTypeByName)(void* pVnode, char* tbName, ETableType* tbType);
int32_t (*getTableTypeSuidByName)(void* pVnode, char* tbName, ETableType* tbType, uint64_t* suid);
int32_t (*getTableNameByUid)(void* pVnode, uint64_t uid, char* tbName);
bool (*isTableExisted)(void* pVnode, tb_uid_t uid);

View File

@ -313,9 +313,9 @@ typedef struct SOrderByExprNode {
} SOrderByExprNode;
typedef struct SLimitNode {
ENodeType type; // QUERY_NODE_LIMIT
int64_t limit;
int64_t offset;
ENodeType type; // QUERY_NODE_LIMIT
SValueNode* limit;
SValueNode* offset;
} SLimitNode;
typedef struct SStateWindowNode {
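With limit and offset now carried as SValueNode pointers instead of plain int64_t fields, every consumer has to NULL-check before reading the resolved integer; the explain and executor hunks later in this diff adopt exactly this pattern. A minimal sketch of the access pattern (types come from the header above; the helper name is hypothetical):

```c
/* Returns the resolved LIMIT value, or -1 when no limit is present. */
static int64_t limitValue(const SLimitNode* pLimitNode) {
  if (pLimitNode == NULL || pLimitNode->limit == NULL) {
    return -1;  /* no LIMIT clause, or the value node was never populated */
  }
  return pLimitNode->limit->datum.i;  /* SValueNode keeps the integer in datum.i */
}
```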
@ -681,6 +681,7 @@ int32_t nodesValueNodeToVariant(const SValueNode* pNode, SVariant* pVal);
int32_t nodesMakeValueNodeFromString(char* literal, SValueNode** ppValNode);
int32_t nodesMakeValueNodeFromBool(bool b, SValueNode** ppValNode);
int32_t nodesMakeValueNodeFromInt32(int32_t value, SNode** ppNode);
int32_t nodesMakeValueNodeFromInt64(int64_t value, SNode** ppNode);
char* nodesGetFillModeString(EFillMode mode);
int32_t nodesMergeConds(SNode** pDst, SNodeList** pSrc);

View File

@ -55,6 +55,7 @@ typedef struct {
typedef enum {
DEF_QITEM = 0,
RPC_QITEM = 1,
APPLY_QITEM = 2,
} EQItype;
typedef void (*FItem)(SQueueInfo *pInfo, void *pItem);

View File

@ -6,12 +6,6 @@ SUCCESS_FILE="success.txt"
FAILED_FILE="failed.txt"
REPORT_FILE="report.txt"
# Initialize/clear result files
> "$SUCCESS_FILE"
> "$FAILED_FILE"
> "$LOG_FILE"
> "$REPORT_FILE"
# Switch to the target directory
TARGET_DIR="../../tests/system-test/"
@ -24,6 +18,12 @@ else
exit 1
fi
# Initialize/clear result files
> "$SUCCESS_FILE"
> "$FAILED_FILE"
> "$LOG_FILE"
> "$REPORT_FILE"
# Define the Python commands to execute
commands=(
"python3 ./test.py -f 2-query/join.py"
@ -102,4 +102,4 @@ fi
echo "Detailed logs can be found in: $(realpath "$LOG_FILE")"
echo "Successful commands can be found in: $(realpath "$SUCCESS_FILE")"
echo "Failed commands can be found in: $(realpath "$FAILED_FILE")"
echo "Test report can be found in: $(realpath "$REPORT_FILE")"
echo "Test report can be found in: $(realpath "$REPORT_FILE")"

View File

@ -131,6 +131,8 @@ typedef struct SStmtQueue {
SStmtQNode* head;
SStmtQNode* tail;
uint64_t qRemainNum;
TdThreadMutex mutex;
TdThreadCond waitCond;
} SStmtQueue;
typedef struct STscStmt {

View File

@ -253,7 +253,7 @@ void taos_cleanup(void) {
taosCloseRef(id);
nodesDestroyAllocatorSet();
// cleanupAppInfo();
cleanupAppInfo();
rpcCleanup();
tscDebug("rpc cleanup");

View File

@ -39,31 +39,39 @@ static FORCE_INLINE int32_t stmtAllocQNodeFromBuf(STableBufInfo* pTblBuf, void**
}
bool stmtDequeue(STscStmt* pStmt, SStmtQNode** param) {
while (0 == atomic_load_64(&pStmt->queue.qRemainNum)) {
taosUsleep(1);
return false;
(void)taosThreadMutexLock(&pStmt->queue.mutex);
while (0 == atomic_load_64((int64_t*)&pStmt->queue.qRemainNum)) {
(void)taosThreadCondWait(&pStmt->queue.waitCond, &pStmt->queue.mutex);
if (atomic_load_8((int8_t*)&pStmt->queue.stopQueue)) {
(void)taosThreadMutexUnlock(&pStmt->queue.mutex);
return false;
}
}
SStmtQNode* orig = pStmt->queue.head;
SStmtQNode* node = pStmt->queue.head->next;
pStmt->queue.head = pStmt->queue.head->next;
// taosMemoryFreeClear(orig);
*param = node;
(void)atomic_sub_fetch_64(&pStmt->queue.qRemainNum, 1);
(void)atomic_sub_fetch_64((int64_t*)&pStmt->queue.qRemainNum, 1);
(void)taosThreadMutexUnlock(&pStmt->queue.mutex);
*param = node;
return true;
}
void stmtEnqueue(STscStmt* pStmt, SStmtQNode* param) {
(void)taosThreadMutexLock(&pStmt->queue.mutex);
pStmt->queue.tail->next = param;
pStmt->queue.tail = param;
pStmt->stat.bindDataNum++;
(void)atomic_add_fetch_64(&pStmt->queue.qRemainNum, 1);
(void)taosThreadCondSignal(&(pStmt->queue.waitCond));
(void)taosThreadMutexUnlock(&pStmt->queue.mutex);
}
static int32_t stmtCreateRequest(STscStmt* pStmt) {
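The rewritten queue replaces the old one-microsecond busy-wait with a mutex/condition-variable pair: consumers sleep in taosThreadCondWait and re-check the stop flag after every wakeup, while producers link the node and signal under the lock. Below is a self-contained pthreads sketch of the same protocol (plain pthread calls stand in for the TDengine thread wrappers):

```c
#include <pthread.h>
#include <stdbool.h>

typedef struct Node { struct Node *next; } Node;

typedef struct {
  Node           *head, *tail;  /* dummy-head singly linked list */
  long            remain;       /* count of unconsumed nodes     */
  bool            stop;         /* set once at shutdown          */
  pthread_mutex_t mutex;
  pthread_cond_t  cond;
} Queue;

/* Producer: link the node and wake one sleeping consumer, all under the lock. */
void enqueue(Queue *q, Node *n) {
  pthread_mutex_lock(&q->mutex);
  q->tail->next = n;
  q->tail = n;
  q->remain++;
  pthread_cond_signal(&q->cond);
  pthread_mutex_unlock(&q->mutex);
}

/* Consumer: sleep until data arrives; re-check stop after every wakeup so a
   shutdown signal sent while the queue is empty is never lost. */
bool dequeue(Queue *q, Node **out) {
  pthread_mutex_lock(&q->mutex);
  while (q->remain == 0) {
    pthread_cond_wait(&q->cond, &q->mutex);
    if (q->stop) {
      pthread_mutex_unlock(&q->mutex);
      return false;
    }
  }
  *out = q->head->next;  /* first real node after the dummy head */
  q->head = q->head->next;
  q->remain--;
  pthread_mutex_unlock(&q->mutex);
  return true;
}
```

stmtClose mirrors the shutdown half of this protocol: it sets stopQueue, signals the condition variable while holding the mutex so a sleeping bind thread is guaranteed to observe the flag, joins that thread, and only then destroys the mutex and condition variable.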
@ -415,9 +423,11 @@ void stmtResetQueueTableBuf(STableBufInfo* pTblBuf, SStmtQueue* pQueue) {
pTblBuf->buffIdx = 1;
pTblBuf->buffOffset = sizeof(*pQueue->head);
(void)taosThreadMutexLock(&pQueue->mutex);
pQueue->head = pQueue->tail = pTblBuf->pCurBuff;
pQueue->qRemainNum = 0;
pQueue->head->next = NULL;
(void)taosThreadMutexUnlock(&pQueue->mutex);
}
int32_t stmtCleanExecInfo(STscStmt* pStmt, bool keepTable, bool deepClean) {
@ -809,6 +819,8 @@ int32_t stmtStartBindThread(STscStmt* pStmt) {
}
int32_t stmtInitQueue(STscStmt* pStmt) {
(void)taosThreadCondInit(&pStmt->queue.waitCond, NULL);
(void)taosThreadMutexInit(&pStmt->queue.mutex, NULL);
STMT_ERR_RET(stmtAllocQNodeFromBuf(&pStmt->sql.siInfo.tbBuf, (void**)&pStmt->queue.head));
pStmt->queue.tail = pStmt->queue.head;
@ -1619,11 +1631,18 @@ int stmtClose(TAOS_STMT* stmt) {
pStmt->queue.stopQueue = true;
(void)taosThreadMutexLock(&pStmt->queue.mutex);
(void)taosThreadCondSignal(&(pStmt->queue.waitCond));
(void)taosThreadMutexUnlock(&pStmt->queue.mutex);
if (pStmt->bindThreadInUse) {
(void)taosThreadJoin(pStmt->bindThread, NULL);
pStmt->bindThreadInUse = false;
}
(void)taosThreadCondDestroy(&pStmt->queue.waitCond);
(void)taosThreadMutexDestroy(&pStmt->queue.mutex);
STMT_DLOG("stmt %p closed, stbInterlaceMode: %d, statInfo: ctgGetTbMetaNum=>%" PRId64 ", getCacheTbInfo=>%" PRId64
", parseSqlNum=>%" PRId64 ", pStmt->stat.bindDataNum=>%" PRId64
", settbnameAPI:%u, bindAPI:%u, addbatchAPI:%u, execAPI:%u"
@ -1757,7 +1776,9 @@ _return:
}
int stmtGetParamNum(TAOS_STMT* stmt, int* nums) {
int code = 0;
STscStmt* pStmt = (STscStmt*)stmt;
int32_t preCode = pStmt->errCode;
STMT_DLOG_E("start to get param num");
@ -1765,7 +1786,7 @@ int stmtGetParamNum(TAOS_STMT* stmt, int* nums) {
return pStmt->errCode;
}
STMT_ERR_RET(stmtSwitchStatus(pStmt, STMT_FETCH_FIELDS));
STMT_ERRI_JRET(stmtSwitchStatus(pStmt, STMT_FETCH_FIELDS));
if (pStmt->bInfo.needParse && pStmt->sql.runTimes && pStmt->sql.type > 0 &&
STMT_TYPE_MULTI_INSERT != pStmt->sql.type) {
@ -1777,23 +1798,29 @@ int stmtGetParamNum(TAOS_STMT* stmt, int* nums) {
pStmt->exec.pRequest = NULL;
}
STMT_ERR_RET(stmtCreateRequest(pStmt));
STMT_ERRI_JRET(stmtCreateRequest(pStmt));
if (pStmt->bInfo.needParse) {
STMT_ERR_RET(stmtParseSql(pStmt));
STMT_ERRI_JRET(stmtParseSql(pStmt));
}
if (STMT_TYPE_QUERY == pStmt->sql.type) {
*nums = taosArrayGetSize(pStmt->sql.pQuery->pPlaceholderValues);
} else {
STMT_ERR_RET(stmtFetchColFields(stmt, nums, NULL));
STMT_ERRI_JRET(stmtFetchColFields(stmt, nums, NULL));
}
return TSDB_CODE_SUCCESS;
_return:
pStmt->errCode = preCode;
return code;
}
int stmtGetParam(TAOS_STMT* stmt, int idx, int* type, int* bytes) {
int code = 0;
STscStmt* pStmt = (STscStmt*)stmt;
int32_t preCode = pStmt->errCode;
STMT_DLOG_E("start to get param");
@ -1802,10 +1829,10 @@ int stmtGetParam(TAOS_STMT* stmt, int idx, int* type, int* bytes) {
}
if (STMT_TYPE_QUERY == pStmt->sql.type) {
STMT_RET(TSDB_CODE_TSC_STMT_API_ERROR);
STMT_ERRI_JRET(TSDB_CODE_TSC_STMT_API_ERROR);
}
STMT_ERR_RET(stmtSwitchStatus(pStmt, STMT_FETCH_FIELDS));
STMT_ERRI_JRET(stmtSwitchStatus(pStmt, STMT_FETCH_FIELDS));
if (pStmt->bInfo.needParse && pStmt->sql.runTimes && pStmt->sql.type > 0 &&
STMT_TYPE_MULTI_INSERT != pStmt->sql.type) {
@ -1817,27 +1844,29 @@ int stmtGetParam(TAOS_STMT* stmt, int idx, int* type, int* bytes) {
pStmt->exec.pRequest = NULL;
}
STMT_ERR_RET(stmtCreateRequest(pStmt));
STMT_ERRI_JRET(stmtCreateRequest(pStmt));
if (pStmt->bInfo.needParse) {
STMT_ERR_RET(stmtParseSql(pStmt));
STMT_ERRI_JRET(stmtParseSql(pStmt));
}
int32_t nums = 0;
TAOS_FIELD_E* pField = NULL;
STMT_ERR_RET(stmtFetchColFields(stmt, &nums, &pField));
STMT_ERRI_JRET(stmtFetchColFields(stmt, &nums, &pField));
if (idx >= nums) {
tscError("idx %d is too big", idx);
taosMemoryFree(pField);
STMT_ERR_RET(TSDB_CODE_INVALID_PARA);
STMT_ERRI_JRET(TSDB_CODE_INVALID_PARA);
}
*type = pField[idx].type;
*bytes = pField[idx].bytes;
taosMemoryFree(pField);
_return:
return TSDB_CODE_SUCCESS;
taosMemoryFree(pField);
pStmt->errCode = preCode;
return code;
}
TAOS_RES* stmtUseResult(TAOS_STMT* stmt) {

View File

@ -39,31 +39,35 @@ static FORCE_INLINE int32_t stmtAllocQNodeFromBuf(STableBufInfo* pTblBuf, void**
}
static bool stmtDequeue(STscStmt2* pStmt, SStmtQNode** param) {
(void)taosThreadMutexLock(&pStmt->queue.mutex);
while (0 == atomic_load_64((int64_t*)&pStmt->queue.qRemainNum)) {
taosUsleep(1);
return false;
(void)taosThreadCondWait(&pStmt->queue.waitCond, &pStmt->queue.mutex);
if (atomic_load_8((int8_t*)&pStmt->queue.stopQueue)) {
(void)taosThreadMutexUnlock(&pStmt->queue.mutex);
return false;
}
}
SStmtQNode* orig = pStmt->queue.head;
SStmtQNode* node = pStmt->queue.head->next;
pStmt->queue.head = pStmt->queue.head->next;
// taosMemoryFreeClear(orig);
*param = node;
(void)atomic_sub_fetch_64((int64_t*)&pStmt->queue.qRemainNum, 1);
(void)taosThreadMutexUnlock(&pStmt->queue.mutex);
return true;
}
static void stmtEnqueue(STscStmt2* pStmt, SStmtQNode* param) {
(void)taosThreadMutexLock(&pStmt->queue.mutex);
pStmt->queue.tail->next = param;
pStmt->queue.tail = param;
pStmt->stat.bindDataNum++;
(void)atomic_add_fetch_64((int64_t*)&pStmt->queue.qRemainNum, 1);
(void)taosThreadCondSignal(&(pStmt->queue.waitCond));
(void)taosThreadMutexUnlock(&pStmt->queue.mutex);
}
static int32_t stmtCreateRequest(STscStmt2* pStmt) {
@ -339,9 +343,11 @@ static void stmtResetQueueTableBuf(STableBufInfo* pTblBuf, SStmtQueue* pQueue) {
pTblBuf->buffIdx = 1;
pTblBuf->buffOffset = sizeof(*pQueue->head);
(void)taosThreadMutexLock(&pQueue->mutex);
pQueue->head = pQueue->tail = pTblBuf->pCurBuff;
pQueue->qRemainNum = 0;
pQueue->head->next = NULL;
(void)taosThreadMutexUnlock(&pQueue->mutex);
}
static int32_t stmtCleanExecInfo(STscStmt2* pStmt, bool keepTable, bool deepClean) {
@ -735,6 +741,8 @@ static int32_t stmtStartBindThread(STscStmt2* pStmt) {
}
static int32_t stmtInitQueue(STscStmt2* pStmt) {
(void)taosThreadCondInit(&pStmt->queue.waitCond, NULL);
(void)taosThreadMutexInit(&pStmt->queue.mutex, NULL);
STMT_ERR_RET(stmtAllocQNodeFromBuf(&pStmt->sql.siInfo.tbBuf, (void**)&pStmt->queue.head));
pStmt->queue.tail = pStmt->queue.head;
@ -1066,13 +1074,16 @@ static int stmtFetchColFields2(STscStmt2* pStmt, int32_t* fieldNum, TAOS_FIELD_E
}
static int stmtFetchStbColFields2(STscStmt2* pStmt, int32_t* fieldNum, TAOS_FIELD_ALL** fields) {
int32_t code = 0;
int32_t preCode = pStmt->errCode;
if (pStmt->errCode != TSDB_CODE_SUCCESS) {
return pStmt->errCode;
}
if (STMT_TYPE_QUERY == pStmt->sql.type) {
tscError("invalid operation to get query column fileds");
STMT_ERR_RET(TSDB_CODE_TSC_STMT_API_ERROR);
STMT_ERRI_JRET(TSDB_CODE_TSC_STMT_API_ERROR);
}
STableDataCxt** pDataBlock = NULL;
@ -1084,21 +1095,25 @@ static int stmtFetchStbColFields2(STscStmt2* pStmt, int32_t* fieldNum, TAOS_FIEL
(STableDataCxt**)taosHashGet(pStmt->exec.pBlockHash, pStmt->bInfo.tbFName, strlen(pStmt->bInfo.tbFName));
if (NULL == pDataBlock) {
tscError("table %s not found in exec blockHash", pStmt->bInfo.tbFName);
STMT_ERR_RET(TSDB_CODE_APP_ERROR);
STMT_ERRI_JRET(TSDB_CODE_APP_ERROR);
}
}
STMT_ERR_RET(qBuildStmtStbColFields(*pDataBlock, pStmt->bInfo.boundTags, pStmt->bInfo.preCtbname, fieldNum, fields));
STMT_ERRI_JRET(qBuildStmtStbColFields(*pDataBlock, pStmt->bInfo.boundTags, pStmt->bInfo.preCtbname, fieldNum, fields));
if (pStmt->bInfo.tbType == TSDB_SUPER_TABLE) {
pStmt->bInfo.needParse = true;
qDestroyStmtDataBlock(*pDataBlock);
if (taosHashRemove(pStmt->exec.pBlockHash, pStmt->bInfo.tbFName, strlen(pStmt->bInfo.tbFName)) != 0) {
tscError("get fileds %s remove exec blockHash fail", pStmt->bInfo.tbFName);
STMT_ERR_RET(TSDB_CODE_APP_ERROR);
STMT_ERRI_JRET(TSDB_CODE_APP_ERROR);
}
}
return TSDB_CODE_SUCCESS;
_return:
pStmt->errCode = preCode;
return code;
}
/*
SArray* stmtGetFreeCol(STscStmt2* pStmt, int32_t* idx) {
@ -1748,11 +1763,18 @@ int stmtClose2(TAOS_STMT2* stmt) {
pStmt->queue.stopQueue = true;
(void)taosThreadMutexLock(&pStmt->queue.mutex);
(void)taosThreadCondSignal(&(pStmt->queue.waitCond));
(void)taosThreadMutexUnlock(&pStmt->queue.mutex);
if (pStmt->bindThreadInUse) {
(void)taosThreadJoin(pStmt->bindThread, NULL);
pStmt->bindThreadInUse = false;
}
(void)taosThreadCondDestroy(&pStmt->queue.waitCond);
(void)taosThreadMutexDestroy(&pStmt->queue.mutex);
if (pStmt->options.asyncExecFn && !pStmt->semWaited) {
if (tsem_wait(&pStmt->asyncQuerySem) != 0) {
tscError("failed to wait asyncQuerySem");
@ -1824,7 +1846,7 @@ int stmtParseColFields2(TAOS_STMT2* stmt) {
if (pStmt->exec.pRequest && STMT_TYPE_QUERY == pStmt->sql.type && pStmt->sql.runTimes) {
taos_free_result(pStmt->exec.pRequest);
pStmt->exec.pRequest = NULL;
STMT_ERR_RET(stmtCreateRequest(pStmt));
STMT_ERRI_JRET(stmtCreateRequest(pStmt));
}
STMT_ERRI_JRET(stmtCreateRequest(pStmt));
@ -1850,7 +1872,9 @@ int stmtGetStbColFields2(TAOS_STMT2* stmt, int* nums, TAOS_FIELD_ALL** fields) {
}
int stmtGetParamNum2(TAOS_STMT2* stmt, int* nums) {
int32_t code = 0;
STscStmt2* pStmt = (STscStmt2*)stmt;
int32_t preCode = pStmt->errCode;
STMT_DLOG_E("start to get param num");
@ -1858,7 +1882,7 @@ int stmtGetParamNum2(TAOS_STMT2* stmt, int* nums) {
return pStmt->errCode;
}
STMT_ERR_RET(stmtSwitchStatus(pStmt, STMT_FETCH_FIELDS));
STMT_ERRI_JRET(stmtSwitchStatus(pStmt, STMT_FETCH_FIELDS));
if (pStmt->bInfo.needParse && pStmt->sql.runTimes && pStmt->sql.type > 0 &&
STMT_TYPE_MULTI_INSERT != pStmt->sql.type) {
@ -1870,19 +1894,23 @@ int stmtGetParamNum2(TAOS_STMT2* stmt, int* nums) {
pStmt->exec.pRequest = NULL;
}
STMT_ERR_RET(stmtCreateRequest(pStmt));
STMT_ERRI_JRET(stmtCreateRequest(pStmt));
if (pStmt->bInfo.needParse) {
STMT_ERR_RET(stmtParseSql(pStmt));
STMT_ERRI_JRET(stmtParseSql(pStmt));
}
if (STMT_TYPE_QUERY == pStmt->sql.type) {
*nums = taosArrayGetSize(pStmt->sql.pQuery->pPlaceholderValues);
} else {
STMT_ERR_RET(stmtFetchColFields2(stmt, nums, NULL));
STMT_ERRI_JRET(stmtFetchColFields2(stmt, nums, NULL));
}
return TSDB_CODE_SUCCESS;
_return:
pStmt->errCode = preCode;
return code;
}
TAOS_RES* stmtUseResult2(TAOS_STMT2* stmt) {

View File

@ -74,8 +74,9 @@ enum {
};
typedef struct {
tmr_h timer;
int32_t rsetId;
tmr_h timer;
int32_t rsetId;
TdThreadMutex lock;
} SMqMgmt;
struct tmq_list_t {
@ -1600,16 +1601,25 @@ void tmqFreeImpl(void* handle) {
static void tmqMgmtInit(void) {
tmqInitRes = 0;
if (taosThreadMutexInit(&tmqMgmt.lock, NULL) != 0){
goto END;
}
tmqMgmt.timer = taosTmrInit(1000, 100, 360000, "TMQ");
if (tmqMgmt.timer == NULL) {
tmqInitRes = terrno;
goto END;
}
tmqMgmt.rsetId = taosOpenRef(10000, tmqFreeImpl);
if (tmqMgmt.rsetId < 0) {
tmqInitRes = terrno;
goto END;
}
return;
END:
tmqInitRes = terrno;
}
void tmqMgmtClose(void) {
@ -1618,9 +1628,27 @@ void tmqMgmtClose(void) {
tmqMgmt.timer = NULL;
}
if (tmqMgmt.rsetId >= 0) {
if (tmqMgmt.rsetId > 0) {
(void) taosThreadMutexLock(&tmqMgmt.lock);
tmq_t *tmq = taosIterateRef(tmqMgmt.rsetId, 0);
int64_t refId = 0;
while (tmq) {
refId = tmq->refId;
if (refId == 0) {
break;
}
atomic_store_8(&tmq->status, TMQ_CONSUMER_STATUS__CLOSED);
if (taosRemoveRef(tmqMgmt.rsetId, tmq->refId) != 0) {
qWarn("taosRemoveRef tmq refId:%" PRId64 " failed, error:%s", refId, tstrerror(terrno));
}
tmq = taosIterateRef(tmqMgmt.rsetId, refId);
}
taosCloseRef(tmqMgmt.rsetId);
tmqMgmt.rsetId = -1;
(void)taosThreadMutexUnlock(&tmqMgmt.lock);
}
}
@ -2617,8 +2645,13 @@ int32_t tmq_unsubscribe(tmq_t* tmq) {
int32_t tmq_consumer_close(tmq_t* tmq) {
if (tmq == NULL) return TSDB_CODE_INVALID_PARA;
int32_t code = 0;
code = taosThreadMutexLock(&tmqMgmt.lock);
if (atomic_load_8(&tmq->status) == TMQ_CONSUMER_STATUS__CLOSED){
goto end;
}
tqInfoC("consumer:0x%" PRIx64 " start to close consumer, status:%d", tmq->consumerId, tmq->status);
int32_t code = tmq_unsubscribe(tmq);
code = tmq_unsubscribe(tmq);
if (code == 0) {
atomic_store_8(&tmq->status, TMQ_CONSUMER_STATUS__CLOSED);
code = taosRemoveRef(tmqMgmt.rsetId, tmq->refId);
@ -2626,6 +2659,9 @@ int32_t tmq_consumer_close(tmq_t* tmq) {
tqErrorC("tmq close failed to remove ref:%" PRId64 ", code:%d", tmq->refId, code);
}
}
end:
code = taosThreadMutexUnlock(&tmqMgmt.lock);
return code;
}

View File

@ -237,6 +237,63 @@ int main(int argc, char** argv) {
return RUN_ALL_TESTS();
}
TEST(stmt2Case, stmt2_test_limit) {
TAOS* taos = taos_connect("localhost", "root", "taosdata", "", 0);
ASSERT_NE(taos, nullptr);
do_query(taos, "drop database if exists stmt2_testdb_7");
do_query(taos, "create database IF NOT EXISTS stmt2_testdb_7");
do_query(taos, "create stable stmt2_testdb_7.stb (ts timestamp, b binary(10)) tags(t1 int, t2 binary(10))");
do_query(taos,
"insert into stmt2_testdb_7.tb2 using stmt2_testdb_7.stb tags(2,'xyz') values(1591060628000, "
"'abc'),(1591060628001,'def'),(1591060628004, 'hij')");
do_query(taos, "use stmt2_testdb_7");
TAOS_STMT2_OPTION option = {0, true, true, NULL, NULL};
TAOS_STMT2* stmt = taos_stmt2_init(taos, &option);
ASSERT_NE(stmt, nullptr);
const char* sql = "select * from stmt2_testdb_7.tb2 where ts > ? and ts < ? limit ?";
int code = taos_stmt2_prepare(stmt, sql, 0);
checkError(stmt, code);
int t64_len[1] = {sizeof(int64_t)};
int b_len[1] = {3};
int x = 2;
int x_len = sizeof(int);
int64_t ts[2] = {1591060627000, 1591060628005};
TAOS_STMT2_BIND params[3] = {{TSDB_DATA_TYPE_TIMESTAMP, &ts[0], t64_len, NULL, 1},
{TSDB_DATA_TYPE_TIMESTAMP, &ts[1], t64_len, NULL, 1},
{TSDB_DATA_TYPE_INT, &x, &x_len, NULL, 1}};
TAOS_STMT2_BIND* paramv = &params[0];
TAOS_STMT2_BINDV bindv = {1, NULL, NULL, &paramv};
code = taos_stmt2_bind_param(stmt, &bindv, -1);
checkError(stmt, code);
taos_stmt2_exec(stmt, NULL);
checkError(stmt, code);
TAOS_RES* pRes = taos_stmt2_result(stmt);
ASSERT_NE(pRes, nullptr);
int getRecordCounts = 0;
while ((taos_fetch_row(pRes))) {
getRecordCounts++;
}
ASSERT_EQ(getRecordCounts, 2);
taos_stmt2_close(stmt);
do_query(taos, "drop database if exists stmt2_testdb_7");
taos_close(taos);
}
TEST(stmt2Case, insert_stb_get_fields_Test) {
TAOS* taos = taos_connect("localhost", "root", "taosdata", NULL, 0);
ASSERT_NE(taos, nullptr);
@ -735,7 +792,7 @@ TEST(stmt2Case, insert_ntb_get_fields_Test) {
{
const char* sql = "insert into stmt2_testdb_4.? values(?,?)";
printf("case 2 : %s\n", sql);
getFieldsError(taos, sql, TSDB_CODE_PAR_TABLE_NOT_EXIST);
getFieldsError(taos, sql, TSDB_CODE_TSC_STMT_TBNAME_ERROR);
}
// case 3 : wrong para nums
@ -1496,8 +1553,51 @@ TEST(stmt2Case, geometry) {
checkError(stmt, code);
ASSERT_EQ(affected_rows, 3);
// test wrong wkb input
unsigned char wkb2[3][61] = {
{
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0xF0, 0x3F, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40,
},
{0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xf0, 0x3f},
{0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40}};
params[1].buffer = wkb2;
code = taos_stmt2_bind_param(stmt, &bindv, -1);
ASSERT_EQ(code, TSDB_CODE_FUNC_FUNTION_PARA_VALUE);
taos_stmt2_close(stmt);
do_query(taos, "DROP DATABASE IF EXISTS stmt2_testdb_13");
taos_close(taos);
}
// TD-33582
TEST(stmt2Case, errcode) {
TAOS* taos = taos_connect("localhost", "root", "taosdata", NULL, 0);
ASSERT_NE(taos, nullptr);
do_query(taos, "DROP DATABASE IF EXISTS stmt2_testdb_14");
do_query(taos, "CREATE DATABASE IF NOT EXISTS stmt2_testdb_14");
do_query(taos, "use stmt2_testdb_14");
TAOS_STMT2_OPTION option = {0};
TAOS_STMT2* stmt = taos_stmt2_init(taos, &option);
ASSERT_NE(stmt, nullptr);
char* sql = "select * from t where ts > ? and name = ? foo = ?";
int code = taos_stmt2_prepare(stmt, sql, 0);
checkError(stmt, code);
int fieldNum = 0;
TAOS_FIELD_ALL* pFields = NULL;
code = taos_stmt2_get_fields(stmt, &fieldNum, &pFields);
ASSERT_EQ(code, TSDB_CODE_PAR_SYNTAX_ERROR);
// get fail dont influence the next stmt prepare
sql = "nsert into ? (ts, name) values (?, ?)";
code = taos_stmt_prepare(stmt, sql, 0);
checkError(stmt, code);
}
#pragma GCC diagnostic pop

View File

@ -212,15 +212,6 @@ void insertData(TAOS *taos, TAOS_STMT_OPTIONS *option, const char *sql, int CTB_
void getFields(TAOS *taos, const char *sql, int expectedALLFieldNum, TAOS_FIELD_E *expectedTagFields,
int expectedTagFieldNum, TAOS_FIELD_E *expectedColFields, int expectedColFieldNum) {
// create database and table
do_query(taos, "DROP DATABASE IF EXISTS stmt_testdb_3");
do_query(taos, "CREATE DATABASE IF NOT EXISTS stmt_testdb_3");
do_query(taos, "USE stmt_testdb_3");
do_query(
taos,
"CREATE STABLE IF NOT EXISTS stmt_testdb_3.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS "
"(groupId INT, location BINARY(24))");
TAOS_STMT *stmt = taos_stmt_init(taos);
ASSERT_NE(stmt, nullptr);
int code = taos_stmt_prepare(stmt, sql, 0);
@ -267,6 +258,24 @@ void getFields(TAOS *taos, const char *sql, int expectedALLFieldNum, TAOS_FIELD_
taos_stmt_close(stmt);
}
void getFieldsError(TAOS *taos, const char *sql, int expectedErrocode) {
TAOS_STMT *stmt = taos_stmt_init(taos);
ASSERT_NE(stmt, nullptr);
STscStmt *pStmt = (STscStmt *)stmt;
int code = taos_stmt_prepare(stmt, sql, 0);
int fieldNum = 0;
TAOS_FIELD_E *pFields = NULL;
code = taos_stmt_get_tag_fields(stmt, &fieldNum, &pFields);
ASSERT_EQ(code, expectedErrocode);
ASSERT_EQ(pStmt->errCode, TSDB_CODE_SUCCESS);
taosMemoryFree(pFields);
taos_stmt_close(stmt);
}
} // namespace
int main(int argc, char **argv) {
@ -298,6 +307,15 @@ TEST(stmtCase, get_fields) {
TAOS *taos = taos_connect("localhost", "root", "taosdata", NULL, 0);
ASSERT_NE(taos, nullptr);
// create database and table
do_query(taos, "DROP DATABASE IF EXISTS stmt_testdb_3");
do_query(taos, "CREATE DATABASE IF NOT EXISTS stmt_testdb_3");
do_query(taos, "USE stmt_testdb_3");
do_query(
taos,
"CREATE STABLE IF NOT EXISTS stmt_testdb_3.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS "
"(groupId INT, location BINARY(24))");
// nomarl test
{
TAOS_FIELD_E tagFields[2] = {{"groupid", TSDB_DATA_TYPE_INT, 0, 0, sizeof(int)},
{"location", TSDB_DATA_TYPE_BINARY, 0, 0, 24}};
@ -307,6 +325,12 @@ TEST(stmtCase, get_fields) {
{"phase", TSDB_DATA_TYPE_FLOAT, 0, 0, sizeof(float)}};
getFields(taos, "INSERT INTO ? USING meters TAGS(?,?) VALUES (?,?,?,?)", 7, &tagFields[0], 2, &colFields[0], 4);
}
// error case [TD-33570]
{ getFieldsError(taos, "INSERT INTO ? VALUES (?,?,?,?)", TSDB_CODE_TSC_STMT_TBNAME_ERROR); }
{ getFieldsError(taos, "INSERT INTO ? USING meters TAGS(?,?) VALUES (?,?,?,?)", TSDB_CODE_TSC_STMT_TBNAME_ERROR); }
do_query(taos, "DROP DATABASE IF EXISTS stmt_testdb_3");
taos_close(taos);
}
@ -520,9 +544,6 @@ TEST(stmtCase, geometry) {
int code = taos_stmt_prepare(stmt, stmt_sql, 0);
checkError(stmt, code);
// code = taos_stmt_set_tbname(stmt, "tb1");
// checkError(stmt, code);
code = taos_stmt_bind_param_batch(stmt, params);
checkError(stmt, code);
@ -532,11 +553,58 @@ TEST(stmtCase, geometry) {
code = taos_stmt_execute(stmt);
checkError(stmt, code);
//test wrong wkb input
unsigned char wkb2[3][61] = {
{
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0xF0, 0x3F, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40,
},
{0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xf0, 0x3f},
{0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40}};
params[1].buffer = wkb2;
code = taos_stmt_bind_param_batch(stmt, params);
ASSERT_EQ(code, TSDB_CODE_FUNC_FUNTION_PARA_VALUE);
taosMemoryFree(t64_len);
taosMemoryFree(wkb_len);
taos_stmt_close(stmt);
do_query(taos, "DROP DATABASE IF EXISTS stmt_testdb_5");
taos_close(taos);
}
//TD-33582
TEST(stmtCase, errcode) {
TAOS *taos = taos_connect("localhost", "root", "taosdata", NULL, 0);
ASSERT_NE(taos, nullptr);
do_query(taos, "DROP DATABASE IF EXISTS stmt_testdb_4");
do_query(taos, "CREATE DATABASE IF NOT EXISTS stmt_testdb_4");
do_query(taos, "USE stmt_testdb_4");
do_query(
taos,
"CREATE STABLE IF NOT EXISTS stmt_testdb_4.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS "
"(groupId INT, location BINARY(24))");
TAOS_STMT *stmt = taos_stmt_init(taos);
ASSERT_NE(stmt, nullptr);
char *sql = "select * from t where ts > ? and name = ? foo = ?";
int code = taos_stmt_prepare(stmt, sql, 0);
checkError(stmt, code);
int fieldNum = 0;
TAOS_FIELD_E *pFields = NULL;
code = stmtGetParamNum(stmt, &fieldNum);
ASSERT_EQ(code, TSDB_CODE_PAR_SYNTAX_ERROR);
code = taos_stmt_get_tag_fields(stmt, &fieldNum, &pFields);
ASSERT_EQ(code, TSDB_CODE_PAR_SYNTAX_ERROR);
// get fail dont influence the next stmt prepare
sql = "nsert into ? (ts, name) values (?, ?)";
code = taos_stmt_prepare(stmt, sql, 0);
checkError(stmt, code);
}
#pragma GCC diagnostic pop

View File

@ -15,6 +15,10 @@ if(DEFINED GRANT_CFG_INCLUDE_DIR)
add_definitions(-DGRANTS_CFG)
endif()
if(${BUILD_WITH_ANALYSIS})
add_definitions(-DUSE_ANALYTICS)
endif()
if(TD_GRANT)
ADD_DEFINITIONS(-D_GRANT)
endif()
@ -34,7 +38,9 @@ endif()
target_include_directories(
common
PUBLIC "$ENV{HOME}/.cos-local.2/include"
PUBLIC "${TD_SOURCE_DIR}/include/common"
PRIVATE "${CMAKE_CURRENT_SOURCE_DIR}/inc"
PRIVATE "${GRANT_CFG_INCLUDE_DIR}"
)
@ -45,30 +51,39 @@ if(${TD_WINDOWS})
PRIVATE "${TD_SOURCE_DIR}/contrib/pthread"
PRIVATE "${TD_SOURCE_DIR}/contrib/msvcregex"
)
endif()
target_link_libraries(
common
PUBLIC os
PUBLIC util
INTERFACE api
)
target_link_libraries(
common
PUBLIC os
PUBLIC util
INTERFACE api
)
else()
find_library(CURL_LIBRARY curl $ENV{HOME}/.cos-local.2/lib NO_DEFAULT_PATH)
find_library(SSL_LIBRARY ssl $ENV{HOME}/.cos-local.2/lib64 $ENV{HOME}/.cos-local.2/lib NO_DEFAULT_PATH)
find_library(CRYPTO_LIBRARY crypto $ENV{HOME}/.cos-local.2/lib64 $ENV{HOME}/.cos-local.2/lib NO_DEFAULT_PATH)
target_link_libraries(
common
PUBLIC ${CURL_LIBRARY}
PUBLIC ${SSL_LIBRARY}
PUBLIC ${CRYPTO_LIBRARY}
PUBLIC os
PUBLIC util
INTERFACE api
)
endif()
if(${BUILD_S3})
if(${BUILD_WITH_S3})
target_include_directories(
common
PUBLIC "$ENV{HOME}/.cos-local.2/include"
)
set(CMAKE_FIND_LIBRARY_SUFFIXES ".a")
set(CMAKE_PREFIX_PATH $ENV{HOME}/.cos-local.2)
find_library(S3_LIBRARY s3)
find_library(CURL_LIBRARY curl $ENV{HOME}/.cos-local.2/lib NO_DEFAULT_PATH)
find_library(XML2_LIBRARY xml2)
find_library(SSL_LIBRARY ssl $ENV{HOME}/.cos-local.2/lib64 $ENV{HOME}/.cos-local.2/lib NO_DEFAULT_PATH)
find_library(CRYPTO_LIBRARY crypto $ENV{HOME}/.cos-local.2/lib64 $ENV{HOME}/.cos-local.2/lib NO_DEFAULT_PATH)
target_link_libraries(
common

View File

@ -36,7 +36,7 @@ typedef struct {
static SAlgoMgmt tsAlgos = {0};
static int32_t taosAnalBufGetCont(SAnalyticBuf *pBuf, char **ppCont, int64_t *pContLen);
const char *taosAnalAlgoStr(EAnalAlgoType type) {
const char *taosAnalysisAlgoType(EAnalAlgoType type) {
switch (type) {
case ANAL_ALGO_TYPE_ANOMALY_DETECT:
return "anomaly-detection";
@ -60,7 +60,7 @@ const char *taosAnalAlgoUrlStr(EAnalAlgoType type) {
EAnalAlgoType taosAnalAlgoInt(const char *name) {
for (EAnalAlgoType i = 0; i < ANAL_ALGO_TYPE_END; ++i) {
if (strcasecmp(name, taosAnalAlgoStr(i)) == 0) {
if (strcasecmp(name, taosAnalysisAlgoType(i)) == 0) {
return i;
}
}
@ -188,12 +188,12 @@ int32_t taosAnalGetAlgoUrl(const char *algoName, EAnalAlgoType type, char *url,
SAnalyticsUrl *pUrl = taosHashAcquire(tsAlgos.hash, name, nameLen);
if (pUrl != NULL) {
tstrncpy(url, pUrl->url, urlLen);
uDebug("algo:%s, type:%s, url:%s", algoName, taosAnalAlgoStr(type), url);
uDebug("algo:%s, type:%s, url:%s", algoName, taosAnalysisAlgoType(type), url);
} else {
url[0] = 0;
terrno = TSDB_CODE_ANA_ALGO_NOT_FOUND;
code = terrno;
uError("algo:%s, type:%s, url not found", algoName, taosAnalAlgoStr(type));
uError("algo:%s, type:%s, url not found", algoName, taosAnalysisAlgoType(type));
}
if (taosThreadMutexUnlock(&tsAlgos.lock) != 0) {
@ -216,20 +216,20 @@ static size_t taosCurlWriteData(char *pCont, size_t contLen, size_t nmemb, void
return 0;
}
int64_t newDataSize = (int64_t) contLen * nmemb;
int64_t newDataSize = (int64_t)contLen * nmemb;
int64_t size = pRsp->dataLen + newDataSize;
if (pRsp->data == NULL) {
pRsp->data = taosMemoryMalloc(size + 1);
if (pRsp->data == NULL) {
uError("failed to prepare recv buffer for post rsp, len:%d, code:%s", (int32_t) size + 1, tstrerror(terrno));
return 0; // return the recv length, if failed, return 0
uError("failed to prepare recv buffer for post rsp, len:%d, code:%s", (int32_t)size + 1, tstrerror(terrno));
return 0; // return the recv length, if failed, return 0
}
} else {
char* p = taosMemoryRealloc(pRsp->data, size + 1);
char *p = taosMemoryRealloc(pRsp->data, size + 1);
if (p == NULL) {
uError("failed to prepare recv buffer for post rsp, len:%d, code:%s", (int32_t) size + 1, tstrerror(terrno));
return 0; // return the recv length, if failed, return 0
uError("failed to prepare recv buffer for post rsp, len:%d, code:%s", (int32_t)size + 1, tstrerror(terrno));
return 0; // return the recv length, if failed, return 0
}
pRsp->data = p;
@ -473,7 +473,7 @@ static int32_t taosAnalJsonBufWriteColMeta(SAnalyticBuf *pBuf, int32_t colIndex,
}
int32_t bufLen = tsnprintf(buf, sizeof(buf), " [\"%s\", \"%s\", %d]%s\n", colName, tDataTypes[colType].name,
tDataTypes[colType].bytes, last ? "" : ",");
tDataTypes[colType].bytes, last ? "" : ",");
if (taosWriteFile(pBuf->filePtr, buf, bufLen) != bufLen) {
return terrno;
}
@ -779,7 +779,9 @@ int32_t tsosAnalBufOpen(SAnalyticBuf *pBuf, int32_t numOfCols) { return 0; }
int32_t taosAnalBufWriteOptStr(SAnalyticBuf *pBuf, const char *optName, const char *optVal) { return 0; }
int32_t taosAnalBufWriteOptInt(SAnalyticBuf *pBuf, const char *optName, int64_t optVal) { return 0; }
int32_t taosAnalBufWriteOptFloat(SAnalyticBuf *pBuf, const char *optName, float optVal) { return 0; }
int32_t taosAnalBufWriteColMeta(SAnalyticBuf *pBuf, int32_t colIndex, int32_t colType, const char *colName) { return 0; }
int32_t taosAnalBufWriteColMeta(SAnalyticBuf *pBuf, int32_t colIndex, int32_t colType, const char *colName) {
return 0;
}
int32_t taosAnalBufWriteDataBegin(SAnalyticBuf *pBuf) { return 0; }
int32_t taosAnalBufWriteColBegin(SAnalyticBuf *pBuf, int32_t colIndex) { return 0; }
int32_t taosAnalBufWriteColData(SAnalyticBuf *pBuf, int32_t colIndex, int32_t colType, void *colValue) { return 0; }
@ -788,7 +790,7 @@ int32_t taosAnalBufWriteDataEnd(SAnalyticBuf *pBuf) { return 0; }
int32_t taosAnalBufClose(SAnalyticBuf *pBuf) { return 0; }
void taosAnalBufDestroy(SAnalyticBuf *pBuf) {}
const char *taosAnalAlgoStr(EAnalAlgoType algoType) { return 0; }
const char *taosAnalysisAlgoType(EAnalAlgoType algoType) { return 0; }
EAnalAlgoType taosAnalAlgoInt(const char *algoName) { return 0; }
const char *taosAnalAlgoUrlStr(EAnalAlgoType algoType) { return 0; }

View File

@ -14,12 +14,12 @@
*/
#define _DEFAULT_SOURCE
#include "tglobal.h"
#include "cJSON.h"
#include "defines.h"
#include "os.h"
#include "osString.h"
#include "tconfig.h"
#include "tglobal.h"
#include "tgrant.h"
#include "tjson.h"
#include "tlog.h"
@ -500,7 +500,9 @@ int32_t taosSetS3Cfg(SConfig *pCfg) {
TAOS_RETURN(TSDB_CODE_SUCCESS);
}
struct SConfig *taosGetCfg() { return tsCfg; }
struct SConfig *taosGetCfg() {
return tsCfg;
}
static int32_t taosLoadCfg(SConfig *pCfg, const char **envCmd, const char *inputCfgDir, const char *envFile,
char *apolloUrl) {
@ -818,8 +820,13 @@ static int32_t taosAddServerCfg(SConfig *pCfg) {
tsNumOfSnodeWriteThreads = tsNumOfCores / 4;
tsNumOfSnodeWriteThreads = TRANGE(tsNumOfSnodeWriteThreads, 2, 4);
tsQueueMemoryAllowed = tsTotalMemoryKB * 1024 * 0.1;
tsQueueMemoryAllowed = TRANGE(tsQueueMemoryAllowed, TSDB_MAX_MSG_SIZE * 10LL, TSDB_MAX_MSG_SIZE * 10000LL);
tsQueueMemoryAllowed = tsTotalMemoryKB * 1024 * RPC_MEMORY_USAGE_RATIO * QUEUE_MEMORY_USAGE_RATIO;
tsQueueMemoryAllowed = TRANGE(tsQueueMemoryAllowed, TSDB_MAX_MSG_SIZE * QUEUE_MEMORY_USAGE_RATIO * 10LL,
TSDB_MAX_MSG_SIZE * QUEUE_MEMORY_USAGE_RATIO * 10000LL);
tsApplyMemoryAllowed = tsTotalMemoryKB * 1024 * RPC_MEMORY_USAGE_RATIO * (1 - QUEUE_MEMORY_USAGE_RATIO);
tsApplyMemoryAllowed = TRANGE(tsApplyMemoryAllowed, TSDB_MAX_MSG_SIZE * (1 - QUEUE_MEMORY_USAGE_RATIO) * 10LL,
TSDB_MAX_MSG_SIZE * (1 - QUEUE_MEMORY_USAGE_RATIO) * 10000LL);
tsLogBufferMemoryAllowed = tsTotalMemoryKB * 1024 * 0.1;
tsLogBufferMemoryAllowed = TRANGE(tsLogBufferMemoryAllowed, TSDB_MAX_MSG_SIZE * 10LL, TSDB_MAX_MSG_SIZE * 10000LL);
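The old code gave the RPC queue a flat 10% of system memory; the new code keeps that 10% envelope (RPC_MEMORY_USAGE_RATIO) but splits it 60/40 between the RPC queue and the new apply queue via QUEUE_MEMORY_USAGE_RATIO. A quick arithmetic sketch, assuming a 16 GiB host:

```c
#include <inttypes.h>
#include <stdio.h>

#define RPC_MEMORY_USAGE_RATIO   0.1  /* values from the header change above */
#define QUEUE_MEMORY_USAGE_RATIO 0.6

int main(void) {
  int64_t totalMemoryKB = 16LL * 1024 * 1024;  /* assumed: 16 GiB of RAM */
  int64_t queueAllowed  = (int64_t)(totalMemoryKB * 1024 * RPC_MEMORY_USAGE_RATIO * QUEUE_MEMORY_USAGE_RATIO);
  int64_t applyAllowed  = (int64_t)(totalMemoryKB * 1024 * RPC_MEMORY_USAGE_RATIO * (1 - QUEUE_MEMORY_USAGE_RATIO));
  /* 6% of RAM for the RPC queue and 4% for the apply queue, the same
     0.06 / 0.04 split Testbase::InitLog uses later in this diff. */
  printf("rpc queue: %" PRId64 " bytes, apply queue: %" PRId64 " bytes\n", queueAllowed, applyAllowed);
  return 0;
}
```

The dnode's reported available memory then subtracts both budgets, as the dmSendStatusReq hunk further down shows.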
@ -857,7 +864,7 @@ static int32_t taosAddServerCfg(SConfig *pCfg) {
TAOS_CHECK_RETURN(cfgAddInt32(pCfg, "numOfSnodeSharedThreads", tsNumOfSnodeStreamThreads, 2, 1024, CFG_SCOPE_SERVER, CFG_DYN_SERVER_LAZY,CFG_CATEGORY_LOCAL));
TAOS_CHECK_RETURN(cfgAddInt32(pCfg, "numOfSnodeUniqueThreads", tsNumOfSnodeWriteThreads, 2, 1024, CFG_SCOPE_SERVER, CFG_DYN_SERVER_LAZY,CFG_CATEGORY_LOCAL));
TAOS_CHECK_RETURN(cfgAddInt64(pCfg, "rpcQueueMemoryAllowed", tsQueueMemoryAllowed, TSDB_MAX_MSG_SIZE * 10L, INT64_MAX, CFG_SCOPE_SERVER, CFG_DYN_SERVER,CFG_CATEGORY_GLOBAL));
TAOS_CHECK_RETURN(cfgAddInt64(pCfg, "rpcQueueMemoryAllowed", tsQueueMemoryAllowed, TSDB_MAX_MSG_SIZE * RPC_MEMORY_USAGE_RATIO * 10L, INT64_MAX, CFG_SCOPE_SERVER, CFG_DYN_SERVER,CFG_CATEGORY_GLOBAL));
TAOS_CHECK_RETURN(cfgAddInt32(pCfg, "syncElectInterval", tsElectInterval, 10, 1000 * 60 * 24 * 2, CFG_SCOPE_SERVER, CFG_DYN_NONE,CFG_CATEGORY_GLOBAL));
TAOS_CHECK_RETURN(cfgAddInt32(pCfg, "syncHeartbeatInterval", tsHeartbeatInterval, 10, 1000 * 60 * 24 * 2, CFG_SCOPE_SERVER, CFG_DYN_NONE,CFG_CATEGORY_GLOBAL));
TAOS_CHECK_RETURN(cfgAddInt32(pCfg, "syncHeartbeatTimeout", tsHeartbeatTimeout, 10, 1000 * 60 * 24 * 2, CFG_SCOPE_SERVER, CFG_DYN_NONE,CFG_CATEGORY_GLOBAL));
@ -1572,7 +1579,8 @@ static int32_t taosSetServerCfg(SConfig *pCfg) {
tsNumOfSnodeWriteThreads = pItem->i32;
TAOS_CHECK_GET_CFG_ITEM(pCfg, pItem, "rpcQueueMemoryAllowed");
tsQueueMemoryAllowed = pItem->i64;
tsQueueMemoryAllowed = cfgGetItem(pCfg, "rpcQueueMemoryAllowed")->i64 * QUEUE_MEMORY_USAGE_RATIO;
tsApplyMemoryAllowed = cfgGetItem(pCfg, "rpcQueueMemoryAllowed")->i64 * (1 - QUEUE_MEMORY_USAGE_RATIO);
TAOS_CHECK_GET_CFG_ITEM(pCfg, pItem, "simdEnable");
tsSIMDEnable = (bool)pItem->bval;
@ -2395,6 +2403,12 @@ static int32_t taosCfgDynamicOptionsForServer(SConfig *pCfg, const char *name) {
code = TSDB_CODE_SUCCESS;
goto _exit;
}
if (strcasecmp("rpcQueueMemoryAllowed", name) == 0) {
tsQueueMemoryAllowed = cfgGetItem(pCfg, "rpcQueueMemoryAllowed")->i64 * QUEUE_MEMORY_USAGE_RATIO;
tsApplyMemoryAllowed = cfgGetItem(pCfg, "rpcQueueMemoryAllowed")->i64 * (1 - QUEUE_MEMORY_USAGE_RATIO);
code = TSDB_CODE_SUCCESS;
goto _exit;
}
if (strcasecmp(name, "numOfCompactThreads") == 0) {
#ifdef TD_ENTERPRISE
@ -2500,7 +2514,6 @@ static int32_t taosCfgDynamicOptionsForServer(SConfig *pCfg, const char *name) {
{"experimental", &tsExperimental},
{"numOfRpcSessions", &tsNumOfRpcSessions},
{"rpcQueueMemoryAllowed", &tsQueueMemoryAllowed},
{"shellActivityTimer", &tsShellActivityTimer},
{"readTimeout", &tsReadTimeout},
{"safetyCheckLevel", &tsSafetyCheckLevel},

View File

@ -181,7 +181,7 @@ void dmSendStatusReq(SDnodeMgmt *pMgmt) {
req.numOfSupportVnodes = tsNumOfSupportVnodes;
req.numOfDiskCfg = tsDiskCfgNum;
req.memTotal = tsTotalMemoryKB * 1024;
req.memAvail = req.memTotal - tsQueueMemoryAllowed - 16 * 1024 * 1024;
req.memAvail = req.memTotal - tsQueueMemoryAllowed - tsApplyMemoryAllowed - 16 * 1024 * 1024;
tstrncpy(req.dnodeEp, tsLocalEp, TSDB_EP_LEN);
tstrncpy(req.machineId, pMgmt->pData->machineId, TSDB_MACHINE_ID_LEN + 1);

View File

@ -323,7 +323,7 @@ int32_t vmPutRpcMsgToQueue(SVnodeMgmt *pMgmt, EQueueType qtype, SRpcMsg *pRpc) {
return TSDB_CODE_INVALID_MSG;
}
EQItype itype = APPLY_QUEUE == qtype ? DEF_QITEM : RPC_QITEM;
EQItype itype = APPLY_QUEUE == qtype ? APPLY_QITEM : RPC_QITEM;
SRpcMsg *pMsg;
code = taosAllocateQitem(sizeof(SRpcMsg), itype, pRpc->contLen, (void **)&pMsg);
if (code) {

View File

@ -36,7 +36,8 @@ void Testbase::InitLog(const char* path) {
tstrncpy(tsLogDir, path, PATH_MAX);
taosGetSystemInfo();
tsQueueMemoryAllowed = tsTotalMemoryKB * 0.1;
tsQueueMemoryAllowed = tsTotalMemoryKB * 0.06;
tsApplyMemoryAllowed = tsTotalMemoryKB * 0.04;
if (taosInitLog("taosdlog", 1, false) != 0) {
printf("failed to init log file\n");
}

View File

@ -16,10 +16,10 @@ if(TD_ENTERPRISE)
ELSEIF(${BUILD_WITH_COS})
add_definitions(-DUSE_COS)
endif()
endif()
if(${BUILD_WITH_ANALYSIS})
add_definitions(-DUSE_ANALYTICS)
endif()
if(${BUILD_WITH_ANALYSIS})
add_definitions(-DUSE_ANALYTICS)
endif()
add_library(mnode STATIC ${MNODE_SRC})

View File

@ -22,8 +22,9 @@
extern "C" {
#endif
int32_t mndInitAnode(SMnode *pMnode);
void mndCleanupAnode(SMnode *pMnode);
int32_t mndInitAnode(SMnode* pMnode);
void mndCleanupAnode(SMnode* pMnode);
void mndRetrieveAlgoList(SMnode* pMnode, SArray* pFc, SArray* pAd);
#ifdef __cplusplus
}

View File

@ -637,6 +637,32 @@ static void mndCancelGetNextAnode(SMnode *pMnode, void *pIter) {
sdbCancelFetchByType(pSdb, pIter, SDB_ANODE);
}
// todo handle multiple anode case, remove the duplicate algos
void mndRetrieveAlgoList(SMnode* pMnode, SArray* pFc, SArray* pAd) {
SSdb *pSdb = pMnode->pSdb;
void *pIter = NULL;
SAnodeObj *pObj = NULL;
while (1) {
pIter = sdbFetch(pSdb, SDB_ANODE, pIter, (void **)&pObj);
if (pIter == NULL) {
break;
}
if (pObj->numOfAlgos >= ANAL_ALGO_TYPE_END) {
if (pObj->algos[ANAL_ALGO_TYPE_ANOMALY_DETECT] != NULL) {
taosArrayAddAll(pAd, pObj->algos[ANAL_ALGO_TYPE_ANOMALY_DETECT]);
}
if (pObj->algos[ANAL_ALGO_TYPE_FORECAST] != NULL) {
taosArrayAddAll(pFc, pObj->algos[ANAL_ALGO_TYPE_FORECAST]);
}
}
sdbRelease(pSdb, pObj);
}
}
static int32_t mndRetrieveAnodesFull(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pBlock, int32_t rows) {
SMnode *pMnode = pReq->info.node;
SSdb *pSdb = pMnode->pSdb;
@ -661,7 +687,7 @@ static int32_t mndRetrieveAnodesFull(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock
code = colDataSetVal(pColInfo, numOfRows, (const char *)&pObj->id, false);
if (code != 0) goto _end;
STR_TO_VARSTR(buf, taosAnalAlgoStr(t));
STR_TO_VARSTR(buf, taosAnalysisAlgoType(t));
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
code = colDataSetVal(pColInfo, numOfRows, buf, false);
if (code != 0) goto _end;
@ -900,5 +926,6 @@ int32_t mndInitAnode(SMnode *pMnode) {
}
void mndCleanupAnode(SMnode *pMnode) {}
void mndRetrieveAlgoList(SMnode *pMnode, SArray *pFc, SArray *pAd) {}
#endif

View File

@ -19,6 +19,7 @@
#include "mndSync.h"
#include "thttp.h"
#include "tjson.h"
#include "mndAnode.h"
typedef struct {
int64_t numOfDnode;
@ -32,6 +33,7 @@ typedef struct {
int64_t totalPoints;
int64_t totalStorage;
int64_t compStorage;
int32_t numOfAnalysisAlgos;
} SMnodeStat;
static void mndGetStat(SMnode* pMnode, SMnodeStat* pStat) {
@ -58,18 +60,21 @@ static void mndGetStat(SMnode* pMnode, SMnodeStat* pStat) {
sdbRelease(pSdb, pVgroup);
}
}
pStat->numOfChildTable = 100;
pStat->numOfColumn = 200;
pStat->totalPoints = 300;
pStat->totalStorage = 400;
pStat->compStorage = 500;
static int32_t algoToJson(const void* pObj, SJson* pJson) {
const SAnodeAlgo* pNode = (const SAnodeAlgo*)pObj;
int32_t code = tjsonAddStringToObject(pJson, "name", pNode->name);
return code;
}
static void mndBuildRuntimeInfo(SMnode* pMnode, SJson* pJson) {
SMnodeStat mstat = {0};
int32_t code = 0;
int32_t lino = 0;
SArray* pFcList = NULL;
SArray* pAdList = NULL;
mndGetStat(pMnode, &mstat);
TAOS_CHECK_GOTO(tjsonAddDoubleToObject(pJson, "numOfDnode", mstat.numOfDnode), &lino, _OVER);
@ -82,8 +87,55 @@ static void mndBuildRuntimeInfo(SMnode* pMnode, SJson* pJson) {
TAOS_CHECK_GOTO(tjsonAddDoubleToObject(pJson, "numOfPoint", mstat.totalPoints), &lino, _OVER);
TAOS_CHECK_GOTO(tjsonAddDoubleToObject(pJson, "totalStorage", mstat.totalStorage), &lino, _OVER);
TAOS_CHECK_GOTO(tjsonAddDoubleToObject(pJson, "compStorage", mstat.compStorage), &lino, _OVER);
pFcList = taosArrayInit(4, sizeof(SAnodeAlgo));
pAdList = taosArrayInit(4, sizeof(SAnodeAlgo));
if (pFcList == NULL || pAdList == NULL) {
lino = __LINE__;
goto _OVER;
}
mndRetrieveAlgoList(pMnode, pFcList, pAdList);
if (taosArrayGetSize(pFcList) > 0) {
SJson* items = tjsonAddArrayToObject(pJson, "forecast");
TSDB_CHECK_NULL(items, code, lino, _OVER, terrno);
for (int32_t i = 0; i < taosArrayGetSize(pFcList); ++i) {
SJson* item = tjsonCreateObject();
TSDB_CHECK_NULL(item, code, lino, _OVER, terrno);
TAOS_CHECK_GOTO(tjsonAddItemToArray(items, item), &lino, _OVER);
SAnodeAlgo* p = taosArrayGet(pFcList, i);
TSDB_CHECK_NULL(p, code, lino, _OVER, terrno);
TAOS_CHECK_GOTO(tjsonAddStringToObject(item, "name", p->name), &lino, _OVER);
}
}
if (taosArrayGetSize(pAdList) > 0) {
SJson* items1 = tjsonAddArrayToObject(pJson, "anomaly_detection");
TSDB_CHECK_NULL(items1, code, lino, _OVER, terrno);
for (int32_t i = 0; i < taosArrayGetSize(pAdList); ++i) {
SJson* item = tjsonCreateObject();
TSDB_CHECK_NULL(item, code, lino, _OVER, terrno);
TAOS_CHECK_GOTO(tjsonAddItemToArray(items1, item), &lino, _OVER);
SAnodeAlgo* p = taosArrayGet(pAdList, i);
TSDB_CHECK_NULL(p, code, lino, _OVER, terrno);
TAOS_CHECK_GOTO(tjsonAddStringToObject(item, "name", p->name), &lino, _OVER);
}
}
_OVER:
if (code != 0) mError("failed to mndBuildRuntimeInfo at line:%d since %s", lino, tstrerror(code));
taosArrayDestroy(pFcList);
taosArrayDestroy(pAdList);
if (code != 0) {
mError("failed to mndBuildRuntimeInfo at line:%d since %s", lino, tstrerror(code));
}
}
static char* mndBuildTelemetryReport(SMnode* pMnode) {
@ -136,21 +188,24 @@ static int32_t mndProcessTelemTimer(SRpcMsg* pReq) {
int32_t line = 0;
SMnode* pMnode = pReq->info.node;
STelemMgmt* pMgmt = &pMnode->telemMgmt;
if (!tsEnableTelem) return 0;
if (!tsEnableTelem) {
return 0;
}
(void)taosThreadMutexLock(&pMgmt->lock);
char* pCont = mndBuildTelemetryReport(pMnode);
(void)taosThreadMutexUnlock(&pMgmt->lock);
if (pCont == NULL) {
return 0;
}
TSDB_CHECK_NULL(pCont, code, line, _end, terrno);
code = taosSendTelemReport(&pMgmt->addrMgt, tsTelemUri, tsTelemPort, pCont, strlen(pCont), HTTP_FLAT);
taosMemoryFree(pCont);
return code;
_end:
if (code != 0) {
mError("%s failed to send at line %d since %s", __func__, line, tstrerror(code));
mError("%s failed to send telemetry report, line %d since %s", __func__, line, tstrerror(code));
}
taosMemoryFree(pCont);
return code;
@ -161,15 +216,17 @@ int32_t mndInitTelem(SMnode* pMnode) {
STelemMgmt* pMgmt = &pMnode->telemMgmt;
(void)taosThreadMutexInit(&pMgmt->lock, NULL);
if ((code = taosGetEmail(pMgmt->email, sizeof(pMgmt->email))) != 0)
if ((code = taosGetEmail(pMgmt->email, sizeof(pMgmt->email))) != 0) {
mWarn("failed to get email since %s", tstrerror(code));
}
code = taosTelemetryMgtInit(&pMgmt->addrMgt, tsTelemServer);
if (code != 0) {
mError("failed to init telemetry management since %s", tstrerror(code));
return code;
}
mndSetMsgHandle(pMnode, TDMT_MND_TELEM_TIMER, mndProcessTelemTimer);
mndSetMsgHandle(pMnode, TDMT_MND_TELEM_TIMER, mndProcessTelemTimer);
return 0;
}

View File

@ -130,7 +130,7 @@ int32_t metaGetTableNameByUid(void *pVnode, uint64_t uid, char *tbName);
int metaGetTableSzNameByUid(void *meta, uint64_t uid, char *tbName);
int metaGetTableUidByName(void *pVnode, char *tbName, uint64_t *uid);
int metaGetTableTypeByName(void *meta, char *tbName, ETableType *tbType);
int metaGetTableTypeSuidByName(void *meta, char *tbName, ETableType *tbType, uint64_t* suid);
int metaGetTableTtlByUid(void *meta, uint64_t uid, int64_t *ttlDays);
bool metaIsTableExist(void *pVnode, tb_uid_t uid);
int32_t metaGetCachedTableUidList(void *pVnode, tb_uid_t suid, const uint8_t *key, int32_t keyLen, SArray *pList,

View File

@ -190,13 +190,20 @@ int metaGetTableUidByName(void *pVnode, char *tbName, uint64_t *uid) {
return 0;
}
int metaGetTableTypeByName(void *pVnode, char *tbName, ETableType *tbType) {
int metaGetTableTypeSuidByName(void *pVnode, char *tbName, ETableType *tbType, uint64_t* suid) {
int code = 0;
SMetaReader mr = {0};
metaReaderDoInit(&mr, ((SVnode *)pVnode)->pMeta, META_READER_LOCK);
code = metaGetTableEntryByName(&mr, tbName);
if (code == 0) *tbType = mr.me.type;
if (TSDB_CHILD_TABLE == mr.me.type) {
*suid = mr.me.ctbEntry.suid;
} else if (TSDB_SUPER_TABLE == mr.me.type) {
*suid = mr.me.uid;
} else {
*suid = 0;
}
metaReaderClear(&mr);
return code;

View File

@ -559,6 +559,7 @@ struct STsdbSnapWriter {
SIterMerger* tombIterMerger;
// writer
bool toSttOnly;
SFSetWriter* fsetWriter;
} ctx[1];
};
@ -622,6 +623,7 @@ static int32_t tsdbSnapWriteFileSetOpenReader(STsdbSnapWriter* writer) {
int32_t code = 0;
int32_t lino = 0;
writer->ctx->toSttOnly = false;
if (writer->ctx->fset) {
#if 0
// open data reader
@ -656,6 +658,14 @@ static int32_t tsdbSnapWriteFileSetOpenReader(STsdbSnapWriter* writer) {
// open stt reader array
SSttLvl* lvl;
TARRAY2_FOREACH(writer->ctx->fset->lvlArr, lvl) {
if (lvl->level != 0) {
if (TARRAY2_SIZE(lvl->fobjArr) > 0) {
writer->ctx->toSttOnly = true;
}
continue; // Only merge level 0
}
STFileObj* fobj;
TARRAY2_FOREACH(lvl->fobjArr, fobj) {
SSttFileReader* reader;
@ -782,7 +792,7 @@ static int32_t tsdbSnapWriteFileSetOpenWriter(STsdbSnapWriter* writer) {
SFSetWriterConfig config = {
.tsdb = writer->tsdb,
.toSttOnly = false,
.toSttOnly = writer->ctx->toSttOnly,
.compactVersion = writer->compactVersion,
.minRow = writer->minRow,
.maxRow = writer->maxRow,
@ -791,7 +801,7 @@ static int32_t tsdbSnapWriteFileSetOpenWriter(STsdbSnapWriter* writer) {
.fid = writer->ctx->fid,
.cid = writer->commitID,
.did = writer->ctx->did,
.level = 0,
.level = writer->ctx->toSttOnly ? 1 : 0,
};
// merge stt files to either data or a new stt file
if (writer->ctx->fset) {

View File

@ -94,7 +94,7 @@ void initMetadataAPI(SStoreMeta* pMeta) {
pMeta->getTableTagsByUid = metaGetTableTagsByUids;
pMeta->getTableUidByName = metaGetTableUidByName;
pMeta->getTableTypeByName = metaGetTableTypeByName;
pMeta->getTableTypeSuidByName = metaGetTableTypeSuidByName;
pMeta->getTableNameByUid = metaGetTableNameByUid;
pMeta->getTableSchema = vnodeGetTableSchema;

View File

@ -49,6 +49,35 @@ int32_t fillTableColCmpr(SMetaReader *reader, SSchemaExt *pExt, int32_t numOfCol
return 0;
}
void vnodePrintTableMeta(STableMetaRsp* pMeta) {
if (!(qDebugFlag & DEBUG_DEBUG)) {
return;
}
qDebug("tbName:%s", pMeta->tbName);
qDebug("stbName:%s", pMeta->stbName);
qDebug("dbFName:%s", pMeta->dbFName);
qDebug("dbId:%" PRId64, pMeta->dbId);
qDebug("numOfTags:%d", pMeta->numOfTags);
qDebug("numOfColumns:%d", pMeta->numOfColumns);
qDebug("precision:%d", pMeta->precision);
qDebug("tableType:%d", pMeta->tableType);
qDebug("sversion:%d", pMeta->sversion);
qDebug("tversion:%d", pMeta->tversion);
qDebug("suid:%" PRIu64, pMeta->suid);
qDebug("tuid:%" PRIu64, pMeta->tuid);
qDebug("vgId:%d", pMeta->vgId);
qDebug("sysInfo:%d", pMeta->sysInfo);
if (pMeta->pSchemas) {
for (int32_t i = 0; i < (pMeta->numOfColumns + pMeta->numOfTags); ++i) {
SSchema* pSchema = pMeta->pSchemas + i;
qDebug("%d col/tag: type:%d, flags:%d, colId:%d, bytes:%d, name:%s", i, pSchema->type, pSchema->flags, pSchema->colId, pSchema->bytes, pSchema->name);
}
}
}
int32_t vnodeGetTableMeta(SVnode *pVnode, SRpcMsg *pMsg, bool direct) {
STableInfoReq infoReq = {0};
STableMetaRsp metaRsp = {0};
@ -155,6 +184,8 @@ int32_t vnodeGetTableMeta(SVnode *pVnode, SRpcMsg *pMsg, bool direct) {
goto _exit;
}
vnodePrintTableMeta(&metaRsp);
// encode and send response
rspLen = tSerializeSTableMetaRsp(NULL, 0, &metaRsp);
if (rspLen < 0) {

View File

@ -202,13 +202,13 @@ do { \
#define EXPLAIN_SUM_ROW_END() do { varDataSetLen(tbuf, tlen); tlen += VARSTR_HEADER_SIZE; } while (0)
#define EXPLAIN_ROW_APPEND_LIMIT_IMPL(_pLimit, sl) do { \
if (_pLimit) { \
if (_pLimit && ((SLimitNode*)_pLimit)->limit) { \
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT); \
SLimitNode* pLimit = (SLimitNode*)_pLimit; \
EXPLAIN_ROW_APPEND(((sl) ? EXPLAIN_SLIMIT_FORMAT : EXPLAIN_LIMIT_FORMAT), pLimit->limit); \
EXPLAIN_ROW_APPEND(((sl) ? EXPLAIN_SLIMIT_FORMAT : EXPLAIN_LIMIT_FORMAT), pLimit->limit->datum.i); \
if (pLimit->offset) { \
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT); \
EXPLAIN_ROW_APPEND(((sl) ? EXPLAIN_SOFFSET_FORMAT : EXPLAIN_OFFSET_FORMAT), pLimit->offset);\
EXPLAIN_ROW_APPEND(((sl) ? EXPLAIN_SOFFSET_FORMAT : EXPLAIN_OFFSET_FORMAT), pLimit->offset->datum.i);\
} \
} \
} while (0)

View File

@ -676,9 +676,9 @@ static int32_t qExplainResNodeToRowsImpl(SExplainResNode *pResNode, SExplainCtx
EXPLAIN_ROW_APPEND(EXPLAIN_WIN_OFFSET_FORMAT, pStart->literal, pEnd->literal);
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
}
if (NULL != pJoinNode->pJLimit) {
if (NULL != pJoinNode->pJLimit && NULL != ((SLimitNode*)pJoinNode->pJLimit)->limit) {
SLimitNode* pJLimit = (SLimitNode*)pJoinNode->pJLimit;
EXPLAIN_ROW_APPEND(EXPLAIN_JLIMIT_FORMAT, pJLimit->limit);
EXPLAIN_ROW_APPEND(EXPLAIN_JLIMIT_FORMAT, pJLimit->limit->datum.i);
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
}
if (IS_WINDOW_JOIN(pJoinNode->subType)) {

View File

@ -668,4 +668,4 @@ int32_t createAnomalywindowOperatorInfo(SOperatorInfo* downstream, SPhysiNode* p
}
void destroyForecastInfo(void* param) {}
#endif
#endif

View File

@ -49,13 +49,13 @@ typedef enum {
static FilterCondType checkTagCond(SNode* cond);
static int32_t optimizeTbnameInCond(void* metaHandle, int64_t suid, SArray* list, SNode* pTagCond, SStorageAPI* pAPI);
static int32_t optimizeTbnameInCondImpl(void* metaHandle, SArray* list, SNode* pTagCond, SStorageAPI* pStoreAPI);
static int32_t optimizeTbnameInCondImpl(void* metaHandle, SArray* list, SNode* pTagCond, SStorageAPI* pStoreAPI, uint64_t suid);
static int32_t getTableList(void* pVnode, SScanPhysiNode* pScanNode, SNode* pTagCond, SNode* pTagIndexCond,
STableListInfo* pListInfo, uint8_t* digest, const char* idstr, SStorageAPI* pStorageAPI);
static int64_t getLimit(const SNode* pLimit) { return NULL == pLimit ? -1 : ((SLimitNode*)pLimit)->limit; }
static int64_t getOffset(const SNode* pLimit) { return NULL == pLimit ? -1 : ((SLimitNode*)pLimit)->offset; }
static int64_t getLimit(const SNode* pLimit) { return (NULL == pLimit || NULL == ((SLimitNode*)pLimit)->limit) ? -1 : ((SLimitNode*)pLimit)->limit->datum.i; }
static int64_t getOffset(const SNode* pLimit) { return (NULL == pLimit || NULL == ((SLimitNode*)pLimit)->offset) ? -1 : ((SLimitNode*)pLimit)->offset->datum.i; }
static void releaseColInfoData(void* pCol);
void initResultRowInfo(SResultRowInfo* pResultRowInfo) {
@ -1061,7 +1061,7 @@ static int32_t optimizeTbnameInCond(void* pVnode, int64_t suid, SArray* list, SN
int32_t ntype = nodeType(cond);
if (ntype == QUERY_NODE_OPERATOR) {
ret = optimizeTbnameInCondImpl(pVnode, list, cond, pAPI);
ret = optimizeTbnameInCondImpl(pVnode, list, cond, pAPI, suid);
}
if (ntype != QUERY_NODE_LOGIC_CONDITION || ((SLogicConditionNode*)cond)->condType != LOGIC_COND_TYPE_AND) {
@ -1080,7 +1080,7 @@ static int32_t optimizeTbnameInCond(void* pVnode, int64_t suid, SArray* list, SN
SListCell* cell = pList->pHead;
for (int i = 0; i < len; i++) {
if (cell == NULL) break;
if (optimizeTbnameInCondImpl(pVnode, list, cell->pNode, pAPI) == 0) {
if (optimizeTbnameInCondImpl(pVnode, list, cell->pNode, pAPI, suid) == 0) {
hasTbnameCond = true;
break;
}
@ -1099,7 +1099,7 @@ static int32_t optimizeTbnameInCond(void* pVnode, int64_t suid, SArray* list, SN
// only return uid that does not contained in pExistedUidList
static int32_t optimizeTbnameInCondImpl(void* pVnode, SArray* pExistedUidList, SNode* pTagCond,
SStorageAPI* pStoreAPI) {
SStorageAPI* pStoreAPI, uint64_t suid) {
if (nodeType(pTagCond) != QUERY_NODE_OPERATOR) {
return -1;
}
@ -1148,10 +1148,13 @@ static int32_t optimizeTbnameInCondImpl(void* pVnode, SArray* pExistedUidList, S
for (int i = 0; i < numOfTables; i++) {
char* name = taosArrayGetP(pTbList, i);
uint64_t uid = 0;
uint64_t uid = 0, csuid = 0;
if (pStoreAPI->metaFn.getTableUidByName(pVnode, name, &uid) == 0) {
ETableType tbType = TSDB_TABLE_MAX;
if (pStoreAPI->metaFn.getTableTypeByName(pVnode, name, &tbType) == 0 && tbType == TSDB_CHILD_TABLE) {
if (pStoreAPI->metaFn.getTableTypeSuidByName(pVnode, name, &tbType, &csuid) == 0 && tbType == TSDB_CHILD_TABLE) {
if (suid != csuid) {
continue;
}
if (NULL == uHash || taosHashGet(uHash, &uid, sizeof(uid)) == NULL) {
STUidTagInfo s = {.uid = uid, .name = name, .pTagVal = NULL};
void* tmp = taosArrayPush(pExistedUidList, &s);
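The new `suid` parameter lets `optimizeTbnameInCondImpl` reject names that resolve to child tables of a different super table, which a plain `tbname IN (...)` lookup would otherwise let through. A hypothetical predicate distilled from the added guard (the `ETableType` type and `TSDB_CHILD_TABLE` constant come from TDengine headers):

```c
// A name from the tbname IN (...) list qualifies only when it is not a
// child table of some other super table than the one being scanned.
static bool tbnameMatchesScan(ETableType tbType, uint64_t csuid, uint64_t scanSuid) {
  return !(tbType == TSDB_CHILD_TABLE && csuid != scanSuid);
}
```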


@ -12,13 +12,12 @@
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include "executorInt.h"
#include "filter.h"
#include "function.h"
#include "functionMgt.h"
#include "operator.h"
#include "querytask.h"
#include "storageapi.h"
#include "tanalytics.h"
#include "tcommon.h"
#include "tcompare.h"
@ -29,24 +28,24 @@
#ifdef USE_ANALYTICS
typedef struct {
char algoName[TSDB_ANALYTIC_ALGO_NAME_LEN];
char algoUrl[TSDB_ANALYTIC_ALGO_URL_LEN];
char algoOpt[TSDB_ANALYTIC_ALGO_OPTION_LEN];
int64_t maxTs;
int64_t minTs;
int64_t numOfRows;
uint64_t groupId;
int64_t optRows;
int64_t cachedRows;
int32_t numOfBlocks;
int16_t resTsSlot;
int16_t resValSlot;
int16_t resLowSlot;
int16_t resHighSlot;
int16_t inputTsSlot;
int16_t inputValSlot;
int8_t inputValType;
int8_t inputPrecision;
char algoName[TSDB_ANALYTIC_ALGO_NAME_LEN];
char algoUrl[TSDB_ANALYTIC_ALGO_URL_LEN];
char algoOpt[TSDB_ANALYTIC_ALGO_OPTION_LEN];
int64_t maxTs;
int64_t minTs;
int64_t numOfRows;
uint64_t groupId;
int64_t optRows;
int64_t cachedRows;
int32_t numOfBlocks;
int16_t resTsSlot;
int16_t resValSlot;
int16_t resLowSlot;
int16_t resHighSlot;
int16_t inputTsSlot;
int16_t inputValSlot;
int8_t inputValType;
int8_t inputPrecision;
SAnalyticBuf analBuf;
} SForecastSupp;
@ -118,7 +117,7 @@ static int32_t forecastCacheBlock(SForecastSupp* pSupp, SSDataBlock* pBlock, con
static int32_t forecastCloseBuf(SForecastSupp* pSupp) {
SAnalyticBuf* pBuf = &pSupp->analBuf;
int32_t code = 0;
int32_t code = 0;
for (int32_t i = 0; i < 2; ++i) {
code = taosAnalBufWriteColEnd(pBuf, i);
@ -178,7 +177,6 @@ static int32_t forecastCloseBuf(SForecastSupp* pSupp) {
code = taosAnalBufWriteOptInt(pBuf, "start", start);
if (code != 0) return code;
bool hasEvery = taosAnalGetOptInt(pSupp->algoOpt, "every", &every);
if (!hasEvery) {
qDebug("forecast every not found from %s, use %" PRId64, pSupp->algoOpt, every);
@ -192,14 +190,14 @@ static int32_t forecastCloseBuf(SForecastSupp* pSupp) {
static int32_t forecastAnalysis(SForecastSupp* pSupp, SSDataBlock* pBlock, const char* pId) {
SAnalyticBuf* pBuf = &pSupp->analBuf;
int32_t resCurRow = pBlock->info.rows;
int8_t tmpI8;
int16_t tmpI16;
int32_t tmpI32;
int64_t tmpI64;
float tmpFloat;
double tmpDouble;
int32_t code = 0;
int32_t resCurRow = pBlock->info.rows;
int8_t tmpI8;
int16_t tmpI16;
int32_t tmpI32;
int64_t tmpI64;
float tmpFloat;
double tmpDouble;
int32_t code = 0;
SColumnInfoData* pResValCol = taosArrayGet(pBlock->pDataBlock, pSupp->resValSlot);
if (NULL == pResValCol) {
@ -356,8 +354,8 @@ _OVER:
}
static int32_t forecastAggregateBlocks(SForecastSupp* pSupp, SSDataBlock* pResBlock, const char* pId) {
int32_t code = TSDB_CODE_SUCCESS;
int32_t lino = 0;
int32_t code = TSDB_CODE_SUCCESS;
int32_t lino = 0;
SAnalyticBuf* pBuf = &pSupp->analBuf;
code = forecastCloseBuf(pSupp);
@ -542,7 +540,7 @@ static int32_t forecastParseAlgo(SForecastSupp* pSupp) {
static int32_t forecastCreateBuf(SForecastSupp* pSupp) {
SAnalyticBuf* pBuf = &pSupp->analBuf;
int64_t ts = 0; // taosGetTimestampMs();
int64_t ts = 0; // taosGetTimestampMs();
pBuf->bufType = ANALYTICS_BUF_TYPE_JSON_COL;
snprintf(pBuf->fileName, sizeof(pBuf->fileName), "%s/tdengine-forecast-%" PRId64, tsTempDir, ts);


@ -1185,7 +1185,7 @@ int32_t createHashJoinOperatorInfo(SOperatorInfo** pDownstream, int32_t numOfDow
pInfo->tblTimeRange.skey = pJoinNode->timeRange.skey;
pInfo->tblTimeRange.ekey = pJoinNode->timeRange.ekey;
pInfo->ctx.limit = pJoinNode->node.pLimit ? ((SLimitNode*)pJoinNode->node.pLimit)->limit : INT64_MAX;
pInfo->ctx.limit = (pJoinNode->node.pLimit && ((SLimitNode*)pJoinNode->node.pLimit)->limit) ? ((SLimitNode*)pJoinNode->node.pLimit)->limit->datum.i : INT64_MAX;
setOperatorInfo(pOperator, "HashJoinOperator", QUERY_NODE_PHYSICAL_PLAN_HASH_JOIN, false, OP_NOT_OPENED, pInfo, pTaskInfo);


@ -3592,7 +3592,7 @@ int32_t mJoinInitWindowCtx(SMJoinOperatorInfo* pJoin, SSortMergeJoinPhysiNode* p
switch (pJoinNode->subType) {
case JOIN_STYPE_ASOF:
pCtx->asofOpType = pJoinNode->asofOpType;
pCtx->jLimit = pJoinNode->pJLimit ? ((SLimitNode*)pJoinNode->pJLimit)->limit : 1;
pCtx->jLimit = (pJoinNode->pJLimit && ((SLimitNode*)pJoinNode->pJLimit)->limit) ? ((SLimitNode*)pJoinNode->pJLimit)->limit->datum.i : 1;
pCtx->eqRowsAcq = ASOF_EQ_ROW_INCLUDED(pCtx->asofOpType);
pCtx->lowerRowsAcq = (JOIN_TYPE_RIGHT != pJoin->joinType) ? ASOF_LOWER_ROW_INCLUDED(pCtx->asofOpType) : ASOF_GREATER_ROW_INCLUDED(pCtx->asofOpType);
pCtx->greaterRowsAcq = (JOIN_TYPE_RIGHT != pJoin->joinType) ? ASOF_GREATER_ROW_INCLUDED(pCtx->asofOpType) : ASOF_LOWER_ROW_INCLUDED(pCtx->asofOpType);
@ -3609,7 +3609,7 @@ int32_t mJoinInitWindowCtx(SMJoinOperatorInfo* pJoin, SSortMergeJoinPhysiNode* p
SWindowOffsetNode* pOffsetNode = (SWindowOffsetNode*)pJoinNode->pWindowOffset;
SValueNode* pWinBegin = (SValueNode*)pOffsetNode->pStartOffset;
SValueNode* pWinEnd = (SValueNode*)pOffsetNode->pEndOffset;
pCtx->jLimit = pJoinNode->pJLimit ? ((SLimitNode*)pJoinNode->pJLimit)->limit : INT64_MAX;
pCtx->jLimit = (pJoinNode->pJLimit && ((SLimitNode*)pJoinNode->pJLimit)->limit) ? ((SLimitNode*)pJoinNode->pJLimit)->limit->datum.i : INT64_MAX;
pCtx->winBeginOffset = pWinBegin->datum.i;
pCtx->winEndOffset = pWinEnd->datum.i;
pCtx->eqRowsAcq = (pCtx->winBeginOffset <= 0 && pCtx->winEndOffset >= 0);
@ -3662,7 +3662,7 @@ int32_t mJoinInitMergeCtx(SMJoinOperatorInfo* pJoin, SSortMergeJoinPhysiNode* pJ
pCtx->hashCan = pJoin->probe->keyNum > 0;
if (JOIN_STYPE_ASOF == pJoinNode->subType || JOIN_STYPE_WIN == pJoinNode->subType) {
pCtx->jLimit = pJoinNode->pJLimit ? ((SLimitNode*)pJoinNode->pJLimit)->limit : 1;
pCtx->jLimit = (pJoinNode->pJLimit && ((SLimitNode*)pJoinNode->pJLimit)->limit) ? ((SLimitNode*)pJoinNode->pJLimit)->limit->datum.i : 1;
pJoin->subType = JOIN_STYPE_OUTER;
pJoin->build->eqRowLimit = pCtx->jLimit;
pJoin->grpResetFp = mLeftJoinGroupReset;


@ -986,7 +986,7 @@ static int32_t mJoinInitTableInfo(SMJoinOperatorInfo* pJoin, SSortMergeJoinPhysi
pTable->multiEqGrpRows = !((JOIN_STYPE_SEMI == pJoin->subType || JOIN_STYPE_ANTI == pJoin->subType) && NULL == pJoin->pFPreFilter);
pTable->multiRowsGrp = !((JOIN_STYPE_SEMI == pJoin->subType || JOIN_STYPE_ANTI == pJoin->subType) && NULL == pJoin->pPreFilter);
if (JOIN_STYPE_ASOF == pJoinNode->subType) {
pTable->eqRowLimit = pJoinNode->pJLimit ? ((SLimitNode*)pJoinNode->pJLimit)->limit : 1;
pTable->eqRowLimit = (pJoinNode->pJLimit && ((SLimitNode*)pJoinNode->pJLimit)->limit) ? ((SLimitNode*)pJoinNode->pJLimit)->limit->datum.i : 1;
}
} else {
pTable->multiEqGrpRows = true;
@ -1169,7 +1169,7 @@ static FORCE_INLINE SSDataBlock* mJoinRetrieveImpl(SMJoinOperatorInfo* pJoin, SM
static int32_t mJoinInitCtx(SMJoinOperatorInfo* pJoin, SSortMergeJoinPhysiNode* pJoinNode) {
pJoin->ctx.mergeCtx.groupJoin = pJoinNode->grpJoin;
pJoin->ctx.mergeCtx.limit = pJoinNode->node.pLimit ? ((SLimitNode*)pJoinNode->node.pLimit)->limit : INT64_MAX;
pJoin->ctx.mergeCtx.limit = (pJoinNode->node.pLimit && ((SLimitNode*)pJoinNode->node.pLimit)->limit) ? ((SLimitNode*)pJoinNode->node.pLimit)->limit->datum.i : INT64_MAX;
pJoin->retrieveFp = pJoinNode->grpJoin ? mJoinGrpRetrieveImpl : mJoinRetrieveImpl;
pJoin->outBlkId = pJoinNode->node.pOutputDataBlockDesc->dataBlockId;


@ -84,9 +84,11 @@ int32_t createSortOperatorInfo(SOperatorInfo* downstream, SSortPhysiNode* pSortN
calcSortOperMaxTupleLength(pInfo, pSortNode->pSortKeys);
pInfo->maxRows = -1;
if (pSortNode->node.pLimit) {
if (pSortNode->node.pLimit && ((SLimitNode*)pSortNode->node.pLimit)->limit) {
SLimitNode* pLimit = (SLimitNode*)pSortNode->node.pLimit;
if (pLimit->limit > 0) pInfo->maxRows = pLimit->limit + pLimit->offset;
if (pLimit->limit->datum.i > 0) {
pInfo->maxRows = pLimit->limit->datum.i + (pLimit->offset ? pLimit->offset->datum.i : 0);
}
}
pOperator->exprSupp.pCtx =


@ -63,7 +63,7 @@ static void doKeepPrevRows(STimeSliceOperatorInfo* pSliceInfo, const SSDataBlock
pkey->isNull = false;
char* val = colDataGetData(pColInfoData, rowIndex);
if (IS_VAR_DATA_TYPE(pkey->type)) {
memcpy(pkey->pData, val, varDataLen(val));
memcpy(pkey->pData, val, varDataTLen(val));
} else {
memcpy(pkey->pData, val, pkey->bytes);
}
@ -87,7 +87,7 @@ static void doKeepNextRows(STimeSliceOperatorInfo* pSliceInfo, const SSDataBlock
if (!IS_VAR_DATA_TYPE(pkey->type)) {
memcpy(pkey->pData, val, pkey->bytes);
} else {
memcpy(pkey->pData, val, varDataLen(val));
memcpy(pkey->pData, val, varDataTLen(val));
}
} else {
pkey->isNull = true;
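The switch from `varDataLen` to `varDataTLen` matters because a variable-length value is stored with a length prefix in front of the payload: copying from the head of the buffer but only `varDataLen(val)` bytes keeps the prefix and silently truncates the tail of the payload. A self-contained sketch of the layout (macro definitions approximate the TDengine ones):

```c
#include <stdint.h>
#include <string.h>

// Approximate TDengine var-data layout: [uint16_t len][len bytes payload].
#define VARSTR_HEADER_SIZE sizeof(uint16_t)
#define varDataLen(v)      (*(uint16_t*)(v))                    // payload only
#define varDataTLen(v)     (VARSTR_HEADER_SIZE + varDataLen(v)) // header + payload

// The fixed copy keeps the length prefix and the full payload:
static void keepVarData(char* dst, const char* src) {
  memcpy(dst, src, varDataTLen(src));
}
```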


@ -1417,15 +1417,15 @@ int32_t createIntervalOperatorInfo(SOperatorInfo* downstream, SIntervalPhysiNode
pInfo->interval = interval;
pInfo->twAggSup = as;
pInfo->binfo.mergeResultBlock = pPhyNode->window.mergeDataBlock;
if (pPhyNode->window.node.pLimit) {
if (pPhyNode->window.node.pLimit && ((SLimitNode*)pPhyNode->window.node.pLimit)->limit) {
SLimitNode* pLimit = (SLimitNode*)pPhyNode->window.node.pLimit;
pInfo->limited = true;
pInfo->limit = pLimit->limit + pLimit->offset;
pInfo->limit = pLimit->limit->datum.i + (pLimit->offset ? pLimit->offset->datum.i : 0);
}
if (pPhyNode->window.node.pSlimit) {
if (pPhyNode->window.node.pSlimit && ((SLimitNode*)pPhyNode->window.node.pSlimit)->limit) {
SLimitNode* pLimit = (SLimitNode*)pPhyNode->window.node.pSlimit;
pInfo->slimited = true;
pInfo->slimit = pLimit->limit + pLimit->offset;
pInfo->slimit = pLimit->limit->datum.i + (pLimit->offset ? pLimit->offset->datum.i : 0);
pInfo->curGroupId = UINT64_MAX;
}
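Both the sort and interval operators cap their output at `limit + offset` rows: the first `offset` rows are produced only to be skipped downstream, so they still count against the cap. An illustrative helper for the computation with the new nullable fields (the name is not from the source):

```c
// Row cap for an operator that must emit offset rows (later skipped)
// plus limit rows (returned). -1 means no cap applies.
static int64_t computeMaxRows(const SLimitNode* pLimit) {
  if (NULL == pLimit || NULL == pLimit->limit || pLimit->limit->datum.i <= 0) {
    return -1;
  }
  return pLimit->limit->datum.i + (pLimit->offset ? pLimit->offset->datum.i : 0);
}
```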


@ -864,7 +864,11 @@ SSortMergeJoinPhysiNode* createDummySortMergeJoinPhysiNode(SJoinTestParam* param
SLimitNode* limitNode = NULL;
code = nodesMakeNode(QUERY_NODE_LIMIT, (SNode**)&limitNode);
assert(limitNode);
limitNode->limit = param->jLimit;
code = nodesMakeNode(QUERY_NODE_VALUE, (SNode**)&limitNode->limit);
assert(limitNode->limit);
limitNode->limit->node.resType.type = TSDB_DATA_TYPE_BIGINT;
limitNode->limit->node.resType.bytes = tDataTypes[TSDB_DATA_TYPE_BIGINT].bytes;
limitNode->limit->datum.i = param->jLimit;
p->pJLimit = (SNode*)limitNode;
}
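This test builds the BIGINT value node by hand; the `nodesMakeValueNodeFromInt64` helper that this same diff introduces could express it more compactly (a sketch in the test's own context, assuming the helper is visible there):

```c
// Sketch: the same construction via the helper added in this commit.
SNode* pVal = NULL;
code = nodesMakeValueNodeFromInt64(param->jLimit, &pVal);
assert(TSDB_CODE_SUCCESS == code && pVal != NULL);
limitNode->limit = (SValueNode*)pVal;
p->pJLimit = (SNode*)limitNode;
```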


@ -1418,6 +1418,7 @@ SNode* qptMakeExprNode(SNode** ppNode) {
SNode* qptMakeLimitNode(SNode** ppNode) {
SNode* pNode = NULL;
int32_t code = 0;
if (QPT_NCORRECT_LOW_PROB()) {
return qptMakeRandNode(&pNode);
}
@ -1429,15 +1430,27 @@ SNode* qptMakeLimitNode(SNode** ppNode) {
if (!qptCtx.param.correctExpected) {
if (taosRand() % 2) {
pLimit->limit = taosRand() * ((taosRand() % 2) ? 1 : -1);
code = nodesMakeNode(QUERY_NODE_VALUE, (SNode**)&pLimit->limit);
assert(pLimit->limit);
pLimit->limit->node.resType.type = TSDB_DATA_TYPE_BIGINT;
pLimit->limit->node.resType.bytes = tDataTypes[TSDB_DATA_TYPE_BIGINT].bytes;
pLimit->limit->datum.i = taosRand() * ((taosRand() % 2) ? 1 : -1);
}
if (taosRand() % 2) {
pLimit->offset = taosRand() * ((taosRand() % 2) ? 1 : -1);
code = nodesMakeNode(QUERY_NODE_VALUE, (SNode**)&pLimit->offset);
assert(pLimit->offset);
pLimit->offset->node.resType.type = TSDB_DATA_TYPE_BIGINT;
pLimit->offset->node.resType.bytes = tDataTypes[TSDB_DATA_TYPE_BIGINT].bytes;
pLimit->offset->datum.i = taosRand() * ((taosRand() % 2) ? 1 : -1);
}
} else {
pLimit->limit = taosRand();
pLimit->limit->datum.i = taosRand();
if (taosRand() % 2) {
pLimit->offset = taosRand();
code = nodesMakeNode(QUERY_NODE_VALUE, (SNode**)&pLimit->offset);
assert(pLimit->offset);
pLimit->offset->node.resType.type = TSDB_DATA_TYPE_BIGINT;
pLimit->offset->node.resType.bytes = tDataTypes[TSDB_DATA_TYPE_BIGINT].bytes;
pLimit->offset->datum.i = taosRand();
}
}


@ -784,6 +784,7 @@ static bool funcNotSupportStringSma(SFunctionNode* pFunc) {
case FUNCTION_TYPE_SPREAD_PARTIAL:
case FUNCTION_TYPE_SPREAD_MERGE:
case FUNCTION_TYPE_TWA:
case FUNCTION_TYPE_ELAPSED:
pParam = nodesListGetNode(pFunc->pParameterList, 0);
if (pParam && nodesIsExprNode(pParam) && (IS_VAR_DATA_TYPE(((SExprNode*)pParam)->resType.type))) {
return true;


@ -52,7 +52,7 @@
if (NULL == (pSrc)->fldname) { \
break; \
} \
int32_t code = nodesCloneNode((pSrc)->fldname, &((pDst)->fldname)); \
int32_t code = nodesCloneNode((SNode*)(pSrc)->fldname, (SNode**)&((pDst)->fldname)); \
if (NULL == (pDst)->fldname) { \
return code; \
} \
@ -346,8 +346,8 @@ static int32_t orderByExprNodeCopy(const SOrderByExprNode* pSrc, SOrderByExprNod
}
static int32_t limitNodeCopy(const SLimitNode* pSrc, SLimitNode* pDst) {
COPY_SCALAR_FIELD(limit);
COPY_SCALAR_FIELD(offset);
CLONE_NODE_FIELD(limit);
CLONE_NODE_FIELD(offset);
return TSDB_CODE_SUCCESS;
}
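With pointer members, `limitNodeCopy` has to deep-clone instead of bit-copying. Judging from the `CLONE_NODE_FIELD` macro shown above, the `limit` case expands roughly to the guarded clone below (an approximation, not the literal expansion):

```c
// Approximate expansion of CLONE_NODE_FIELD(limit) inside limitNodeCopy:
if (NULL != pSrc->limit) {
  int32_t code = nodesCloneNode((SNode*)pSrc->limit, (SNode**)&pDst->limit);
  if (NULL == pDst->limit) {
    return code;
  }
}
```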


@ -4933,9 +4933,9 @@ static const char* jkLimitOffset = "Offset";
static int32_t limitNodeToJson(const void* pObj, SJson* pJson) {
const SLimitNode* pNode = (const SLimitNode*)pObj;
int32_t code = tjsonAddIntegerToObject(pJson, jkLimitLimit, pNode->limit);
if (TSDB_CODE_SUCCESS == code) {
code = tjsonAddIntegerToObject(pJson, jkLimitOffset, pNode->offset);
int32_t code = tjsonAddObject(pJson, jkLimitLimit, nodeToJson, pNode->limit);
if (TSDB_CODE_SUCCESS == code && pNode->offset) {
code = tjsonAddObject(pJson, jkLimitOffset, nodeToJson, pNode->offset);
}
return code;
@ -4944,9 +4944,9 @@ static int32_t limitNodeToJson(const void* pObj, SJson* pJson) {
static int32_t jsonToLimitNode(const SJson* pJson, void* pObj) {
SLimitNode* pNode = (SLimitNode*)pObj;
int32_t code = tjsonGetBigIntValue(pJson, jkLimitLimit, &pNode->limit);
int32_t code = jsonToNodeObject(pJson, jkLimitLimit, (SNode**)&pNode->limit);
if (TSDB_CODE_SUCCESS == code) {
code = tjsonGetBigIntValue(pJson, jkLimitOffset, &pNode->offset);
code = jsonToNodeObject(pJson, jkLimitOffset, (SNode**)&pNode->offset);
}
return code;


@ -1246,9 +1246,9 @@ enum { LIMIT_CODE_LIMIT = 1, LIMIT_CODE_OFFSET };
static int32_t limitNodeToMsg(const void* pObj, STlvEncoder* pEncoder) {
const SLimitNode* pNode = (const SLimitNode*)pObj;
int32_t code = tlvEncodeI64(pEncoder, LIMIT_CODE_LIMIT, pNode->limit);
if (TSDB_CODE_SUCCESS == code) {
code = tlvEncodeI64(pEncoder, LIMIT_CODE_OFFSET, pNode->offset);
int32_t code = tlvEncodeObj(pEncoder, LIMIT_CODE_LIMIT, nodeToMsg, pNode->limit);
if (TSDB_CODE_SUCCESS == code && pNode->offset) {
code = tlvEncodeObj(pEncoder, LIMIT_CODE_OFFSET, nodeToMsg, pNode->offset);
}
return code;
@ -1262,10 +1262,10 @@ static int32_t msgToLimitNode(STlvDecoder* pDecoder, void* pObj) {
tlvForEach(pDecoder, pTlv, code) {
switch (pTlv->type) {
case LIMIT_CODE_LIMIT:
code = tlvDecodeI64(pTlv, &pNode->limit);
code = msgToNodeFromTlv(pTlv, (void**)&pNode->limit);
break;
case LIMIT_CODE_OFFSET:
code = tlvDecodeI64(pTlv, &pNode->offset);
code = msgToNodeFromTlv(pTlv, (void**)&pNode->offset);
break;
default:
break;


@ -1106,8 +1106,12 @@ void nodesDestroyNode(SNode* pNode) {
case QUERY_NODE_ORDER_BY_EXPR:
nodesDestroyNode(((SOrderByExprNode*)pNode)->pExpr);
break;
case QUERY_NODE_LIMIT: // no pointer field
case QUERY_NODE_LIMIT: {
SLimitNode* pLimit = (SLimitNode*)pNode;
nodesDestroyNode((SNode*)pLimit->limit);
nodesDestroyNode((SNode*)pLimit->offset);
break;
}
case QUERY_NODE_STATE_WINDOW: {
SStateWindowNode* pState = (SStateWindowNode*)pNode;
nodesDestroyNode(pState->pCol);
@ -3097,6 +3101,25 @@ int32_t nodesMakeValueNodeFromInt32(int32_t value, SNode** ppNode) {
return code;
}
int32_t nodesMakeValueNodeFromInt64(int64_t value, SNode** ppNode) {
SValueNode* pValNode = NULL;
int32_t code = nodesMakeNode(QUERY_NODE_VALUE, (SNode**)&pValNode);
if (TSDB_CODE_SUCCESS == code) {
pValNode->node.resType.type = TSDB_DATA_TYPE_BIGINT;
pValNode->node.resType.bytes = tDataTypes[TSDB_DATA_TYPE_BIGINT].bytes;
code = nodesSetValueNodeValue(pValNode, &value);
if (TSDB_CODE_SUCCESS == code) {
pValNode->translate = true;
pValNode->isNull = false;
*ppNode = (SNode*)pValNode;
} else {
nodesDestroyNode((SNode*)pValNode);
}
}
return code;
}
bool nodesIsStar(SNode* pNode) {
return (QUERY_NODE_COLUMN == nodeType(pNode)) && ('\0' == ((SColumnNode*)pNode)->tableAlias[0]) &&
(0 == strcmp(((SColumnNode*)pNode)->colName, "*"));


@ -152,7 +152,7 @@ SNode* createTempTableNode(SAstCreateContext* pCxt, SNode* pSubquery, SToken
SNode* createJoinTableNode(SAstCreateContext* pCxt, EJoinType type, EJoinSubType stype, SNode* pLeft, SNode* pRight,
SNode* pJoinCond);
SNode* createViewNode(SAstCreateContext* pCxt, SToken* pDbName, SToken* pViewName);
SNode* createLimitNode(SAstCreateContext* pCxt, const SToken* pLimit, const SToken* pOffset);
SNode* createLimitNode(SAstCreateContext* pCxt, SNode* pLimit, SNode* pOffset);
SNode* createOrderByExprNode(SAstCreateContext* pCxt, SNode* pExpr, EOrder order, ENullOrder nullOrder);
SNode* createSessionWindowNode(SAstCreateContext* pCxt, SNode* pCol, SNode* pGap);
SNode* createStateWindowNode(SAstCreateContext* pCxt, SNode* pExpr);

source/libs/parser/inc/sql.y Normal file → Executable file

@ -1078,6 +1078,10 @@ signed_integer(A) ::= NK_MINUS(B) NK_INTEGER(C).
A = createValueNode(pCxt, TSDB_DATA_TYPE_BIGINT, &t);
}
unsigned_integer(A) ::= NK_INTEGER(B). { A = createValueNode(pCxt, TSDB_DATA_TYPE_BIGINT, &B); }
unsigned_integer(A) ::= NK_QUESTION(B). { A = releaseRawExprNode(pCxt, createRawExprNode(pCxt, &B, createPlaceholderValueNode(pCxt, &B))); }
signed_float(A) ::= NK_FLOAT(B). { A = createValueNode(pCxt, TSDB_DATA_TYPE_DOUBLE, &B); }
signed_float(A) ::= NK_PLUS NK_FLOAT(B). { A = createValueNode(pCxt, TSDB_DATA_TYPE_DOUBLE, &B); }
signed_float(A) ::= NK_MINUS(B) NK_FLOAT(C). {
@ -1098,6 +1102,7 @@ signed_literal(A) ::= NULL(B).
signed_literal(A) ::= literal_func(B). { A = releaseRawExprNode(pCxt, B); }
signed_literal(A) ::= NK_QUESTION(B). { A = createPlaceholderValueNode(pCxt, &B); }
%type literal_list { SNodeList* }
%destructor literal_list { nodesDestroyList($$); }
literal_list(A) ::= signed_literal(B). { A = createNodeList(pCxt, B); }
@ -1480,7 +1485,7 @@ window_offset_literal(A) ::= NK_MINUS(B) NK_VARIABLE(C).
}
jlimit_clause_opt(A) ::= . { A = NULL; }
jlimit_clause_opt(A) ::= JLIMIT NK_INTEGER(B). { A = createLimitNode(pCxt, &B, NULL); }
jlimit_clause_opt(A) ::= JLIMIT unsigned_integer(B). { A = createLimitNode(pCxt, B, NULL); }
/************************************************ query_specification *************************************************/
query_specification(A) ::=
@ -1660,14 +1665,14 @@ order_by_clause_opt(A) ::= .
order_by_clause_opt(A) ::= ORDER BY sort_specification_list(B). { A = B; }
slimit_clause_opt(A) ::= . { A = NULL; }
slimit_clause_opt(A) ::= SLIMIT NK_INTEGER(B). { A = createLimitNode(pCxt, &B, NULL); }
slimit_clause_opt(A) ::= SLIMIT NK_INTEGER(B) SOFFSET NK_INTEGER(C). { A = createLimitNode(pCxt, &B, &C); }
slimit_clause_opt(A) ::= SLIMIT NK_INTEGER(C) NK_COMMA NK_INTEGER(B). { A = createLimitNode(pCxt, &B, &C); }
slimit_clause_opt(A) ::= SLIMIT unsigned_integer(B). { A = createLimitNode(pCxt, B, NULL); }
slimit_clause_opt(A) ::= SLIMIT unsigned_integer(B) SOFFSET unsigned_integer(C). { A = createLimitNode(pCxt, B, C); }
slimit_clause_opt(A) ::= SLIMIT unsigned_integer(C) NK_COMMA unsigned_integer(B). { A = createLimitNode(pCxt, B, C); }
limit_clause_opt(A) ::= . { A = NULL; }
limit_clause_opt(A) ::= LIMIT NK_INTEGER(B). { A = createLimitNode(pCxt, &B, NULL); }
limit_clause_opt(A) ::= LIMIT NK_INTEGER(B) OFFSET NK_INTEGER(C). { A = createLimitNode(pCxt, &B, &C); }
limit_clause_opt(A) ::= LIMIT NK_INTEGER(C) NK_COMMA NK_INTEGER(B). { A = createLimitNode(pCxt, &B, &C); }
limit_clause_opt(A) ::= LIMIT unsigned_integer(B). { A = createLimitNode(pCxt, B, NULL); }
limit_clause_opt(A) ::= LIMIT unsigned_integer(B) OFFSET unsigned_integer(C). { A = createLimitNode(pCxt, B, C); }
limit_clause_opt(A) ::= LIMIT unsigned_integer(C) NK_COMMA unsigned_integer(B). { A = createLimitNode(pCxt, B, C); }
/************************************************ subquery ************************************************************/
subquery(A) ::= NK_LP(B) query_expression(C) NK_RP(D). { A = createRawExprNodeExt(pCxt, &B, &D, C); }
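Routing these clauses through `unsigned_integer` is what admits `NK_QUESTION`, the `?` placeholder, so a LIMIT value can be bound at execution time. A hedged client-side sketch using the public statement API (connection setup omitted; the exact bind call varies across client versions, so only prepare and close are shown):

```c
#include <string.h>
#include "taos.h"  // TDengine client header

// Hedged sketch: with this grammar change, LIMIT should accept a
// bindable placeholder in a prepared statement.
void prepareLimitQuery(TAOS* taos) {
  TAOS_STMT*  stmt = taos_stmt_init(taos);
  const char* sql  = "SELECT * FROM meters LIMIT ?";
  int code = taos_stmt_prepare(stmt, sql, (unsigned long)strlen(sql));
  (void)code;  // bind an int64 for the placeholder, execute, fetch...
  (void)taos_stmt_close(stmt);
}
```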


@ -1287,14 +1287,14 @@ _err:
return NULL;
}
SNode* createLimitNode(SAstCreateContext* pCxt, const SToken* pLimit, const SToken* pOffset) {
SNode* createLimitNode(SAstCreateContext* pCxt, SNode* pLimit, SNode* pOffset) {
CHECK_PARSER_STATUS(pCxt);
SLimitNode* limitNode = NULL;
pCxt->errCode = nodesMakeNode(QUERY_NODE_LIMIT, (SNode**)&limitNode);
CHECK_MAKE_NODE(limitNode);
limitNode->limit = taosStr2Int64(pLimit->z, NULL, 10);
limitNode->limit = (SValueNode*)pLimit;
if (NULL != pOffset) {
limitNode->offset = taosStr2Int64(pOffset->z, NULL, 10);
limitNode->offset = (SValueNode*)pOffset;
}
return (SNode*)limitNode;
_err:


@ -2751,6 +2751,9 @@ static int32_t parseInsertBody(SInsertParseContext* pCxt, SVnodeModifyOpStmt* pS
if (TSDB_CODE_SUCCESS == code && hasData) {
code = parseInsertTableClause(pCxt, pStmt, &token);
}
if (TSDB_CODE_PAR_TABLE_NOT_EXIST == code && pCxt->preCtbname) {
code = TSDB_CODE_TSC_STMT_TBNAME_ERROR;
}
}
if (TSDB_CODE_SUCCESS == code && !pCxt->missCache) {


@ -4729,16 +4729,20 @@ static int32_t translateJoinTable(STranslateContext* pCxt, SJoinTableNode* pJoin
return buildInvalidOperationMsg(&pCxt->msgBuf, "WINDOW_OFFSET required for WINDOW join");
}
if (TSDB_CODE_SUCCESS == code && NULL != pJoinTable->pJLimit) {
if (TSDB_CODE_SUCCESS == code && NULL != pJoinTable->pJLimit && NULL != ((SLimitNode*)pJoinTable->pJLimit)->limit) {
if (*pSType != JOIN_STYPE_ASOF && *pSType != JOIN_STYPE_WIN) {
return buildInvalidOperationMsgExt(&pCxt->msgBuf, "JLIMIT not supported for %s join",
getFullJoinTypeString(type, *pSType));
}
SLimitNode* pJLimit = (SLimitNode*)pJoinTable->pJLimit;
if (pJLimit->limit > JOIN_JLIMIT_MAX_VALUE || pJLimit->limit < 0) {
code = translateExpr(pCxt, (SNode**)&pJLimit->limit);
if (TSDB_CODE_SUCCESS != code) {
return code;
}
if (pJLimit->limit->datum.i > JOIN_JLIMIT_MAX_VALUE || pJLimit->limit->datum.i < 0) {
return buildInvalidOperationMsg(&pCxt->msgBuf, "JLIMIT value is out of valid range [0, 1024]");
}
if (0 == pJLimit->limit) {
if (0 == pJLimit->limit->datum.i) {
pCurrSmt->isEmptyResult = true;
}
}
@ -6994,16 +6998,32 @@ static int32_t translateFrom(STranslateContext* pCxt, SNode** pTable) {
}
static int32_t checkLimit(STranslateContext* pCxt, SSelectStmt* pSelect) {
if ((NULL != pSelect->pLimit && pSelect->pLimit->offset < 0) ||
(NULL != pSelect->pSlimit && pSelect->pSlimit->offset < 0)) {
int32_t code = 0;
if (pSelect->pLimit && pSelect->pLimit->limit) {
code = translateExpr(pCxt, (SNode**)&pSelect->pLimit->limit);
}
if (TSDB_CODE_SUCCESS == code && pSelect->pLimit && pSelect->pLimit->offset) {
code = translateExpr(pCxt, (SNode**)&pSelect->pLimit->offset);
}
if (TSDB_CODE_SUCCESS == code && pSelect->pSlimit && pSelect->pSlimit->limit) {
code = translateExpr(pCxt, (SNode**)&pSelect->pSlimit->limit);
}
if (TSDB_CODE_SUCCESS == code && pSelect->pSlimit && pSelect->pSlimit->offset) {
code = translateExpr(pCxt, (SNode**)&pSelect->pSlimit->offset);
}
if ((TSDB_CODE_SUCCESS == code) &&
((NULL != pSelect->pLimit && pSelect->pLimit->offset && pSelect->pLimit->offset->datum.i < 0) ||
(NULL != pSelect->pSlimit && pSelect->pSlimit->offset && pSelect->pSlimit->offset->datum.i < 0))) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_OFFSET_LESS_ZERO);
}
if (NULL != pSelect->pSlimit && (NULL == pSelect->pPartitionByList && NULL == pSelect->pGroupByList)) {
if ((TSDB_CODE_SUCCESS == code) && NULL != pSelect->pSlimit && (NULL == pSelect->pPartitionByList && NULL == pSelect->pGroupByList)) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_SLIMIT_LEAK_PARTITION_GROUP_BY);
}
return TSDB_CODE_SUCCESS;
return code;
}
static int32_t createPrimaryKeyColByTable(STranslateContext* pCxt, STableNode* pTable, SNode** pPrimaryKey) {
@ -7482,7 +7502,14 @@ static int32_t translateSetOperOrderBy(STranslateContext* pCxt, SSetOperator* pS
}
static int32_t checkSetOperLimit(STranslateContext* pCxt, SLimitNode* pLimit) {
if ((NULL != pLimit && pLimit->offset < 0)) {
int32_t code = 0;
if (pLimit && pLimit->limit) {
code = translateExpr(pCxt, (SNode**)&pLimit->limit);
}
if (TSDB_CODE_SUCCESS == code && pLimit && pLimit->offset) {
code = translateExpr(pCxt, (SNode**)&pLimit->offset);
}
if (TSDB_CODE_SUCCESS == code && (NULL != pLimit && NULL != pLimit->offset && pLimit->offset->datum.i < 0)) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_OFFSET_LESS_ZERO);
}
return TSDB_CODE_SUCCESS;
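`checkLimit` now calls `translateExpr` on each present `limit`/`offset` expression before reading `datum.i`, since the value may arrive untranslated (for instance as a placeholder) rather than as a ready integer. A hypothetical helper condensing the new order for one field (identifiers are those used in the diff):

```c
// Translate first, then validate the resolved constant.
static int32_t checkOptionalOffset(STranslateContext* pCxt, SLimitNode* pLimit) {
  int32_t code = TSDB_CODE_SUCCESS;
  if (pLimit && pLimit->offset) {
    code = translateExpr(pCxt, (SNode**)&pLimit->offset);
    if (TSDB_CODE_SUCCESS == code && pLimit->offset->datum.i < 0) {
      code = generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_OFFSET_LESS_ZERO);
    }
  }
  return code;
}
```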


@ -196,23 +196,16 @@ TEST_F(ParserInitialATest, alterDatabase) {
setAlterDbFsync(200);
setAlterDbWal(1);
setAlterDbCacheModel(TSDB_CACHE_MODEL_LAST_ROW);
#ifndef _STORAGE
setAlterDbSttTrigger(-1);
#else
setAlterDbSttTrigger(16);
#endif
setAlterDbBuffer(16);
setAlterDbPages(128);
setAlterDbReplica(3);
setAlterDbWalRetentionPeriod(10);
setAlterDbWalRetentionSize(20);
#ifndef _STORAGE
run("ALTER DATABASE test BUFFER 16 CACHEMODEL 'last_row' CACHESIZE 32 WAL_FSYNC_PERIOD 200 KEEP 10 PAGES 128 "
"REPLICA 3 WAL_LEVEL 1 WAL_RETENTION_PERIOD 10 WAL_RETENTION_SIZE 20");
#else
run("ALTER DATABASE test BUFFER 16 CACHEMODEL 'last_row' CACHESIZE 32 WAL_FSYNC_PERIOD 200 KEEP 10 PAGES 128 "
"REPLICA 3 WAL_LEVEL 1 STT_TRIGGER 16 WAL_RETENTION_PERIOD 10 WAL_RETENTION_SIZE 20");
#endif
"REPLICA 3 WAL_LEVEL 1 "
"STT_TRIGGER 16 "
"WAL_RETENTION_PERIOD 10 WAL_RETENTION_SIZE 20");
clearAlterDbReq();
initAlterDb("test");


@ -292,11 +292,7 @@ TEST_F(ParserInitialCTest, createDatabase) {
setDbWalRetentionSize(-1);
setDbWalRollPeriod(10);
setDbWalSegmentSize(20);
#ifndef _STORAGE
setDbSstTrigger(1);
#else
setDbSstTrigger(16);
#endif
setDbHashPrefix(3);
setDbHashSuffix(4);
setDbTsdbPageSize(32);
@ -354,7 +350,7 @@ TEST_F(ParserInitialCTest, createDatabase) {
"WAL_RETENTION_SIZE -1 "
"WAL_ROLL_PERIOD 10 "
"WAL_SEGMENT_SIZE 20 "
"STT_TRIGGER 16 "
"STT_TRIGGER 1 "
"TABLE_PREFIX 3 "
"TABLE_SUFFIX 4 "
"TSDB_PAGESIZE 32");


@ -3705,8 +3705,14 @@ static int32_t rewriteTailOptCreateLimit(SNode* pLimit, SNode* pOffset, SNode**
if (NULL == pLimitNode) {
return code;
}
pLimitNode->limit = NULL == pLimit ? -1 : ((SValueNode*)pLimit)->datum.i;
pLimitNode->offset = NULL == pOffset ? 0 : ((SValueNode*)pOffset)->datum.i;
code = nodesMakeValueNodeFromInt64(NULL == pLimit ? -1 : ((SValueNode*)pLimit)->datum.i, (SNode**)&pLimitNode->limit);
if (TSDB_CODE_SUCCESS != code) {
return code;
}
code = nodesMakeValueNodeFromInt64(NULL == pOffset ? 0 : ((SValueNode*)pOffset)->datum.i, (SNode**)&pLimitNode->offset);
if (TSDB_CODE_SUCCESS != code) {
return code;
}
*pOutput = (SNode*)pLimitNode;
return TSDB_CODE_SUCCESS;
}


@ -1823,9 +1823,9 @@ static int32_t createAggPhysiNode(SPhysiPlanContext* pCxt, SNodeList* pChildren,
if (NULL == pAgg) {
return terrno;
}
if (pAgg->node.pSlimit) {
if (pAgg->node.pSlimit && ((SLimitNode*)pAgg->node.pSlimit)->limit) {
pSubPlan->dynamicRowThreshold = true;
pSubPlan->rowsThreshold = ((SLimitNode*)pAgg->node.pSlimit)->limit;
pSubPlan->rowsThreshold = ((SLimitNode*)pAgg->node.pSlimit)->limit->datum.i;
}
pAgg->mergeDataBlock = (GROUP_ACTION_KEEP == pAggLogicNode->node.groupAction ? false : true);


@ -133,8 +133,12 @@ static int32_t splCreateExchangeNode(SSplitContext* pCxt, SLogicNode* pChild, SE
nodesDestroyNode((SNode*)pExchange);
return code;
}
((SLimitNode*)pChild->pLimit)->limit += ((SLimitNode*)pChild->pLimit)->offset;
((SLimitNode*)pChild->pLimit)->offset = 0;
if (((SLimitNode*)pChild->pLimit)->limit && ((SLimitNode*)pChild->pLimit)->offset) {
((SLimitNode*)pChild->pLimit)->limit->datum.i += ((SLimitNode*)pChild->pLimit)->offset->datum.i;
}
if (((SLimitNode*)pChild->pLimit)->offset) {
((SLimitNode*)pChild->pLimit)->offset->datum.i = 0;
}
}
*pOutput = pExchange;
@ -679,8 +683,12 @@ static int32_t stbSplCreateMergeNode(SSplitContext* pCxt, SLogicSubplan* pSubpla
if (TSDB_CODE_SUCCESS == code && NULL != pSplitNode->pLimit) {
pMerge->node.pLimit = NULL;
code = nodesCloneNode(pSplitNode->pLimit, &pMerge->node.pLimit);
((SLimitNode*)pSplitNode->pLimit)->limit += ((SLimitNode*)pSplitNode->pLimit)->offset;
((SLimitNode*)pSplitNode->pLimit)->offset = 0;
if (((SLimitNode*)pSplitNode->pLimit)->limit && ((SLimitNode*)pSplitNode->pLimit)->offset) {
((SLimitNode*)pSplitNode->pLimit)->limit->datum.i += ((SLimitNode*)pSplitNode->pLimit)->offset->datum.i;
}
if (((SLimitNode*)pSplitNode->pLimit)->offset) {
((SLimitNode*)pSplitNode->pLimit)->offset->datum.i = 0;
}
}
if (TSDB_CODE_SUCCESS == code) {
code = stbSplRewriteFromMergeNode(pMerge, pSplitNode);
@ -1427,8 +1435,12 @@ static int32_t stbSplGetSplitNodeForScan(SStableSplitInfo* pInfo, SLogicNode** p
if (NULL == (*pSplitNode)->pLimit) {
return code;
}
((SLimitNode*)pInfo->pSplitNode->pLimit)->limit += ((SLimitNode*)pInfo->pSplitNode->pLimit)->offset;
((SLimitNode*)pInfo->pSplitNode->pLimit)->offset = 0;
if (((SLimitNode*)pInfo->pSplitNode->pLimit)->limit && ((SLimitNode*)pInfo->pSplitNode->pLimit)->offset) {
((SLimitNode*)pInfo->pSplitNode->pLimit)->limit->datum.i += ((SLimitNode*)pInfo->pSplitNode->pLimit)->offset->datum.i;
}
if (((SLimitNode*)pInfo->pSplitNode->pLimit)->offset) {
((SLimitNode*)pInfo->pSplitNode->pLimit)->offset->datum.i = 0;
}
}
}
return TSDB_CODE_SUCCESS;
@ -1579,8 +1591,12 @@ static int32_t stbSplSplitMergeScanNode(SSplitContext* pCxt, SLogicSubplan* pSub
int32_t code = stbSplCreateMergeScanNode(pScan, &pMergeScan, &pMergeKeys);
if (TSDB_CODE_SUCCESS == code) {
if (NULL != pMergeScan->pLimit) {
((SLimitNode*)pMergeScan->pLimit)->limit += ((SLimitNode*)pMergeScan->pLimit)->offset;
((SLimitNode*)pMergeScan->pLimit)->offset = 0;
if (((SLimitNode*)pMergeScan->pLimit)->limit && ((SLimitNode*)pMergeScan->pLimit)->offset) {
((SLimitNode*)pMergeScan->pLimit)->limit->datum.i += ((SLimitNode*)pMergeScan->pLimit)->offset->datum.i;
}
if (((SLimitNode*)pMergeScan->pLimit)->offset) {
((SLimitNode*)pMergeScan->pLimit)->offset->datum.i = 0;
}
}
code = stbSplCreateMergeNode(pCxt, pSubplan, (SLogicNode*)pScan, pMergeKeys, pMergeScan, groupSort, true);
}


@ -592,8 +592,12 @@ int32_t cloneLimit(SLogicNode* pParent, SLogicNode* pChild, uint8_t cloneWhat, b
if (pParent->pLimit && (cloneWhat & CLONE_LIMIT)) {
code = nodesCloneNode(pParent->pLimit, (SNode**)&pLimit);
if (TSDB_CODE_SUCCESS == code) {
pLimit->limit += pLimit->offset;
pLimit->offset = 0;
if (pLimit->limit && pLimit->offset) {
pLimit->limit->datum.i += pLimit->offset->datum.i;
}
if (pLimit->offset) {
pLimit->offset->datum.i = 0;
}
cloned = true;
}
}
@ -601,8 +605,12 @@ int32_t cloneLimit(SLogicNode* pParent, SLogicNode* pChild, uint8_t cloneWhat, b
if (pParent->pSlimit && (cloneWhat & CLONE_SLIMIT)) {
code = nodesCloneNode(pParent->pSlimit, (SNode**)&pSlimit);
if (TSDB_CODE_SUCCESS == code) {
pSlimit->limit += pSlimit->offset;
pSlimit->offset = 0;
if (pSlimit->limit && pSlimit->offset) {
pSlimit->limit->datum.i += pSlimit->offset->datum.i;
}
if (pSlimit->offset) {
pSlimit->offset->datum.i = 0;
}
cloned = true;
}
}
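The same four-line fold recurs here and in the splitter hunks above: when a limit is cloned onto a child below a merge or exchange, the child must produce `limit + offset` rows and clear its own offset, while the parent keeps skipping `offset` rows. The repetition suggests a helper along these lines (hypothetical, not in the source):

```c
// Push-down fold: the child returns limit + offset rows; skipping the
// first offset rows remains the parent's job.
static void foldOffsetIntoLimit(SLimitNode* pLimit) {
  if (pLimit->limit && pLimit->offset) {
    pLimit->limit->datum.i += pLimit->offset->datum.i;
  }
  if (pLimit->offset) {
    pLimit->offset->datum.i = 0;
  }
}
```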


@ -67,24 +67,29 @@ static bool doValidateSchema(SSchema* pSchema, int32_t numOfCols, int32_t maxLen
for (int32_t i = 0; i < numOfCols; ++i) {
// 1. valid types
if (!isValidDataType(pSchema[i].type)) {
qError("The %d col/tag data type error, type:%d", i, pSchema[i].type);
return false;
}
// 2. valid length for each type
if (pSchema[i].type == TSDB_DATA_TYPE_BINARY || pSchema[i].type == TSDB_DATA_TYPE_VARBINARY) {
if (pSchema[i].bytes > TSDB_MAX_BINARY_LEN) {
qError("The %d col/tag var data len error, type:%d, len:%d", i, pSchema[i].type, pSchema[i].bytes);
return false;
}
} else if (pSchema[i].type == TSDB_DATA_TYPE_NCHAR) {
if (pSchema[i].bytes > TSDB_MAX_NCHAR_LEN) {
qError("The %d col/tag nchar data len error, len:%d", i, pSchema[i].bytes);
return false;
}
} else if (pSchema[i].type == TSDB_DATA_TYPE_GEOMETRY) {
if (pSchema[i].bytes > TSDB_MAX_GEOMETRY_LEN) {
qError("The %d col/tag geometry data len error, len:%d", i, pSchema[i].bytes);
return false;
}
} else {
if (pSchema[i].bytes != tDataTypes[pSchema[i].type].bytes) {
qError("The %d col/tag data len error, type:%d, len:%d", i, pSchema[i].type, pSchema[i].bytes);
return false;
}
}
@ -92,6 +97,7 @@ static bool doValidateSchema(SSchema* pSchema, int32_t numOfCols, int32_t maxLen
// 3. valid column names
for (int32_t j = i + 1; j < numOfCols; ++j) {
if (strncmp(pSchema[i].name, pSchema[j].name, sizeof(pSchema[i].name) - 1) == 0) {
qError("The %d col/tag name %s is same with %d col/tag name %s", i, pSchema[i].name, j, pSchema[j].name);
return false;
}
}
@ -104,23 +110,28 @@ static bool doValidateSchema(SSchema* pSchema, int32_t numOfCols, int32_t maxLen
bool tIsValidSchema(struct SSchema* pSchema, int32_t numOfCols, int32_t numOfTags) {
if (!pSchema || !VALIDNUMOFCOLS(numOfCols)) {
qError("invalid numOfCols: %d", numOfCols);
return false;
}
if (!VALIDNUMOFTAGS(numOfTags)) {
qError("invalid numOfTags: %d", numOfTags);
return false;
}
/* first column must be the timestamp, which is a primary key */
if (pSchema[0].type != TSDB_DATA_TYPE_TIMESTAMP) {
qError("invalid first column type: %d", pSchema[0].type);
return false;
}
if (!doValidateSchema(pSchema, numOfCols, TSDB_MAX_BYTES_PER_ROW)) {
qError("validate schema columns failed");
return false;
}
if (!doValidateSchema(&pSchema[numOfCols], numOfTags, TSDB_MAX_TAGS_LEN)) {
qError("validate schema tags failed");
return false;
}


@ -559,13 +559,13 @@ int32_t qwHandlePrePhaseEvents(QW_FPARAMS_DEF, int8_t phase, SQWPhaseInput *inpu
QW_ERR_JRET(ctx->pJobInfo->errCode);
}
if (atomic_load_8((int8_t *)&ctx->queryEnd) && !ctx->dynamicTask) {
QW_TASK_ELOG("query already end, phase:%d", phase);
QW_ERR_JRET(TSDB_CODE_QW_MSG_ERROR);
}
switch (phase) {
case QW_PHASE_PRE_QUERY: {
if (atomic_load_8((int8_t *)&ctx->queryEnd) && !ctx->dynamicTask) {
QW_TASK_ELOG("query already end, phase:%d", phase);
QW_ERR_JRET(TSDB_CODE_QW_MSG_ERROR);
}
if (QW_EVENT_PROCESSED(ctx, QW_EVENT_DROP)) {
QW_TASK_ELOG("task already dropped at phase %s", qwPhaseStr(phase));
QW_ERR_JRET(TSDB_CODE_QRY_TASK_STATUS_ERROR);
@ -592,6 +592,11 @@ int32_t qwHandlePrePhaseEvents(QW_FPARAMS_DEF, int8_t phase, SQWPhaseInput *inpu
break;
}
case QW_PHASE_PRE_FETCH: {
if (atomic_load_8((int8_t *)&ctx->queryEnd) && !ctx->dynamicTask) {
QW_TASK_ELOG("query already end, phase:%d", phase);
QW_ERR_JRET(TSDB_CODE_QW_MSG_ERROR);
}
if (QW_EVENT_PROCESSED(ctx, QW_EVENT_DROP) || QW_EVENT_RECEIVED(ctx, QW_EVENT_DROP)) {
QW_TASK_WLOG("task dropping or already dropped, phase:%s", qwPhaseStr(phase));
QW_ERR_JRET(ctx->rspCode);
@ -614,6 +619,12 @@ int32_t qwHandlePrePhaseEvents(QW_FPARAMS_DEF, int8_t phase, SQWPhaseInput *inpu
break;
}
case QW_PHASE_PRE_CQUERY: {
if (atomic_load_8((int8_t *)&ctx->queryEnd) && !ctx->dynamicTask) {
QW_TASK_ELOG("query already end, phase:%d", phase);
code = ctx->rspCode;
goto _return;
}
if (QW_EVENT_PROCESSED(ctx, QW_EVENT_DROP)) {
QW_TASK_WLOG("task already dropped, phase:%s", qwPhaseStr(phase));
QW_ERR_JRET(ctx->rspCode);


@ -3428,7 +3428,8 @@ _out:;
ths->pLogBuf->matchIndex, ths->pLogBuf->endIndex);
if (code == 0 && ths->state == TAOS_SYNC_STATE_ASSIGNED_LEADER) {
TAOS_CHECK_RETURN(syncNodeUpdateAssignedCommitIndex(ths, matchIndex));
int64_t index = syncNodeUpdateAssignedCommitIndex(ths, matchIndex);
sTrace("vgId:%d, update assigned commit index %" PRId64 "", ths->vgId, index);
if (ths->fsmState != SYNC_FSM_STATE_INCOMPLETE &&
syncLogBufferCommit(ths->pLogBuf, ths, ths->assignedCommitIndex) < 0) {


@ -732,7 +732,11 @@ int32_t syncFsmExecute(SSyncNode* pNode, SSyncFSM* pFsm, ESyncState role, SyncTe
pEntry->index, pEntry->term, TMSG_INFO(pEntry->originalRpcType), code, retry);
if (retry) {
taosMsleep(10);
sError("vgId:%d, retry on fsm commit since %s. index:%" PRId64, pNode->vgId, tstrerror(code), pEntry->index);
if (code == TSDB_CODE_OUT_OF_RPC_MEMORY_QUEUE) {
sError("vgId:%d, failed to execute fsm since %s. index:%" PRId64, pNode->vgId, terrstr(), pEntry->index);
} else {
sDebug("vgId:%d, retry on fsm commit since %s. index:%" PRId64, pNode->vgId, terrstr(), pEntry->index);
}
}
} while (retry);


@ -234,6 +234,13 @@ TEST(TcsTest, InterfaceTest) {
// TEST(TcsTest, DISABLED_InterfaceNonBlobTest) {
TEST(TcsTest, InterfaceNonBlobTest) {
#ifndef TD_ENTERPRISE
// NOTE: this test case core dumps with the community edition of taos,
// so we bypass it for the moment:
// code = tcsGetObjectBlock(object_name, 0, size, check, &pBlock);
// tcsGetObjectBlock succeeds but leaves pBlock as nullptr,
// which causes a nullptr-access core dump shortly after
#else
int code = 0;
bool check = false;
bool withcp = false;
@ -348,4 +355,5 @@ TEST(TcsTest, InterfaceNonBlobTest) {
GTEST_ASSERT_EQ(code, 0);
tcsUninit();
#endif
}

Some files were not shown because too many files have changed in this diff.