Merge branch 'main' into doc/TD-33327

This commit is contained in:
xiao-77 2025-02-05 11:28:19 +08:00
commit 521184b281
276 changed files with 9703 additions and 2174 deletions

.github/CODEOWNERS (vendored, new file, 26 lines)

@ -0,0 +1,26 @@
# reference
# https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-code-owners
# merge team
# @guanshengliang Shengliang Guan
# @zitsen Linhe Huo
# @wingwing2005 Ya Qiang Li
# @feici02 WANG Xu
# @hzcheng Hongze Cheng
# @dapan1121 Pan Wei
# @sheyanjie-qq She Yanjie
# @pigzhou ZacharyZhou
* @taosdata/merge
/.github/ @feici02
/cmake/ @guanshengliang
/contrib/ @guanshengliang
/deps/ @guanshengliang
/docs/ @guanshengliang @zitsen
/examples/ @guanshengliang @zitsen
/include/ @guanshengliang @hzcheng @dapan1121
/packaging/ @feici02
/source/ @guanshengliang @hzcheng @dapan1121
/tests/ @guanshengliang @zitsen
/tools/ @guanshengliang @zitsen
/utils/ @guanshengliang


@ -42,7 +42,7 @@ jobs:
cmake .. -DBUILD_TOOLS=true \
-DBUILD_KEEPER=true \
-DBUILD_HTTP=false \
-DBUILD_TEST=false \
-DBUILD_TEST=true \
-DBUILD_DEPENDENCY_TESTS=false
make -j 4
sudo make install


@ -368,8 +368,8 @@ def pre_test_build_win() {
'''
bat '''
cd %WIN_COMMUNITY_ROOT%/tests/ci
pip3 install taospy==2.7.16
pip3 install taos-ws-py==0.3.5
pip3 install taospy==2.7.21
pip3 install taos-ws-py==0.3.8
xcopy /e/y/i/f %WIN_INTERNAL_ROOT%\\debug\\build\\lib\\taos.dll C:\\Windows\\System32
'''
return 1


@ -1,6 +1,5 @@
<p>
<p align="center">
<a href="https://tdengine.com" target="_blank">
<a href="https://www.taosdata.com" target="_blank">
<img
src="docs/assets/tdengine.svg"
alt="TDengine"
@ -8,16 +7,39 @@
/>
</a>
</p>
<p>
[![Build Status](https://travis-ci.org/taosdata/TDengine.svg?branch=master)](https://travis-ci.org/taosdata/TDengine)
[![Build status](https://ci.appveyor.com/api/projects/status/kf3pwh2or5afsgl9/branch/master?svg=true)](https://ci.appveyor.com/project/sangshuduo/tdengine-2n8ge/branch/master)
[![Coverage Status](https://coveralls.io/repos/github/taosdata/TDengine/badge.svg?branch=3.0)](https://coveralls.io/github/taosdata/TDengine?branch=3.0)
[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/4201/badge)](https://bestpractices.coreinfrastructure.org/projects/4201)
Simplified Chinese | [English](README.md) | [TDengine Cloud](https://cloud.taosdata.com/?utm_medium=cn&utm_source=github) | Many positions are open; see [here](https://www.taosdata.com/careers/)
Simplified Chinese | [English](README.md) | [TDengine Cloud](https://cloud.taosdata.com/?utm_medium=cn&utm_source=github) | Many positions are open; see [here](https://www.taosdata.com/cn/careers/)
# Table of Contents
# Introduction to TDengine
1. [Introduction to TDengine](#1-tdengine-简介)
1. [Documentation](#2-文档)
1. [Prerequisites](#3-必备工具)
- [3.1 Linux](#31-linux系统)
- [3.2 macOS](#32-macos系统)
- [3.3 Windows](#33-windows系统)
- [3.4 Clone the repo](#34-克隆仓库)
1. [Building](#4-构建)
- [4.1 Build on Linux](#41-linux系统上构建)
- [4.2 Build on macOS](#42-macos系统上构建)
- [4.3 Build on Windows](#43-windows系统上构建)
1. [Packaging](#5-打包)
1. [Installation](#6-安装)
- [6.1 Install on Linux](#61-linux系统上安装)
- [6.2 Install on macOS](#62-macos系统上安装)
- [6.3 Install on Windows](#63-windows系统上安装)
1. [Running](#7-快速运行)
- [7.1 Run on Linux](#71-linux系统上运行)
- [7.2 Run on macOS](#72-macos系统上运行)
- [7.3 Run on Windows](#73-windows系统上运行)
1. [Testing](#8-测试)
1. [Releasing](#9-版本发布)
1. [Workflow](#10-工作流)
1. [Coverage](#11-覆盖率)
1. [Contributing](#12-成为社区贡献者)
# 1. Introduction
TDengine is an open-source, high-performance, cloud-native time-series database (TSDB). TDengine can be widely applied in IoT, Industrial Internet, Connected Vehicles, IT operations, finance, and other fields. Beyond the core time-series database functionality, TDengine also provides caching, data subscription, stream computing, and other capabilities, making it a minimalist time-series data processing platform that minimizes system design complexity and reduces R&D and operating costs. Compared with other time-series databases, TDengine's main advantages are as follows:
@ -33,323 +55,335 @@ TDengine 是一款开源、高性能、云原生的时序数据库 (Time-Series
- **Open source core**: TDengine's core code, including cluster functionality, is fully open source. As of August 1, 2022, there were more than 135.9k running instances worldwide, 18.7k GitHub stars, and 4.4k forks, with an active community.
# Documentation
For a full list of TDengine's advanced features, please [click here](https://tdengine.com/tdengine/). The easiest way to experience TDengine is through [TDengine Cloud](https://cloud.tdengine.com).
For the complete user manual, system architecture, and more details, please refer to the [TDengine documentation](https://docs.taosdata.com) or the [TDengine Documentation](https://docs.tdengine.com).
# 2. Documentation
# Building
For the complete user manual, system architecture, and more details, please refer to [TDengine](https://www.taosdata.com/) or the [official TDengine documentation](https://docs.taosdata.com).
Users can choose to install via a [container](https://docs.taosdata.com/get-started/docker/), an [installation package](https://docs.taosdata.com/get-started/package/), or [Kubernetes](https://docs.taosdata.com/deployment/k8s/), or use the fully managed [cloud service](https://cloud.taosdata.com/) without any installation or deployment. This quick guide is for developers who want to build, package, and test TDengine themselves.
To build or test TDengine connectors, please visit the following repositories: [JDBC Connector](https://github.com/taosdata/taos-connector-jdbc), [Go Connector](https://github.com/taosdata/driver-go), [Python Connector](https://github.com/taosdata/taos-connector-python), [Node.js Connector](https://github.com/taosdata/taos-connector-node), [C# Connector](https://github.com/taosdata/taos-connector-dotnet), [Rust Connector](https://github.com/taosdata/taos-connector-rust).
# 3. Prerequisites
TDengine can currently be installed and run on Linux, Windows, and macOS. Applications on any OS can also use the RESTful interface provided by taosAdapter to connect to the taosd server. TDengine supports X64/ARM64 CPUs, with support for MIPS64, Alpha64, ARM32, RISC-V, and other CPU architectures planned. Building with a cross-compiler is currently not supported.
Users can choose to install from source, via a [container](https://docs.taosdata.com/get-started/docker/), an [installation package](https://docs.taosdata.com/get-started/package/), or [Kubernetes](https://docs.taosdata.com/deployment/k8s/). This quick guide only applies to installing from source.
TDengine also provides a set of auxiliary tools, taosTools, which currently includes taosBenchmark (formerly named taosdemo) and taosdump. By default, a TDengine build does not include taosTools; you can use `cmake .. -DBUILD_TOOLS=true` to build taosTools together with TDengine.
## 3.1 Linux
To build TDengine, please use [CMake](https://cmake.org/) 3.13.0 or higher.
<details>
## Install build tools
<summary>Install required tools on Linux</summary>
### Ubuntu 18.04 and above & Debian
### Ubuntu 18.04, 20.04, 22.04
```bash
sudo apt-get install -y gcc cmake build-essential git libssl-dev libgflags2.2 libgflags-dev
sudo apt-get update
sudo apt-get install -y gcc cmake build-essential git libjansson-dev \
libsnappy-dev liblzma-dev zlib1g-dev pkg-config
```
#### Install build dependencies for taos-tools
To build [taos-tools](https://github.com/taosdata/taos-tools) on Ubuntu/Debian, the following packages need to be installed:
### CentOS 8
```bash
sudo apt install build-essential libjansson-dev libsnappy-dev liblzma-dev libz-dev zlib1g pkg-config
```
### CentOS 7.9
```bash
sudo yum install epel-release
sudo yum update
sudo yum install -y gcc gcc-c++ make cmake3 gflags git openssl-devel
sudo ln -sf /usr/bin/cmake3 /usr/bin/cmake
yum install -y epel-release gcc gcc-c++ make cmake git perl dnf-plugins-core
yum config-manager --set-enabled powertools
yum install -y zlib-static xz-devel snappy-devel jansson-devel pkgconfig libatomic-static libstdc++-static
```
### CentOS 8/Fedora/Rocky Linux
</details>
## 3.2 macOS
<details>
<summary>Install required tools on macOS</summary>
Please install the dependencies with [brew](https://brew.sh/).
```bash
sudo dnf install -y gcc gcc-c++ gflags make cmake epel-release git openssl-devel
```
#### Install build dependencies for taosTools on CentOS
#### CentOS 7.9
```
sudo yum install -y zlib-devel zlib-static xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libatomic-static libstdc++-static openssl-devel
```
#### CentOS 8/Fedora/Rocky Linux
```
sudo yum install -y epel-release
sudo yum install -y dnf-plugins-core
sudo yum config-manager --set-enabled powertools
sudo yum install -y zlib-devel zlib-static xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libatomic-static libstdc++-static openssl-devel
```
Note: Since snappy lacks pkg-config support (see [this link](https://github.com/google/snappy/pull/86)), cmake reports that libsnappy cannot be found, although snappy actually works fine.
If the powertools installation fails, you can try using instead:
```
sudo yum config-manager --set-enabled powertools
```
#### CentOS + devtoolset
In addition to the build dependencies above, run the following commands:
```
sudo yum install centos-release-scl
sudo yum install devtoolset-9 devtoolset-9-libatomic-devel
scl enable devtoolset-9 -- bash
```
### macOS
```
brew install argp-standalone gflags pkgconfig
```
### Setup golang environment
</details>
TDengine includes several components developed in Go, such as taosAdapter. Please refer to the official golang.org documentation to set up your Go development environment.
## 3.3 Windows
Please use version 1.20 or above. For users in China, we recommend using a proxy to speed up package downloads.
<details>
```
go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct
```
<summary>Install required tools on Windows</summary>
By default taosAdapter is not built, but you can use the following command to build taosAdapter as the service providing the RESTful interface:
Work in progress.
```
cmake .. -DBUILD_HTTP=false
```
</details>
### Setup rust environment
## 3.4 Clone the repo
TDengine includes several components developed in Rust. Please refer to the official rust-lang.org documentation to set up your Rust development environment.
## Get the source codes
First of all, clone the source code from GitHub:
Clone the TDengine repository to the target machine with the following command:
```bash
git clone https://github.com/taosdata/TDengine.git
cd TDengine
```
If downloading over the https protocol is slow, you can add the following two lines to ~/.gitconfig to use the ssh protocol instead. You need to upload your ssh public key to GitHub first; see the official GitHub documentation for details.
```
[url "git@github.com:"]
insteadOf = https://github.com/
```
## Special Note
# 4. Building
The [JDBC Connector](https://github.com/taosdata/taos-connector-jdbc), [Go Connector](https://github.com/taosdata/driver-go), [Python Connector](https://github.com/taosdata/taos-connector-python), [Node.js Connector](https://github.com/taosdata/taos-connector-node), [C# Connector](https://github.com/taosdata/taos-connector-dotnet), [Rust Connector](https://github.com/taosdata/taos-connector-rust), and [Grafana plugin](https://github.com/taosdata/grafanaplugin) have been moved to standalone repositories.
TDengine also provides a set of auxiliary tools, taosTools, which currently includes taosBenchmark (formerly named taosdemo) and taosdump. By default, a TDengine build does not include taosTools; you can use `cmake .. -DBUILD_TOOLS=true` to build taosTools together with TDengine.
To build TDengine, please use [CMake](https://cmake.org/) 3.13.0 or higher.
## Build TDengine
## 4.1 Build on Linux
### On Linux platform
<details>
You can run the `build.sh` script in the repository to build both TDengine and taosTools (including taosBenchmark and taosdump).
<summary>Detailed steps to build on Linux</summary>
You can run the bash script `build.sh` to build both TDengine and taosTools (including taosBenchmark and taosdump) as follows:
```bash
./build.sh
```
This script is equivalent to executing the following commands:
You can also build with the following commands:
```bash
mkdir debug
cd debug
mkdir debug && cd debug
cmake .. -DBUILD_TOOLS=true -DBUILD_CONTRIB=true
make
```
You can also choose to use jemalloc as the memory allocator instead of the default glibc:
You can use jemalloc as the memory allocator instead of glibc:
```bash
apt install autoconf
cmake .. -DJEMALLOC_ENABLED=true
```
On X86-64, X86, and arm64 platforms, the TDengine build script can automatically detect the machine architecture. You can also manually set the CPUTYPE option to specify the CPU type, such as aarch64.
aarch64
The TDengine build script can auto-detect the host architecture on x86, x86-64, and arm64 platforms.
You can also specify the architecture manually via the CPUTYPE option:
```bash
cmake .. -DCPUTYPE=aarch64 && cmake --build .
```
### On Windows platform
</details>
If you are using Visual Studio 2013:
## 4.2 Build on macOS
Open cmd.exe; when executing vcvarsall.bat, specify "x86_amd64" for a 64-bit OS and "x86" for a 32-bit OS.
<details>
```bash
<summary>Detailed steps to build on macOS</summary>
Please install the XCode command line tools and cmake. Verified with XCode 11.4+ on Catalina and Big Sur.
```shell
mkdir debug && cd debug
"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat" < x86_amd64 | x86 >
cmake .. && cmake --build .
```
</details>
## 4.3 Build on Windows
<details>
<summary>Detailed steps to build on Windows</summary>
If you are using Visual Studio 2013, run "cmd.exe" to open a command window and execute the following commands.
When executing vcvarsall.bat, specify "amd64" for 64-bit Windows and "x86" for 32-bit Windows.
```cmd
mkdir debug && cd debug
"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat" < amd64 | x86 >
cmake .. -G "NMake Makefiles"
nmake
```
If you are using Visual Studio 2019 or 2017:
If you use Visual Studio 2019 or 2017:
Open cmd.exe; when executing vcvarsall.bat, specify "x64" for a 64-bit OS and "x86" for a 32-bit OS.
Run "cmd.exe" to open a command window and execute the following commands.
When executing vcvarsall.bat, specify "x64" for 64-bit Windows and "x86" for 32-bit Windows.
```bash
```cmd
mkdir debug && cd debug
"c:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\Build\vcvarsall.bat" < x64 | x86 >
cmake .. -G "NMake Makefiles"
nmake
```
You can also find the "Visual Studio < 2019 | 2017 >" menu item in the Start menu, choose "x64 Native Tools Command Prompt for VS < 2019 | 2017 >" or "x86 Native Tools Command Prompt for VS < 2019 | 2017 >" depending on your system, open a command window, and execute:
Or open a command window from the Windows Start menu -> "Visual Studio < 2019 | 2017 >" folder -> "x64 Native Tools Command Prompt for VS < 2019 | 2017 >" or "x86 Native Tools Command Prompt for VS < 2019 | 2017 >" (depending on your Windows architecture), then execute the following commands:
```bash
```cmd
mkdir debug && cd debug
cmake .. -G "NMake Makefiles"
nmake
```
</details>
### On macOS platform
# 5. Packaging
Install the XCode command line tools and cmake. XCode 11.4+ is required on Catalina and Big Sur.
Due to some component dependencies, the TDengine community installer cannot be created from this repository alone. We are still working on this improvement.
# 6. Installation
## 6.1 Install on Linux
<details>
<summary>Detailed steps to install on Linux</summary>
After building successfully, TDengine can be installed with:
```bash
mkdir debug && cd debug
cmake .. && cmake --build .
sudo make install
```
Installing from source also configures service management for TDengine. Users can also install from the [TDengine installation package](https://docs.taosdata.com/get-started/package/).
# Installation
</details>
## On Linux platform
## 6.2 Install on macOS
After the build is done, install TDengine:
<details>
<summary>Detailed steps to install on macOS</summary>
After building successfully, TDengine can be installed with:
```bash
sudo make install
```
Users can find more information about the directories and files installed on the system in [Directory Structure](https://docs.taosdata.com/reference/directory/).
</details>
Installing from source also configures service management for TDengine. Users can also choose to [install from packages](https://docs.taosdata.com/get-started/package/).
## 6.3 Install on Windows
After a successful installation, start the TDengine service in a terminal:
<details>
```bash
sudo systemctl start taosd
```
<summary>Detailed steps to install on Windows</summary>
Users can use the TDengine CLI to connect to the TDengine service. In a terminal, enter:
```bash
taos
```
If the TDengine CLI connects to the server successfully, a welcome message and version information are printed. Otherwise, an error message is printed.
## On Windows platform
After the build is done, install TDengine:
After building successfully, TDengine can be installed with:
```cmd
nmake install
```
## On macOS platform
</details>
After the build is done, install TDengine:
# 7. Running
## 7.1 Run TDengine on Linux
<details>
<summary>Detailed steps to run on Linux</summary>
After installing TDengine on Linux, run the following command in a terminal to start the service:
```bash
sudo make install
sudo systemctl start taosd
```
Users can find more information about the directories and files installed on the system in [Directory Structure](https://docs.taosdata.com/reference/directory/).
Installing from source also configures service management for TDengine. Users can also choose to [install from packages](https://docs.taosdata.com/get-started/package/).
After a successful installation, you can double-click the TDengine icon in Applications to start the service, or start the TDengine service in a terminal:
```bash
sudo launchctl start com.tdengine.taosd
```
Users can use the TDengine CLI to connect to the TDengine service. In a terminal, enter:
Then users can connect to the TDengine service with the TDengine CLI using the following command:
```bash
taos
```
If the TDengine CLI connects to the server successfully, a welcome message and version information are printed. Otherwise, an error message is printed.
If the TDengine CLI connects to the server successfully, a welcome message and version information are printed. Otherwise, a connection error message is shown.
## Quick Run
If you don't want to run TDengine as a service, you can run it directly in a terminal. After the build completes, execute the following command (on Windows, the generated executable has an .exe suffix, e.g. taosd.exe):
If you don't want to run TDengine as a service, you can run it in the current terminal. For example, to quickly start a TDengine server after building, run the following command in a terminal (taking Linux as an example; on Windows the command is `taosd.exe`):
```bash
./build/bin/taosd -c test/cfg
```
In another terminal, use the TDengine CLI to connect to the server:
In another terminal, connect to the server with the TDengine CLI:
```bash
./build/bin/taos -c test/cfg
```
"-c test/cfg"指定系统配置文件所在目录。
选项 `-c test/cfg` 指定系统配置文件的目录。
# 体验 TDengine
</details>
在 TDengine 终端中,用户可以通过 SQL 命令来创建/删除数据库、表等,并进行插入查询操作。
## 7.2 macOS系统上运行
```sql
CREATE DATABASE demo;
USE demo;
CREATE TABLE t (ts TIMESTAMP, speed INT);
INSERT INTO t VALUES('2019-07-15 00:00:00', 10);
INSERT INTO t VALUES('2019-07-15 01:00:00', 20);
SELECT * FROM t;
ts | speed |
===================================
19-07-15 00:00:00.000| 10|
19-07-15 01:00:00.000| 20|
Query OK, 2 row(s) in set (0.001700s)
<details>
<summary>Detailed steps to run on macOS</summary>
After installation on macOS, start the service by double-clicking /applications/TDengine to start the program, or by executing the following command in a terminal:
```bash
sudo launchctl start com.tdengine.taosd
```
# Developing with TDengine
Then use the following command in a terminal to connect to the TDengine server with the TDengine CLI:
## Official Connectors
```bash
taos
```
TDengine provides a rich set of application development interfaces, including C/C++, Java, Python, Go, Node.js, C#, and RESTful, to help users develop applications quickly:
If the TDengine CLI connects to the server successfully, a welcome message and version information are printed. Otherwise, an error message is shown.
- [Java](https://docs.taosdata.com/reference/connector/java/)
- [C/C++](https://docs.taosdata.com/reference/connector/cpp/)
- [Python](https://docs.taosdata.com/reference/connector/python/)
- [Go](https://docs.taosdata.com/reference/connector/go/)
- [Node.js](https://docs.taosdata.com/reference/connector/node/)
- [Rust](https://docs.taosdata.com/reference/connector/rust/)
- [C#](https://docs.taosdata.com/reference/connector/csharp/)
- [RESTful API](https://docs.taosdata.com/reference/connector/rest-api/)
</details>
# Contributing
## 7.3 Run TDengine on Windows
<details>
<summary>Detailed steps to run on Windows</summary>
You can start the TDengine server on the Windows platform with the following command:
```cmd
.\build\bin\taosd.exe -c test\cfg
```
In another terminal, connect to the server with the TDengine CLI:
```cmd
.\build\bin\taos.exe -c test\cfg
```
The option `-c test/cfg` specifies the system configuration file directory.
</details>
# 8. Testing
For how to run different types of tests on TDengine, please refer to [Testing TDengine](./tests/README-CN.md).
# 9. Releasing
For the complete list of TDengine releases, please see [Releases](https://github.com/taosdata/TDengine/releases).
# 10. Workflow
The TDengine build check workflow can be found in this [GitHub Action](https://github.com/taosdata/TDengine/actions/workflows/taosd-ci-build.yml). More workflows are being created and will be available soon.
# 11. Coverage
The latest TDengine test coverage report can be found on [coveralls.io](https://coveralls.io/github/taosdata/TDengine).
<details>
<summary>How to run the coverage report locally?</summary>
To create the test coverage report locally (in HTML format), run the following commands:
```bash
cd tests
bash setup-lcov.sh -v 1.16 && ./run_local_coverage.sh -b main -c task
# on main branch and run cases in longtimeruning_cases.task
# for more information about options please refer to ./run_local_coverage.sh -h
```
> **Note:**
> Please note that the -b and -i options will recompile TDengine with the -DCOVER=true option, which may take some time.
</details>
# 12. Contributing
Click [here](https://www.taosdata.com/contributor) to learn how to become a TDengine contributor.
# Join the Technical Discussion Group
TDengine's official community, the "IoT Big Data Group", is open to everyone and you are welcome to join the discussion. Search for the WeChat ID "tdengine" and add Little T as a friend to join the group.

README.md (394 lines changed)

@ -10,10 +10,10 @@
[![GitHub Actions Workflow Status](https://img.shields.io/github/actions/workflow/status/taosdata/tdengine/taosd-ci-build.yml)](https://github.com/taosdata/TDengine/actions/workflows/taosd-ci-build.yml)
[![Coverage Status](https://coveralls.io/repos/github/taosdata/TDengine/badge.svg?branch=3.0)](https://coveralls.io/github/taosdata/TDengine?branch=3.0)
![GitHub commit activity](https://img.shields.io/github/commit-activity/m/taosdata/tdengine)
[![GitHub commit activity](https://img.shields.io/github/commit-activity/m/taosdata/tdengine)](https://github.com/feici02/TDengine/commits/main/)
<br />
![GitHub Release](https://img.shields.io/github/v/release/taosdata/tdengine)
![GitHub License](https://img.shields.io/github/license/taosdata/tdengine)
[![GitHub Release](https://img.shields.io/github/v/release/taosdata/tdengine)](https://github.com/taosdata/TDengine/releases)
[![GitHub License](https://img.shields.io/github/license/taosdata/tdengine)](https://github.com/taosdata/TDengine/blob/main/LICENSE)
[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/4201/badge)](https://bestpractices.coreinfrastructure.org/projects/4201)
<br />
[![Twitter Follow](https://img.shields.io/twitter/follow/tdenginedb?label=TDengine&style=social)](https://twitter.com/tdenginedb)
@ -26,24 +26,33 @@ English | [简体中文](README-CN.md) | [TDengine Cloud](https://cloud.tdengine
# Table of Contents
1. [What is TDengine?](#1-what-is-tdengine)
2. [Documentation](#2-documentation)
3. [Building](#3-building)
1. [Install build tools](#31-install-build-tools)
1. [Get the source codes](#32-get-the-source-codes)
1. [Special Note](#33-special-note)
1. [Build TDengine](#34-build-tdengine)
4. [Installing](#4-installing)
1. [On Linux platform](#41-on-linux-platform)
1. [On Windows platform](#42-on-windows-platform)
1. [On macOS platform](#43-on-macos-platform)
1. [Quick Run](#44-quick-run)
5. [Try TDengine](#5-try-tdengine)
6. [Developing with TDengine](#6-developing-with-tdengine)
7. [Contribute to TDengine](#7-contribute-to-tdengine)
8. [Join the TDengine Community](#8-join-the-tdengine-community)
1. [Introduction](#1-introduction)
1. [Documentation](#2-documentation)
1. [Prerequisites](#3-prerequisites)
- [3.1 Prerequisites On Linux](#31-on-linux)
- [3.2 Prerequisites On macOS](#32-on-macos)
- [3.3 Prerequisites On Windows](#33-on-windows)
- [3.4 Clone the repo](#34-clone-the-repo)
1. [Building](#4-building)
- [4.1 Build on Linux](#41-build-on-linux)
- [4.2 Build on macOS](#42-build-on-macos)
- [4.3 Build On Windows](#43-build-on-windows)
1. [Packaging](#5-packaging)
1. [Installation](#6-installation)
- [6.1 Install on Linux](#61-install-on-linux)
- [6.2 Install on macOS](#62-install-on-macos)
- [6.3 Install on Windows](#63-install-on-windows)
1. [Running](#7-running)
- [7.1 Run TDengine on Linux](#71-run-tdengine-on-linux)
- [7.2 Run TDengine on macOS](#72-run-tdengine-on-macos)
- [7.3 Run TDengine on Windows](#73-run-tdengine-on-windows)
1. [Testing](#8-testing)
1. [Releasing](#9-releasing)
1. [Workflow](#10-workflow)
1. [Coverage](#11-coverage)
1. [Contributing](#12-contributing)
# 1. What is TDengine
# 1. Introduction
TDengine is an open source, high-performance, cloud native [time-series database](https://tdengine.com/tsdb/) optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. TDengine differentiates itself from other time-series databases with the following advantages:
@ -65,132 +74,85 @@ For a full list of TDengine competitive advantages, please [check here](https://
For user manual, system design and architecture, please refer to [TDengine Documentation](https://docs.tdengine.com) ([TDengine 文档](https://docs.taosdata.com))
# 3. Building
You can choose to install TDengine via [container](https://docs.tdengine.com/get-started/deploy-in-docker/), [installation package](https://docs.tdengine.com/get-started/deploy-from-package/), [Kubernetes](https://docs.tdengine.com/operations-and-maintenance/deploy-your-cluster/#kubernetes-deployment) or try [fully managed service](https://cloud.tdengine.com/) without installation. This quick guide is for developers who want to contribute, build, release and test TDengine by themselves.
At the moment, TDengine server supports running on Linux/Windows/macOS systems. Any application can also choose the RESTful interface provided by taosAdapter to connect to the taosd service. TDengine supports X64/ARM64 CPU, and it will support MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures in the future. Right now we don't support building with a cross-compiling environment.
For contributing/building/testing TDengine Connectors, please check the following repositories: [JDBC Connector](https://github.com/taosdata/taos-connector-jdbc), [Go Connector](https://github.com/taosdata/driver-go), [Python Connector](https://github.com/taosdata/taos-connector-python), [Node.js Connector](https://github.com/taosdata/taos-connector-node), [C# Connector](https://github.com/taosdata/taos-connector-dotnet), [Rust Connector](https://github.com/taosdata/taos-connector-rust).
You can choose to install through source code, [container](https://docs.tdengine.com/get-started/docker/), [installation package](https://docs.tdengine.com/get-started/package/) or [Kubernetes](https://docs.tdengine.com/deployment/k8s/). This quick guide only applies to installing from source.
# 3. Prerequisites
TDengine provides a few useful tools such as taosBenchmark (formerly named taosdemo) and taosdump. They were part of TDengine. By default, TDengine compiling does not include taosTools. You can use `cmake .. -DBUILD_TOOLS=true` to compile them together with TDengine.
At the moment, TDengine server supports running on Linux/Windows/macOS systems. Any application can also choose the RESTful interface provided by taosAdapter to connect to the taosd service. TDengine supports X64/ARM64 CPU, and it will support MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures in the future. Right now we don't support building with a cross-compiling environment.
To build TDengine, use [CMake](https://cmake.org/) 3.13.0 or higher versions in the project directory.
## 3.1 On Linux
## 3.1 Install build tools
<details>
### Ubuntu 18.04 and above or Debian
<summary>Install required tools on Linux</summary>
### For Ubuntu 18.04, 20.04, 22.04
```bash
sudo apt-get install -y gcc cmake build-essential git libssl-dev libgflags2.2 libgflags-dev
sudo apt-get update
sudo apt-get install -y gcc cmake build-essential git libjansson-dev \
libsnappy-dev liblzma-dev zlib1g-dev pkg-config
```
#### Install build dependencies for taosTools
To build the [taosTools](https://github.com/taosdata/taos-tools) on Ubuntu/Debian, the following packages need to be installed.
### For CentOS 8
```bash
sudo apt install build-essential libjansson-dev libsnappy-dev liblzma-dev libz-dev zlib1g pkg-config
```
### CentOS 7.9
```bash
sudo yum install epel-release
sudo yum update
sudo yum install -y gcc gcc-c++ make cmake3 gflags git openssl-devel
sudo ln -sf /usr/bin/cmake3 /usr/bin/cmake
yum install -y epel-release gcc gcc-c++ make cmake git perl dnf-plugins-core
yum config-manager --set-enabled powertools
yum install -y zlib-static xz-devel snappy-devel jansson-devel pkgconfig libatomic-static libstdc++-static
```
### CentOS 8/Fedora/Rocky Linux
</details>
## 3.2 On macOS
<details>
<summary>Install required tools on macOS</summary>
Please install the dependencies with [brew](https://brew.sh/).
```bash
sudo dnf install -y gcc gcc-c++ make cmake epel-release gflags git openssl-devel
```
#### Install build dependencies for taosTools on CentOS
#### CentOS 7.9
```
sudo yum install -y zlib-devel zlib-static xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libatomic-static libstdc++-static openssl-devel
```
#### CentOS 8/Fedora/Rocky Linux
```
sudo yum install -y epel-release
sudo yum install -y dnf-plugins-core
sudo yum config-manager --set-enabled powertools
sudo yum install -y zlib-devel zlib-static xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libatomic-static libstdc++-static openssl-devel
```
Note: Since snappy lacks pkg-config support (refer to [link](https://github.com/google/snappy/pull/86)), it leads to a cmake prompt that libsnappy is not found. But snappy still works well.
If the PowerTools installation fails, you can try to use:
```
sudo yum config-manager --set-enabled powertools
```
#### For CentOS + devtoolset
Besides above dependencies, please run following commands:
```
sudo yum install centos-release-scl
sudo yum install devtoolset-9 devtoolset-9-libatomic-devel
scl enable devtoolset-9 -- bash
```
### macOS
```
brew install argp-standalone gflags pkgconfig
```
### Setup golang environment
</details>
TDengine includes a few components, such as taosAdapter, developed in Go. Please refer to the official golang.org documentation for golang environment setup.
## 3.3 On Windows
Please use version 1.20+. For users in China, we recommend using a proxy to accelerate package downloading.
<details>
```
go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct
```
<summary>Install required tools on Windows</summary>
By default taosAdapter is not built, but you can use the following command to build taosAdapter as the service for the RESTful interface.
Work in Progress.
```
cmake .. -DBUILD_HTTP=false
```
</details>
### Setup rust environment
## 3.4 Clone the repo
TDengine includes a few components developed in Rust. Please refer to the official rust-lang.org documentation for rust environment setup.
## 3.2 Get the source codes
First of all, you may clone the source codes from GitHub:
Clone the repository to the target machine:
```bash
git clone https://github.com/taosdata/TDengine.git
cd TDengine
```
You can modify the file ~/.gitconfig to use the ssh protocol instead of https for better download speed. You will need to upload your ssh public key to GitHub first. Please refer to the official GitHub documentation for details.
</details>
```
[url "git@github.com:"]
insteadOf = https://github.com/
```
# 4. Building
## 3.3 Special Note
TDengine provides a few useful tools such as taosBenchmark (formerly named taosdemo) and taosdump. They were part of TDengine. By default, TDengine compiling does not include taosTools. You can use `cmake .. -DBUILD_TOOLS=true` to compile them together with TDengine.
The [JDBC Connector](https://github.com/taosdata/taos-connector-jdbc), [Go Connector](https://github.com/taosdata/driver-go), [Python Connector](https://github.com/taosdata/taos-connector-python), [Node.js Connector](https://github.com/taosdata/taos-connector-node), [C# Connector](https://github.com/taosdata/taos-connector-dotnet), [Rust Connector](https://github.com/taosdata/taos-connector-rust) and [Grafana plugin](https://github.com/taosdata/grafanaplugin) have been moved to standalone repositories.
To build TDengine, use [CMake](https://cmake.org/) 3.13.0 or higher versions in the project directory.
## 3.4 Build TDengine
## 4.1 Build on Linux
### On Linux platform
<details>
<summary>Detailed steps to build on Linux</summary>
You can run the bash script `build.sh` to build both TDengine and taosTools including taosBenchmark and taosdump as below:
@ -201,29 +163,46 @@ You can run the bash script `build.sh` to build both TDengine and taosTools incl
It is equivalent to executing the following commands:
```bash
mkdir debug
cd debug
mkdir debug && cd debug
cmake .. -DBUILD_TOOLS=true -DBUILD_CONTRIB=true
make
```
You can use jemalloc as the memory allocator instead of glibc:
```
apt install autoconf
```bash
cmake .. -DJEMALLOC_ENABLED=true
```
The TDengine build script can detect the host machine's architecture on X86-64, X86, and arm64 platforms.
You can also specify the CPUTYPE option, such as aarch64, if the detection result is not correct:
aarch64:
The TDengine build script can auto-detect the host machine's architecture on x86, x86-64, and arm64 platforms.
You can also specify the architecture manually via the CPUTYPE option:
```bash
cmake .. -DCPUTYPE=aarch64 && cmake --build .
```
### On Windows platform
</details>
## 4.2 Build on macOS
<details>
<summary>Detailed steps to build on macOS</summary>
Please install XCode command line tools and cmake. Verified with XCode 11.4+ on Catalina and Big Sur.
```shell
mkdir debug && cd debug
cmake .. && cmake --build .
```
</details>
## 4.3 Build on Windows
<details>
<summary>Detailed steps to build on Windows</summary>
If you use Visual Studio 2013, please open a command window by executing "cmd.exe".
Please specify "amd64" for 64-bit Windows or "x86" for 32-bit Windows when you execute vcvarsall.bat.
@ -254,31 +233,67 @@ mkdir debug && cd debug
cmake .. -G "NMake Makefiles"
nmake
```
</details>
### On macOS platform
# 5. Packaging
Please install XCode command line tools and cmake. Verified with XCode 11.4+ on Catalina and Big Sur.
The TDengine community installer can NOT be created by this repository only, due to some component dependencies. We are still working on this improvement.
```shell
mkdir debug && cd debug
cmake .. && cmake --build .
```
# 6. Installation
# 4. Installing
## 6.1 Install on Linux
## 4.1 On Linux platform
<details>
After building successfully, TDengine can be installed by
<summary>Detailed steps to install on Linux</summary>
After building successfully, TDengine can be installed by:
```bash
sudo make install
```
Users can find more information about directories installed on the system in the [directory and files](https://docs.tdengine.com/reference/directory/) section.
Installing from source code will also configure service management for TDengine. Users can also choose to [install from packages](https://docs.tdengine.com/get-started/deploy-from-package/) for it.
Installing from source code will also configure service management for TDengine. Users can also choose to [install from packages](https://docs.tdengine.com/get-started/package/) for it.
</details>
To start the service after installation, in a terminal, use:
## 6.2 Install on macOS
<details>
<summary>Detailed steps to install on macOS</summary>
After building successfully, TDengine can be installed by:
```bash
sudo make install
```
</details>
## 6.3 Install on Windows
<details>
<summary>Detailed steps to install on Windows</summary>
After building successfully, TDengine can be installed by:
```cmd
nmake install
```
</details>
# 7. Running
## 7.1 Run TDengine on Linux
<details>
<summary>Detailed steps to run on Linux</summary>
To start the service after installation on Linux, in a terminal, use:
```bash
sudo systemctl start taosd
@ -292,27 +307,29 @@ taos
If TDengine CLI connects the server successfully, welcome messages and version info are printed. Otherwise, an error message is shown.
## 4.2 On Windows platform
After building successfully, TDengine can be installed by:
```cmd
nmake install
```
## 4.3 On macOS platform
After building successfully, TDengine can be installed by:
If you don't want to run TDengine as a service, you can run it in the current shell. For example, to quickly start a TDengine server after building, run the command below in a terminal (we take Linux as an example; the command on Windows is `taosd.exe`):
```bash
sudo make install
./build/bin/taosd -c test/cfg
```
Users can find more information about directories installed on the system in the [directory and files](https://docs.tdengine.com/reference/directory/) section.
In another terminal, use the TDengine CLI to connect the server:
Installing from source code will also configure service management for TDengine. Users can also choose to [install from packages](https://docs.tdengine.com/get-started/package/) for it.
```bash
./build/bin/taos -c test/cfg
```
To start the service after installation, double-click /applications/TDengine to start the program, or, in a terminal, use:
Option `-c test/cfg` specifies the system configuration file directory.
</details>
## 7.2 Run TDengine on macOS
<details>
<summary>Detailed steps to run on macOS</summary>
To start the service after installation on macOS, double-click /applications/TDengine to start the program, or, in a terminal, use:
```bash
sudo launchctl start com.tdengine.taosd
@ -326,64 +343,63 @@ taos
If TDengine CLI connects the server successfully, welcome messages and version info are printed. Otherwise, an error message is shown.
## 4.4 Quick Run
</details>
If you don't want to run TDengine as a service, you can run it in the current shell. For example, to quickly start a TDengine server after building, run the command below in a terminal (we take Linux as an example; the command on Windows is `taosd.exe`)
```bash
./build/bin/taosd -c test/cfg
## 7.3 Run TDengine on Windows
<details>
<summary>Detailed steps to run on Windows</summary>
You can start the TDengine server on the Windows platform with the commands below:
```cmd
.\build\bin\taosd.exe -c test\cfg
```
In another terminal, use the TDengine CLI to connect the server:
```bash
./build/bin/taos -c test/cfg
```cmd
.\build\bin\taos.exe -c test\cfg
```
option "-c test/cfg" specifies the system configuration file directory.
# 5. Try TDengine
</details>
It is easy to run SQL commands from TDengine CLI which is the same as other SQL databases.
# 8. Testing
```sql
CREATE DATABASE demo;
USE demo;
CREATE TABLE t (ts TIMESTAMP, speed INT);
INSERT INTO t VALUES('2019-07-15 00:00:00', 10);
INSERT INTO t VALUES('2019-07-15 01:00:00', 20);
SELECT * FROM t;
ts | speed |
===================================
19-07-15 00:00:00.000| 10|
19-07-15 01:00:00.000| 20|
Query OK, 2 row(s) in set (0.001700s)
For how to run different types of tests on TDengine, please see [Testing TDengine](./tests/README.md).
# 9. Releasing
For the complete list of TDengine Releases, please see [Releases](https://github.com/taosdata/TDengine/releases).
# 10. Workflow
TDengine build check workflow can be found in this [Github Action](https://github.com/taosdata/TDengine/actions/workflows/taosd-ci-build.yml). More workflows will be available soon.
# 11. Coverage
Latest TDengine test coverage report can be found on [coveralls.io](https://coveralls.io/github/taosdata/TDengine)
<details>
<summary>How to run the coverage report locally?</summary>
To create the test coverage report (in HTML format) locally, please run following commands:
```bash
cd tests
bash setup-lcov.sh -v 1.16 && ./run_local_coverage.sh -b main -c task
# on main branch and run cases in longtimeruning_cases.task
# for more information about options please refer to ./run_local_coverage.sh -h
```
> **NOTE:**
> Please note that the -b and -i options will recompile TDengine with the -DCOVER=true option, which may take some time.
# 6. Developing with TDengine
</details>
## Official Connectors
# 12. Contributing
TDengine provides abundant development tools for users to develop on TDengine. Follow the links below to find your desired connectors and relevant documentation.
- [Java](https://docs.tdengine.com/reference/connectors/java/)
- [C/C++](https://docs.tdengine.com/reference/connectors/cpp/)
- [Python](https://docs.tdengine.com/reference/connectors/python/)
- [Go](https://docs.tdengine.com/reference/connectors/go/)
- [Node.js](https://docs.tdengine.com/reference/connectors/node/)
- [Rust](https://docs.tdengine.com/reference/connectors/rust/)
- [C#](https://docs.tdengine.com/reference/connectors/csharp/)
- [RESTful API](https://docs.tdengine.com/reference/connectors/rest-api/)
# 7. Contribute to TDengine
Please follow the [contribution guidelines](CONTRIBUTING.md) to contribute to the project.
# 8. Join the TDengine Community
For more information about TDengine, you can follow us on social media and join our Discord server:
- [Discord](https://discord.com/invite/VZdSuUg4pS)
- [Twitter](https://twitter.com/TDengineDB)
- [LinkedIn](https://www.linkedin.com/company/tdengine/)
- [YouTube](https://www.youtube.com/@tdengine)
Please follow the [contribution guidelines](CONTRIBUTING.md) to contribute to TDengine.


@ -166,6 +166,10 @@ IF(${BUILD_WITH_ANALYSIS})
set(BUILD_WITH_S3 ON)
ENDIF()
IF(${TD_LINUX})
set(BUILD_WITH_ANALYSIS ON)
ENDIF()
IF(${BUILD_S3})
IF(${BUILD_WITH_S3})
@ -205,13 +209,6 @@ option(
off
)
option(
BUILD_WITH_NURAFT
"If build with NuRaft"
OFF
)
option(
BUILD_WITH_UV
"If build with libuv"


@ -2,7 +2,7 @@
IF (DEFINED VERNUMBER)
SET(TD_VER_NUMBER ${VERNUMBER})
ELSE ()
SET(TD_VER_NUMBER "3.3.5.0.alpha")
SET(TD_VER_NUMBER "3.3.5.2.alpha")
ENDIF ()
IF (DEFINED VERCOMPATIBLE)


@ -17,7 +17,6 @@ elseif(${BUILD_WITH_COS})
file(MAKE_DIRECTORY $ENV{HOME}/.cos-local.1/)
cat("${TD_SUPPORT_DIR}/mxml_CMakeLists.txt.in" ${CONTRIB_TMP_FILE3})
cat("${TD_SUPPORT_DIR}/apr_CMakeLists.txt.in" ${CONTRIB_TMP_FILE3})
cat("${TD_SUPPORT_DIR}/curl_CMakeLists.txt.in" ${CONTRIB_TMP_FILE3})
endif(${BUILD_WITH_COS})
configure_file(${CONTRIB_TMP_FILE3} "${TD_CONTRIB_DIR}/deps-download/CMakeLists.txt")
@ -148,9 +147,7 @@ endif(${BUILD_WITH_SQLITE})
# s3
if(${BUILD_WITH_S3})
cat("${TD_SUPPORT_DIR}/ssl_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
cat("${TD_SUPPORT_DIR}/xml2_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
cat("${TD_SUPPORT_DIR}/curl_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
cat("${TD_SUPPORT_DIR}/libs3_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
cat("${TD_SUPPORT_DIR}/azure_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
add_definitions(-DUSE_S3)
@ -183,6 +180,11 @@ if(${BUILD_GEOS})
cat("${TD_SUPPORT_DIR}/geos_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
endif()
if (${BUILD_WITH_ANALYSIS})
cat("${TD_SUPPORT_DIR}/ssl_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
cat("${TD_SUPPORT_DIR}/curl_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
endif()
#
if(${BUILD_PCRE2})
cat("${TD_SUPPORT_DIR}/pcre2_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})


@ -20,14 +20,6 @@ if(${BUILD_WITH_SQLITE})
add_subdirectory(sqlite)
endif(${BUILD_WITH_SQLITE})
if(${BUILD_WITH_CRAFT})
add_subdirectory(craft)
endif(${BUILD_WITH_CRAFT})
if(${BUILD_WITH_TRAFT})
# add_subdirectory(traft)
endif(${BUILD_WITH_TRAFT})
if(${BUILD_S3})
add_subdirectory(azure)
endif()


@ -109,7 +109,7 @@ If you are using Maven to manage your project, simply add the following dependen
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.1</version>
<version>3.5.2</version>
</dependency>
```
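As a quick sanity check after adding the dependency, a minimal connection test might look like the sketch below. This is illustrative only: the host, port, and root/taosdata credentials are placeholder assumptions for a locally running taosAdapter, not part of the original document.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ConnectCheck {
    public static void main(String[] args) throws Exception {
        // REST connection through taosAdapter; adjust host/port/credentials
        // to your deployment (placeholders, not from the original document).
        String url = "jdbc:TAOS-RS://localhost:6041/?user=root&password=taosdata";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT SERVER_VERSION()")) {
            while (rs.next()) {
                System.out.println("server version: " + rs.getString(1));
            }
        }
    }
}
```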


@ -15,6 +15,19 @@ When inserting data using parameter binding, it can avoid the resource consumpti
**Tips: It is recommended to use parameter binding for data insertion**
:::note
We only recommend using the following two forms of SQL for parameter binding data insertion:
```sql
a. Subtables already exist:
1. INSERT INTO meters (tbname, ts, current, voltage, phase) VALUES(?, ?, ?, ?, ?)
b. Automatic table creation on insert:
1. INSERT INTO meters (tbname, ts, current, voltage, phase, location, group_id) VALUES(?, ?, ?, ?, ?, ?, ?)
2. INSERT INTO ? USING meters TAGS (?, ?) VALUES (?, ?, ?, ?)
```
:::
Next, we continue to use smart meters as an example to demonstrate the efficient writing functionality of parameter binding with various language connectors:
1. Prepare a parameterized SQL insert statement for inserting data into the supertable `meters`. This statement allows dynamically specifying subtable names, tags, and column values (see the sketch below).
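To make form (a) concrete, here is a minimal, hypothetical JDBC sketch. It assumes the standard smart-meters schema in a `power` database, an existing subtable `d_bind_1`, and placeholder connection details; depending on the connection type, a connector may also offer more specialized binding APIs.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Timestamp;

public class BindInsertSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:TAOS-RS://localhost:6041/power?user=root&password=taosdata";
        // Form (a): the subtable named via the tbname column must already exist.
        String sql = "INSERT INTO meters (tbname, ts, current, voltage, phase) VALUES (?, ?, ?, ?, ?)";
        try (Connection conn = DriverManager.getConnection(url);
             PreparedStatement ps = conn.prepareStatement(sql)) {
            long now = System.currentTimeMillis();
            for (int i = 0; i < 10; i++) {
                ps.setString(1, "d_bind_1");                // subtable name
                ps.setTimestamp(2, new Timestamp(now + i)); // ts
                ps.setFloat(3, 10.5f + i);                  // current
                ps.setInt(4, 219);                          // voltage
                ps.setFloat(5, 0.31f);                      // phase
                ps.addBatch();
            }
            ps.executeBatch(); // send the whole batch in one round trip
        }
    }
}
```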


@ -26,8 +26,9 @@ Flink Connector supports all platforms that can run Flink 1.19 and above version
| Flink Connector Version | Major Changes | TDengine Version|
|-------------------------| ------------------------------------ | ---------------- |
| 2.0.0 | 1.Support SQL queries on data in TDengine database <br/> 2. Support CDC subscription to data in TDengine database<br/> 3. Supports reading and writing to TDengine database using Table SQL | 3.3.5.0 and above versions|
| 1.0.0 | Support Sink function to write data from other sources to TDengine in the future| 3.3.2.0 and above versions|
| 2.0.1 | Sink supports writing types from RowData implementations. | - |
| 2.0.0 | 1. Support SQL queries on data in TDengine database. <br/> 2. Support CDC subscription to data in TDengine database. <br/> 3. Supports reading and writing to TDengine database using Table SQL. | 3.3.5.1 and higher |
| 1.0.0 | Support Sink function to write data from other sources to TDengine in the future. | 3.3.2.0 and higher |
## Exception and error codes
@ -114,7 +115,7 @@ If using Maven to manage a project, simply add the following dependencies in pom
<dependency>
<groupId>com.taosdata.flink</groupId>
<artifactId>flink-connector-tdengine</artifactId>
<version>2.0.0</version>
<version>2.0.1</version>
</dependency>
```


@ -13,9 +13,9 @@ Through the Python connector of TDengine, Superset can support TDengine data sou
Prepare the following environment:
- TDengine is installed and running normally (both Enterprise and Community versions are available)
- taosAdapter is running normally, refer to [taosAdapter](../../../reference/components/taosAdapter)
- taosAdapter is running normally, refer to [taosAdapter](../../../tdengine-reference/components/taosadapter/)
- Apache Superset version 2.1.0 or above is already installed, refer to [Apache Superset](https://superset.apache.org/)
## Install TDengine Python Connector
The Python connector of TDengine comes with a connection driver that supports Superset in versions 2.1.18 and later, which will be automatically installed in the Superset directory and provide data source services.


@ -43,7 +43,7 @@ After modifying configuration file parameters, you need to restart the *taosd* s
|resolveFQDNRetryTime | Cancelled after 3.x |Not supported |Number of retries when FQDN resolution fails|
|timeToGetAvailableConn | Cancelled after 3.3.4.x |Maximum waiting time to get an available connection, range 10-50000000, in milliseconds, default value 500000|
|maxShellConns | Cancelled after 3.x |Supported, effective after restart|Maximum number of connections allowed|
|maxRetryWaitTime | |Supported, effective after restart|Maximum timeout for reconnection, default value is 10s|
|maxRetryWaitTime | |Supported, effective after restart|Maximum timeout for reconnection, calculated from the start of the retry, range 0-86400000, in milliseconds, default value 10000|
|shareConnLimit |Added in 3.3.4.0 |Supported, effective after restart|Number of requests a connection can share, range 1-512, default value 10|
|readTimeout |Added in 3.3.4.0 |Supported, effective after restart|Minimum timeout for a single request, range 64-604800, in seconds, default value 900|
@ -77,12 +77,7 @@ After modifying configuration file parameters, you need to restart the *taosd* s
|minReservedMemorySize | |Not supported |The minimum reserved system available memory size, all memory except reserved can be used for queries, unit: MB, default reserved size is 20% of system physical memory, value range 1024-1000000000|
|singleQueryMaxMemorySize| |Not supported |The memory limit that a single query can use on a single node (dnode), exceeding this limit will return an error, unit: MB, default value: 0 (no limit), value range 0-1000000000|
|filterScalarMode | |Not supported |Force scalar filter mode, 0: off; 1: on, default value 0|
|queryPlannerTrace | |Supported, effective immediately |Internal parameter, whether the query plan outputs detailed logs|
|queryNodeChunkSize | |Supported, effective immediately |Internal parameter, chunk size of the query plan|
|queryUseNodeAllocator | |Supported, effective immediately |Internal parameter, allocation method of the query plan|
|queryMaxConcurrentTables| |Not supported |Internal parameter, concurrency number of the query plan|
|queryRsmaTolerance | |Not supported |Internal parameter, tolerance time for determining which level of rsma data to query, in milliseconds|
|enableQueryHb | |Supported, effective immediately |Internal parameter, whether to send query heartbeat messages|
|pqSortMemThreshold | |Not supported |Internal parameter, memory threshold for sorting|
### Region Related


@ -268,7 +268,22 @@ An exporter used by Prometheus that exposes hardware and operating system metric
### Getting the VGroup ID of a table
You can access the HTTP interface `http://<fqdn>:6041/rest/vgid?db=<db>&table=<table>` to get the VGroup ID of a table.
You can send a POST request to the HTTP interface `http://<fqdn>:<port>/rest/sql/<db>/vgid` to get the VGroup ID of a table.
The body should be a JSON array of multiple table names.
Example: Get the VGroup ID for the database power and tables d_bind_1 and d_bind_2.
```shell
curl --location 'http://127.0.0.1:6041/rest/sql/power/vgid' \
--user 'root:taosdata' \
--data '["d_bind_1","d_bind_2"]'
```
Response:
```json
{"code":0,"vgIDs":[153,152]}
```
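The same request can be issued from application code. As an illustration, a rough Java equivalent of the curl call above, using only the JDK and the same endpoint, credentials, and table names, might be:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.Scanner;

public class VgidRequest {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://127.0.0.1:6041/rest/sql/power/vgid");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        // Same basic auth as `--user root:taosdata` in the curl example.
        String auth = Base64.getEncoder()
                .encodeToString("root:taosdata".getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + auth);
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write("[\"d_bind_1\",\"d_bind_2\"]".getBytes(StandardCharsets.UTF_8));
        }
        // Expect a JSON body like {"code":0,"vgIDs":[...]}.
        try (Scanner sc = new Scanner(conn.getInputStream(), StandardCharsets.UTF_8.name())) {
            System.out.println(sc.useDelimiter("\\A").next());
        }
    }
}
```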
## Memory Usage Optimization Methods


@ -65,7 +65,7 @@ database_option: {
- MINROWS: The minimum number of records in a file block, default is 100.
- KEEP: Indicates the number of days data files are kept, default value is 3650, range [1, 365000], and must be greater than or equal to 3 times the DURATION parameter value. The database will automatically delete data that has been saved for longer than the KEEP value to free up storage space. KEEP can use unit-specified formats, such as KEEP 100h, KEEP 10d, etc., supports m (minutes), h (hours), and d (days) three units. It can also be written without a unit, like KEEP 50, where the default unit is days. The enterprise version supports multi-tier storage feature, thus, multiple retention times can be set (multiple separated by commas, up to 3, satisfying keep 0 \<= keep 1 \<= keep 2, such as KEEP 100h,100d,3650d); the community version does not support multi-tier storage feature (even if multiple retention times are configured, it will not take effect, KEEP will take the longest retention time).
- KEEP_TIME_OFFSET: Effective from version 3.2.0.0. The delay execution time for deleting or migrating data that has been saved for longer than the KEEP value, default value is 0 (hours). After the data file's save time exceeds KEEP, the deletion or migration operation will not be executed immediately, but will wait an additional interval specified by this parameter, to avoid peak business periods.
- STT_TRIGGER: Indicates the number of file merges triggered by disk files. The open-source version is fixed at 1, the enterprise version can be set from 1 to 16. For scenarios with few tables and high-frequency writing, this parameter is recommended to use the default configuration; for scenarios with many tables and low-frequency writing, this parameter is recommended to be set to a larger value.
- STT_TRIGGER: Indicates the number of file merges triggered by disk files. For scenarios with few tables and high-frequency writing, this parameter is recommended to use the default configuration; for scenarios with many tables and low-frequency writing, this parameter is recommended to be set to a larger value.
- SINGLE_STABLE: Indicates whether only one supertable can be created in this database, used in cases where the supertable has a very large number of columns.
- 0: Indicates that multiple supertables can be created.
- 1: Indicates that only one supertable can be created.
@ -144,10 +144,6 @@ You can view cacheload through show \<db_name>.vgroups;
If cacheload is very close to cachesize, then cachesize may be too small. If cacheload is significantly less than cachesize, then cachesize is sufficient. You can decide whether to modify cachesize based on this principle. The specific modification value can be determined based on the available system memory, whether to double it or increase it several times.
4. stt_trigger
Please stop database writing before modifying the stt_trigger parameter.
:::note
Other parameters are not supported for modification in version 3.0.0.0


@ -491,15 +491,15 @@ SELECT ... FROM (SELECT ... FROM ...) ...;
:::
## UNION ALL Clause
## UNION Clause
```text title=Syntax
SELECT ...
UNION ALL SELECT ...
[UNION ALL SELECT ...]
UNION [ALL] SELECT ...
[UNION [ALL] SELECT ...]
```
TDengine supports the UNION ALL operator. This means that if multiple SELECT clauses return result sets with the exact same structure (column names, column types, number of columns, order), these result sets can be combined together using UNION ALL. Currently, only the UNION ALL mode is supported, which means that duplicates are not removed during the merging process. In the same SQL statement, a maximum of 100 UNION ALLs are supported.
TDengine supports the UNION [ALL] operator. This means that if multiple SELECT clauses return result sets with the exact same structure (column names, column types, number of columns, order), these result sets can be combined together using UNION [ALL].
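For instance, a hypothetical JDBC query that combines two structurally identical result sets (the database name `demo` and tables `t1`/`t2` are placeholders, not from the original document) could look like:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class UnionQuerySketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:TAOS-RS://localhost:6041/demo?user=root&password=taosdata";
        // Both SELECTs must return the same column names, types, count, and order.
        String sql = "SELECT ts, speed FROM t1 UNION ALL SELECT ts, speed FROM t2";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                System.out.println(rs.getTimestamp(1) + " | " + rs.getInt(2));
            }
        }
    }
}
```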
## SQL Examples


@ -2171,7 +2171,7 @@ ignore_negative: {
**Usage Instructions**:
- Can be used with the columns associated with the selection. For example: select _rowts, DERIVATIVE() from.
- Can be used with the columns associated with the selection. For example: select _rowts, DERIVATIVE(col1, 1s, 1) from tb1.
### DIFF


@ -45,7 +45,7 @@ ALTER ALL DNODES dnode_option
For configuration parameters that support dynamic modification, you can use the ALTER DNODE or ALTER ALL DNODES syntax to modify the values of configuration parameters in a dnode. Starting from version 3.3.4.0, the modified configuration parameters will be automatically persisted and will remain effective even after the database service is restarted.
To check whether a configuration parameter supports dynamic modification, please refer to the following page: [taosd Reference](../01-components/01-taosd.md)
To check whether a configuration parameter supports dynamic modification, please refer to the following page: [taosd Reference](/tdengine-reference/components/taosd/)
The value is the parameter's value and needs to be in character format. For example, to change the log output level of dnode 1 to debug:
@ -130,7 +130,7 @@ ALTER LOCAL local_option
You can use the above syntax to modify the client's configuration parameters, and there is no need to restart the client. The changes take effect immediately.
To check whether a configuration parameter supports dynamic modification, please refer to the following page:[taosc Reference](../01-components/02-taosc.md)
To check whether a configuration parameter supports dynamic modification, please refer to the following page:[taosc Reference](/tdengine-reference/components/taosc/)
## View Client Configuration


@ -60,7 +60,7 @@ CREATE [OR REPLACE] AGGREGATE FUNCTION function_name AS library_path OUTPUTTYPE
CREATE AGGREGATE FUNCTION l2norm AS "/home/taos/udf_example/libl2norm.so" OUTPUTTYPE DOUBLE bufsize 64;
```
About how to develop custom functions, please refer to [UDF Usage Instructions](../../../developer-guide/user-defined-functions/).
About how to develop custom functions, please refer to [UDF Usage Instructions](/developer-guide/user-defined-functions/).
## Manage UDF


@ -4,7 +4,11 @@ title: Time-Range Small Materialized Aggregates (TSMAs)
slug: /tdengine-reference/sql-manual/manage-tsmas
---
To improve the performance of aggregate function queries with large data volumes, window pre-aggregation (TSMA Time-Range Small Materialized Aggregates) objects are created. By using fixed time windows to pre-calculate specified aggregate functions and storing the results, query performance is enhanced by querying these pre-calculated results.
In scenarios with large amounts of data, it is often necessary to query summary results for a certain period. As historical data increases or the time range expands, query time will also increase accordingly. By using materialized aggregation, the calculation results can be stored in advance, allowing subsequent queries to directly read the aggregated results without scanning the original data, such as the SMA (Small Materialized Aggregates) information within the current block.
The SMA information within a block has a small granularity. If the query time range is in days, months, or even years, the number of blocks will be large. Therefore, TSMA (Time-Range Small Materialized Aggregates) allows users to specify a time window for materialized aggregation. By pre-calculating the data within a fixed time window and storing the calculation results, queries can be performed on the pre-calculated results to improve query performance.
![TSMA Introduction](./assets/TSMA_intro.png)
## Creating TSMA

Binary file not shown (new image, 109 KiB).


@ -509,8 +509,8 @@ For the OpenTSDB text protocol, the parsing of timestamps follows its official p
- **Interface Description**: Used for polling to consume data. Each consumer can only call this interface in a single thread.
- tmq: [Input] Points to a valid ws_tmq_t structure pointer, which represents a TMQ consumer object.
- timeout: [Input] Polling timeout in milliseconds, a negative number indicates a default timeout of 1 second.
- **Return Value**: Non-`NULL`: Success, returns a pointer to a WS_RES structure, which contains the received message. `NULL`: Failure, indicates no data. WS_RES results are consistent with taos_query results, and information in WS_RES can be obtained through various query interfaces, such as schema, etc.
- **Return Value**: Non-`NULL`: Success, returns a pointer to a WS_RES structure, which contains the received message. `NULL`: indicates no data; the error code can be obtained through ws_errno(NULL). Please refer to the reference manual for the specific error message. WS_RES results are consistent with taos_query results, and information in WS_RES can be obtained through various query interfaces, such as schema, etc.
- `int32_t ws_tmq_consumer_close(ws_tmq_t *tmq)`
- **Interface Description**: Used to close the ws_tmq_t structure. Must be used in conjunction with ws_tmq_consumer_new.
- tmq: [Input] Points to a valid ws_tmq_t structure pointer, which represents a TMQ consumer object.
@ -1195,8 +1195,8 @@ In addition to using SQL or parameter binding APIs to insert data, you can also
- **Interface Description**: Used to poll for consuming data, each consumer can only call this interface in a single thread.
- tmq: [Input] Points to a valid tmq_t structure pointer, representing a TMQ consumer object.
- timeout: [Input] Polling timeout in milliseconds, a negative number indicates a default timeout of 1 second.
- **Return Value**: Non-`NULL`: Success, returns a pointer to a TAOS_RES structure containing the received messages. `NULL`: Failure, indicates no data. TAOS_RES results are consistent with taos_query results, and information in TAOS_RES can be obtained through various query interfaces, such as schema, etc.
- **Return Value**: Non-`NULL`: Success, returns a pointer to a TAOS_RES structure containing the received messages. `NULL`: indicates no data; the error code can be obtained through taos_errno(NULL). Please refer to the reference manual for the specific error message. TAOS_RES results are consistent with taos_query results, and information in TAOS_RES can be obtained through various query interfaces, such as schema, etc.
- `int32_t tmq_consumer_close(tmq_t *tmq)`
- **Interface Description**: Used to close a tmq_t structure. Must be used in conjunction with tmq_consumer_new.
- tmq: [Input] Points to a valid tmq_t structure pointer, which represents a TMQ consumer object.


@ -33,6 +33,8 @@ The JDBC driver implementation for TDengine strives to be consistent with relati
| taos-jdbcdriver Version | Major Changes | TDengine Version |
| ----------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------ |
| 3.5.3 | Support unsigned data types in WebSocket connections. | - |
| 3.5.2 | Fixed WebSocket result set free bug. | - |
| 3.5.1 | Fixed the getObject issue in data subscription. | - |
| 3.5.0 | 1. Optimized the performance of WebSocket connection parameter binding, supporting parameter binding queries using binary data. <br/> 2. Optimized the performance of small queries in WebSocket connection. <br/> 3. Added support for setting time zone and app info on WebSocket connection. | 3.3.5.0 and higher |
| 3.4.0 | 1. Replaced fastjson library with jackson. <br/> 2. WebSocket uses a separate protocol identifier. <br/> 3. Optimized background thread usage to avoid user misuse leading to timeouts. | - |
@ -127,24 +129,27 @@ Please refer to the specific error codes:
TDengine currently supports timestamp, numeric, character, boolean types, and the corresponding Java type conversions are as follows:
| TDengine DataType | JDBCType |
| ----------------- | ------------------ |
| TIMESTAMP | java.sql.Timestamp |
| INT | java.lang.Integer |
| BIGINT | java.lang.Long |
| FLOAT | java.lang.Float |
| DOUBLE | java.lang.Double |
| SMALLINT | java.lang.Short |
| TINYINT | java.lang.Byte |
| BOOL | java.lang.Boolean |
| BINARY | byte array |
| NCHAR | java.lang.String |
| JSON | java.lang.String |
| VARBINARY | byte[] |
| GEOMETRY | byte[] |
| TDengine DataType | JDBCType | Remark |
| ----------------- | -------------------- | --------------------------------------- |
| TIMESTAMP | java.sql.Timestamp | |
| BOOL | java.lang.Boolean | |
| TINYINT | java.lang.Byte | |
| TINYINT UNSIGNED | java.lang.Short | only supported in WebSocket connections |
| SMALLINT | java.lang.Short | |
| SMALLINT UNSIGNED | java.lang.Integer | only supported in WebSocket connections |
| INT | java.lang.Integer | |
| INT UNSIGNED | java.lang.Long | only supported in WebSocket connections |
| BIGINT | java.lang.Long | |
| BIGINT UNSIGNED | java.math.BigInteger | only supported in WebSocket connections |
| FLOAT | java.lang.Float | |
| DOUBLE | java.lang.Double | |
| BINARY | byte array | |
| NCHAR | java.lang.String | |
| JSON | java.lang.String | only supported in tags |
| VARBINARY | byte[] | |
| GEOMETRY | byte[] | |
**Note**: JSON type is only supported in tags.
Due to historical reasons, the BINARY type in TDengine is not truly binary data and is no longer recommended. Please use VARBINARY type instead.
**Note**: Due to historical reasons, the BINARY type in TDengine is not truly binary data and is no longer recommended. Please use VARBINARY type instead.
GEOMETRY type is binary data in little endian byte order, complying with the WKB standard. For more details, please refer to [Data Types](../../sql-manual/data-types/)
For the WKB standard, please refer to [Well-Known Binary (WKB)](https://libgeos.org/specifications/wkb/)
For the Java connector, you can use the jts library to conveniently create GEOMETRY type objects, serialize them, and write to TDengine. Here is an example [Geometry Example](https://github.com/taosdata/TDengine/blob/main/docs/examples/java/src/main/java/com/taos/example/GeometryDemo.java)
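To make the WKB layout concrete, here is a small illustrative sketch (written in C because the layout is byte-oriented and language-neutral; the function name is hypothetical) of the 21-byte little-endian WKB encoding of POINT(100 100):

```c
#include <stdint.h>
#include <string.h>

// Assemble the WKB bytes for POINT(100 100):
// 1-byte order marker + 4-byte geometry type + two 8-byte IEEE-754 doubles.
static void make_wkb_point(unsigned char wkb[21]) {
  uint32_t type = 1;       // 1 = wkbPoint
  double x = 100.0, y = 100.0;
  wkb[0] = 0x01;           // 0x01 = little-endian byte order
  memcpy(wkb + 1, &type, 4);
  memcpy(wkb + 5, &x, 8);  // note: matches WKB only on a little-endian host
  memcpy(wkb + 13, &y, 8);
}
```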

View File

@ -50,7 +50,7 @@ Supports Python 3.0 and above.
- The platforms supported by native connections are consistent with those supported by the TDengine client driver.
- WebSocket/REST connections support all platforms that can run Python.
## Versions History
## Version History
Python Connector historical versions (it is recommended to use the latest version of 'taospy'):

View File

@ -23,13 +23,14 @@ import RequestId from "../../assets/resources/_request_id.mdx";
## Version History
| Connector Version | Major Changes | TDengine Version |
|------------------|-------------------------------------------------|-------------------|
| 3.1.4 | Improved WebSocket query and insert performance. | 3.3.2.0 and higher |
| 3.1.3 | Supported WebSocket auto-reconnect. | - |
| 3.1.2 | Fixed schemaless resource release. | - |
| 3.1.1 | Supported varbinary and geometry types. | - |
| 3.1.0 | WebSocket uses a native C# implementation. | 3.2.1.0 and higher |
| Connector Version | Major Changes | TDengine Version |
|-------------------|------------------------------------------------------------|--------------------|
| 3.1.5 | Fix WebSocket encoding error for Chinese character length. | - |
| 3.1.4 | Improved WebSocket query and insert performance. | 3.3.2.0 and higher |
| 3.1.3 | Supported WebSocket auto-reconnect. | - |
| 3.1.2 | Fixed schemaless resource release. | - |
| 3.1.1 | Supported varbinary and geometry types. | - |
| 3.1.0 | WebSocket uses a native C# implementation. | 3.2.1.0 and higher |
## Exceptions and Error Codes

View File

@ -124,7 +124,7 @@ In addition to this, the WebSocket connection method also supports 32-bit applic
| v1.1.0 | 1. Supports view functionality. <br/>2. Supports VARBINARY/GEOMETRY data types. <br/>3. Supports ODBC 32-bit WebSocket connection method (Enterprise edition only). <br/>4. Supports ODBC data source configuration dialog settings for compatibility adaptation options for industrial software like KingSCADA, Kepware, etc. (Enterprise edition only). | 3.3.3.0 and higher |
| v1.0.2 | Supports CP1252 character encoding. | 3.2.3.0 and higher |
| v1.0.1 | 1. Supports DSN settings for BI mode, in BI mode TDengine database does not return system database and supertable subtable information. <br/>2. Refactored character set conversion module, improving read and write performance. <br/> 3. Default connection method in ODBC data source configuration dialog changed to "WebSocket". <br/>4. Added "Test Connection" control in ODBC data source configuration dialog. <br/>5. ODBC data source configuration supports Chinese/English interface. | - |
| v1.0.0.0 | Initial release, supports interacting with Tdengine database to read and write data, refer to the "API Reference" section for details. | 3.2.2.0 and higher |
| v1.0.0.0 | Initial release, supports interacting with TDengine database to read and write data, refer to the "API Reference" section for details. | 3.2.2.0 and higher |
## Data Type Mapping

View File

@ -252,7 +252,7 @@ Description:
- code: (`int`) 0 represents success.
- column_meta: (`[][3]any`) Column information, each column is described by three values: column name (string), column type (string), and type length (int).
- rows: (`int`) Number of data return rows.
- data: (`[][]any`) Specific data content (time format only supports RFC3339, result set for timezone 0).
- data: (`[][]any`) Specific data content (time format only supports RFC3339; result sets are in time zone 0, and when tz is specified, the corresponding time zone is returned).
Column types use the following strings:
@ -434,7 +434,6 @@ curl http://<fqnd>:<port>/rest/login/<username>/<password>
Here, `fqdn` is the FQDN or IP address of the TDengine database, `port` is the port number of the TDengine service, `username` is the database username, and `password` is the database password. The return is in JSON format, with the fields meaning as follows:
- status: Flag of the request result.
- code: Return code.
- desc: Authorization code.

View File

@ -534,4 +534,5 @@ This document details the server error codes that may be encountered when using
| 0x80004000 | Invalid message | The subscribed data is illegal, generally does not occur | Check the client-side error logs for details |
| 0x80004001 | Consumer mismatch | The vnode requested for subscription and the reassigned vnode are inconsistent, usually occurs when new consumers join the same consumer group | Internal error, not exposed to users |
| 0x80004002 | Consumer closed | The consumer no longer exists | Check if it has already been closed |
| 0x80004017 | Invalid status, please subscribe topic first | The tmq status is invalid | Data was polled directly without calling subscribe first |
| 0x80004100 | Stream task not exist | The stream computing task does not exist | Check the server-side error logs |

View File

@ -25,6 +25,10 @@ Download links for TDengine 3.x version installation packages are as follows:
import Release from "/components/ReleaseV3";
## 3.3.5.2
<Release type="tdengine" version="3.3.5.2" />
## 3.3.5.0
<Release type="tdengine" version="3.3.5.0" />

View File

@ -2,7 +2,7 @@
title: TDengine 3.3.4.8 Release Notes
sidebar_label: 3.3.4.8
description: Version 3.3.4.8 Notes
slug: /release-history/release-notes/3.3.4.8
slug: /release-history/release-notes/3-3-4-8
---
## New Features

View File

@ -2,7 +2,7 @@
title: TDengine 3.3.5.0 Release Notes
sidebar_label: 3.3.5.0
description: Version 3.3.5.0 Notes
slug: /release-history/release-notes/3.3.5.0
slug: /release-history/release-notes/3-3-5-0
---
## Features

View File

@ -0,0 +1,43 @@
---
title: TDengine 3.3.5.2 Release Notes
sidebar_label: 3.3.5.2
description: Version 3.3.5.2 Notes
slug: /release-history/release-notes/3.3.5.2
---
## Features
1. feat: taosX now supports multiple stables with templates for MQTT
## Enhancements
1. enh: improve the taosX error message if the database is invalid
2. enh: use Poetry group dependencies and reduce dependencies during installation [#251](https://github.com/taosdata/taos-connector-python/issues/251)
3. enh: improve backup restore using taosX
4. enh: during the multi-level storage data migration, if the migration time is too long, it may cause the Vnode to switch leader
5. enh: adjust the systemctl strategy for managing the taosd process, if three consecutive restarts fail within 60 seconds, the next restart will be delayed until 900 seconds later
## Fixes
1. fix: the maxRetryWaitTime parameter is used to control the maximum reconnection timeout for the client when the cluster is unable to provide services, but it does not take effect when encountering a Sync timeout error
2. fix: supports immediate subscription to the new tag value after modifying the tag value of the sub-table
3. fix: the tmq_consumer_poll function for data subscription does not return an error code when the call fails
4. fix: taosd may crash when more than 100 views are created and the show views command is executed
5. fix: when using stmt2 to insert data, if not all data columns are bound, the insertion operation will fail
6. fix: when using stmt2 to insert data, if the database name or table name is enclosed in backticks, the insertion operation will fail
7. fix: when closing a vnode, if there are ongoing file merge tasks, taosd may crash
8. fix: frequent execution of the “drop table with tb_uid” statement may lead to a deadlock in taosd
9. fix: the potential deadlock during the switching of log files
10. fix: prohibit the creation of databases with the same names as system databases (information_schema, performance_schema)
11. fix: when the inner query of a nested query comes from a super table, the sorting information cannot be pushed up
12. fix: incorrect error reporting when attempting to write Geometry data types that do not conform to topological specifications through the STMT interface
13. fix: when using the percentile function and session window in a query statement, if an error occurs, taosd may crash
14. fix: the issue of being unable to dynamically modify system parameters
15. fix: random Translict transaction errors in replication
16. fix: when the same consumer executes an unsubscribe operation and immediately attempts to subscribe to a different topic, the subscription API returns an error
17. fix: the CVE-2022-28948 security issue in the Go connector
18. fix: when a subquery in a view contains an ORDER BY clause with an alias, and the query function itself also has an alias, querying the view will result in an error
19. fix: when changing the database from a single replica to multiple replicas, if there is some metadata generated by earlier versions that is no longer used in the new version, the modification operation will fail
20. fix: column names were not correctly copied when using SELECT * FROM subqueries
21. fix: when performing the max/min function on string-type data, the results are inaccurate and taosd may crash
22. fix: stream computing does not support the use of the HAVING clause, but no error is reported during creation
23. fix: the version information displayed by taos shell for the server is inaccurate, such as being unable to correctly distinguish between the community edition and the enterprise edition
24. fix: in certain specific query scenarios, when JOIN and CAST are used together, taosd may crash

View File

@ -3,9 +3,11 @@ title: Release Notes
slug: /release-history/release-notes
---
[3.3.5.0](./3-3-5-0/)
[3.3.5.2](./3.3.5.2)
[3.3.4.8](./3-3-4-8/)
[3.3.5.0](./3.3.5.0)
[3.3.4.3](./3-3-4-3/)
[3.3.3.0](./3-3-3-0/)

View File

@ -19,7 +19,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.1</version>
<version>3.5.3</version>
</dependency>
<dependency>
<groupId>org.locationtech.jts</groupId>

View File

@ -1,6 +1,4 @@
package com.taosdata.example;
import com.alibaba.fastjson.JSON;
import com.taosdata.jdbc.AbstractStatement;
import java.sql.*;

View File

@ -104,8 +104,9 @@ public class JdbcDemo {
private void executeQuery(String sql) {
long start = System.currentTimeMillis();
try (Statement statement = connection.createStatement()) {
ResultSet resultSet = statement.executeQuery(sql);
try (Statement statement = connection.createStatement();
ResultSet resultSet = statement.executeQuery(sql)) {
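// try-with-resources now closes both the statement and the result set automatically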
long end = System.currentTimeMillis();
printSql(sql, true, (end - start));
Util.printResult(resultSet);

View File

@ -47,7 +47,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.1</version>
<version>3.5.3</version>
</dependency>
</dependencies>

View File

@ -18,7 +18,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.1</version>
<version>3.5.3</version>
</dependency>
<!-- druid -->
<dependency>

View File

@ -17,7 +17,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.1</version>
<version>3.5.3</version>
</dependency>
<dependency>
<groupId>com.google.guava</groupId>

View File

@ -47,7 +47,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.1</version>
<version>3.5.3</version>
</dependency>
<dependency>

View File

@ -70,7 +70,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.1</version>
<version>3.5.3</version>
</dependency>
<dependency>

View File

@ -67,7 +67,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.1</version>
<version>3.5.3</version>
<!-- <scope>system</scope>-->
<!-- <systemPath>${project.basedir}/src/main/resources/lib/taos-jdbcdriver-2.0.15-dist.jar</systemPath>-->
</dependency>

View File

@ -263,7 +263,7 @@ splitSql.setSelect("ts, current, voltage, phase, groupid, location")
Class<SourceRecords<RowData>> typeClass = (Class<SourceRecords<RowData>>) (Class<?>) SourceRecords.class;
SourceSplitSql sql = new SourceSplitSql("select ts, `current`, voltage, phase, tbname from meters");
TDengineSource<SourceRecords<RowData>> source = new TDengineSource<>(connProps, sql, typeClass);
DataStreamSource<SourceRecords<RowData>> input = env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source");
DataStreamSource<SourceRecords<RowData>> input = env.fromSource(source, WatermarkStrategy.noWatermarks(), "tdengine-source");
DataStream<String> resultStream = input.map((MapFunction<SourceRecords<RowData>, String>) records -> {
StringBuilder sb = new StringBuilder();
Iterator<RowData> iterator = records.iterator();
@ -304,7 +304,7 @@ splitSql.setSelect("ts, current, voltage, phase, groupid, location")
config.setProperty(TDengineCdcParams.VALUE_DESERIALIZER, "RowData");
config.setProperty(TDengineCdcParams.VALUE_DESERIALIZER_ENCODING, "UTF-8");
TDengineCdcSource<RowData> tdengineSource = new TDengineCdcSource<>("topic_meters", config, RowData.class);
DataStreamSource<RowData> input = env.fromSource(tdengineSource, WatermarkStrategy.noWatermarks(), "kafka-source");
DataStreamSource<RowData> input = env.fromSource(tdengineSource, WatermarkStrategy.noWatermarks(), "tdengine-source");
DataStream<String> resultStream = input.map((MapFunction<RowData, String>) rowData -> {
StringBuilder sb = new StringBuilder();
sb.append("tsxx: " + rowData.getTimestamp(0, 0) +
@ -343,7 +343,7 @@ splitSql.setSelect("ts, current, voltage, phase, groupid, location")
Class<ConsumerRecords<RowData>> typeClass = (Class<ConsumerRecords<RowData>>) (Class<?>) ConsumerRecords.class;
TDengineCdcSource<ConsumerRecords<RowData>> tdengineSource = new TDengineCdcSource<>("topic_meters", config, typeClass);
DataStreamSource<ConsumerRecords<RowData>> input = env.fromSource(tdengineSource, WatermarkStrategy.noWatermarks(), "kafka-source");
DataStreamSource<ConsumerRecords<RowData>> input = env.fromSource(tdengineSource, WatermarkStrategy.noWatermarks(), "tdengine-source");
DataStream<String> resultStream = input.map((MapFunction<ConsumerRecords<RowData>, String>) records -> {
Iterator<ConsumerRecord<RowData>> iterator = records.iterator();
StringBuilder sb = new StringBuilder();
@ -388,7 +388,7 @@ splitSql.setSelect("ts, current, voltage, phase, groupid, location")
config.setProperty(TDengineCdcParams.VALUE_DESERIALIZER, "com.taosdata.flink.entity.ResultDeserializer");
config.setProperty(TDengineCdcParams.VALUE_DESERIALIZER_ENCODING, "UTF-8");
TDengineCdcSource<ResultBean> tdengineSource = new TDengineCdcSource<>("topic_meters", config, ResultBean.class);
DataStreamSource<ResultBean> input = env.fromSource(tdengineSource, WatermarkStrategy.noWatermarks(), "kafka-source");
DataStreamSource<ResultBean> input = env.fromSource(tdengineSource, WatermarkStrategy.noWatermarks(), "tdengine-source");
DataStream<String> resultStream = input.map((MapFunction<ResultBean, String>) rowData -> {
StringBuilder sb = new StringBuilder();
sb.append("ts: " + rowData.getTs() +

View File

@ -22,7 +22,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.1</version>
<version>3.5.3</version>
</dependency>
<!-- ANCHOR_END: dep-->

View File

@ -2,6 +2,7 @@ package com.taos.example;
import com.taosdata.jdbc.ws.TSWSPreparedStatement;
import java.math.BigInteger;
import java.sql.*;
import java.util.Random;
@ -26,7 +27,12 @@ public class WSParameterBindingFullDemo {
"binary_col BINARY(100), " +
"nchar_col NCHAR(100), " +
"varbinary_col VARBINARY(100), " +
"geometry_col GEOMETRY(100)) " +
"geometry_col GEOMETRY(100)," +
"utinyint_col tinyint unsigned," +
"usmallint_col smallint unsigned," +
"uint_col int unsigned," +
"ubigint_col bigint unsigned" +
") " +
"tags (" +
"int_tag INT, " +
"double_tag DOUBLE, " +
@ -34,7 +40,12 @@ public class WSParameterBindingFullDemo {
"binary_tag BINARY(100), " +
"nchar_tag NCHAR(100), " +
"varbinary_tag VARBINARY(100), " +
"geometry_tag GEOMETRY(100))"
"geometry_tag GEOMETRY(100)," +
"utinyint_tag tinyint unsigned," +
"usmallint_tag smallint unsigned," +
"uint_tag int unsigned," +
"ubigint_tag bigint unsigned" +
")"
};
private static final int numOfSubTable = 10, numOfRow = 10;
@ -79,7 +90,7 @@ public class WSParameterBindingFullDemo {
// set table name
pstmt.setTableName("ntb_json_" + i);
// set tags
pstmt.setTagJson(1, "{\"device\":\"device_" + i + "\"}");
pstmt.setTagJson(0, "{\"device\":\"device_" + i + "\"}");
// set columns
long current = System.currentTimeMillis();
for (int j = 0; j < numOfRow; j++) {
@ -94,25 +105,29 @@ public class WSParameterBindingFullDemo {
}
private static void stmtAll(Connection conn) throws SQLException {
String sql = "INSERT INTO ? using stb tags(?,?,?,?,?,?,?) VALUES (?,?,?,?,?,?,?,?)";
String sql = "INSERT INTO ? using stb tags(?,?,?,?,?,?,?,?,?,?,?) VALUES (?,?,?,?,?,?,?,?,?,?,?,?)";
try (TSWSPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSWSPreparedStatement.class)) {
// set table name
pstmt.setTableName("ntb");
// set tags
pstmt.setTagInt(1, 1);
pstmt.setTagDouble(2, 1.1);
pstmt.setTagBoolean(3, true);
pstmt.setTagString(4, "binary_value");
pstmt.setTagNString(5, "nchar_value");
pstmt.setTagVarbinary(6, new byte[] { (byte) 0x98, (byte) 0xf4, 0x6e });
pstmt.setTagGeometry(7, new byte[] {
pstmt.setTagInt(0, 1);
pstmt.setTagDouble(1, 1.1);
pstmt.setTagBoolean(2, true);
pstmt.setTagString(3, "binary_value");
pstmt.setTagNString(4, "nchar_value");
pstmt.setTagVarbinary(5, new byte[] { (byte) 0x98, (byte) 0xf4, 0x6e });
pstmt.setTagGeometry(6, new byte[] {
0x01, 0x01, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x59,
0x40, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x59, 0x40 });
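// unsigned tags are bound via the next wider Java type (Short/Integer/Long/BigInteger),
// shown here with the maximum value of each unsigned column type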
pstmt.setTagShort(7, (short)255);
pstmt.setTagInt(8, 65535);
pstmt.setTagLong(9, 4294967295L);
pstmt.setTagBigInteger(10, new BigInteger("18446744073709551615"));
long current = System.currentTimeMillis();
@ -129,6 +144,10 @@ public class WSParameterBindingFullDemo {
0x00, 0x00, 0x00, 0x59,
0x40, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x59, 0x40 });
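// unsigned columns likewise use the next wider type; BIGINT UNSIGNED goes through setObject with BigInteger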
pstmt.setShort(9, (short)255);
pstmt.setInt(10, 65535);
pstmt.setLong(11, 4294967295L);
pstmt.setObject(12, new BigInteger("18446744073709551615"));
pstmt.addBatch();
pstmt.executeBatch();
System.out.println("Successfully inserted rows to example_all_type_stmt.ntb");

View File

@ -63,7 +63,7 @@ toc_max_heading_level: 4
1. Database: a database provides efficient storage and retrieval of time-series data. In industrial and IoT scenarios, the volume of time-series data generated by devices is staggering. From the storage perspective, the database needs to persist this data to disk with maximum compression to reduce storage costs. From the retrieval perspective, it must guarantee both real-time queries and efficient queries over historical data. Traditional storage options include relational databases such as MySQL and Oracle, Hadoop-ecosystem systems such as HBase, and dedicated time-series databases such as InfluxDB, OpenTSDB, and Prometheus.
2. Data Subscription: many time-series applications need to subscribe to the real-time data relevant to their business as soon as it is produced, so they can track the latest state of the monitored objects and run real-time analytics with AI or other tools. At the same time, for privacy and security, applications may only be allowed to subscribe to data they are authorized to access. A time-series data platform therefore must provide data subscription capabilities to help applications obtain the latest data in real time.
3. ETL (Extract, Transform, Load): in real-world IoT and industrial scenarios, collected time-series data usually requires dedicated ETL tools for extraction, cleansing, and transformation before it can be written to the database, to guarantee data quality. Different collection systems often follow different standards: temperature may be recorded in Celsius in one system and Fahrenheit in another; systems may sit in different time zones that require conversion; time resolutions may differ. Data aggregated from such heterogeneous systems must therefore be converted before it can be written to the database.
@ -135,4 +135,4 @@ toc_max_heading_level: 4
18. Private deployment must be supported. For security and other reasons, many enterprises prefer private deployment, yet traditional enterprises often lack strong IT operations teams, so installation, deployment, and maintenance must be simple, fast, and easy to manage.
In short, a time-series big-data platform should be efficient, scalable, real-time, reliable, flexible, open, simple, and easy to maintain. In recent years, many enterprises have migrated their time-series data from traditional big-data platforms or relational databases to dedicated time-series platforms, ensuring that massive time-series data is processed quickly and effectively to support continued business growth.

View File

@ -94,6 +94,8 @@ JSON parsing supports JSONObject or JSONArray. JSON sample data such as the following can
![JSON parsing](./pic/transform-02.png)
> Note: JSON property names must not contain `.`; if they do, the name must be escaped with an alias.
##### Regex Regular Expressions<a name="regex"></a>
**Named capture groups** in regular expressions can be used to extract multiple fields from any string (text) field. As shown in the figure, fields such as the access IP, timestamp, and requested URL are extracted from an nginx log.
@ -158,6 +160,14 @@ let v3 = data["voltage"].split(",");
The filter feature lets you set filter conditions; only data rows that satisfy the conditions are written to the target table. The result of a filter expression must be of boolean type. Before writing filter conditions, you must determine the type of each parsed field; based on the field type, you can use judgment functions and comparison operators (`>`, `>=`, `<=`, `<`, `==`, `!=`).
To filter on timestamps, the following function can be used, where ts is a field formatted as an RFC 3339 date-time string, and t1 and t2 are offsets in seconds relative to the current time; the time range is now + t1 to now + t2.
```
between_time_range(ts, t1, t2)
// For example, to admit only data from the last 7 days, the filter condition is:
between_time_range(ts, -604800, 0)
```
#### Field Types and Conversion
Only when the type of each parsed field is explicitly determined can the correct syntax be used for data filtering.

View File

@ -89,7 +89,7 @@ TDengine provides a rich set of application development interfaces; to help users quickly
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.1</version>
<version>3.5.3</version>
</dependency>
```

View File

@ -15,6 +15,19 @@ import TabItem from "@theme/TabItem";
**Tips: parameter binding is the recommended way to write data**
:::note
We recommend using only the following two forms of SQL for parameter-binding writes:
```sql
I. When the subtable is known to exist:
1. INSERT INTO meters (tbname, ts, current, voltage, phase) VALUES(?, ?, ?, ?, ?)
II. With automatic table creation:
1. INSERT INTO meters (tbname, ts, current, voltage, phase, location, group_id) VALUES(?, ?, ?, ?, ?, ?, ?)
2. INSERT INTO ? USING meters TAGS (?, ?) VALUES (?, ?, ?, ?)
```
:::
Below, we continue with the smart-meter example to show how each language connector uses parameter binding for efficient writes:
1. Prepare a parameterized SQL INSERT statement for inserting data into the supertable `meters`. This statement allows the subtable name, tags, and column values to be specified dynamically.
2. Loop to generate multiple subtables and their corresponding data rows. For each subtable:

View File

@ -19,7 +19,8 @@ TDengine is designed for a variety of write scenarios, and in many of them, TDengine's stor
```SQL
COMPACT DATABASE db_name [start with 'XXXX'] [end with 'YYYY']
COMPACT [db_name.]VGROUPS IN (vgroup_id1, vgroup_id2, ...) [start with 'XXXX'] [end with 'YYYY']
SHOW COMPACTS [compact_id]
SHOW COMPACTS
SHOW COMPACT compact_id;
KILL COMPACT compact_id
```

View File

@ -69,7 +69,7 @@ On the taosExplorer service page, go to "System Management - Backup" and, on the
1. Database: the name of the database to back up. A backup plan can back up only one database/supertable.
2. Supertable: the name of the supertable to back up. If left empty, the entire database is backed up.
3. Next execution time: the date and time at which the backup task first runs.
4. Backup cycle: the interval between backup points. Note: the backup cycle must be shorter than the database's WAL_RETENTION_PERIOD parameter value.
5. Error retry count: for errors that can be resolved by retrying, the system retries the specified number of times.
6. Error retry interval: the time interval between retries.
7. Directory: the directory in which backup files are stored.
@ -152,4 +152,4 @@ Caused by:
```sql
alter
database test wal_retention_period 3600;
```

View File

@ -13,7 +13,7 @@ Apache Flink is an open-source distributed stream-and-batch processing framework supported by the Apache Software Foundation
## Prerequisites
Prepare the following environment:
- A TDengine cluster has been deployed and is running normally (either Enterprise or Community Edition)
- The TDengine service has been deployed and is running normally (either Enterprise or Community Edition)
- taosAdapter is running normally. For details, see the [taosAdapter user manual](../../../reference/components/taosadapter)
- Apache Flink v1.19.0 or later is installed. For installation, see the [official documentation](https://flink.apache.org/)
@ -24,7 +24,8 @@ The Flink Connector supports all platforms that can run Flink 1.19 or later.
## Version History
| Flink Connector Version | Major Changes | TDengine Version |
| ------------------| ------------------------------------ | ---------------- |
| 2.0.0 | 1. Supports SQL queries on data in TDengine databases<br/> 2. Supports CDC subscription to data in TDengine databases<br/> 3. Supports reading and writing TDengine databases via Table SQL | 3.3.5.0 and higher |
| 2.0.1 | Sink supports writing any already-implemented type that extends RowData | - |
| 2.0.0 | 1. Supports SQL queries on data in TDengine databases<br/> 2. Supports CDC subscription to data in TDengine databases<br/> 3. Supports reading and writing TDengine databases via Table SQL | 3.3.5.1 and higher |
| 1.0.0 | Supports the Sink function, writing data from other data sources into TDengine | 3.3.2.0 and higher|
## Exceptions and Error Codes
@ -111,7 +112,7 @@ env.getCheckpointConfig().setCheckpointingMode(CheckpointingMode.AT_LEAST_ONCE);
<dependency>
<groupId>com.taosdata.flink</groupId>
<artifactId>flink-connector-tdengine</artifactId>
<version>2.0.0</version>
<version>2.0.1</version>
</dependency>
```

View File

@ -0,0 +1,35 @@
---
sidebar_label: Tableau
title: Integration with Tableau
---
Tableau is a well-known business intelligence tool that supports a wide range of data sources and makes it easy to connect, import, and consolidate data. Through its intuitive interface, users can create rich and varied visualizations, with powerful analysis and filtering capabilities that provide strong support for data-driven decisions.
## Prerequisites
Prepare the following environment:
- A TDengine cluster, version 3.3.5.4 or later, has been deployed and is running normally (either Enterprise or Community Edition)
- taosAdapter is running normally. For details, see the [taosAdapter user manual](../../../reference/components/taosadapter)
- Tableau Desktop is installed and running (if not, download and install the 32/64-bit [Tableau Desktop](https://www.tableau.com/products/desktop/download) for Windows; see the [official documentation](https://www.tableau.com) for installation).
- The ODBC driver has been installed successfully. For details, see [Install the ODBC driver](../../../reference/connector/odbc/#安装)
- The ODBC data source has been configured successfully. For details, see [Configure the ODBC data source](../../../reference/connector/odbc/#配置数据源)
## Loading and Analyzing TDengine Data
**Step 1**: Start Tableau on Windows, search for "ODBC" on its connection page, and select "Other Databases (ODBC)".
**Step 2**: Click the DSN radio button, select the configured data source (MyTDengine), and click Connect. After the connection succeeds, delete the content of the string-attachment section and click Sign In.
![tableau-odbc](./tableau/tableau-odbc.jpg)
**Step 3**: In the workbook page that opens, the connected data source is shown. Click the database drop-down list to display the databases available for analysis. Then click the search button under the table options to list all tables in that database, and drag the table to analyze into the right-hand area to show its schema.
![tableau-workbook](./tableau/tableau-table.jpg)
**Step 4**: Click the "Update Now" button at the bottom to display the table's data.
![tableau-workbook](./tableau/tableau-data.jpg)
**Step 5**: Click the "Worksheet" tab at the bottom of the window to open the data analysis view, which lists all fields of the table; drag fields onto rows and columns to build charts.
![tableau-workbook](./tableau/tableau-analysis.jpg)

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

View File

@ -19,7 +19,7 @@ DBeaver is a popular cross-platform database management tool that makes it easy for developers and da
![DBeaver connects to TDengine](./dbeaver/dbeaver-connect-tdengine-zh.webp)
2. Configure the TDengine connection by entering the host address, port number, username, and password. If TDengine is deployed on the local machine, you may enter only the username and password; the default username is root and the default password is taosdata. Click "Test Connection" to check whether the connection is usable. If the TDengine Java
2. Configure the TDengine connection by entering the host address, port number (6041), username, and password. If TDengine is deployed on the local machine, you may enter only the username and password; the default username is root and the default password is taosdata. Click "Test Connection" to check whether the connection is usable. If the TDengine Java
connector is not installed on the local machine, DBeaver will prompt you to download and install it.
![Configure the TDengine connection](./dbeaver/dbeaver-config-tdengine-zh.webp)

View File

@ -143,7 +143,7 @@ The taosd command-line parameters are as follows
- Supported versions: removed after v3.3.4.0
#### maxRetryWaitTime
- Description: maximum reconnection timeout
- Description: maximum reconnection timeout, counted from the moment retries begin
- Type: integer
- Unit: milliseconds
- Default: 10000
@ -439,7 +438,6 @@ The taosd command-line parameters are as follows
- Dynamic modification: not supported
- Supported versions: introduced in v3.1.0.0
### Regional Parameters
#### timezone
- Description: time zone
@ -749,7 +748,7 @@ The valid value of charset is UTF-8.
#### ratioOfVnodeStreamThreads
- Description: ratio of vnode threads used by stream computing
- Type: float
- Default: 4
- Default: 0.5
- Minimum: 0.01
- Maximum: 4
- Dynamic modification: supported via SQL, takes effect after restart
@ -1017,7 +1016,6 @@ The valid value of charset is UTF-8.
- Dynamic modification: supported via SQL, takes effect after restart
- Supported versions: introduced in v3.1.0.0
### Stream Computing Parameters
#### disableStream
@ -1470,7 +1468,7 @@ The valid value of charset is UTF-8.
- Supported versions: introduced in v3.1.0.0
**Additional Notes**
1. After 3.4.0.0, all configuration parameters are persisted to local storage; after the database service restarts, the persisted parameter list is used by default. If you want to continue using the parameters configured in the config file, set forceReadConfig to 1.
1. After 3.3.5.0, all configuration parameters are persisted to local storage; after the database service restarts, the persisted parameter list is used by default. If you want to continue using the parameters configured in the config file, set forceReadConfig to 1.
2. Effective in versions 3.2.0.0 through 3.3.0.0 (exclusive); once this parameter is enabled, you cannot roll back to the pre-upgrade version
3. The TSZ compression algorithm compresses via data prediction, so it is better suited to data that changes in regular patterns
4. TSZ compression takes longer; it is a good choice if your server has plenty of idle CPU and limited storage space

View File

@ -262,7 +262,21 @@ Output of hardware and operating-system metrics exposed by the *NIX kernel for use by Prometheus
### Getting a Table's VGroup ID
You can call the HTTP endpoint `http://<fqdn>:6041/rest/vgid?db=<db>&table=<table>` to get a table's VGroup ID.
You can send a POST request to the HTTP endpoint `http://<fqdn>:<port>/rest/sql/<db>/vgid` to get tables' VGroup IDs; the body is a JSON array of table names.
Example: get the VGroup IDs for the tables d_bind_1 and d_bind_2 in database power:
```shell
curl --location 'http://127.0.0.1:6041/rest/sql/power/vgid' \
--user 'root:taosdata' \
--data '["d_bind_1","d_bind_2"]'
```
Response:
```json
{"code":0,"vgIDs":[153,152]}
```
## Memory Usage Optimization

View File

@ -29,7 +29,7 @@ taosKeeper runs in an operating-system terminal; the tool supports three configuration methods
Usage of taoskeeper v3.3.3.0:
-R, --RotationInterval string interval for refresh metrics, such as "300ms", Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Env "TAOS_KEEPER_ROTATION_INTERVAL" (default "15s")
-c, --config string config path default /etc/taos/taoskeeper.toml
--drop string run taoskeeper in command mode, only support old_taosd_metric_stables.
--environment.incgroup whether running in cgroup. Env "TAOS_KEEPER_ENVIRONMENT_INCGROUP"
--fromTime string parameter of transfer, example: 2020-01-01T00:00:00+08:00 (default "2020-01-01T00:00:00+08:00")
--gopoolsize int coroutine size. Env "TAOS_KEEPER_POOL_SIZE" (default 50000)
@ -65,7 +65,7 @@ Usage of taoskeeper v3.3.3.0:
taosKeeper supports specifying a configuration file with the `taoskeeper -c <keeper config file>` command.
If no configuration file is specified, taosKeeper uses the default configuration file at `/etc/taos/taoskeeper.toml`.
If no taosKeeper configuration file is specified and `/etc/taos/taoskeeper.toml` does not exist, the default configuration is used.
**Below is an example configuration file:**
@ -261,7 +261,7 @@ Query OK, 14 row(s) in set (0.006542s)
You can view the most recent record reported to a supertable, for example:
```shell
taos> select last_row(*) from taosd_dnodes_info;
last_row(_ts) | last_row(disk_engine) | last_row(system_net_in) | last_row(vnodes_num) | last_row(system_net_out) | last_row(uptime) | last_row(has_mnode) | last_row(io_read_disk) | last_row(error_log_count) | last_row(io_read) | last_row(cpu_cores) | last_row(has_qnode) | last_row(has_snode) | last_row(disk_total) | last_row(mem_engine) | last_row(info_log_count) | last_row(cpu_engine) | last_row(io_write_disk) | last_row(debug_log_count) | last_row(disk_used) | last_row(mem_total) | last_row(io_write) | last_row(masters) | last_row(cpu_system) | last_row(trace_log_count) | last_row(mem_free) |
======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
@ -288,22 +288,22 @@ $ curl http://127.0.0.1:6043/metrics
Partial result set:
```shell
# HELP taos_cluster_info_connections_total
# TYPE taos_cluster_info_connections_total counter
taos_cluster_info_connections_total{cluster_id="554014120921134497"} 8
# HELP taos_cluster_info_dbs_total
# TYPE taos_cluster_info_dbs_total counter
taos_cluster_info_dbs_total{cluster_id="554014120921134497"} 2
# HELP taos_cluster_info_dnodes_alive
# TYPE taos_cluster_info_dnodes_alive counter
taos_cluster_info_dnodes_alive{cluster_id="554014120921134497"} 1
# HELP taos_cluster_info_dnodes_total
# TYPE taos_cluster_info_dnodes_total counter
taos_cluster_info_dnodes_total{cluster_id="554014120921134497"} 1
# HELP taos_cluster_info_first_ep
# TYPE taos_cluster_info_first_ep gauge
taos_cluster_info_first_ep{cluster_id="554014120921134497",value="tdengine:6030"} 1
# HELP taos_cluster_info_first_ep_dnode_id
# TYPE taos_cluster_info_first_ep_dnode_id counter
taos_cluster_info_first_ep_dnode_id{cluster_id="554014120921134497"} 1
```
@ -365,12 +365,12 @@ taos_cluster_info_first_ep_dnode_id{cluster_id="554014120921134497"} 1
| taos_dnodes_info_has_qnode | counter | whether the dnode has a qnode |
| taos_dnodes_info_has_snode | counter | whether the dnode has an snode |
| taos_dnodes_info_io_read | gauge | io read rate of the node hosting this dnode (Byte/s) |
| taos_dnodes_info_io_read_disk | gauge | disk io read rate of the node hosting this dnode (Byte/s) |
| taos_dnodes_info_io_write | gauge | io write rate of the node hosting this dnode (Byte/s) |
| taos_dnodes_info_io_write_disk | gauge | disk io write rate of the node hosting this dnode (Byte/s) |
| taos_dnodes_info_masters | counter | number of master nodes |
| taos_dnodes_info_mem_engine | counter | memory used by this dnode's process (KB) |
| taos_dnodes_info_mem_system | counter | memory used by the system of the node hosting this dnode (KB) |
| taos_dnodes_info_mem_total | counter | total memory of the node hosting this dnode (KB) |
| taos_dnodes_info_net_in | gauge | network ingress rate of the node hosting this dnode (Byte/s) |
| taos_dnodes_info_net_out | gauge | network egress rate of the node hosting this dnode (Byte/s) |
@ -511,7 +511,7 @@ taos_cluster_info_first_ep_dnode_id{cluster_id="554014120921134497"} 1
- `database_name`: database name
- `vgroup_id`: virtual group id
- **Type**: gauge
- **Meaning**: virtual group status. 0 means unsynced (no leader elected); 1 means ready.
##### taos_taosd_vgroups_info_tables_num
@ -532,7 +532,7 @@ taos_cluster_info_first_ep_dnode_id{cluster_id="554014120921134497"} 1
- `vgroup_id`: virtual group id
- **Type**: gauge
- **Meaning**: virtual node role
### Scrape Configuration
Prometheus provides `scrape_configs` to configure how monitoring data is scraped from an endpoint; usually you only need to set the targets in `static_configs` to the taoskeeper endpoint address. For more configuration options, see the [Prometheus configuration documentation](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config).
@ -558,13 +558,13 @@ scrape_configs:
taosKeeper also writes its own collected monitoring data to the monitoring database, `log` by default, which can be changed in the taoskeeper configuration file.
### The keeper_monitor Table
`keeper_monitor` records taoskeeper monitoring data.
| field | type | is_tag | comment |
| :------- | :-------- | :----- | :----------- |
| ts | TIMESTAMP | | timestamp |
| cpu | DOUBLE | | cpu usage |
| mem | DOUBLE | | memory usage |
| identify | NCHAR | TAG | identity information |

View File

@ -225,3 +225,5 @@ sc.exe stop taos-explorer # Windows
When logging in, use your database username and password; for first-time use, the default username is `root` and the default password is `taosdata`. After a successful login you enter the `Data Browser` page, where you can use management features such as viewing databases, creating databases, and creating supertables and subtables.
Other pages, such as `Data Ingestion - Data Sources`, are Enterprise Edition features; you can open and briefly explore them, but they are not functional in the Community Edition.
If registration cannot be completed due to network restrictions, complete it in an environment with Internet access, then copy the registered /etc/taos/explorer-register.cfg to the offline environment.

View File

@ -67,7 +67,7 @@ database_option: {
- KEEP: the number of days data files are retained; the default is 3650 and the range is [1, 365000], and it must be at least 3 times the DURATION parameter value. The database automatically deletes data retained longer than KEEP to free storage space. KEEP accepts units, such as KEEP 100h or KEEP 10d, supporting m (minutes), h (hours), and d (days); it can also be written without a unit, such as KEEP 50, in which case the unit defaults to days. The Enterprise Edition supports [tiered storage](../../operation/planning/#%E5%A4%9A%E7%BA%A7%E5%AD%98%E5%82%A8), so multiple retention times may be set (comma-separated, at most 3, satisfying keep 0 \<= keep 1 \<= keep 2, e.g. KEEP 100h,100d,3650d); the Community Edition does not support tiered storage (even if multiple retention times are configured, they do not take effect, and KEEP takes the largest value). To learn more, see [About the primary-key timestamp](https://docs.taosdata.com/reference/taos-sql/insert/)
- KEEP_TIME_OFFSET: effective since version 3.2.0.0. The delayed execution time, in hours (default 0), for deleting or migrating data retained longer than KEEP. After data files exceed the KEEP retention time, deletion or migration is not executed immediately; it waits the additional interval specified by this parameter so the work can be staggered away from business peak hours.
- STT_TRIGGER: the number of flushed files that triggers file merging. Fixed at 1 in the open-source edition; configurable from 1 to 16 in the Enterprise Edition. For scenarios with few tables and high-frequency writes, the default is recommended; for many tables with low-frequency writes, a larger value is recommended.
- STT_TRIGGER: the number of flushed files that triggers file merging. For scenarios with few tables and high-frequency writes, the default is recommended; for many tables with low-frequency writes, a larger value is recommended.
- SINGLE_STABLE: whether only one supertable may be created in this database, intended for supertables with a very large number of columns.
- 0: multiple supertables may be created.
- 1: only one supertable may be created.
@ -146,10 +146,6 @@ alter_database_option: {
If cacheload is very close to cachesize, cachesize may be too small; if cacheload is clearly smaller than cachesize, cachesize is sufficient. Use this principle to decide whether cachesize needs to be changed; the specific new value can be determined from available system memory, for example doubling it or raising it severalfold.
4. stt_trigger
Stop database writes before modifying the stt_trigger parameter.
:::note
Other parameters cannot currently be modified in 3.0.0.0
@ -209,7 +205,7 @@ REDISTRIBUTE VGROUP vgroup_no DNODE dnode_id1 [DNODE dnode_id2] [DNODE dnode_id3
BALANCE VGROUP LEADER
```
Triggers re-election of leaders for all vgroups in the cluster, rebalancing load across the cluster's nodes.
Triggers re-election of leaders for all vgroups in the cluster, rebalancing load across the cluster's nodes. (Enterprise Edition feature)
## Viewing Database Status

View File

@ -218,7 +218,7 @@ ALTER TABLE tb_name COMMENT 'string_value'
DROP TABLE [IF EXISTS] [db_name.]tb_name [, [IF EXISTS] [db_name.]tb_name] ...
```
**Note**: Dropping a table does not immediately free the disk space it occupies; instead, the table's data is marked as deleted and no longer appears in queries. Freeing the disk space is deferred until the system reorganizes the data automatically or the user does so manually.
**Note**: Dropping a table does not immediately free the disk space it occupies; instead, the table's data is marked as deleted and no longer appears in queries. Freeing the disk space is deferred until the data is reorganized automatically by the system (governed by the database keep parameter) or manually by the user (the Enterprise Edition compact feature).
## Viewing Table Information

View File

@ -491,15 +491,15 @@ SELECT ... FROM (SELECT ... FROM ...) ...;
:::
## UNION ALL Clause
## UNION Clause
```txt title=Syntax
SELECT ...
UNION ALL SELECT ...
[UNION ALL SELECT ...]
UNION [ALL] SELECT ...
[UNION [ALL] SELECT ...]
```
TDengine supports the UNION ALL operator; that is, if multiple SELECT clauses return result sets with exactly the same structure (column names, column types, number of columns, order), these result sets can be combined with UNION ALL. Only UNION ALL is currently supported, i.e. duplicates are not removed during the merge. At most 100 UNION ALLs are supported in a single SQL statement.
TDengine supports the UNION [ALL] operator; that is, if multiple SELECT clauses return result sets with exactly the same structure (column names, column types, number of columns, order), these result sets can be combined with UNION [ALL].
## SQL Examples

View File

@ -2099,7 +2099,7 @@ ignore_negative: {
**Usage notes**:
- Can be used together with the associated selected columns. For example: select \_rowts, DERIVATIVE() from.
- Can be used together with the associated selected columns. For example: select \_rowts, DERIVATIVE(col1, 1s, 1) from tb1.
### DIFF

View File

@ -231,6 +231,7 @@ description: a detailed list of TDengine reserved keywords
| LEADER | |
| LEADING | |
| LEFT | |
| LEVEL | all versions from 3.3.0.0 to 3.3.2.11 |
| LICENCES | |
| LIKE | |
| LIMIT | |

View File

@ -4,7 +4,10 @@ title: Window Pre-Aggregation
description: Usage of window pre-aggregation
---
To improve aggregate query performance over large data volumes, you can create a TSMA (Time-Range Small Materialized Aggregates) object to pre-compute the specified aggregate functions over fixed time windows and store the results; queries then read the pre-computed results to improve query performance.
In large-data scenarios, it is often necessary to query aggregated results over some period of time. As historical data accumulates or the time range widens, query time increases accordingly. Pre-aggregation stores computed results ahead of time, so later queries can read the aggregated results directly instead of scanning the raw data, like the SMA (Small Materialized Aggregates) information within each block.
The SMA information inside a block has fine granularity; when the query range is days, months, or even years, the number of blocks becomes large. TSMA (Time-Range Small Materialized Aggregates) therefore lets users specify a time window for pre-aggregation: data within fixed time windows is pre-computed, the results are stored, and queries read the pre-computed results to improve query performance.
![TSMA Introduction](./pic/TSMA_intro.png)
## Creating a TSMA

Binary file not shown.


View File

@ -509,7 +509,7 @@ TDengine recommends that every thread of a database application establish its own connection,
- **Interface Description**: Used to poll for consuming data; each consumer may call this interface from only a single thread.
- tmq: [Input] Points to a valid ws_tmq_t structure pointer, representing a TMQ consumer object.
- timeout: [Input] Polling timeout in milliseconds; a negative number means the default timeout of 1 second.
- **Return Value**: Non-`NULL`: success, returns a pointer to a WS_RES structure containing the received message. `NULL`: failure, indicates no data. WS_RES results are consistent with taos_query results, and information in WS_RES, such as the schema, can be obtained through the various query interfaces.
- **Return Value**: Non-`NULL`: success, returns a pointer to a WS_RES structure containing the received message. `NULL`: indicates no data; the error code can be obtained through ws_errno(NULL), and specific error messages can be found in the reference manual. WS_RES results are consistent with taos_query results, and information in WS_RES, such as the schema, can be obtained through the various query interfaces.
- `int32_t ws_tmq_consumer_close(ws_tmq_t *tmq)`
- **Interface Description**: Used to close the ws_tmq_t structure. Must be used in conjunction with ws_tmq_consumer_new.
@ -1125,11 +1125,8 @@ All of TDengine's asynchronous APIs use a non-blocking call model; applications can use multiple
- conf: [Input] Points to a valid tmq_conf_t structure pointer, representing a TMQ configuration object.
- cb: [Input] Points to a valid tmq_commit_cb callback function pointer, which is called after a message is consumed to confirm its processing status.
- param: [Input] A user-defined parameter passed to the callback.
The auto-commit callback function is defined as follows:
```
typedef void(tmq_commit_cb(tmq_t *tmq, int32_t code, void *param))
```
- The auto-commit callback function is defined as follows:
`typedef void(tmq_commit_cb(tmq_t *tmq, int32_t code, void *param))`
- `void tmq_conf_destroy(tmq_conf_t *conf)`
- **Interface Description**: Destroys a TMQ configuration object and releases its resources.
@ -1192,7 +1189,7 @@ All of TDengine's asynchronous APIs use a non-blocking call model; applications can use multiple
- `int32_t tmq_consumer_close(tmq_t *tmq)`
- **Interface Description**: Used to close the tmq_t structure. Must be used in conjunction with tmq_consumer_new.
- tmq: [Input] Points to a valid tmq_t structure pointer, representing a TMQ consumer object.
- **Return Value**: `0`: success. Non-`0`: failure; the function `char *tmq_err2str(int32_t code)` can be called for more detailed error information.
- **Return Value**: Non-`NULL`: success, returns a pointer to a TAOS_RES structure containing the received message. `NULL`: indicates no data; the error code can be obtained through taos_errno(NULL), and specific error messages can be found in the reference manual. TAOS_RES results are consistent with taos_query results, and information in TAOS_RES, such as the schema, can be obtained through the various query interfaces.
- `int32_t tmq_get_topic_assignment(tmq_t *tmq, const char *pTopicName, tmq_topic_assignment **assignment, int32_t *numOfAssignment)`
- **Interface Description**: Returns the information of the vgroups currently assigned to the consumer, including for each vgroup the vgId, the maximum and minimum WAL offsets, and the currently consumed offset.
@ -1243,11 +1240,6 @@ All of TDengine's asynchronous APIs use a non-blocking call model; applications can use multiple
- cb: [Input] A callback function pointer that is invoked when the commit completes.
- param: [Input] A user-defined parameter passed to cb in the callback.
**Notes**
- The commit interfaces come in two kinds, each with synchronous and asynchronous forms:
- First kind: commit by message, committing the progress carried in the message; if the message is NULL, commit the current progress of all vgroups consumed by this consumer: tmq_commit_sync/tmq_commit_async
- Second kind: commit a specific offset of a specific vgroup of a topic: tmq_commit_offset_sync/tmq_commit_offset_async
- `int64_t tmq_position(tmq_t *tmq, const char *pTopicName, int32_t vgId)`
- **Interface Description**: Gets the current consumption position, i.e., the position right after the last consumed data.
- tmq: [Input] Points to a valid tmq_t structure pointer, representing a TMQ consumer object.
@ -1255,7 +1247,7 @@ All of TDengine's asynchronous APIs use a non-blocking call model; applications can use multiple
- vgId: [Input] The ID of the virtual group (vgroup).
- **Return Value**: `>=0`: success, returns an int64_t value indicating the offset of the current position. `<0`: failure; the return value is the error code, and the function `char *tmq_err2str(int32_t code)` can be called for more detailed error information.
- `int32_t tmq_offset_seek(tmq_t *tmq, const char *pTopicName, int32_t vgId, int64_t offset)`
- **Interface Description**: Sets the offset of the TMQ consumer object for a specific topic and vgroup to the given position.
- tmq: [Input] Points to a valid tmq_t structure pointer, representing a TMQ consumer object.
- pTopicName: [Input] The name of the topic whose current position is queried.
@ -1288,14 +1280,14 @@ All of TDengine's asynchronous APIs use a non-blocking call model; applications can use multiple
- res: [Input] Points to a valid TAOS_RES structure pointer containing a message polled from the TMQ consumer.
- **Return Value**: Returns a tmq_res_t enum value indicating the message type.
- tmq_res_t indicates the type of the consumed data and is defined as follows:
```
typedef enum tmq_res_t {
  TMQ_RES_INVALID = -1,   // invalid
  TMQ_RES_DATA = 1,       // data
  TMQ_RES_TABLE_META = 2, // metadata
  TMQ_RES_METADATA = 3    // both metadata and data, i.e., automatic table creation
} tmq_res_t;
```
- `const char *tmq_get_topic_name(TAOS_RES *res)`
- **Interface Description**: Gets the name of the topic to which a message obtained from the TMQ consumer belongs.
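As a minimal sketch of how the two accessors above are typically combined (assuming `msg` is a non-NULL result returned by tmq_consumer_poll; the output format is illustrative):

```c
// Dispatch a polled message by its type and report the topic it came from.
tmq_res_t type = tmq_get_res_type(msg);
const char *topic = tmq_get_topic_name(msg);
switch (type) {
  case TMQ_RES_DATA:       printf("data message from %s\n", topic); break;
  case TMQ_RES_TABLE_META: printf("metadata message from %s\n", topic); break;
  case TMQ_RES_METADATA:   printf("metadata + data (auto table creation) from %s\n", topic); break;
  default:                 printf("invalid message from %s\n", topic); break;
}
```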

View File

@ -33,6 +33,8 @@ The TDengine JDBC driver implementation stays as consistent as possible with relational database drivers
| taos-jdbcdriver Version | Major Changes | TDengine Version |
| ------------------| ---------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------- |
| 3.5.3 | Support unsigned data types in WebSocket connections | - |
| 3.5.2 | Fixed a WebSocket result-set release bug | - |
| 3.5.1 | Fixed the timestamp object type issue in data subscription | - |
| 3.5.0 | 1. Optimized WebSocket connection parameter-binding performance, supporting binary data in parameter-binding queries <br/> 2. Optimized WebSocket connection performance for small queries <br/> 3. Added support for setting the time zone and application info on WebSocket connections | 3.3.5.0 and higher |
| 3.4.0 | 1. Replaced the fastjson library with jackson <br/> 2. WebSocket uses a separate protocol identifier <br/> 3. Optimized background pulling-thread usage to avoid timeouts caused by user misuse | - |
@ -127,24 +129,27 @@ The JDBC connector may report four kinds of error codes:
TDengine currently supports timestamp, numeric, character, and boolean types; the corresponding Java type conversions are as follows:
| TDengine DataType | JDBCType |
| ----------------- | ------------------ |
| TIMESTAMP | java.sql.Timestamp |
| INT | java.lang.Integer |
| BIGINT | java.lang.Long |
| FLOAT | java.lang.Float |
| DOUBLE | java.lang.Double |
| SMALLINT | java.lang.Short |
| TINYINT | java.lang.Byte |
| BOOL | java.lang.Boolean |
| BINARY | byte array |
| NCHAR | java.lang.String |
| JSON | java.lang.String |
| VARBINARY | byte[] |
| GEOMETRY | byte[] |
| TDengine DataType | JDBCType | Remark |
| ----------------- | -------------------- | ---------------------------------------- |
| TIMESTAMP | java.sql.Timestamp | |
| BOOL | java.lang.Boolean | |
| TINYINT | java.lang.Byte | |
| TINYINT UNSIGNED | java.lang.Short | only supported in WebSocket connections |
| SMALLINT | java.lang.Short | |
| SMALLINT UNSIGNED | java.lang.Integer | only supported in WebSocket connections |
| INT | java.lang.Integer | |
| INT UNSIGNED | java.lang.Long | only supported in WebSocket connections |
| BIGINT | java.lang.Long | |
| BIGINT UNSIGNED | java.math.BigInteger | only supported in WebSocket connections |
| FLOAT | java.lang.Float | |
| DOUBLE | java.lang.Double | |
| BINARY | byte array | |
| NCHAR | java.lang.String | |
| JSON | java.lang.String | only supported in tags |
| VARBINARY | byte[] | |
| GEOMETRY | byte[] | |
**Note**: The JSON type is only supported in tags.
Due to historical reasons, the BINARY type in TDengine is not truly binary data at the underlying level and is no longer recommended; please use the VARBINARY type instead.
**Note**: Due to historical reasons, the BINARY type in TDengine is not truly binary data at the underlying level and is no longer recommended; please use the VARBINARY type instead.
The GEOMETRY type is binary data in little-endian byte order, complying with the WKB standard. For details, see [Data Types](../../taos-sql/data-type/#数据类型)
For the WKB standard, see [Well-Known Binary (WKB)](https://libgeos.org/specifications/wkb/)
With the Java connector, you can use the jts library to conveniently create GEOMETRY objects, serialize them, and write them to TDengine; see this [Geometry example](https://github.com/taosdata/TDengine/blob/main/docs/examples/java/src/main/java/com/taos/example/GeometryDemo.java)

View File

@ -24,6 +24,7 @@ import RequestId from "./_request_id.mdx";
| Connector Version | Major Changes | TDengine Version |
|:-------------|:---------------------------|:--------------|
| 3.1.5 | Fixed a WebSocket protocol encoding length error for Chinese characters | - |
| 3.1.4 | Improved WebSocket query and write performance | 3.3.2.0 and higher |
| 3.1.3 | Supported WebSocket auto-reconnect | - |
| 3.1.2 | Fixed schemaless resource release | - |

View File

@ -118,7 +118,7 @@ TDengine ODBC supports two ways of connecting to TDengine databases: WebSocket connections and
| v1.1.0 | 1. Supports view functionality;<br/>2. Supports VARBINARY/GEOMETRY data types;<br/>3. Supports the ODBC 32-bit WebSocket connection method (Enterprise Edition only);<br/>4. Supports compatibility-adaptation options for industrial software such as KingSCADA and Kepware in the ODBC data source configuration dialog (Enterprise Edition only); | 3.3.3.0 and higher |
| v1.0.2 | Supports CP1252 character encoding; | 3.2.3.0 and higher |
| v1.0.1 | 1. Supports DSN settings for BI mode; in BI mode the TDengine database does not return system database or supertable subtable information;<br/>2. Refactored the character-set conversion module, improving read and write performance;<br/>3. Changed the default connection method in the ODBC data source configuration dialog to "WebSocket";<br/>4. Added a "Test Connection" control to the ODBC data source configuration dialog;<br/>5. ODBC data source configuration supports Chinese/English interfaces; | - |
| v1.0.0.0 | Initial release; supports interacting with Tdengine databases to read and write data; see the "API Reference" section for details | 3.2.2.0 and higher |
| v1.0.0.0 | Initial release; supports interacting with TDengine databases to read and write data; see the "API Reference" section for details | 3.2.2.0 and higher |
## Data Type Mapping

View File

@ -253,7 +253,7 @@ Error codes related to network unavailability in the C interface:
- code: (`int`) 0 means success.
- column_meta: (`[][3]any`) Column information; each column is described by three values: column name (string), column type (string), and type length (int).
- rows: (`int`) Number of data rows returned.
- data: (`[][]any`) The actual data content (the time format supports RFC3339 only; result sets are in time zone 0).
- data: (`[][]any`) The actual data content (the time format supports RFC3339 only; result sets are in time zone 0, and when tz is specified, the corresponding time zone is returned).
Column types use the following strings:
@ -435,7 +434,6 @@ curl http://<fqnd>:<port>/rest/login/<username>/<password>
Here, `fqdn` is the FQDN or IP address of the TDengine database, `port` is the TDengine service port, `username` is the database username, and `password` is the database password. The return value is in JSON format, with fields as follows:
- status: flag of the request result.
- code: return code.
- desc: authorization code.

View File

@ -554,5 +554,6 @@ description: a list of TDengine server error codes with detailed descriptions
| 0x80004000 | Invalid message | The subscribed data is illegal; generally does not occur | See the client-side error log for details |
| 0x80004001 | Consumer mismatch | The vnode requested for subscription and the reassigned vnode are inconsistent; usually occurs when new consumers join the same consumer group | Internal error, not exposed to users |
| 0x80004002 | Consumer closed | The consumer no longer exists | Check whether it has already been closed |
| 0x80004017 | Invalid status, please subscribe topic first | The data-subscription state is invalid | Data was polled directly without calling subscribe first |
| 0x80004100 | Stream task not exist | The stream computing task does not exist | See the server-side error log for details |

View File

@ -191,7 +191,7 @@ The data stored by TDengine includes collected time-series data as well as metadata for databases, tables, and so on
When managing massive amounts of data, horizontal scaling typically requires data sharding and data partitioning strategies. TDengine implements sharding through vnodes and partitions time-series data by dividing data files into time ranges.
vnodes not only handle the writing, querying, and computation of time-series data, but also play important roles in load balancing, data recovery, and supporting heterogeneous environments. To achieve these goals, TDengine splits a dnode into multiple vnodes according to its compute and storage resources. The management of these vnodes is completely transparent to applications and is handled automatically by TDengine.
For a single data collection point, regardless of its data volume, one vnode has sufficient compute and storage resources to handle it (for example, if one 16 B record is generated per second, the raw data produced in a year is still under 0.5 GB). TDengine therefore stores all data of a table (one data collection point) in a single vnode, never spreading the data of one collection point across two or more dnodes. Meanwhile, a vnode can store data for multiple data collection points (tables), holding at most 1 million tables. By design, all tables in a vnode belong to the same database.
@ -371,4 +371,4 @@ alter dnode 1 "/mnt/disk2/taos 1";
3. Tiered storage does not currently support removing disks that are already mounted.
4. Tier-0 storage must have at least one mount point with disable_create_new_file set to 0; tier-1 and tier-2 storage have no such restriction.
:::

View File

@ -18,9 +18,9 @@ The execution process of taosc can be briefly summarized as: parse the SQL into an AST, generate the logical
### mnode
In a TDengine cluster, supertable information and the basic information of the metadata repository are carefully managed. The mnode serves as the metadata server, responding to metadata queries from taosc. When taosc needs metadata such as vgroup information, it sends a request to the mnode; upon receiving the request, the mnode promptly returns the required information, ensuring that taosc can proceed smoothly.
In addition, the mnode receives heartbeat messages from taosc. These heartbeats help maintain the connection state between taosc and the mnode, keeping communication between the two unobstructed.
### vnode

View File

@ -24,6 +24,10 @@ Download links for the TDengine 3.x installation packages are as follows:
import Release from "/components/ReleaseV3";
## 3.3.5.2
<Release type="tdengine" version="3.3.5.2" />
## 3.3.5.0
<Release type="tdengine" version="3.3.5.0" />

View File

@ -0,0 +1,42 @@
---
title: TDengine 3.3.5.2 Release Notes
sidebar_label: 3.3.5.2
description: Version 3.3.5.2 Notes
---
## Features
1. feat: the taosX MQTT data source supports creating multiple supertables from a template
## Enhancements
1. enh: improved the taosX error message when the database is invalid
2. enh: use Poetry group dependencies and reduce Python connector installation dependencies [#251](https://github.com/taosdata/taos-connector-python/issues/251)
3. enh: optimized taosX incremental backup and restore
4. enh: during multi-level storage data migration, if the migration takes too long, it may cause the vnode to switch leader
5. enh: adjusted the systemctl strategy for supervising the taosd process; if three consecutive restarts fail within 60 seconds, the next restart is delayed until 900 seconds later
## Fixes
1. fix: the maxRetryWaitTime parameter is used to control the client's maximum reconnection timeout when the cluster cannot provide services, but it did not take effect when a Sync timeout error was encountered
2. fix: support subscribing to the updated tag value immediately after modifying a subtable's tag value
3. fix: the tmq_consumer_poll data-subscription function did not return an error code when the call failed
4. fix: taosd could crash when more than 100 views were created and the show views command was executed
5. fix: when writing data with stmt2, the write failed if not all data columns were bound
6. fix: when writing data with stmt2, the write failed if the database name or table name was enclosed in backticks
7. fix: taosd could crash when a vnode was closed while file-merge tasks were still in progress
8. fix: frequently executing the drop table with `tb_uid` statement could cause a deadlock in taosd
9. fix: a potential deadlock during log-file rotation
10. fix: prohibit creating databases with the same names as system databases (information_schema, performance_schema)
11. fix: when the inner query of a nested query comes from a supertable, the sorting information cannot be pushed up
12. fix: incorrect error reporting when writing Geometry data that does not conform to topological specifications through the STMT interface
13. fix: taosd could crash on errors when a query used the percentile function together with a session window
14. fix: the inability to dynamically modify system parameters
15. fix: occasional Translict transaction errors in subscription synchronization
16. fix: an error was returned when the same consumer executed an unsubscribe operation and then immediately attempted to subscribe to a different topic
17. fix: the CVE-2022-28948 security issue in the Go connector
18. fix: querying a view failed when a subquery in the view contained an ORDER BY clause with an alias and the query function itself also had an alias
19. fix: changing a database from single replica to multiple replicas failed when metadata generated by earlier versions and no longer used by the new version existed
20. fix: column names were not correctly copied when using SELECT * FROM subqueries
21. fix: max/min functions on string-type data produced inaccurate results and could crash taosd
22. fix: stream computing does not support the HAVING clause, but no error was reported at creation time
23. fix: the server version displayed by taos shell was inaccurate, e.g. unable to correctly distinguish the Community Edition from the Enterprise Edition
24. fix: taosd could crash in certain specific query scenarios when JOIN and CAST were used together
View File

@ -4,6 +4,7 @@ sidebar_label: Release Notes
description: release notes for each version
---
[3.3.5.2](./3.3.5.2)
[3.3.5.0](./3.3.5.0)
[3.3.4.8](./3.3.4.8)
[3.3.4.3](./3.3.4.3)

View File

@ -86,7 +86,7 @@ int32_t taosAnalBufWriteDataEnd(SAnalyticBuf *pBuf);
int32_t taosAnalBufClose(SAnalyticBuf *pBuf);
void taosAnalBufDestroy(SAnalyticBuf *pBuf);
const char *taosAnalAlgoStr(EAnalAlgoType algoType);
const char *taosAnalysisAlgoType(EAnalAlgoType algoType);
EAnalAlgoType taosAnalAlgoInt(const char *algoName);
const char *taosAnalAlgoUrlStr(EAnalAlgoType algoType);

View File

@ -123,6 +123,10 @@ enum {
TMQ_MSG_TYPE__POLL_BATCH_META_RSP,
};
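// display names for the TMQ message types defined in the enum above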
static char* tmqMsgTypeStr[] = {
"data", "meta", "ask ep", "meta data", "wal info", "batch meta"
};
enum {
STREAM_INPUT__DATA_SUBMIT = 1,
STREAM_INPUT__DATA_BLOCK,

View File

@ -57,9 +57,9 @@ const static uint8_t BIT2_MAP[4] = {0b11111100, 0b11110011, 0b11001111, 0b001111
#define ONE ((uint8_t)1)
#define THREE ((uint8_t)3)
#define DIV_8(i) ((i) >> 3)
#define MOD_8(i) ((i) & 7)
#define MOD_8(i) ((i)&7)
#define DIV_4(i) ((i) >> 2)
#define MOD_4(i) ((i) & 3)
#define MOD_4(i) ((i)&3)
#define MOD_4_TIME_2(i) (MOD_4(i) << 1)
#define BIT1_SIZE(n) (DIV_8((n)-1) + 1)
#define BIT2_SIZE(n) (DIV_4((n)-1) + 1)
@ -201,8 +201,10 @@ int32_t tColDataSortMerge(SArray **arr);
int32_t tColDataAddValueByDataBlock(SColData *pColData, int8_t type, int32_t bytes, int32_t nRows, char *lengthOrbitmap,
char *data);
// for encode/decode
int32_t tPutColData(uint8_t version, uint8_t *pBuf, SColData *pColData);
int32_t tGetColData(uint8_t version, uint8_t *pBuf, SColData *pColData);
int32_t tEncodeColData(uint8_t version, SEncoder *pEncoder, SColData *pColData);
int32_t tDecodeColData(uint8_t version, SDecoder *pDecoder, SColData *pColData);
int32_t tEncodeRow(SEncoder *pEncoder, SRow *pRow);
int32_t tDecodeRow(SDecoder *pDecoder, SRow **ppRow);
// STRUCT ================================
struct STColumn {

View File

@ -34,6 +34,9 @@ extern "C" {
#define GLOBAL_CONFIG_FILE_VERSION 1
#define LOCAL_CONFIG_FILE_VERSION 1
#define RPC_MEMORY_USAGE_RATIO 0.1
#define QUEUE_MEMORY_USAGE_RATIO 0.6
typedef enum {
DND_CA_SM4 = 1,
} EEncryptAlgor;
@ -110,6 +113,7 @@ extern int32_t tsNumOfQnodeFetchThreads;
extern int32_t tsNumOfSnodeStreamThreads;
extern int32_t tsNumOfSnodeWriteThreads;
extern int64_t tsQueueMemoryAllowed;
extern int64_t tsApplyMemoryAllowed;
extern int32_t tsRetentionSpeedLimitMB;
extern int32_t tsNumOfCompactThreads;

View File

@ -261,6 +261,7 @@
TD_DEF_MSG_TYPE(TDMT_MND_UPDATE_DNODE_INFO, "update-dnode-info", NULL, NULL)
TD_DEF_MSG_TYPE(TDMT_MND_AUDIT, "audit", NULL, NULL)
TD_DEF_MSG_TYPE(TDMT_MND_CONFIG, "init-config", NULL, NULL)
TD_DEF_MSG_TYPE(TDMT_MND_CONFIG_SDB, "config-sdb", NULL, NULL)
TD_CLOSE_MSG_SEG(TDMT_END_MND_MSG)
TD_NEW_MSG_SEG(TDMT_VND_MSG) // 2<<8

View File

@ -268,7 +268,7 @@ typedef struct SStoreMeta {
const void* (*extractTagVal)(const void* tag, int16_t type, STagVal* tagVal); // todo remove it
int32_t (*getTableUidByName)(void* pVnode, char* tbName, uint64_t* uid);
int32_t (*getTableTypeByName)(void* pVnode, char* tbName, ETableType* tbType);
int32_t (*getTableTypeSuidByName)(void* pVnode, char* tbName, ETableType* tbType, uint64_t* suid);
int32_t (*getTableNameByUid)(void* pVnode, uint64_t uid, char* tbName);
bool (*isTableExisted)(void* pVnode, tb_uid_t uid);

View File

@ -313,9 +313,9 @@ typedef struct SOrderByExprNode {
} SOrderByExprNode;
typedef struct SLimitNode {
ENodeType type; // QUERY_NODE_LIMIT
int64_t limit;
int64_t offset;
ENodeType type; // QUERY_NODE_LIMIT
SValueNode* limit;
SValueNode* offset;
} SLimitNode;
typedef struct SStateWindowNode {
@ -681,6 +681,7 @@ int32_t nodesValueNodeToVariant(const SValueNode* pNode, SVariant* pVal);
int32_t nodesMakeValueNodeFromString(char* literal, SValueNode** ppValNode);
int32_t nodesMakeValueNodeFromBool(bool b, SValueNode** ppValNode);
int32_t nodesMakeValueNodeFromInt32(int32_t value, SNode** ppNode);
int32_t nodesMakeValueNodeFromInt64(int64_t value, SNode** ppNode);
char* nodesGetFillModeString(EFillMode mode);
int32_t nodesMergeConds(SNode** pDst, SNodeList** pSrc);

View File

@ -374,7 +374,7 @@ void* getTaskPoolWorkerCb();
((_code) == TSDB_CODE_SYN_NOT_LEADER || (_code) == TSDB_CODE_SYN_INTERNAL_ERROR || \
(_code) == TSDB_CODE_VND_STOPPED || (_code) == TSDB_CODE_APP_IS_STARTING || (_code) == TSDB_CODE_APP_IS_STOPPING)
#define SYNC_SELF_LEADER_REDIRECT_ERROR(_code) \
((_code) == TSDB_CODE_SYN_NOT_LEADER || (_code) == TSDB_CODE_SYN_RESTORING || (_code) == TSDB_CODE_SYN_INTERNAL_ERROR)
((_code) == TSDB_CODE_SYN_NOT_LEADER || (_code) == TSDB_CODE_SYN_RESTORING || (_code) == TSDB_CODE_SYN_INTERNAL_ERROR || (_code) == TSDB_CODE_SYN_TIMEOUT)
#define SYNC_OTHER_LEADER_REDIRECT_ERROR(_code) ((_code) == TSDB_CODE_MNODE_NOT_FOUND)
#define NO_RET_REDIRECT_ERROR(_code) \

View File

@ -176,7 +176,9 @@ typedef struct SSyncFSM {
void (*FpRollBackCb)(const struct SSyncFSM* pFsm, SRpcMsg* pMsg, SFsmCbMeta* pMeta);
void (*FpRestoreFinishCb)(const struct SSyncFSM* pFsm, const SyncIndex commitIdx);
void (*FpAfterRestoredCb)(const struct SSyncFSM* pFsm, const SyncIndex commitIdx);
void (*FpReConfigCb)(const struct SSyncFSM* pFsm, SRpcMsg* pMsg, SReConfigCbMeta* pMeta);
void (*FpLeaderTransferCb)(const struct SSyncFSM* pFsm, SRpcMsg* pMsg, SFsmCbMeta* pMeta);
bool (*FpApplyQueueEmptyCb)(const struct SSyncFSM* pFsm);
int32_t (*FpApplyQueueItems)(const struct SSyncFSM* pFsm);

View File

@ -47,6 +47,7 @@ const char* terrstr();
char* taosGetErrMsgReturn();
char* taosGetErrMsg();
void taosClearErrMsg();
int32_t* taosGetErrno();
int32_t* taosGetErrln();
int32_t taosGetErrSize();
@ -1013,6 +1014,7 @@ int32_t taosGetErrSize();
#define TSDB_CODE_TMQ_REPLAY_NOT_SUPPORT TAOS_DEF_ERROR_CODE(0, 0x4014)
#define TSDB_CODE_TMQ_NO_TABLE_QUALIFIED TAOS_DEF_ERROR_CODE(0, 0x4015)
#define TSDB_CODE_TMQ_NO_NEED_REBALANCE TAOS_DEF_ERROR_CODE(0, 0x4016)
#define TSDB_CODE_TMQ_INVALID_STATUS TAOS_DEF_ERROR_CODE(0, 0x4017)
// stream
#define TSDB_CODE_STREAM_TASK_NOT_EXIST TAOS_DEF_ERROR_CODE(0, 0x4100)

View File

@ -456,13 +456,13 @@ typedef enum ELogicConditionType {
#define TSDB_DB_SCHEMALESS_OFF 0
#define TSDB_DEFAULT_DB_SCHEMALESS TSDB_DB_SCHEMALESS_OFF
#define TSDB_MIN_STT_TRIGGER 1
#ifdef TD_ENTERPRISE
// #ifdef TD_ENTERPRISE
#define TSDB_MAX_STT_TRIGGER 16
#define TSDB_DEFAULT_SST_TRIGGER 2
#else
#define TSDB_MAX_STT_TRIGGER 1
#define TSDB_DEFAULT_SST_TRIGGER 1
#endif
// #else
// #define TSDB_MAX_STT_TRIGGER 1
// #define TSDB_DEFAULT_SST_TRIGGER 1
// #endif
#define TSDB_STT_TRIGGER_ARRAY_SIZE 16 // maximum of TSDB_MAX_STT_TRIGGER of TD_ENTERPRISE and TD_COMMUNITY
#define TSDB_MIN_HASH_PREFIX (2 - TSDB_TABLE_NAME_LEN)
#define TSDB_MAX_HASH_PREFIX (TSDB_TABLE_NAME_LEN - 2)
@ -558,6 +558,7 @@ typedef enum ELogicConditionType {
#define TSDB_QUERY_CLEAR_TYPE(x, _type) ((x) &= (~_type))
#define TSDB_QUERY_RESET_TYPE(x) ((x) = TSDB_QUERY_TYPE_NON_TYPE)
#define TSDB_ORDER_NONE 0
#define TSDB_ORDER_ASC 1
#define TSDB_ORDER_DESC 2
@ -664,6 +665,14 @@ typedef enum {
ANAL_ALGO_TYPE_END,
} EAnalAlgoType;
typedef enum {
TSDB_VERSION_UNKNOWN = 0,
TSDB_VERSION_OSS,
TSDB_VERSION_ENTERPRISE,
TSDB_VERSION_CLOUD,
TSDB_VERSION_END,
} EVersionType;
#define MIN_RESERVE_MEM_SIZE 1024 // MB
#ifdef __cplusplus

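A typical consumer of the new EVersionType enum would map it to a display label; the helper below is illustrative, not part of this diff:

    static const char* versionTypeStr(EVersionType type) {
      switch (type) {
        case TSDB_VERSION_OSS:        return "community";
        case TSDB_VERSION_ENTERPRISE: return "enterprise";
        case TSDB_VERSION_CLOUD:      return "cloud";
        default:                      return "unknown";
      }
    }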

@ -118,6 +118,7 @@ static int32_t tDecodeI64v(SDecoder* pCoder, int64_t* val);
static int32_t tDecodeFloat(SDecoder* pCoder, float* val);
static int32_t tDecodeDouble(SDecoder* pCoder, double* val);
static int32_t tDecodeBool(SDecoder* pCoder, bool* val);
static int32_t tDecodeBinaryWithSize(SDecoder* pCoder, uint32_t size, uint8_t** val);
static int32_t tDecodeBinary(SDecoder* pCoder, uint8_t** val, uint32_t* len);
static int32_t tDecodeCStrAndLen(SDecoder* pCoder, char** val, uint32_t* len);
static int32_t tDecodeCStr(SDecoder* pCoder, char** val);
@ -404,6 +405,19 @@ static int32_t tDecodeBool(SDecoder* pCoder, bool* val) {
return 0;
}
static FORCE_INLINE int32_t tDecodeBinaryWithSize(SDecoder* pCoder, uint32_t size, uint8_t** val) {
if (pCoder->pos + size > pCoder->size) {
TAOS_RETURN(TSDB_CODE_OUT_OF_RANGE);
}
if (val) {
*val = pCoder->data + pCoder->pos;
}
pCoder->pos += size;
return 0;
}
static FORCE_INLINE int32_t tDecodeBinary(SDecoder* pCoder, uint8_t** val, uint32_t* len) {
uint32_t length = 0;
@ -412,21 +426,12 @@ static FORCE_INLINE int32_t tDecodeBinary(SDecoder* pCoder, uint8_t** val, uint3
*len = length;
}
if (pCoder->pos + length > pCoder->size) {
TAOS_RETURN(TSDB_CODE_OUT_OF_RANGE);
}
if (val) {
*val = pCoder->data + pCoder->pos;
}
pCoder->pos += length;
return 0;
TAOS_RETURN(tDecodeBinaryWithSize(pCoder, length, val));
}
static FORCE_INLINE int32_t tDecodeCStrAndLen(SDecoder* pCoder, char** val, uint32_t* len) {
TAOS_CHECK_RETURN(tDecodeBinary(pCoder, (uint8_t**)val, len));
if (*len > 0) { // notice!!! *len maybe 0
if (*len > 0) { // notice!!! *len maybe 0
(*len) -= 1;
}
return 0;
@ -497,7 +502,7 @@ static FORCE_INLINE int32_t tDecodeBinaryAlloc32(SDecoder* pCoder, void** val, u
static FORCE_INLINE int32_t tDecodeCStrAndLenAlloc(SDecoder* pCoder, char** val, uint64_t* len) {
TAOS_CHECK_RETURN(tDecodeBinaryAlloc(pCoder, (void**)val, len));
if (*len > 0){
if (*len > 0) {
(*len) -= 1;
}
return 0;

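tDecodeBinary now delegates its bounds check to tDecodeBinaryWithSize, which takes the length from the caller instead of reading a length prefix from the stream. A hedged sketch of the fixed-width case (the 16-byte field is an example of ours):

    uint8_t* pDigest = NULL;
    /* Decode 16 raw bytes whose size is fixed by protocol, so nothing
       is read from the stream to learn the length. */
    if (tDecodeBinaryWithSize(pDecoder, 16, &pDigest) != 0) {
      /* TSDB_CODE_OUT_OF_RANGE: fewer than 16 bytes left in the buffer */
    }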

@ -119,6 +119,11 @@ void taosLogCrashInfo(char *nodeType, char *pMsg, int64_t msgLen, int signum, vo
void taosReadCrashInfo(char *filepath, char **pMsg, int64_t *pMsgLen, TdFilePtr *pFd);
void taosReleaseCrashLogFile(TdFilePtr pFile, bool truncateFile);
int32_t initCrashLogWriter();
void checkAndPrepareCrashInfo();
bool reportThreadSetQuit();
void writeCrashLogToFile(int signum, void *sigInfo, char *nodeType, int64_t clusterId, int64_t startTime);
// clang-format off
#define uFatal(...) { if (uDebugFlag & DEBUG_FATAL) { taosPrintLog("UTL FATAL", DEBUG_FATAL, tsLogEmbedded ? 255 : uDebugFlag, __VA_ARGS__); }}
#define uError(...) { if (uDebugFlag & DEBUG_ERROR) { taosPrintLog("UTL ERROR ", DEBUG_ERROR, tsLogEmbedded ? 255 : uDebugFlag, __VA_ARGS__); }}

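The crash-log helpers are meant to be driven from a signal handler; the wiring below is a sketch, with the handler name and the two globals assumed rather than taken from this diff:

    #include <signal.h>

    static int64_t gClusterId = 0;   /* assumed to be populated at startup */
    static int64_t gStartTime = 0;

    static void crashSigHandler(int signum, siginfo_t* info, void* ctx) {
      writeCrashLogToFile(signum, info, "dnode", gClusterId, gStartTime);
    }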

@ -55,6 +55,7 @@ typedef struct {
typedef enum {
DEF_QITEM = 0,
RPC_QITEM = 1,
APPLY_QITEM = 2,
} EQItype;
typedef void (*FItem)(SQueueInfo *pInfo, void *pItem);

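APPLY_QITEM gives sync-apply queue entries their own accounting class alongside DEF_QITEM and RPC_QITEM. A sketch of allocating an item under the new class; the taosAllocateQitem call shape is our assumption about the surrounding queue API, not something this hunk shows:

    void*   pItem = NULL;
    int32_t code = taosAllocateQitem(sizeof(SRpcMsg), APPLY_QITEM, 0, &pItem);
    if (code == 0) {
      /* fill pItem, then hand it to the apply queue */
    }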

@ -15,7 +15,7 @@ TimeoutStartSec=0
StandardOutput=null
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
StartLimitInterval=900s
[Install]
WantedBy=multi-user.target


@ -16,6 +16,7 @@ verType=$7
script_dir="$(dirname $(readlink -f $0))"
top_dir="$(readlink -f ${script_dir}/../..)"
pkg_dir="${top_dir}/debworkroom"
taosx_dir="$(readlink -f ${script_dir}/../../../../taosx)"
#echo "curr_dir: ${curr_dir}"
#echo "top_dir: ${top_dir}"
@ -81,11 +82,11 @@ fi
if [ -f "${compile_dir}/test/cfg/taoskeeper.service" ]; then
cp ${compile_dir}/test/cfg/taoskeeper.service ${pkg_dir}${install_home_path}/cfg || :
fi
if [ -f "${compile_dir}/../../../explorer/target/taos-explorer.service" ]; then
cp ${compile_dir}/../../../explorer/target/taos-explorer.service ${pkg_dir}${install_home_path}/cfg || :
if [ -f "${taosx_dir}/explorer/server/examples/explorer.service" ]; then
cp ${taosx_dir}/explorer/server/examples/explorer.service ${pkg_dir}${install_home_path}/cfg/taos-explorer.service || :
fi
if [ -f "${compile_dir}/../../../explorer/server/example/explorer.toml" ]; then
cp ${compile_dir}/../../../explorer/server/example/explorer.toml ${pkg_dir}${install_home_path}/cfg || :
if [ -f "${taosx_dir}/explorer/server/examples/explorer.toml" ]; then
cp ${taosx_dir}/explorer/server/examples/explorer.toml ${pkg_dir}${install_home_path}/cfg || :
fi
# cp ${taoskeeper_binary} ${pkg_dir}${install_home_path}/bin
@ -113,8 +114,8 @@ if [ -f "${compile_dir}/build/bin/taoskeeper" ]; then
cp ${compile_dir}/build/bin/taoskeeper ${pkg_dir}${install_home_path}/bin ||:
fi
if [ -f "${compile_dir}/../../../explorer/target/release/taos-explorer" ]; then
cp ${compile_dir}/../../../explorer/target/release/taos-explorer ${pkg_dir}${install_home_path}/bin ||:
if [ -f "${taosx_dir}/target/release/taos-explorer" ]; then
cp ${taosx_dir}/target/release/taos-explorer ${pkg_dir}${install_home_path}/bin ||:
fi
cp ${compile_dir}/build/bin/taos ${pkg_dir}${install_home_path}/bin


@ -17,6 +17,7 @@ verType=$7
script_dir="$(dirname $(readlink -f $0))"
top_dir="$(readlink -f ${script_dir}/../..)"
pkg_dir="${top_dir}/rpmworkroom"
taosx_dir="$(readlink -f ${script_dir}/../../../../taosx)"
spec_file="${script_dir}/tdengine.spec"
#echo "curr_dir: ${curr_dir}"
@ -76,7 +77,7 @@ cd ${pkg_dir}
${csudo}mkdir -p BUILD BUILDROOT RPMS SOURCES SPECS SRPMS
${csudo}rpmbuild --define="_version ${tdengine_ver}" --define="_topdir ${pkg_dir}" --define="_compiledir ${compile_dir}" -bb ${spec_file}
${csudo}rpmbuild --define="_version ${tdengine_ver}" --define="_topdir ${pkg_dir}" --define="_compiledir ${compile_dir}" --define="_taosxdir ${taosx_dir}" -bb ${spec_file}
# copy rpm package to output_dir, and modify package name, then clean temp dir
#${csudo}cp -rf RPMS/* ${output_dir}


@ -76,12 +76,12 @@ if [ -f %{_compiledir}/test/cfg/taoskeeper.service ]; then
cp %{_compiledir}/test/cfg/taoskeeper.service %{buildroot}%{homepath}/cfg ||:
fi
if [ -f %{_compiledir}/../../../explorer/target/taos-explorer.service ]; then
cp %{_compiledir}/../../../explorer/target/taos-explorer.service %{buildroot}%{homepath}/cfg ||:
if [ -f %{_taosxdir}/explorer/server/examples/explorer.service ]; then
cp %{_taosxdir}/explorer/server/examples/explorer.service %{buildroot}%{homepath}/cfg/taos-explorer.service ||:
fi
if [ -f %{_compiledir}/../../../explorer/server/examples/explorer.toml ]; then
cp %{_compiledir}/../../../explorer/server/examples/explorer.toml %{buildroot}%{homepath}/cfg ||:
if [ -f %{_taosxdir}/explorer/server/examples/explorer.toml ]; then
cp %{_taosxdir}/explorer/server/examples/explorer.toml %{buildroot}%{homepath}/cfg ||:
fi
#cp %{_compiledir}/../packaging/rpm/taosd %{buildroot}%{homepath}/init.d
@ -100,8 +100,8 @@ cp %{_compiledir}/../../enterprise/packaging/stop-all.sh %{buildroot}%{homepath
sed -i "s/versionType=\"enterprise\"/versionType=\"community\"/g" %{buildroot}%{homepath}/bin/start-all.sh
sed -i "s/versionType=\"enterprise\"/versionType=\"community\"/g" %{buildroot}%{homepath}/bin/stop-all.sh
if [ -f %{_compiledir}/../../../explorer/target/release/taos-explorer ]; then
cp %{_compiledir}/../../../explorer/target/release/taos-explorer %{buildroot}%{homepath}/bin
if [ -f %{_taosxdir}/target/release/taos-explorer ]; then
cp %{_taosxdir}/target/release/taos-explorer %{buildroot}%{homepath}/bin
fi
if [ -f %{_compiledir}/build/bin//taoskeeper ]; then


@ -174,6 +174,7 @@ help() {
echo " config_qemu_guest_agent - Configure QEMU guest agent"
echo " deploy_docker - Deploy Docker"
echo " deploy_docker_compose - Deploy Docker Compose"
echo " install_trivy - Install Trivy"
echo " clone_enterprise - Clone the enterprise repository"
echo " clone_community - Clone the community repository"
echo " clone_taosx - Clone TaosX repository"
@ -316,6 +317,17 @@ add_config_if_not_exist() {
grep -qF -- "$config" "$file" || echo "$config" >> "$file"
}
# Function to check if a tool is installed
check_installed() {
local command_name="$1"
if command -v "$command_name" >/dev/null 2>&1; then
echo "$command_name is already installed. Skipping installation."
return 0
else
echo "$command_name is not installed."
return 1
fi
}
# General error handling function
check_status() {
local message_on_failure="$1"
@ -584,9 +596,12 @@ centos_skip_check() {
# Deploy cmake
deploy_cmake() {
# Check if cmake is installed
if command -v cmake >/dev/null 2>&1; then
echo "Cmake is already installed. Skipping installation."
cmake --version
# if command -v cmake >/dev/null 2>&1; then
# echo "Cmake is already installed. Skipping installation."
# cmake --version
# return
# fi
if check_installed "cmake"; then
return
fi
install_package "cmake3"
@ -1058,11 +1073,13 @@ deploy_go() {
GOPATH_DIR="/root/go"
# Check if Go is installed
if command -v go >/dev/null 2>&1; then
echo "Go is already installed. Skipping installation."
# if command -v go >/dev/null 2>&1; then
# echo "Go is already installed. Skipping installation."
# return
# fi
if check_installed "go"; then
return
fi
# Fetch the latest version number of Go
GO_LATEST_DATA=$(curl --retry 10 --retry-delay 5 --retry-max-time 120 -s https://golang.google.cn/VERSION?m=text)
GO_LATEST_VERSION=$(echo "$GO_LATEST_DATA" | grep -oP 'go[0-9]+\.[0-9]+\.[0-9]+')
@ -1731,6 +1748,42 @@ deploy_docker_compose() {
fi
}
# Install Trivy
install_trivy() {
echo -e "${YELLOW}Installing Trivy...${NO_COLOR}"
# Check if Trivy is already installed
# if command -v trivy >/dev/null 2>&1; then
# echo "Trivy is already installed. Skipping installation."
# trivy --version
# return
# fi
if check_installed "trivy"; then
return
fi
# Install jq
install_package jq
# Get latest version
LATEST_VERSION=$(curl -s https://api.github.com/repos/aquasecurity/trivy/releases/latest | jq -r .tag_name)
# Download
if [ -f /etc/debian_version ]; then
wget https://github.com/aquasecurity/trivy/releases/download/"${LATEST_VERSION}"/trivy_"${LATEST_VERSION#v}"_Linux-64bit.deb
# Install
dpkg -i trivy_"${LATEST_VERSION#v}"_Linux-64bit.deb
elif [ -f /etc/redhat-release ]; then
wget https://github.com/aquasecurity/trivy/releases/download/"${LATEST_VERSION}"/trivy_"${LATEST_VERSION#v}"_Linux-64bit.rpm
# Install
rpm -ivh trivy_"${LATEST_VERSION#v}"_Linux-64bit.rpm
else
echo "Unsupported Linux distribution."
exit 1
fi
# Check
trivy --version
check_status "Failed to install Trivy" "Trivy installed successfully." $?
rm -rf trivy_"${LATEST_VERSION#v}"_Linux-64bit.deb trivy_"${LATEST_VERSION#v}"_Linux-64bit.rpm
}
# Reconfigure cloud-init
reconfig_cloud_init() {
echo "Reconfiguring cloud-init..."
@ -1912,6 +1965,7 @@ TDinternal() {
install_maven_via_sdkman 3.9.9
install_node_via_nvm 16.20.2
install_python_via_pyenv 3.10.12
install_via_pip pandas psutil fabric2 requests faker simplejson toml pexpect tzlocal distro decorator loguru hyperloglog toml taospy taos-ws-py
}
# deploy TDgpt
@ -2003,6 +2057,7 @@ deploy_dev() {
install_nginx
deploy_docker
deploy_docker_compose
install_trivy
check_status "Failed to deploy some tools" "Deploy all tools successfully" $?
}
@ -2158,6 +2213,9 @@ main() {
deploy_docker_compose)
deploy_docker_compose
;;
install_trivy)
install_trivy
;;
clone_enterprise)
clone_enterprise
;;

Some files were not shown because too many files have changed in this diff.