Merge branch '3.0' of https://github.com/taosdata/TDengine into feat/TS-5776

This commit is contained in:
wangmm0220 2025-02-17 17:41:25 +08:00
commit 6f2ccfc3b2
1804 changed files with 11944 additions and 270828 deletions


@@ -10,15 +10,25 @@ on:
- 'docs/**'
- 'packaging/**'
- 'tests/**'
- '*.md'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build:
runs-on: ubuntu-latest
name: Build and test
name: Build and test on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
os:
- ubuntu-20.04
- ubuntu-22.04
- ubuntu-24.04
- macos-13
- macos-14
- macos-15
steps:
- name: Checkout the repository
@@ -29,12 +39,19 @@ jobs:
with:
go-version: 1.18
- name: Install system dependencies
- name: Install dependencies on Linux
if: runner.os == 'Linux'
run: |
sudo apt update -y
sudo apt install -y build-essential cmake \
libgeos-dev libjansson-dev libsnappy-dev liblzma-dev libz-dev \
zlib1g pkg-config libssl-dev gawk
zlib1g-dev pkg-config libssl-dev gawk
- name: Install dependencies on macOS
if: runner.os == 'macOS'
run: |
brew update
brew install argp-standalone gflags pkg-config snappy zlib geos jansson gawk openssl
- name: Build and install TDengine
run: |


@@ -1,4 +1,4 @@
name: taosKeeper CI
name: taosKeeper Build
on:
push:
@@ -8,7 +8,7 @@ on:
jobs:
build:
runs-on: ubuntu-latest
name: Run unit tests
name: Build and test on ubuntu-latest
steps:
- name: Checkout the repository


@@ -75,4 +75,4 @@ available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.ht
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see
https://www.contributor-covenant.org/faq
https://www.contributor-covenant.org/faq


@@ -10,7 +10,36 @@
Simplified Chinese | [English](README.md) | [TDengine Cloud](https://cloud.taosdata.com/?utm_medium=cn&utm_source=github) | We are hiring! See the open positions [here](https://www.taosdata.com/careers/)
# Introduction to TDengine
# Table of Contents
1. [Introduction](#1-tdengine-简介)
1. [Documentation](#2-文档)
1. [Prerequisites](#3-必备工具)
   - [3.1 Prerequisites on Linux](#31-linux系统)
   - [3.2 Prerequisites on macOS](#32-macos系统)
   - [3.3 Prerequisites on Windows](#33-windows系统)
   - [3.4 Clone the repository](#34-克隆仓库)
1. [Building](#4-构建)
   - [4.1 Build on Linux](#41-linux系统上构建)
   - [4.2 Build on macOS](#42-macos系统上构建)
   - [4.3 Build on Windows](#43-windows系统上构建)
1. [Packaging](#5-打包)
1. [Installation](#6-安装)
   - [6.1 Install on Linux](#61-linux系统上安装)
   - [6.2 Install on macOS](#62-macos系统上安装)
   - [6.3 Install on Windows](#63-windows系统上安装)
1. [Quick Start](#7-快速运行)
   - [7.1 Run on Linux](#71-linux系统上运行)
   - [7.2 Run on macOS](#72-macos系统上运行)
   - [7.3 Run on Windows](#73-windows系统上运行)
1. [Testing](#8-测试)
1. [Releases](#9-版本发布)
1. [Workflows](#10-工作流)
1. [Coverage](#11-覆盖率)
1. [Become a community contributor](#12-成为社区贡献者)
# 1. Introduction
TDengine is an open-source, high-performance, cloud-native time-series database (Time-Series Database, TSDB). TDengine can be widely used in IoT, industrial internet, connected vehicles, IT operations, finance, and other fields. Beyond the core time-series database features, TDengine also provides caching, data subscription, stream processing, and other capabilities, making it a streamlined time-series data processing platform that minimizes system design complexity and reduces development and operating costs. Compared with other time-series databases, TDengine's main advantages are as follows:
@@ -26,323 +55,335 @@ TDengine is an open-source, high-performance, cloud-native time-series database (Time-Series
- **Open source at the core**: TDengine's core code, including the clustering feature, is fully open source. As of August 1, 2022, there were more than 135.9k running instances worldwide, with 18.7k GitHub stars, 4.4k forks, and an active community.
# Documentation
For the complete list of TDengine's advanced features, please [click here](https://tdengine.com/tdengine/). The easiest way to experience TDengine is through [TDengine Cloud](https://cloud.tdengine.com).
For the complete user manual, system architecture, and more details, please refer to the [TDengine documentation](https://docs.taosdata.com) or the [TDengine Documentation](https://docs.tdengine.com).
# 2. Documentation
# Building
For the complete user manual, system architecture, and more details, please refer to [TDengine](https://www.taosdata.com/) or the [official TDengine documentation](https://docs.taosdata.com).
Users can choose to install TDengine via a [container](https://docs.taosdata.com/get-started/docker/), an [installation package](https://docs.taosdata.com/get-started/package/), or [Kubernetes](https://docs.taosdata.com/deployment/k8s/), or use the fully managed [cloud service](https://cloud.taosdata.com/) with no installation at all. This quick guide is for developers who want to build, package, and test TDengine themselves.
To build or test TDengine connectors, please visit the following repositories: [JDBC Connector](https://github.com/taosdata/taos-connector-jdbc), [Go Connector](https://github.com/taosdata/driver-go), [Python Connector](https://github.com/taosdata/taos-connector-python), [Node.js Connector](https://github.com/taosdata/taos-connector-node), [C# Connector](https://github.com/taosdata/taos-connector-dotnet), [Rust Connector](https://github.com/taosdata/taos-connector-rust).
# 3. Prerequisites
TDengine can currently be installed and run on Linux, Windows, macOS, and other platforms. Applications on any OS can also use taosAdapter's RESTful interface to connect to the taosd server. TDengine supports X64/ARM64 CPUs, and support for MIPS64, Alpha64, ARM32, RISC-V, and other CPU architectures is planned. Building with a cross-compiler is not currently supported.
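For example, once taosd and taosAdapter are running, the RESTful interface can be exercised from any OS with a plain HTTP request. A minimal sketch, assuming the default port 6041 and the default root/taosdata credentials:
```bash
# Send a SQL statement to taosAdapter's RESTful endpoint and print the JSON response.
curl -u root:taosdata -d "show databases" localhost:6041/rest/sql
```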
Users can choose to install via source code, a [container](https://docs.taosdata.com/get-started/docker/), an [installation package](https://docs.taosdata.com/get-started/package/), or [Kubernetes](https://docs.taosdata.com/deployment/k8s/). This quick guide only applies to installing from source.
TDengine also provides a set of auxiliary tools, taosTools, which currently includes taosBenchmark (formerly named taosdemo) and taosdump. By default, taosTools is not built with TDengine; you can pass `cmake .. -DBUILD_TOOLS=true` when building TDengine to compile taosTools as well.
## 3.1 On Linux
To build TDengine, please use [CMake](https://cmake.org/) 3.13.0 or higher.
<details>
## Install tools
<summary>Install required tools on Linux</summary>
### Ubuntu 18.04 and above & Debian
### Ubuntu 18.04, 20.04, 22.04
```bash
sudo apt-get install -y gcc cmake build-essential git libssl-dev libgflags2.2 libgflags-dev
sudo apt-get update
sudo apt-get install -y gcc cmake build-essential git libjansson-dev \
libsnappy-dev liblzma-dev zlib1g-dev pkg-config
```
#### Install build dependencies for taos-tools
To build [taos-tools](https://github.com/taosdata/taos-tools) on Ubuntu/Debian, the following packages need to be installed:
### CentOS 8
```bash
sudo apt install build-essential libjansson-dev libsnappy-dev liblzma-dev libz-dev zlib1g pkg-config
```
### CentOS 7.9
```bash
sudo yum install epel-release
sudo yum update
sudo yum install -y gcc gcc-c++ make cmake3 gflags git openssl-devel
sudo ln -sf /usr/bin/cmake3 /usr/bin/cmake
yum install -y epel-release gcc gcc-c++ make cmake git perl dnf-plugins-core
yum config-manager --set-enabled powertools
yum install -y zlib-static xz-devel snappy-devel jansson-devel pkgconfig libatomic-static libstdc++-static
```
### CentOS 8/Fedora/Rocky Linux
</details>
## 3.2 On macOS
<details>
<summary>Install required tools on macOS</summary>
Install [brew](https://brew.sh/) by following the prompts on its site.
```bash
sudo dnf install -y gcc gcc-c++ gflags make cmake epel-release git openssl-devel
```
#### Install dependencies for building taosTools on CentOS
#### CentOS 7.9
```
sudo yum install -y zlib-devel zlib-static xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libatomic-static libstdc++-static openssl-devel
```
#### CentOS 8/Fedora/Rocky Linux
```
sudo yum install -y epel-release
sudo yum install -y dnf-plugins-core
sudo yum config-manager --set-enabled powertools
sudo yum install -y zlib-devel zlib-static xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libatomic-static libstdc++-static openssl-devel
```
Note: Because snappy lacks pkg-config support (see [this link](https://github.com/google/snappy/pull/86)), cmake may report that it cannot find libsnappy, but everything actually works fine.
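You can confirm this directly with pkg-config; the non-zero exit status is expected and harmless:
```bash
# snappy installs no .pc file, so pkg-config cannot find it even when the library is present.
pkg-config --exists snappy || echo "no pkg-config metadata for snappy (expected); cmake's warning can be ignored"
```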
If enabling powertools fails, you can try the following instead:
```
sudo yum config-manager --set-enabled powertools
```
#### CentOS + devtoolset
In addition to the build dependency packages above, you need to run the following commands:
```
sudo yum install centos-release-scl
sudo yum install devtoolset-9 devtoolset-9-libatomic-devel
scl enable devtoolset-9 -- bash
```
### macOS
```
brew install argp-standalone gflags pkgconfig
```
### Set up the Go development environment
</details>
TDengine includes several components developed in Go, such as taosAdapter. Please refer to the official documentation at golang.org to set up your Go development environment.
## 3.3 On Windows
Please use Go 1.20 or above. For users in China, we recommend using a proxy to speed up package downloads.
<details>
```
go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct
```
<summary>Install required tools on Windows</summary>
taosAdapter is not built by default, but you can choose to build it as the service for the RESTful interface with the following command:
Work in progress.
```
cmake .. -DBUILD_HTTP=false
```
</details>
### Set up the Rust development environment
## 3.4 Clone the repository
TDengine includes several components developed in Rust. Please refer to the official documentation at rust-lang.org to set up your Rust development environment.
## Get the source code
First, clone the source code from GitHub:
Clone the TDengine repository to the target machine with the following commands:
```bash
git clone https://github.com/taosdata/TDengine.git
cd TDengine
```
If downloading over the https protocol is slow, you can add the following two lines to the ~/.gitconfig file to download over the ssh protocol instead. You need to upload your ssh key to GitHub first; see the official GitHub documentation for details.
```
[url "git@github.com:"]
insteadOf = https://github.com/
```
## Special notes
# 4. Building
The [JDBC Connector](https://github.com/taosdata/taos-connector-jdbc), [Go Connector](https://github.com/taosdata/driver-go), [Python Connector](https://github.com/taosdata/taos-connector-python), [Node.js Connector](https://github.com/taosdata/taos-connector-node), [C# Connector](https://github.com/taosdata/taos-connector-dotnet), [Rust Connector](https://github.com/taosdata/taos-connector-rust), and the [Grafana plugin](https://github.com/taosdata/grafanaplugin) have been moved to separate repositories.
TDengine also provides a set of auxiliary tools, taosTools, which currently includes taosBenchmark (formerly named taosdemo) and taosdump. By default, taosTools is not built with TDengine; you can pass `cmake .. -DBUILD_TOOLS=true` when building TDengine to compile taosTools as well.
To build TDengine, please use [CMake](https://cmake.org/) 3.13.0 or higher.
## Build TDengine
## 4.1 Build on Linux
### On Linux
<details>
You can run the `build.sh` script in the repository to build TDengine and taosTools (including taosBenchmark and taosdump):
<summary>Build steps on Linux</summary>
You can build TDengine and taosTools (including taosBenchmark and taosdump) with the `build.sh` script:
```bash
./build.sh
```
This script is equivalent to running the following commands:
You can also build with the following commands:
```bash
mkdir debug
cd debug
mkdir debug && cd debug
cmake .. -DBUILD_TOOLS=true -DBUILD_CONTRIB=true
make
```
You can also choose to use jemalloc as the memory allocator instead of the default glibc:
Jemalloc can be used as the memory allocator instead of glibc:
```bash
apt install autoconf
cmake .. -DJEMALLOC_ENABLED=true
```
On X86-64, X86, and arm64 platforms, the TDengine build script can automatically detect the machine architecture. You can also set the CPUTYPE parameter manually to specify the CPU type, such as aarch64.
aarch64:
The TDengine build script can automatically detect the host architecture on x86, x86-64, and arm64 platforms.
You can also specify the architecture manually via the CPUTYPE option:
```bash
cmake .. -DCPUTYPE=aarch64 && cmake --build .
```
### On Windows
</details>
If you are using Visual Studio 2013:
## 4.2 Build on macOS
Open cmd.exe; when running vcvarsall.bat, specify "x86_amd64" for a 64-bit operating system and "x86" for a 32-bit one.
<details>
```bash
<summary>Build steps on macOS</summary>
Please install the Xcode command line tools and cmake. Verified with Xcode 11.4+ on Catalina and Big Sur.
```shell
mkdir debug && cd debug
"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat" < x86_amd64 | x86 >
cmake .. && cmake --build .
```
</details>
## 4.3 Build on Windows
<details>
<summary>Build steps on Windows</summary>
If you are using Visual Studio 2013, run "cmd.exe" to open a command window and execute the following commands.
When running vcvarsall.bat, specify "amd64" for 64-bit Windows and "x86" for 32-bit Windows.
```cmd
mkdir debug && cd debug
"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat" < amd64 | x86 >
cmake .. -G "NMake Makefiles"
nmake
```
If you are using Visual Studio 2019 or 2017:
If you are using Visual Studio 2019 or 2017:
Open cmd.exe; when running vcvarsall.bat, specify "x64" for a 64-bit operating system and "x86" for a 32-bit one.
Run "cmd.exe" to open a command window and execute the following commands.
When running vcvarsall.bat, specify "x64" for 64-bit Windows and "x86" for 32-bit Windows.
```bash
```cmd
mkdir debug && cd debug
"c:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\Build\vcvarsall.bat" < x64 | x86 >
cmake .. -G "NMake Makefiles"
nmake
```
You can also find the "Visual Studio < 2019 | 2017 >" item in the Start menu, choose "x64 Native Tools Command Prompt for VS < 2019 | 2017 >" or "x86 Native Tools Command Prompt for VS < 2019 | 2017 >" depending on your system, open a command-line window, and run:
Alternatively, you can open a command window from the Windows Start menu -> the "Visual Studio < 2019 | 2017 >" folder -> "x64 Native Tools Command Prompt for VS < 2019 | 2017 >" or "x86 Native Tools Command Prompt for VS < 2019 | 2017 >" (depending on your Windows architecture), then run the following commands:
```bash
```cmd
mkdir debug && cd debug
cmake .. -G "NMake Makefiles"
nmake
```
</details>
### On macOS
# 5. Packaging
Install the Xcode command line tools and cmake. On Catalina and Big Sur, Xcode 11.4+ is required.
Due to some component dependencies, the TDengine community installer cannot be created from this repository alone. We are still working on improving this.
# 6. Installation
## 6.1 Install on Linux
<details>
<summary>Detailed installation steps on Linux</summary>
After a successful build, TDengine can be installed with:
```bash
mkdir debug && cd debug
cmake .. && cmake --build .
sudo make install
```
Installing from source also configures service management for TDengine. Users can also install from the [TDengine installation package](https://docs.taosdata.com/get-started/package/).
# Installation
</details>
## On Linux
## 6.2 Install on macOS
After the build is complete, install TDengine:
<details>
<summary>Detailed installation steps on macOS</summary>
After a successful build, TDengine can be installed with:
```bash
sudo make install
```
See [Directory Structure](https://docs.taosdata.com/reference/directory/) to learn more about the directories and files created on your system.
</details>
Installing from source also configures service management for TDengine. Users can also choose to [install from a package](https://docs.taosdata.com/get-started/package/).
## 6.3 Install on Windows
After installation, start the TDengine service from a terminal:
<details>
```bash
sudo systemctl start taosd
```
<summary>Detailed installation steps on Windows</summary>
You can use the TDengine CLI to connect to the TDengine service. In a terminal, enter:
```bash
taos
```
If the TDengine CLI connects to the service successfully, it prints a welcome message and version information; otherwise, it prints an error message.
## On Windows
After the build is complete, install TDengine:
After a successful build, TDengine can be installed with:
```cmd
nmake install
```
## On macOS
</details>
After the build is complete, install TDengine:
# 7. Quick Start
## 7.1 Run on Linux
<details>
<summary>Detailed steps to run on Linux</summary>
After installing TDengine on Linux, run the following command in a terminal to start the service:
```bash
sudo make install
sudo systemctl start taosd
```
See [Directory Structure](https://docs.taosdata.com/reference/directory/) to learn more about the directories and files created on your system.
Installing from source also configures service management for TDengine. Users can also choose to [install from a package](https://docs.taosdata.com/get-started/package/).
After installation, you can start the service by double-clicking the TDengine icon in Applications, or by starting the TDengine service from a terminal:
```bash
sudo launchctl start com.tdengine.taosd
```
You can use the TDengine CLI to connect to the TDengine service. In a terminal, enter:
Then connect to the TDengine service with the TDengine CLI:
```bash
taos
```
If the TDengine CLI connects to the service successfully, it prints a welcome message and version information; otherwise, it prints an error message.
If the TDengine CLI connects to the server successfully, it prints a welcome message and version information; otherwise, a connection error message is shown.
## Quick start
If you do not want to run TDengine as a service, you can run it directly in a terminal. That is, after the build completes, run the following command (on Windows, the generated executable has a .exe suffix, e.g. it is named taosd.exe):
If you do not want to run TDengine as a service, you can run it in the current terminal. For example, to quickly start a TDengine server after building, run the following command in a terminal (using Linux as an example; on Windows the command is `taosd.exe`):
```bash
./build/bin/taosd -c test/cfg
```
In another terminal, connect to the server with the TDengine CLI:
In another terminal, use the TDengine CLI to connect to the server:
```bash
./build/bin/taos -c test/cfg
```
"-c test/cfg"指定系统配置文件所在目录。
选项 `-c test/cfg` 指定系统配置文件的目录。
# 体验 TDengine
</details>
在 TDengine 终端中,用户可以通过 SQL 命令来创建/删除数据库、表等,并进行插入查询操作。
## 7.2 macOS系统上运行
```sql
CREATE DATABASE demo;
USE demo;
CREATE TABLE t (ts TIMESTAMP, speed INT);
INSERT INTO t VALUES('2019-07-15 00:00:00', 10);
INSERT INTO t VALUES('2019-07-15 01:00:00', 20);
SELECT * FROM t;
ts | speed |
===================================
19-07-15 00:00:00.000| 10|
19-07-15 01:00:00.000| 20|
Query OK, 2 row(s) in set (0.001700s)
<details>
<summary>Detailed steps to run on macOS</summary>
After installation on macOS, start the service by double-clicking the /Applications/TDengine launcher, or by running the following command in a terminal:
```bash
sudo launchctl start com.tdengine.taosd
```
# Application development
Then, in a terminal, connect to the TDengine server with the TDengine CLI:
## Official connectors
```bash
taos
```
TDengine provides rich application development interfaces, including C/C++, Java, Python, Go, Node.js, C#, and RESTful, to help users develop applications quickly:
If the TDengine CLI connects to the server successfully, it prints a welcome message and version information; otherwise, an error message is shown.
- [Java](https://docs.taosdata.com/reference/connector/java/)
- [C/C++](https://docs.taosdata.com/reference/connector/cpp/)
- [Python](https://docs.taosdata.com/reference/connector/python/)
- [Go](https://docs.taosdata.com/reference/connector/go/)
- [Node.js](https://docs.taosdata.com/reference/connector/node/)
- [Rust](https://docs.taosdata.com/reference/connector/rust/)
- [C#](https://docs.taosdata.com/reference/connector/csharp/)
- [RESTful API](https://docs.taosdata.com/reference/connector/rest-api/)
</details>
# Become a community contributor
## 7.3 Run on Windows
<details>
<summary>Detailed steps to run on Windows</summary>
You can start the TDengine server on Windows with the following command:
```cmd
.\build\bin\taosd.exe -c test\cfg
```
In another terminal, use the TDengine CLI to connect to the server:
```cmd
.\build\bin\taos.exe -c test\cfg
```
The option `-c test/cfg` specifies the directory of the system configuration file.
</details>
# 8. Testing
For how to run the different types of tests on TDengine, please refer to [TDengine Testing](./tests/README-CN.md).
# 9. Releases
For the complete list of TDengine releases, please see the [release list](https://github.com/taosdata/TDengine/releases).
# 10. Workflows
The TDengine build-check workflow can be found at [GitHub Actions](https://github.com/taosdata/TDengine/actions/workflows/taosd-ci-build.yml). More workflows are being created and will be available soon.
# 11. Coverage
The latest TDengine test coverage report is available at [coveralls.io](https://coveralls.io/github/taosdata/TDengine).
<details>
<summary>How do I generate the test coverage report locally?</summary>
To generate a test coverage report (HTML format) locally, run the following commands:
```bash
cd tests
bash setup-lcov.sh -v 1.16 && ./run_local_coverage.sh -b main -c task
# runs on the main branch with the cases in longtimeruning_cases.task
# for more information about the options, please refer to ./run_local_coverage.sh -h
```
> **Note:**
> The -b and -i options recompile TDengine with the -DCOVER=true option, which may take some time.
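If you want to trigger that instrumented rebuild manually, it is roughly equivalent to the following sketch (assuming the usual out-of-tree debug build):
```bash
# Recompile TDengine with coverage instrumentation enabled.
mkdir -p debug && cd debug
cmake .. -DCOVER=true
make
```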
</details>
# 12. Become a community contributor
Click [here](https://www.taosdata.com/contributor) to learn how to become a TDengine contributor.
# Join the technical discussion group
TDengine's official community, the "IoT Big Data Group", is open to everyone; you are welcome to join the discussion. Search for the WeChat ID "tdengine" and add Little T as a friend to join the group.


@@ -10,10 +10,10 @@
[![GitHub Actions Workflow Status](https://img.shields.io/github/actions/workflow/status/taosdata/tdengine/taosd-ci-build.yml)](https://github.com/taosdata/TDengine/actions/workflows/taosd-ci-build.yml)
[![Coverage Status](https://coveralls.io/repos/github/taosdata/TDengine/badge.svg?branch=3.0)](https://coveralls.io/github/taosdata/TDengine?branch=3.0)
![GitHub commit activity](https://img.shields.io/github/commit-activity/m/taosdata/tdengine)
[![GitHub commit activity](https://img.shields.io/github/commit-activity/m/taosdata/tdengine)](https://github.com/feici02/TDengine/commits/main/)
<br />
![GitHub Release](https://img.shields.io/github/v/release/taosdata/tdengine)
![GitHub License](https://img.shields.io/github/license/taosdata/tdengine)
[![GitHub Release](https://img.shields.io/github/v/release/taosdata/tdengine)](https://github.com/taosdata/TDengine/releases)
[![GitHub License](https://img.shields.io/github/license/taosdata/tdengine)](https://github.com/taosdata/TDengine/blob/main/LICENSE)
[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/4201/badge)](https://bestpractices.coreinfrastructure.org/projects/4201)
<br />
[![Twitter Follow](https://img.shields.io/twitter/follow/tdenginedb?label=TDengine&style=social)](https://twitter.com/tdenginedb)
@@ -74,8 +74,14 @@ For a full list of TDengine competitive advantages, please [check here](https://
For user manual, system design and architecture, please refer to [TDengine Documentation](https://docs.tdengine.com) ([TDengine 文档](https://docs.taosdata.com))
You can choose to install TDengine via [container](https://docs.tdengine.com/get-started/deploy-in-docker/), [installation package](https://docs.tdengine.com/get-started/deploy-from-package/), [Kubernetes](https://docs.tdengine.com/operations-and-maintenance/deploy-your-cluster/#kubernetes-deployment) or try [fully managed service](https://cloud.tdengine.com/) without installation. This quick guide is for developers who want to contribute, build, release and test TDengine by themselves.
For contributing/building/testing TDengine Connectors, please check the following repositories: [JDBC Connector](https://github.com/taosdata/taos-connector-jdbc), [Go Connector](https://github.com/taosdata/driver-go), [Python Connector](https://github.com/taosdata/taos-connector-python), [Node.js Connector](https://github.com/taosdata/taos-connector-node), [C# Connector](https://github.com/taosdata/taos-connector-dotnet), [Rust Connector](https://github.com/taosdata/taos-connector-rust).
# 3. Prerequisites
At the moment, the TDengine server supports running on Linux, Windows, and macOS systems. Any application can also choose the RESTful interface provided by taosAdapter to connect to the taosd service. TDengine supports X64/ARM64 CPUs, and will support MIPS64, Alpha64, ARM32, RISC-V, and other CPU architectures in the future. Right now we don't support building in a cross-compiling environment.
## 3.1 On Linux
<details>
@@ -85,7 +91,7 @@ For user manual, system design and architecture, please refer to [TDengine Docum
### For Ubuntu 18.04, 20.04, 22.04
```bash
sudo apt-get udpate
sudo apt-get update
sudo apt-get install -y gcc cmake build-essential git libjansson-dev \
libsnappy-dev liblzma-dev zlib1g-dev pkg-config
```
@@ -127,10 +133,6 @@ Work in Progress.
## 3.4 Clone the repo
<details>
<summary>Clone the repo</summary>
Clone the repository to the target machine:
```bash
@@ -138,21 +140,13 @@ git clone https://github.com/taosdata/TDengine.git
cd TDengine
```
> **NOTE:**
> TDengine Connectors can be found in following repositories: [JDBC Connector](https://github.com/taosdata/taos-connector-jdbc), [Go Connector](https://github.com/taosdata/driver-go), [Python Connector](https://github.com/taosdata/taos-connector-python), [Node.js Connector](https://github.com/taosdata/taos-connector-node), [C# Connector](https://github.com/taosdata/taos-connector-dotnet), [Rust Connector](https://github.com/taosdata/taos-connector-rust).
</details>
# 4. Building
At the moment, the TDengine server supports running on Linux, Windows, and macOS systems. Any application can also choose the RESTful interface provided by taosAdapter to connect to the taosd service. TDengine supports X64/ARM64 CPUs, and will support MIPS64, Alpha64, ARM32, RISC-V, and other CPU architectures in the future. Right now we don't support building in a cross-compiling environment.
You can choose to install through source code, [container](https://docs.tdengine.com/get-started/deploy-in-docker/), [installation package](https://docs.tdengine.com/get-started/deploy-from-package/) or [Kubernetes](https://docs.tdengine.com/operations-and-maintenance/deploy-your-cluster/#kubernetes-deployment). This quick guide only applies to install from source.
TDengine provides a few useful tools such as taosBenchmark (formerly named taosdemo) and taosdump. They were part of TDengine. By default, building TDengine does not include taosTools. You can use `cmake .. -DBUILD_TOOLS=true` to have them compiled with TDengine.
To build TDengine, use [CMake](https://cmake.org/) 3.13.0 or higher versions in the project directory.
TDengine requires [GCC](https://gcc.gnu.org/) 9.3.1 or higher and [CMake](https://cmake.org/) 3.13.0 or higher for building.
## 4.1 Build on Linux


@@ -166,6 +166,10 @@ IF(${BUILD_WITH_ANALYSIS})
set(BUILD_WITH_S3 ON)
ENDIF()
IF(${TD_LINUX})
set(BUILD_WITH_ANALYSIS ON)
ENDIF()
IF(${BUILD_S3})
IF(${BUILD_WITH_S3})
@@ -205,13 +209,6 @@ option(
off
)
option(
BUILD_WITH_NURAFT
"If build with NuRaft"
OFF
)
option(
BUILD_WITH_UV
"If build with libuv"


@@ -15,6 +15,18 @@ IF (TD_PRODUCT_NAME)
ADD_DEFINITIONS(-DTD_PRODUCT_NAME="${TD_PRODUCT_NAME}")
ENDIF ()
IF (CUS_NAME)
ADD_DEFINITIONS(-DCUS_NAME="${CUS_NAME}")
ENDIF ()
IF (CUS_PROMPT)
ADD_DEFINITIONS(-DCUS_PROMPT="${CUS_PROMPT}")
ENDIF ()
IF (CUS_EMAIL)
ADD_DEFINITIONS(-DCUS_EMAIL="${CUS_EMAIL}")
ENDIF ()
find_program(HAVE_GIT NAMES git)
IF (DEFINED GITINFO)


@@ -12,7 +12,7 @@ ExternalProject_Add(curl2
BUILD_IN_SOURCE TRUE
BUILD_ALWAYS 1
UPDATE_COMMAND ""
CONFIGURE_COMMAND ${CONTRIB_CONFIG_ENV} ./configure --prefix=$ENV{HOME}/.cos-local.2 --with-ssl=$ENV{HOME}/.cos-local.2 --enable-websockets --enable-shared=no --disable-ldap --disable-ldaps --without-brotli --without-zstd --without-libidn2 --without-nghttp2 --without-libpsl #--enable-debug
CONFIGURE_COMMAND ${CONTRIB_CONFIG_ENV} ./configure --prefix=$ENV{HOME}/.cos-local.2 --with-ssl=$ENV{HOME}/.cos-local.2 --enable-websockets --enable-shared=no --disable-ldap --disable-ldaps --without-brotli --without-zstd --without-libidn2 --without-nghttp2 --without-libpsl --without-librtmp #--enable-debug
BUILD_COMMAND make -j
INSTALL_COMMAND make install
TEST_COMMAND ""


@@ -1,13 +0,0 @@
# taos-tools
ExternalProject_Add(taos-tools
GIT_REPOSITORY https://github.com/taosdata/taos-tools.git
GIT_TAG 3.0
SOURCE_DIR "${TD_SOURCE_DIR}/tools/taos-tools"
BINARY_DIR ""
#BUILD_IN_SOURCE TRUE
CONFIGURE_COMMAND ""
BUILD_COMMAND ""
INSTALL_COMMAND ""
TEST_COMMAND ""
)


@@ -205,9 +205,18 @@ ENDIF()
# download dependencies
configure_file(${CONTRIB_TMP_FILE} "${TD_CONTRIB_DIR}/deps-download/CMakeLists.txt")
execute_process(COMMAND "${CMAKE_COMMAND}" -G "${CMAKE_GENERATOR}" .
WORKING_DIRECTORY "${TD_CONTRIB_DIR}/deps-download")
WORKING_DIRECTORY "${TD_CONTRIB_DIR}/deps-download"
RESULT_VARIABLE result)
IF(NOT result EQUAL "0")
message(FATAL_ERROR "CMake step for dowloading dependencies failed: ${result}")
ENDIF()
execute_process(COMMAND "${CMAKE_COMMAND}" --build .
WORKING_DIRECTORY "${TD_CONTRIB_DIR}/deps-download")
WORKING_DIRECTORY "${TD_CONTRIB_DIR}/deps-download"
RESULT_VARIABLE result)
IF(NOT result EQUAL "0")
message(FATAL_ERROR "CMake step for building dependencies failed: ${result}")
ENDIF()
# ================================================================================================
# Build


@@ -20,14 +20,6 @@ if(${BUILD_WITH_SQLITE})
add_subdirectory(sqlite)
endif(${BUILD_WITH_SQLITE})
if(${BUILD_WITH_CRAFT})
add_subdirectory(craft)
endif(${BUILD_WITH_CRAFT})
if(${BUILD_WITH_TRAFT})
# add_subdirectory(traft)
endif(${BUILD_WITH_TRAFT})
if(${BUILD_S3})
add_subdirectory(azure)
endif()


@@ -26,7 +26,7 @@ CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name
SUBTABLE(expression) AS subquery
stream_options: {
TRIGGER [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time]
TRIGGER [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time | FORCE_WINDOW_CLOSE]
WATERMARK time
IGNORE EXPIRED [0|1]
DELETE_MARK time
@@ -56,13 +56,17 @@ window_clause: {
}
```
The subquery supports session windows, state windows, and sliding windows. When used with supertables, session windows and state windows must be used together with `partition by tbname`.
The subquery supports session windows, state windows, time windows, event windows, and count windows. When used with supertables, state windows, event windows, and count windows must be used together with `partition by tbname`.
1. SESSION is a session window, where tol_val is the maximum range of the time interval. All data within the tol_val time interval belong to the same window. If the time interval between two consecutive data points exceeds tol_val, the next window automatically starts.
2. EVENT_WINDOW is an event window, defined by start and end conditions. The window starts when the start_trigger_condition is met and closes when the end_trigger_condition is met. start_trigger_condition and end_trigger_condition can be any condition expressions supported by TDengine and can include different columns.
2. STATE_WINDOW is a state window. The col is used to identify the state value. Values with the same state value belong to the same state window. When the value of col changes, the current window ends and the next window is automatically opened.
3. COUNT_WINDOW is a counting window, divided by a fixed number of data rows. count_val is a constant, a positive integer, and must be at least 2 and less than 2147483648. count_val represents the maximum number of data rows in each COUNT_WINDOW. If the total number of data rows cannot be evenly divided by count_val, the last window will have fewer rows than count_val. sliding_val is a constant, representing the number of rows the window slides, similar to the SLIDING in INTERVAL.
3. INTERVAL is a time window, which can be further divided into sliding time windows and tumbling time windows. The INTERVAL clause is used to specify the equal time period of the window, and the SLIDING clause is used to specify the time by which the window slides forward. When the value of interval_val is equal to the value of sliding_val, the time window is a tumbling time window; otherwise, it is a sliding time window. Note: The value of sliding_val must be less than or equal to the value of interval_val.
4. EVENT_WINDOW is an event window, defined by start and end conditions. The window starts when the start_trigger_condition is met and closes when the end_trigger_condition is met. start_trigger_condition and end_trigger_condition can be any condition expressions supported by TDengine and can include different columns.
5. COUNT_WINDOW is a counting window, divided by a fixed number of data rows. count_val is a constant, a positive integer, and must be at least 2 and less than 2147483648. count_val represents the maximum number of data rows in each COUNT_WINDOW. If the total number of data rows cannot be evenly divided by count_val, the last window will have fewer rows than count_val. sliding_val is a constant, representing the number of rows the window slides, similar to the SLIDING in INTERVAL.
The definition of a window is exactly the same as in the time-series data window query, for details refer to the TDengine window functions section.
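For illustration, the sketch below creates a stream over a tumbling time window; the database `power`, supertable `meters`, and column `current` are hypothetical names:
```bash
# Create a stream that writes per-table 1-minute averages into a new supertable.
taos -s "CREATE STREAM IF NOT EXISTS avg_current_stream \
  INTO power.avg_current AS \
  SELECT _wstart, AVG(current) FROM power.meters \
  PARTITION BY tbname INTERVAL(1m);"
```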


@@ -31,12 +31,12 @@ There are many parameters for creating consumers, which flexibly support various
| Parameter Name | Type | Description | Remarks |
| :-----------------------: | :-----: | ------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `td.connect.ip` | string | Server IP address | |
| `td.connect.ip` | string | FQDN of server | IP or hostname |
| `td.connect.user` | string | Username | |
| `td.connect.pass` | string | Password | |
| `td.connect.port` | integer | Server port number | |
| `group.id` | string | Consumer group ID, the same consumer group shares consumption progress | <br />**Required**. Maximum length: 192.<br />Each topic can have up to 100 consumer groups |
| `client.id` | string | Client ID | Maximum length: 192 |
| `group.id` | string | Consumer group ID; consumers in the same group share consumption progress | <br />**Required**. Maximum length: 192; excess characters are truncated.<br />Each topic can have up to 100 consumer groups |
| `client.id` | string | Client ID | Maximum length: 255; excess characters are truncated. |
| `auto.offset.reset` | enum | Initial position of the consumer group subscription | <br />`earliest`: default(version < 3.2.0.0); subscribe from the beginning; <br/>`latest`: default(version >= 3.2.0.0); only subscribe from the latest data; <br/>`none`: cannot subscribe without a committed offset |
| `enable.auto.commit` | boolean | Whether to enable automatic offset commit; true: commit automatically, the client application does not need to commit; false: the client application must commit manually | Default is true |
| `auto.commit.interval.ms` | integer | Time interval for automatically submitting consumption records, in milliseconds | Default is 5000 |


@@ -1,882 +0,0 @@
---
title: Deploying Your Cluster
slug: /operations-and-maintenance/deploy-your-cluster
---
Since TDengine was designed with a distributed architecture from the beginning, it has powerful horizontal scaling capabilities to meet the growing data processing needs. Therefore, TDengine supports clustering and has open-sourced this core functionality. Users can choose from four deployment methods according to their actual environment and needs: manual deployment, Docker deployment, Kubernetes deployment, and Helm deployment.
## Manual Deployment
### Deploying taosd
taosd is the most important service component in the TDengine cluster. This section describes the steps to manually deploy a taosd cluster.
#### 1. Clear Data
If the physical nodes for setting up the cluster contain previous test data or have had other versions of TDengine installed (such as 1.x/2.x), please delete them and clear all data first.
#### 2. Check Environment
Before deploying the TDengine cluster, it is crucial to thoroughly check the network settings of all dnodes and the physical nodes where the applications are located. Here are the steps to check:
- Step 1: Execute the `hostname -f` command on each physical node to view and confirm that all node hostnames are unique. This step can be omitted for nodes where application drivers are located.
- Step 2: Execute the `ping host` command on each physical node, where host is the hostname of other physical nodes. This step aims to detect the network connectivity between the current node and other physical nodes. If you cannot ping through, immediately check the network and DNS settings. For Linux operating systems, check the `/etc/hosts` file; for Windows operating systems, check the `C:\Windows\system32\drivers\etc\hosts` file. Network issues will prevent the formation of a cluster, so be sure to resolve this issue.
- Step 3: Repeat the above network detection steps on the physical nodes where the application is running. If the network is found to be problematic, the application will not be able to connect to the taosd service. At this point, carefully check the DNS settings or hosts file of the physical node where the application is located to ensure it is configured correctly.
- Step 4: Check ports to ensure that all hosts in the cluster can communicate over TCP on port 6030.
By following these steps, you can ensure that all nodes communicate smoothly at the network level, laying a solid foundation for the successful deployment of the TDengine cluster.
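Steps 1, 2, and 4 can be scripted; the sketch below uses the example hostnames from this section and assumes the `nc` (netcat) utility is available:
```shell
# Check hostname uniqueness, reachability, and TCP connectivity on port 6030.
hostname -f
for host in h1.tdengine.com h2.tdengine.com; do
  ping -c 1 "$host" > /dev/null && echo "$host: reachable" || echo "$host: ping failed"
  nc -z -w 3 "$host" 6030 && echo "$host: port 6030 open" || echo "$host: port 6030 blocked"
done
```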
#### 3. Installation
To ensure consistency and stability within the cluster, install the same version of TDengine on all physical nodes.
#### 4. Modify Configuration
Modify the configuration file of TDengine (the configuration files of all nodes need to be modified). Assuming the endpoint of the first dnode to be started is `h1.tdengine.com:6030`, the cluster-related parameters are as follows.
```shell
# firstEp is the first dnode that each dnode connects to after the initial startup
firstEp h1.tdengine.com:6030
# Must be configured to the FQDN of this dnode, if there is only one hostname on this machine, you can comment out or delete the following line
fqdn h1.tdengine.com
# Configure the port of this dnode, default is 6030
serverPort 6030
```
The parameters that must be modified are firstEp and fqdn. For each dnode, the firstEp configuration should remain consistent, but fqdn must be set to the value of the dnode it is located on. Other parameters do not need to be modified unless you are clear on why they should be changed.
For dnodes wishing to join the cluster, it is essential to ensure that the parameters related to the TDengine cluster listed in the table below are set identically. Any mismatch in parameters may prevent the dnode from successfully joining the cluster.
| Parameter Name | Meaning |
|:----------------:|:---------------------------------------------------------:|
| statusInterval | Interval at which dnode reports status to mnode |
| timezone | Time zone |
| locale | System locale information and encoding format |
| charset | Character set encoding |
| ttlChangeOnWrite | Whether ttl expiration changes with table modification |
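A quick way to compare these settings across dnodes is to print them on each node; a sketch, assuming the default configuration path /etc/taos/taos.cfg:
```shell
# Print the cluster-critical parameters on this dnode; the output must match on every dnode.
grep -E "^(statusInterval|timezone|locale|charset|ttlChangeOnWrite)" /etc/taos/taos.cfg
```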
#### 5. Start
Start the first dnode, such as `h1.tdengine.com`, following the steps mentioned above. Then execute taos in the terminal to start TDengine's CLI program taos, and execute the `show dnodes` command within it to view all dnode information in the current cluster.
```shell
taos> show dnodes;
id | endpoint | vnodes|support_vnodes|status| create_time | note |
===================================================================================
1| h1.tdengine.com:6030 | 0| 1024| ready| 2022-07-16 10:50:42.673 | |
```
You can see that the endpoint of the dnode node that has just started is `h1.tdengine.com:6030`. This address is the first Ep of the new cluster.
#### 6. Adding dnode
Follow the steps mentioned earlier, start taosd on each physical node. Each dnode needs to configure the firstEp parameter in the taos.cfg file to the endpoint of the first node of the new cluster, which in this case is `h1.tdengine.com:6030`. On the machine where the first dnode is located, run taos in the terminal, open TDengine's CLI program taos, then log into the TDengine cluster, and execute the following SQL.
```shell
create dnode "h2.tdengine.com:6030"
```
Add the new dnode's endpoint to the cluster's endpoint list. You need to put `fqdn:port` in double quotes, otherwise, it will cause an error when running. Please note to replace the example h2.tdengine.com:6030 with the endpoint of this new dnode. Then execute the following SQL to see if the new node has successfully joined. If the dnode you want to join is currently offline, please refer to the "Common Issues" section later in this chapter for a solution.
```shell
show dnodes;
```
In the logs, please confirm that the fqdn and port of the output dnode are consistent with the endpoint you just tried to add. If they are not consistent, correct it to the correct endpoint. By following the steps above, you can continuously add new dnodes to the cluster one by one, thereby expanding the scale of the cluster and improving overall performance. Make sure to follow the correct process when adding new nodes, which helps maintain the stability and reliability of the cluster.
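Registering several dnodes can be scripted the same way; the endpoints below are placeholders for your own hostnames:
```shell
# Register additional dnodes from any node already in the cluster, then verify.
for ep in h2.tdengine.com:6030 h3.tdengine.com:6030; do
  taos -s "create dnode \"$ep\""
done
taos -s "show dnodes"
```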
**Tips**
- Any dnode that has joined the cluster can serve as the firstEp for subsequent nodes to be added. The firstEp parameter only functions when that dnode first joins the cluster. After joining, the dnode will save the latest mnode's endpoint list, and subsequently, it no longer depends on this parameter. The firstEp parameter in the configuration file is mainly used for client connections, and if no parameters are set for TDengine's CLI, it will default to connecting to the node specified by firstEp.
- Two dnodes that have not configured the firstEp parameter will run independently after starting. At this time, it is not possible to join one dnode to another to form a cluster.
- TDengine does not allow merging two independent clusters into a new cluster.
#### 7. Adding mnode
When creating a TDengine cluster, the first dnode automatically becomes the mnode of the cluster, responsible for managing and coordinating the cluster. To achieve high availability of mnode, subsequent dnodes need to manually create mnode. Please note that a cluster can create up to 3 mnodes, and only one mnode can be created on each dnode. When the number of dnodes in the cluster reaches or exceeds 3, you can create mnode for the existing cluster. In the first dnode, first log into TDengine through the CLI program taos, then execute the following SQL.
```shell
create mnode on dnode <dnodeId>
```
Please note to replace the dnodeId in the example above with the serial number of the newly created dnode (which can be obtained by executing the `show dnodes` command). Finally, execute the following `show mnodes` to see if the newly created mnode has successfully joined the cluster.
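Put together, the sequence looks like the following sketch; dnode IDs 2 and 3 are examples, so take the real IDs from the `show dnodes` output:
```shell
# Look up dnode IDs, create mnodes on two more dnodes for high availability, then verify.
taos -s "show dnodes"
taos -s "create mnode on dnode 2"
taos -s "create mnode on dnode 3"
taos -s "show mnodes"
```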
**Tips**
During the process of setting up a TDengine cluster, if a new node always shows as offline after executing the create dnode command to add a new node, please follow these steps for troubleshooting.
- Step 1, check whether the taosd service on the new node has started normally. You can confirm this by checking the log files or using the ps command.
- Step 2, if the taosd service has started, next check whether the new node's network connection is smooth and confirm whether the firewall has been turned off. Network issues or firewall settings may prevent the node from communicating with other nodes in the cluster.
- Step 3, use the taos -h fqdn command to try to connect to the new node, then execute the show dnodes command. This will display the running status of the new node as an independent cluster. If the displayed list is inconsistent with that shown on the main node, it indicates that the new node may have formed a single-node cluster on its own. To resolve this issue, follow these steps. First, stop the taosd service on the new node. Second, clear all files in the dataDir directory specified in the taos.cfg configuration file on the new node. This will delete all data and configuration information related to that node. Finally, restart the taosd service on the new node. This will reset the new node to its initial state, ready to rejoin the main cluster.
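The reset described in Step 3 can be performed as in the sketch below. It assumes the default dataDir /var/lib/taos and deletes all data on the node, so use it only on a stray new node:
```shell
# Reset a new node that formed its own single-node cluster so it can rejoin the main cluster.
sudo systemctl stop taosd
sudo rm -rf /var/lib/taos/*   # dataDir from taos.cfg; the default path is shown here
sudo systemctl start taosd
```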
### Deploying taosAdapter
This section discusses how to deploy taosAdapter, which provides RESTful and WebSocket access capabilities for the TDengine cluster, thus playing a very important role in the cluster.
1. Installation
After the installation of TDengine Enterprise is complete, taosAdapter can be used. If you want to deploy taosAdapter on different servers, TDengine Enterprise needs to be installed on these servers.
2. Single Instance Deployment
Deploying a single instance of taosAdapter is very simple. For specific commands and configuration parameters, please refer to the taosAdapter section in the manual.
3. Multiple Instances Deployment
The main purposes of deploying multiple instances of taosAdapter are as follows:
- To increase the throughput of the cluster and prevent taosAdapter from becoming a system bottleneck.
- To enhance the robustness and high availability of the cluster, allowing requests entering the business system to be automatically routed to other instances when one instance fails.
When deploying multiple instances of taosAdapter, it is necessary to address load balancing issues to avoid overloading some nodes while others remain idle. During the deployment process, multiple single instances need to be deployed separately, and the deployment steps for each instance are exactly the same as those for deploying a single instance. The next critical part is configuring Nginx. Below is a verified best practice configuration; you only need to replace the endpoint with the correct address in the actual environment. For the meanings of each parameter, please refer to the official Nginx documentation.
```nginx
user root;
worker_processes auto;
error_log /var/log/nginx_error.log;
events {
use epoll;
worker_connections 1024;
}
http {
access_log off;
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 6041;
location ~* {
proxy_pass http://dbserver;
proxy_read_timeout 600s;
proxy_send_timeout 600s;
proxy_connect_timeout 600s;
proxy_next_upstream error http_502 non_idempotent;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $http_connection;
}
}
server {
listen 6043;
location ~* {
proxy_pass http://keeper;
proxy_read_timeout 60s;
proxy_next_upstream error http_502 http_500 non_idempotent;
}
}
server {
listen 6060;
location ~* {
proxy_pass http://explorer;
proxy_read_timeout 60s;
proxy_next_upstream error http_502 http_500 non_idempotent;
}
}
upstream dbserver {
least_conn;
server 172.16.214.201:6041 max_fails=0;
server 172.16.214.202:6041 max_fails=0;
server 172.16.214.203:6041 max_fails=0;
}
upstream keeper {
ip_hash;
server 172.16.214.201:6043 ;
server 172.16.214.202:6043 ;
server 172.16.214.203:6043 ;
}
upstream explorer{
ip_hash;
server 172.16.214.201:6060 ;
server 172.16.214.202:6060 ;
server 172.16.214.203:6060 ;
}
}
```
### Deploying taosKeeper
To use the monitoring capabilities of TDengine, taosKeeper is an essential component. For monitoring, please refer to [TDinsight](../../tdengine-reference/components/tdinsight), and for details on deploying taosKeeper, please refer to the [taosKeeper Reference Manual](../../tdengine-reference/components/taoskeeper).
### Deploying taosX
To utilize the data ingestion capabilities of TDengine, it is necessary to deploy the taosX service. For detailed explanations and deployment, please refer to the enterprise edition reference manual.
### Deploying taosX-Agent
For some data sources such as Pi, OPC, etc., due to network conditions and data source access restrictions, taosX cannot directly access the data sources. In such cases, a proxy service, taosX-Agent, needs to be deployed. For detailed explanations and deployment, please refer to the enterprise edition reference manual.
### Deploying taos-Explorer
TDengine provides the capability to visually manage TDengine clusters. To use the graphical interface, the taos-Explorer service needs to be deployed. For detailed explanations and deployment, please refer to the [taos-Explorer Reference Manual](../../tdengine-reference/components/taosexplorer/)
## Docker Deployment
This section will explain how to start TDengine services in Docker containers and access them. You can use environment variables in the docker run command line or docker-compose file to control the behavior of services in the container.
### Starting TDengine
The TDengine image is launched with HTTP service activated by default. Use the following command to create a containerized TDengine environment with HTTP service.
```shell
docker run -d --name tdengine \
-v ~/data/taos/dnode/data:/var/lib/taos \
-v ~/data/taos/dnode/log:/var/log/taos \
-p 6041:6041 tdengine/tdengine
```
Detailed parameter explanations are as follows:
- /var/lib/taos: Default data file directory for TDengine, can be modified through the configuration file.
- /var/log/taos: Default log file directory for TDengine, can be modified through the configuration file.
The above command starts a container named tdengine and maps the HTTP service's port 6041 to the host port 6041. The following command can verify if the HTTP service in the container is available.
```shell
curl -u root:taosdata -d "show databases" localhost:6041/rest/sql
```
Run the following command to access TDengine within the container.
```shell
$ docker exec -it tdengine taos
taos> show databases;
name |
=================================
information_schema |
performance_schema |
Query OK, 2 rows in database (0.033802s)
```
Within the container, TDengine CLI or various connectors (such as JDBC-JNI) connect to the server via the container's hostname. Accessing TDengine inside the container from outside is more complex, and using RESTful/WebSocket connection methods is the simplest approach.
### Starting TDengine in host network mode
Run the following command to start TDengine in host network mode, which allows using the host's FQDN to establish connections, rather than using the container's hostname.
```shell
docker run -d --name tdengine --network host tdengine/tdengine
```
This method is similar to starting TDengine on the host using the systemctl command. If the TDengine client is already installed on the host, you can directly use the following command to access the TDengine service.
```shell
$ taos
taos> show dnodes;
id | endpoint | vnodes | support_vnodes | status | create_time | note |
=================================================================================================================================================
1 | vm98:6030 | 0 | 32 | ready | 2022-08-19 14:50:05.337 | |
Query OK, 1 rows in database (0.010654s)
```
### Start TDengine with a specified hostname and port
Use the following command to establish a connection on a specified hostname using the TAOS_FQDN environment variable or the fqdn configuration item in taos.cfg. This method provides greater flexibility for deploying TDengine.
```shell
docker run -d \
--name tdengine \
-e TAOS_FQDN=tdengine \
-p 6030:6030 \
-p 6041-6049:6041-6049 \
-p 6041-6049:6041-6049/udp \
tdengine/tdengine
```
First, the above command starts a TDengine service in the container, listening on the hostname tdengine, and maps the container's port 6030 to the host's port 6030, and the container's port range [6041, 6049] to the host's port range [6041, 6049]. If the port range on the host is already in use, you can modify the command to specify a free port range on the host.
Secondly, ensure that the hostname tdengine is resolvable in /etc/hosts. Use the following command to save the correct configuration information to the hosts file.
```shell
echo 127.0.0.1 tdengine |sudo tee -a /etc/hosts
```
Finally, you can access the TDengine service using the TDengine CLI with tdengine as the server address, as follows.
```shell
taos -h tdengine -P 6030
```
If TAOS_FQDN is set to the same as the hostname of the host, the effect is the same as "starting TDengine in host network mode".
## Kubernetes Deployment
As a time-series database designed for cloud-native architectures, TDengine inherently supports Kubernetes deployment. This section introduces how to step-by-step create a highly available TDengine cluster for production use using YAML files, with a focus on common operations of TDengine in a Kubernetes environment. This subsection requires readers to have a certain understanding of Kubernetes, be proficient in running common kubectl commands, and understand concepts such as statefulset, service, and pvc. Readers unfamiliar with these concepts can refer to the Kubernetes official website for learning.
To meet the requirements of high availability, the cluster needs to meet the following requirements:
- 3 or more dnodes: Multiple vnodes in the same vgroup of TDengine should not be distributed on the same dnode, so if creating a database with 3 replicas, the number of dnodes should be 3 or more.
- 3 mnodes: mnodes are responsible for managing the entire cluster, with TDengine defaulting to one mnode. If the dnode hosting this mnode goes offline, the entire cluster becomes unavailable.
- 3 replicas of the database: TDengine's replica configuration is at the database level, so 3 replicas can ensure that the cluster remains operational even if any one of the 3 dnodes goes offline. If 2 dnodes go offline, the cluster becomes unavailable because RAFT cannot complete the election. (Enterprise edition: In disaster recovery scenarios, if the data files of any node are damaged, recovery can be achieved by restarting the dnode.)
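Once such a cluster is up, a three-replica database is created with a single statement; `test` is a placeholder name:
```shell
# Create a database whose vgroups keep 3 replicas spread across the 3 dnodes.
taos -s "CREATE DATABASE test REPLICA 3"
```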
### Prerequisites
To deploy and manage a TDengine cluster using Kubernetes, the following preparations need to be made.
- This article applies to Kubernetes v1.19 and above.
- This article uses the kubectl tool for installation and deployment, please install the necessary software in advance.
- Kubernetes has been installed and deployed and can normally access or update necessary container repositories or other services.
### Configure Service
Create a Service configuration file: taosd-service.yaml, the service name metadata.name (here "taosd") will be used in the next step. First, add the ports used by TDengine, then set the determined labels app (here "tdengine") in the selector.
```yaml
---
apiVersion: v1
kind: Service
metadata:
name: "taosd"
labels:
app: "tdengine"
spec:
ports:
- name: tcp6030
protocol: "TCP"
port: 6030
- name: tcp6041
protocol: "TCP"
port: 6041
selector:
app: "tdengine"
```
### Stateful Services StatefulSet
According to Kubernetes' descriptions of various deployment types, we will use StatefulSet as the deployment resource type for TDengine. Create the file tdengine.yaml, where replicas define the number of cluster nodes as 3. The node timezone is set to China (Asia/Shanghai), and each node is allocated 5G of standard storage, which you can modify according to actual conditions.
Please pay special attention to the configuration of startupProbe. After a dnode's Pod goes offline for a period of time and then restarts, the newly online dnode will be temporarily unavailable. If the startupProbe configuration is too small, Kubernetes will consider the Pod to be in an abnormal state and attempt to restart the Pod. This dnode's Pod will frequently restart and never return to a normal state.
```yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: "tdengine"
labels:
app: "tdengine"
spec:
serviceName: "taosd"
replicas: 3
updateStrategy:
type: RollingUpdate
selector:
matchLabels:
app: "tdengine"
template:
metadata:
name: "tdengine"
labels:
app: "tdengine"
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- tdengine
topologyKey: kubernetes.io/hostname
containers:
- name: "tdengine"
image: "tdengine/tdengine:3.2.3.0"
imagePullPolicy: "IfNotPresent"
ports:
- name: tcp6030
protocol: "TCP"
containerPort: 6030
- name: tcp6041
protocol: "TCP"
containerPort: 6041
env:
# POD_NAME for FQDN config
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
# SERVICE_NAME and NAMESPACE for fqdn resolve
- name: SERVICE_NAME
value: "taosd"
- name: STS_NAME
value: "tdengine"
- name: STS_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
# TZ for timezone settings, we recommend to always set it.
- name: TZ
value: "Asia/Shanghai"
# Environment variables with prefix TAOS_ will be parsed and converted into corresponding parameter in taos.cfg. For example, serverPort in taos.cfg should be configured by TAOS_SERVER_PORT when using K8S to deploy
- name: TAOS_SERVER_PORT
value: "6030"
# Must set if you want a cluster.
- name: TAOS_FIRST_EP
value: "$(STS_NAME)-0.$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local:$(TAOS_SERVER_PORT)"
# TAOS_FQND should always be set in k8s env.
- name: TAOS_FQDN
value: "$(POD_NAME).$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local"
volumeMounts:
- name: taosdata
mountPath: /var/lib/taos
startupProbe:
exec:
command:
- taos-check
failureThreshold: 360
periodSeconds: 10
readinessProbe:
exec:
command:
- taos-check
initialDelaySeconds: 5
timeoutSeconds: 5000
livenessProbe:
exec:
command:
- taos-check
initialDelaySeconds: 15
periodSeconds: 20
volumeClaimTemplates:
- metadata:
name: taosdata
spec:
accessModes:
- "ReadWriteOnce"
storageClassName: "standard"
resources:
requests:
storage: "5Gi"
```
### Deploying TDengine Cluster Using kubectl Command
First, create the corresponding namespace `tdengine-test`, as well as the PVC, ensuring that there is enough remaining space with `storageClassName` set to `standard`. Then execute the following commands in sequence:
```shell
kubectl create namespace tdengine-test
kubectl apply -f taosd-service.yaml -n tdengine-test
kubectl apply -f tdengine.yaml -n tdengine-test
```
The above configuration will create a three-node TDengine cluster, with `dnode` automatically configured. You can use the `show dnodes` command to view the current cluster nodes:
```shell
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show dnodes"
kubectl exec -it tdengine-2 -n tdengine-test -- taos -s "show dnodes"
```
The output is as follows:
```shell
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
=============================================================================================================================================================================================================================================
1 | tdengine-0.ta... | 0 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-19 17:54:18.469 | | | |
2 | tdengine-1.ta... | 0 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-19 17:54:38.698 | | | |
3 | tdengine-2.ta... | 0 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-19 17:55:02.039 | | | |
Query OK, 3 row(s) in set (0.001853s)
```
View the current mnode:
```shell
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G"
taos> show mnodes\G
*************************** 1.row ***************************
id: 1
endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030
role: leader
status: ready
create_time: 2023-07-19 17:54:18.559
reboot_time: 2023-07-19 17:54:19.520
Query OK, 1 row(s) in set (0.001282s)
```
Create mnode
```shell
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "create mnode on dnode 2"
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "create mnode on dnode 3"
```
View mnode
```shell
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G"
taos> show mnodes\G
*************************** 1.row ***************************
id: 1
endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030
role: leader
status: ready
create_time: 2023-07-19 17:54:18.559
reboot_time: 2023-07-20 09:19:36.060
*************************** 2.row ***************************
id: 2
endpoint: tdengine-1.taosd.tdengine-test.svc.cluster.local:6030
role: follower
status: ready
create_time: 2023-07-20 09:22:05.600
reboot_time: 2023-07-20 09:22:12.838
*************************** 3.row ***************************
id: 3
endpoint: tdengine-2.taosd.tdengine-test.svc.cluster.local:6030
role: follower
status: ready
create_time: 2023-07-20 09:22:20.042
reboot_time: 2023-07-20 09:22:23.271
Query OK, 3 row(s) in set (0.003108s)
```
### Port Forwarding
Using the kubectl port forwarding feature, applications can access the TDengine cluster running in the Kubernetes environment.
```shell
kubectl port-forward -n tdengine-test tdengine-0 6041:6041 &
```
Use the curl command to verify the TDengine REST API using port 6041.
```shell
curl -u root:taosdata -d "show databases" 127.0.0.1:6041/rest/sql
{"code":0,"column_meta":[["name","VARCHAR",64]],"data":[["information_schema"],["performance_schema"],["test"],["test1"]],"rows":4}
```
### Cluster Expansion
TDengine supports cluster expansion:
```shell
kubectl scale statefulsets tdengine -n tdengine-test --replicas=4
```
The command line argument `--replicas=4` indicates that the TDengine cluster is to be expanded to 4 nodes. After execution, first check the status of the POD:
```shell
kubectl get pod -l app=tdengine -n tdengine-test -o wide
```
Output as follows:
```text
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
tdengine-0 1/1 Running 4 (6h26m ago) 6h53m 10.244.2.75 node86 <none> <none>
tdengine-1 1/1 Running 1 (6h39m ago) 6h53m 10.244.0.59 node84 <none> <none>
tdengine-2 1/1 Running 0 5h16m 10.244.1.224 node85 <none> <none>
tdengine-3 1/1 Running 0 3m24s 10.244.2.76 node86 <none> <none>
```
At this point, the Pod status is still Running. The dnode status in the TDengine cluster can be seen after the Pod status changes to ready:
```shell
kubectl exec -it tdengine-3 -n tdengine-test -- taos -s "show dnodes"
```
The dnode list of the four-node TDengine cluster after expansion:
```text
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
=============================================================================================================================================================================================================================================
1 | tdengine-0.ta... | 10 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 | | | |
2 | tdengine-1.ta... | 10 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 | | | |
3 | tdengine-2.ta... | 10 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 | | | |
4 | tdengine-3.ta... | 0 | 16 | ready | 2023-07-20 16:01:44.007 | 2023-07-20 16:01:44.889 | | | |
Query OK, 4 row(s) in set (0.003628s)
```
### Cleaning up the Cluster
**Warning**
When deleting PVCs, pay attention to the PV's persistentVolumeReclaimPolicy. It is recommended to set it to Delete, so that when a PVC is deleted, its PV is automatically cleaned up along with the underlying CSI storage resources. If the policy to automatically clean up PVs when deleting PVCs is not configured, then after the PVCs are deleted, manually cleaning up the PVs may not release the corresponding CSI storage resources.
To completely remove the TDengine cluster, you need to clean up the statefulset, svc, pvc, and finally delete the namespace.
```shell
kubectl delete statefulset -l app=tdengine -n tdengine-test
kubectl delete svc -l app=tdengine -n tdengine-test
kubectl delete pvc -l app=tdengine -n tdengine-test
kubectl delete namespace tdengine-test
```
### Cluster Disaster Recovery Capabilities
For high availability and reliability of TDengine in a Kubernetes environment, hardware failure and disaster recovery can be discussed on two levels:
- The disaster recovery capabilities of the underlying distributed block storage, which includes multiple replicas of block storage. Popular distributed block storage like Ceph has multi-replica capabilities, extending storage replicas to different racks, cabinets, rooms, and data centers (or directly using block storage services provided by public cloud vendors).
- TDengine's own disaster recovery: TDengine Enterprise inherently supports recovering a dnode's work by launching a new blank dnode when an existing dnode permanently goes offline (for example, due to physical disk damage and data loss).
## Deploying TDengine Cluster with Helm
Helm is the package manager for Kubernetes.
The previous section on deploying the TDengine cluster with Kubernetes was simple enough, but Helm can provide even more powerful capabilities.
### Installing Helm
```shell
curl -fsSL -o get_helm.sh \
https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod +x get_helm.sh
./get_helm.sh
```
Helm operates on Kubernetes through kubectl and the kubeconfig, which can be set up following the Rancher installation configuration for Kubernetes.
### Installing TDengine Chart
The TDengine Chart has not yet been released to the Helm repository; it can currently be downloaded directly from GitHub:
```shell
wget https://github.com/taosdata/TDengine-Operator/raw/3.0/helm/tdengine-3.0.2.tgz
```
Retrieve the current Kubernetes storage class:
```shell
kubectl get storageclass
```
In minikube, the default is standard. Then, use the helm command to install:
```shell
helm install tdengine tdengine-3.0.2.tgz \
--set storage.className=<your storage class name> \
--set image.tag=3.2.3.0
```
In a minikube environment, you can set a smaller capacity to avoid exceeding disk space:
```shell
helm install tdengine tdengine-3.0.2.tgz \
--set storage.className=standard \
--set storage.dataSize=2Gi \
--set storage.logSize=10Mi \
--set image.tag=3.2.3.0
```
After successful deployment, the TDengine Chart will output instructions for operating TDengine:
```shell
export POD_NAME=$(kubectl get pods --namespace default \
-l "app.kubernetes.io/name=tdengine,app.kubernetes.io/instance=tdengine" \
-o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default exec $POD_NAME -- taos -s "show dnodes; show mnodes"
kubectl --namespace default exec -it $POD_NAME -- taos
```
You can create a table for testing:
```shell
kubectl --namespace default exec $POD_NAME -- \
taos -s "create database test;
use test;
create table t1 (ts timestamp, n int);
insert into t1 values(now, 1)(now + 1s, 2);
select * from t1;"
```
### Configuring values
TDengine supports customization through `values.yaml`.
You can obtain the complete list of values supported by the TDengine Chart with helm show values:
```shell
helm show values tdengine-3.0.2.tgz
```
You can save the results as `values.yaml`, then modify various parameters in it, such as the number of replicas, storage class name, capacity size, TDengine configuration, etc., and then use the following command to install the TDengine cluster:
```shell
helm install tdengine tdengine-3.0.2.tgz -f values.yaml
```
All parameters are as follows:
```yaml
# Default values for tdengine.
# This is a YAML-formatted file.
# Declare variables to be passed into helm templates.
replicaCount: 1
image:
prefix: tdengine/tdengine
#pullPolicy: Always
# Overrides the image tag whose default is the chart appVersion.
# tag: "3.0.2.0"
service:
  # ClusterIP is the default service type, use NodePort only if you know what you are doing.
type: ClusterIP
ports:
# TCP range required
tcp: [6030, 6041, 6042, 6043, 6044, 6046, 6047, 6048, 6049, 6060]
# UDP range
udp: [6044, 6045]
# Set timezone here, not in taoscfg
timezone: "Asia/Shanghai"
resources:
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
storage:
# Set storageClassName for pvc. K8s use default storage class if not set.
#
className: ""
dataSize: "100Gi"
logSize: "10Gi"
nodeSelectors:
taosd:
# node selectors
clusterDomainSuffix: ""
# Config settings in taos.cfg file.
#
# The helm/k8s support will use environment variables for taos.cfg,
# converting an upper-snake-cased variable like `TAOS_DEBUG_FLAG`,
# to a camelCase taos config variable `debugFlag`.
#
# Note:
# 1. firstEp/secondEp: should not be set here, it's auto generated at scale-up.
# 2. serverPort: should not be set, we'll use the default 6030 in many places.
# 3. fqdn: will be auto generated in kubernetes, user should not care about it.
# 4. role: currently role is not supported - every node is able to be mnode and vnode.
#
# Btw, keep quotes "" around the value like below, whether the value is a number or not.
taoscfg:
# Starts as cluster or not, must be 0 or 1.
# 0: all pods will start as a separate TDengine server
# 1: pods will start as TDengine server cluster. [default]
CLUSTER: "1"
# number of replications, for cluster only
TAOS_REPLICA: "1"
# TAOS_NUM_OF_RPC_THREADS: number of threads for RPC
#TAOS_NUM_OF_RPC_THREADS: "2"
#
# TAOS_NUM_OF_COMMIT_THREADS: number of threads to commit cache data
#TAOS_NUM_OF_COMMIT_THREADS: "4"
# enable/disable installation / usage report
#TAOS_TELEMETRY_REPORTING: "1"
# time interval of system monitor, seconds
#TAOS_MONITOR_INTERVAL: "30"
# time interval of dnode status reporting to mnode, seconds, for cluster only
#TAOS_STATUS_INTERVAL: "1"
# time interval of heart beat from shell to dnode, seconds
#TAOS_SHELL_ACTIVITY_TIMER: "3"
# minimum sliding window time, milli-second
#TAOS_MIN_SLIDING_TIME: "10"
# minimum time window, milli-second
#TAOS_MIN_INTERVAL_TIME: "1"
# the compressed rpc message, option:
# -1 (no compression)
# 0 (all message compressed),
# > 0 (rpc message body which larger than this value will be compressed)
#TAOS_COMPRESS_MSG_SIZE: "-1"
# max number of connections allowed in dnode
#TAOS_MAX_SHELL_CONNS: "50000"
# stop writing logs when the disk size of the log folder is less than this value
#TAOS_MINIMAL_LOG_DIR_G_B: "0.1"
# stop writing temporary files when the disk size of the tmp folder is less than this value
#TAOS_MINIMAL_TMP_DIR_G_B: "0.1"
# if disk free space is less than this value, taosd service exit directly within startup process
#TAOS_MINIMAL_DATA_DIR_G_B: "0.1"
# One mnode is equal to the number of vnode consumed
#TAOS_MNODE_EQUAL_VNODE_NUM: "4"
  # enable/disable http service
#TAOS_HTTP: "1"
# enable/disable system monitor
#TAOS_MONITOR: "1"
# enable/disable async log
#TAOS_ASYNC_LOG: "1"
#
# time of keeping log files, days
#TAOS_LOG_KEEP_DAYS: "0"
# The following parameters are used for debug purpose only.
# debugFlag 8 bits mask: FILE-SCREEN-UNUSED-HeartBeat-DUMP-TRACE_WARN-ERROR
# 131: output warning and error
# 135: output debug, warning and error
# 143: output trace, debug, warning and error to log
# 199: output debug, warning and error to both screen and file
# 207: output trace, debug, warning and error to both screen and file
#
  # debug flag for all log types, takes effect when non-zero
#TAOS_DEBUG_FLAG: "143"
# generate core file when service crash
#TAOS_ENABLE_CORE_FILE: "1"
```
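If you later change `values.yaml`, the updated values can be applied to a running release with `helm upgrade`. A minimal sketch, assuming the release was installed as `tdengine` from the same chart archive:
```shell
# Apply an updated values.yaml to the existing release
helm upgrade tdengine tdengine-3.0.2.tgz -f values.yaml
```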
### Expansion
For expansion, refer to the explanation in the previous section; a Helm deployment requires a few additional operations.
First, retrieve the name of the StatefulSet from the deployment.
```shell
export STS_NAME=$(kubectl get statefulset \
-l "app.kubernetes.io/name=tdengine" \
-o jsonpath="{.items[0].metadata.name}")
```
The expansion operation is straightforward: simply increase the number of replicas. The following command expands TDengine to three nodes:
```shell
kubectl scale --replicas 3 statefulset/$STS_NAME
```
Use the commands `show dnodes` and `show mnodes` to check if the expansion was successful.
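A quick way to run these checks from outside the pods, reusing the `$STS_NAME` variable retrieved above:
```shell
# Run the checks inside the first pod of the StatefulSet
kubectl exec ${STS_NAME}-0 -- taos -s "show dnodes; show mnodes"
```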
### Cleaning up the Cluster
Under Helm management, the cleanup operation also becomes simple:
```shell
helm uninstall tdengine
```
However, Helm does not automatically remove PVCs; you need to retrieve and delete them manually.
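A minimal sketch of that cleanup, assuming the chart labels its PVCs with `app.kubernetes.io/instance` as in the instructions printed above:
```shell
# List the PVCs left behind by the release, then delete them
kubectl get pvc -l app.kubernetes.io/instance=tdengine
kubectl delete pvc -l app.kubernetes.io/instance=tdengine
```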

View File

@ -0,0 +1,215 @@
---
title: Manual Deployment
slug: /operations-and-maintenance/deploy-your-cluster/manual-deployment
---
You can deploy TDengine manually on a physical or virtual machine.
## Deploying taosd
taosd is the most important service component in the TDengine cluster. This section describes the steps to manually deploy a taosd cluster.
### 1. Clear Data
If the physical nodes for the cluster contain previous test data or have had another version of TDengine installed (such as 1.x/2.x), remove the old installation and clear all data first.
### 2. Check Environment
Before deploying the TDengine cluster, it is crucial to thoroughly check the network settings of all dnodes and the physical nodes where the applications are located. Here are the steps to check:
- Step 1: Execute the `hostname -f` command on each physical node to view and confirm that all node hostnames are unique. This step can be omitted for nodes where application drivers are located.
- Step 2: Execute the `ping host` command on each physical node, where host is the hostname of other physical nodes. This step aims to detect the network connectivity between the current node and other physical nodes. If you cannot ping through, immediately check the network and DNS settings. For Linux operating systems, check the `/etc/hosts` file; for Windows operating systems, check the `C:\Windows\system32\drivers\etc\hosts` file. Network issues will prevent the formation of a cluster, so be sure to resolve this issue.
- Step 3: Repeat the above network detection steps on the physical nodes where the application is running. If the network is found to be problematic, the application will not be able to connect to the taosd service. At this point, carefully check the DNS settings or hosts file of the physical node where the application is located to ensure it is configured correctly.
- Step 4: Check ports to ensure that all hosts in the cluster can communicate over TCP on port 6030.
By following these steps, you can ensure that all nodes communicate smoothly at the network level, laying a solid foundation for the successful deployment of the TDengine cluster.
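As a concrete sketch, the checks on each node might look like the following (`nc` is an assumed tool here; any TCP port probe works):
```shell
# Confirm this node's FQDN is unique within the cluster
hostname -f
# Confirm connectivity to the other nodes, e.g. h2.tdengine.com
ping -c 3 h2.tdengine.com
# Confirm that TCP port 6030 on the first dnode is reachable
nc -zv h1.tdengine.com 6030
```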
### 3. Installation
To ensure consistency and stability within the cluster, install the same version of TDengine on all physical nodes.
### 4. Modify Configuration
Modify the configuration file of TDengine (the configuration files of all nodes need to be modified). Assuming the endpoint of the first dnode to be started is `h1.tdengine.com:6030`, the cluster-related parameters are as follows.
```shell
# firstEp is the first dnode that each dnode connects to after the initial startup
firstEp h1.tdengine.com:6030
# Must be configured to the FQDN of this dnode, if there is only one hostname on this machine, you can comment out or delete the following line
fqdn h1.tdengine.com
# Configure the port of this dnode, default is 6030
serverPort 6030
```
The parameters that must be modified are firstEp and fqdn. The firstEp configuration must be identical on every dnode, while fqdn must be set to the FQDN of the dnode on which it resides. Other parameters do not need to be modified unless you are clear about why they should be changed.
For dnodes wishing to join the cluster, it is essential to ensure that the parameters related to the TDengine cluster listed in the table below are set identically. Any mismatch in parameters may prevent the dnode from successfully joining the cluster.
| Parameter Name | Meaning |
|:----------------:|:---------------------------------------------------------:|
| statusInterval | Interval at which dnode reports status to mnode |
| timezone | Time zone |
| locale | System locale information and encoding format |
| charset | Character set encoding |
| ttlChangeOnWrite | Whether ttl expiration changes with table modification |
### 5. Start
Start the first dnode, such as `h1.tdengine.com`, following the steps mentioned above. Then run taos in a terminal to start the TDengine CLI, and execute the `show dnodes` command in it to view all dnode information in the current cluster.
```shell
taos> show dnodes;
id | endpoint | vnodes|support_vnodes|status| create_time | note |
===================================================================================
1| h1.tdengine.com:6030 | 0| 1024| ready| 2022-07-16 10:50:42.673 | |
```
You can see that the endpoint of the dnode node that has just started is `h1.tdengine.com:6030`. This address is the first Ep of the new cluster.
### 6. Adding dnode
Following the steps mentioned earlier, start taosd on each physical node. Each dnode must set the firstEp parameter in its taos.cfg file to the endpoint of the first node of the new cluster, which in this case is `h1.tdengine.com:6030`. On the machine where the first dnode is located, run taos in a terminal to open the TDengine CLI, log in to the TDengine cluster, and execute the following SQL.
```shell
create dnode "h2.tdengine.com:6030"
```
This adds the new dnode's endpoint to the cluster's endpoint list. You must put the `fqdn:port` in double quotes; otherwise the statement fails. Replace the example h2.tdengine.com:6030 with the endpoint of this new dnode. Then execute the following SQL to see whether the new node has joined successfully. If the dnode you want to add is currently offline, refer to the "Common Issues" section later in this chapter for a solution.
```shell
show dnodes;
```
In the output of `show dnodes`, confirm that the new dnode's fqdn and port match the endpoint you just added; if they do not, correct it to the proper endpoint. By following the steps above, you can keep adding new dnodes to the cluster one by one, expanding the scale of the cluster and improving overall performance. Following the correct process when adding new nodes helps maintain the stability and reliability of the cluster.
**Tips**
- Any dnode that has joined the cluster can serve as the firstEp for subsequent nodes to be added. The firstEp parameter only functions when that dnode first joins the cluster. After joining, the dnode will save the latest mnode's endpoint list, and subsequently, it no longer depends on this parameter. The firstEp parameter in the configuration file is mainly used for client connections, and if no parameters are set for TDengine's CLI, it will default to connecting to the node specified by firstEp.
- Two dnodes that have not configured the firstEp parameter will run independently after starting. At this time, it is not possible to join one dnode to another to form a cluster.
- TDengine does not allow merging two independent clusters into a new cluster.
### 7. Adding mnode
When creating a TDengine cluster, the first dnode automatically becomes the mnode of the cluster, responsible for managing and coordinating the cluster. To achieve high availability of mnode, subsequent dnodes need to manually create mnode. Please note that a cluster can create up to 3 mnodes, and only one mnode can be created on each dnode. When the number of dnodes in the cluster reaches or exceeds 3, you can create mnode for the existing cluster. In the first dnode, first log into TDengine through the CLI program taos, then execute the following SQL.
```shell
create mnode on dnode <dnodeId>
```
Replace dnodeId in the example above with the id of the newly added dnode, which can be obtained by executing the `show dnodes` command. Finally, execute `show mnodes` to check whether the newly created mnode has successfully joined the cluster.
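For example, assuming the second and third dnodes were assigned ids 2 and 3 by `show dnodes`:
```shell
taos> create mnode on dnode 2
taos> create mnode on dnode 3
taos> show mnodes
```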
**Tips**
During the process of setting up a TDengine cluster, if a new node always shows as offline after executing the create dnode command to add a new node, please follow these steps for troubleshooting.
- Step 1, check whether the taosd service on the new node has started normally. You can confirm this by checking the log files or using the ps command.
- Step 2, if the taosd service has started, next check whether the new node's network connection is smooth and confirm whether the firewall has been turned off. Network issues or firewall settings may prevent the node from communicating with other nodes in the cluster.
- Step 3, use the `taos -h <fqdn>` command to try to connect to the new node, then execute the `show dnodes` command. This displays the running status of the new node as an independent cluster. If the displayed list is inconsistent with the one shown on the main node, the new node may have formed a single-node cluster of its own. To resolve this, first stop the taosd service on the new node. Second, clear all files in the dataDir directory specified in the new node's taos.cfg configuration file; this deletes all data and configuration information related to that node. Finally, restart the taosd service on the new node. This resets the new node to its initial state, ready to rejoin the main cluster.
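A minimal sketch of that reset sequence, assuming a systemd-managed installation and the default dataDir of `/var/lib/taos`:
```shell
# On the stray node: stop the service, wipe its data directory, and restart
sudo systemctl stop taosd
sudo rm -rf /var/lib/taos/*
sudo systemctl start taosd
```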
## Deploying taosAdapter
This section discusses how to deploy taosAdapter, which provides RESTful and WebSocket access capabilities for the TDengine cluster, thus playing a very important role in the cluster.
1. Installation
After the installation of TDengine Enterprise is complete, taosAdapter can be used. If you want to deploy taosAdapter on different servers, TDengine Enterprise needs to be installed on these servers.
2. Single Instance Deployment
Deploying a single instance of taosAdapter is very simple. For specific commands and configuration parameters, please refer to the taosAdapter section in the manual.
3. Multiple Instances Deployment
The main purposes of deploying multiple instances of taosAdapter are as follows:
- To increase the throughput of the cluster and prevent taosAdapter from becoming a system bottleneck.
- To enhance the robustness and high availability of the cluster, allowing requests entering the business system to be automatically routed to other instances when one instance fails.
When deploying multiple instances of taosAdapter, it is necessary to address load balancing to avoid overloading some nodes while others remain idle. Deploy each instance separately, following exactly the same steps as for a single instance. The next critical part is configuring Nginx. Below is a verified best-practice configuration; you only need to replace the endpoints with the correct addresses for your environment. For the meaning of each parameter, refer to the official Nginx documentation.
```conf
user root;
worker_processes auto;
error_log /var/log/nginx_error.log;
events {
use epoll;
worker_connections 1024;
}
http {
access_log off;
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 6041;
location ~* {
proxy_pass http://dbserver;
proxy_read_timeout 600s;
proxy_send_timeout 600s;
proxy_connect_timeout 600s;
proxy_next_upstream error http_502 non_idempotent;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
}
}
server {
listen 6043;
location ~* {
proxy_pass http://keeper;
proxy_read_timeout 60s;
proxy_next_upstream error http_502 http_500 non_idempotent;
}
}
server {
listen 6060;
location ~* {
proxy_pass http://explorer;
proxy_read_timeout 60s;
proxy_next_upstream error http_502 http_500 non_idempotent;
}
}
upstream dbserver {
least_conn;
server 172.16.214.201:6041 max_fails=0;
server 172.16.214.202:6041 max_fails=0;
server 172.16.214.203:6041 max_fails=0;
}
upstream keeper {
ip_hash;
server 172.16.214.201:6043 ;
server 172.16.214.202:6043 ;
server 172.16.214.203:6043 ;
}
upstream explorer{
ip_hash;
server 172.16.214.201:6060 ;
server 172.16.214.202:6060 ;
server 172.16.214.203:6060 ;
}
}
```
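Once Nginx is running, the load-balanced REST endpoint can be verified the same way as a single taosAdapter instance; `<nginx-host>` below stands for the address of your Nginx server:
```shell
# Requests on port 6041 are distributed across the three taosAdapter instances
curl -u root:taosdata -d "show databases" http://<nginx-host>:6041/rest/sql
```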
## Deploying taosKeeper
To use the monitoring capabilities of TDengine, taosKeeper is an essential component. For monitoring, please refer to [TDinsight](../../../tdengine-reference/components/tdinsight), and for details on deploying taosKeeper, please refer to the [taosKeeper Reference Manual](../../../tdengine-reference/components/taoskeeper).
## Deploying taosX
To utilize the data ingestion capabilities of TDengine, it is necessary to deploy the taosX service. For detailed explanations and deployment, please refer to the enterprise edition reference manual.
## Deploying taosX-Agent
For some data sources such as Pi, OPC, etc., due to network conditions and data source access restrictions, taosX cannot directly access the data sources. In such cases, a proxy service, taosX-Agent, needs to be deployed. For detailed explanations and deployment, please refer to the enterprise edition reference manual.
## Deploying taos-Explorer
TDengine provides the capability to visually manage TDengine clusters. To use the graphical interface, the taos-Explorer service needs to be deployed. For detailed explanations and deployment, please refer to the [taos-Explorer Reference Manual](../../../tdengine-reference/components/taosexplorer/).

View File

@ -0,0 +1,93 @@
---
title: Docker Deployment
slug: /operations-and-maintenance/deploy-your-cluster/docker-deployment
---
You can deploy TDengine services in Docker containers and use environment variables in the docker run command line or docker-compose file to control the behavior of services in the container.
## Starting TDengine
The TDengine image is launched with HTTP service activated by default. Use the following command to create a containerized TDengine environment with HTTP service.
```shell
docker run -d --name tdengine \
-v ~/data/taos/dnode/data:/var/lib/taos \
-v ~/data/taos/dnode/log:/var/log/taos \
-p 6041:6041 tdengine/tdengine
```
Detailed parameter explanations are as follows:
- /var/lib/taos: Default data file directory for TDengine, can be modified through the configuration file.
- /var/log/taos: Default log file directory for TDengine, can be modified through the configuration file.
The above command starts a container named tdengine and maps the HTTP service's port 6041 to the host port 6041. The following command can verify if the HTTP service in the container is available.
```shell
curl -u root:taosdata -d "show databases" localhost:6041/rest/sql
```
Run the following command to access TDengine within the container.
```shell
$ docker exec -it tdengine taos
taos> show databases;
name |
=================================
information_schema |
performance_schema |
Query OK, 2 rows in database (0.033802s)
```
Within the container, the TDengine CLI and various connectors (such as JDBC-JNI) connect to the server via the container's hostname. Accessing TDengine in the container from outside the container is more complex; using the RESTful/WebSocket connection methods is the simplest approach.
## Starting TDengine in host network mode
Run the following command to start TDengine in host network mode, which allows using the host's FQDN to establish connections, rather than using the container's hostname.
```shell
docker run -d --name tdengine --network host tdengine/tdengine
```
This method is similar to starting TDengine on the host using the systemctl command. If the TDengine client is already installed on the host, you can directly use the following command to access the TDengine service.
```shell
$ taos
taos> show dnodes;
id | endpoint | vnodes | support_vnodes | status | create_time | note |
=================================================================================================================================================
1 | vm98:6030 | 0 | 32 | ready | 2022-08-19 14:50:05.337 | |
Query OK, 1 rows in database (0.010654s)
```
## Start TDengine with a specified hostname and port
Use the following command to start TDengine on a specified hostname, set via the TAOS_FQDN environment variable or the fqdn configuration item in taos.cfg. This method provides greater flexibility for deploying TDengine.
```shell
docker run -d \
--name tdengine \
-e TAOS_FQDN=tdengine \
-p 6030:6030 \
-p 6041-6049:6041-6049 \
-p 6041-6049:6041-6049/udp \
tdengine/tdengine
```
First, the above command starts a TDengine service in the container listening on the hostname tdengine, maps the container's port 6030 to the host's port 6030, and maps the container's port range [6041, 6049] to the same range on the host. If that port range is already in use on the host, modify the command to specify a free port range.
Secondly, ensure that the hostname tdengine is resolvable in /etc/hosts. Use the following command to save the correct configuration information to the hosts file.
```shell
echo 127.0.0.1 tdengine | sudo tee -a /etc/hosts
```
Finally, you can access the TDengine service using the TDengine CLI with tdengine as the server address, as follows.
```shell
taos -h tdengine -P 6030
```
If TAOS_FQDN is set to the same as the hostname of the host, the effect is the same as "starting TDengine in host network mode".
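As a sketch of that variant, assuming `hostname -f` on the host returns a resolvable FQDN:
```shell
docker run -d --name tdengine \
  -e TAOS_FQDN=$(hostname -f) \
  -p 6030:6030 \
  -p 6041-6049:6041-6049 \
  -p 6041-6049:6041-6049/udp \
  tdengine/tdengine
```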

View File

@ -0,0 +1,812 @@
---
title: Kubernetes Deployment
slug: /operations-and-maintenance/deploy-your-cluster/kubernetes-deployment
---
You can use kubectl or Helm to deploy TDengine in Kubernetes.
Note that Helm is only supported in TDengine Enterprise. To deploy TDengine OSS in Kubernetes, use kubectl.
## Deploy TDengine with kubectl
As a time-series database designed for cloud-native architectures, TDengine inherently supports Kubernetes deployment. This section introduces, step by step, how to create a highly available TDengine cluster for production use with YAML files, focusing on common operations of TDengine in a Kubernetes environment. It assumes that readers have a basic understanding of Kubernetes, are proficient with common kubectl commands, and understand concepts such as StatefulSet, Service, and PVC. Readers unfamiliar with these concepts can consult the Kubernetes official website.
To meet the requirements of high availability, the cluster needs to meet the following requirements:
- 3 or more dnodes: Multiple vnodes in the same vgroup of TDengine should not be distributed on the same dnode, so if creating a database with 3 replicas, the number of dnodes should be 3 or more.
- 3 mnodes: mnodes are responsible for managing the entire cluster, with TDengine defaulting to one mnode. If the dnode hosting this mnode goes offline, the entire cluster becomes unavailable.
- 3 replicas of the database: TDengine's replica configuration is at the database level, so 3 replicas can ensure that the cluster remains operational even if any one of the 3 dnodes goes offline. If 2 dnodes go offline, the cluster becomes unavailable because RAFT cannot complete the election. (Enterprise edition: In disaster recovery scenarios, if the data files of any node are damaged, recovery can be achieved by restarting the dnode.)
### Prerequisites
To deploy and manage a TDengine cluster using Kubernetes, the following preparations need to be made.
- This article applies to Kubernetes v1.19 and above.
- This article uses the kubectl tool for installation and deployment, please install the necessary software in advance.
- Kubernetes has been installed and deployed and can normally access or update necessary container repositories or other services.
### Configure Service
Create a Service configuration file taosd-service.yaml; the service name metadata.name (here "taosd") will be used in the next step. Add the ports used by TDengine, then set the label app (here "tdengine") in the selector.
```yaml
---
apiVersion: v1
kind: Service
metadata:
name: "taosd"
labels:
app: "tdengine"
spec:
ports:
- name: tcp6030
protocol: "TCP"
port: 6030
- name: tcp6041
protocol: "TCP"
port: 6041
selector:
app: "tdengine"
```
### Stateful Services StatefulSet
According to Kubernetes' descriptions of various deployment types, we will use StatefulSet as the deployment resource type for TDengine. Create the file tdengine.yaml, where replicas defines the number of cluster nodes as 3. The node timezone is set to China (Asia/Shanghai), and each node is allocated 5 Gi of standard storage; you can modify these values according to actual conditions.
Please pay special attention to the configuration of startupProbe. After a dnode's Pod goes offline for a period of time and then restarts, the newly online dnode is temporarily unavailable. If the startupProbe window is too short, Kubernetes will consider the Pod abnormal and attempt to restart it; the Pod will then restart repeatedly and never return to a normal state.
```yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: "tdengine"
labels:
app: "tdengine"
spec:
serviceName: "taosd"
replicas: 3
updateStrategy:
type: RollingUpdate
selector:
matchLabels:
app: "tdengine"
template:
metadata:
name: "tdengine"
labels:
app: "tdengine"
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- tdengine
topologyKey: kubernetes.io/hostname
containers:
- name: "tdengine"
image: "tdengine/tdengine:3.2.3.0"
imagePullPolicy: "IfNotPresent"
ports:
- name: tcp6030
protocol: "TCP"
containerPort: 6030
- name: tcp6041
protocol: "TCP"
containerPort: 6041
env:
# POD_NAME for FQDN config
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
# SERVICE_NAME and NAMESPACE for fqdn resolve
- name: SERVICE_NAME
value: "taosd"
- name: STS_NAME
value: "tdengine"
- name: STS_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
# TZ for timezone settings, we recommend to always set it.
- name: TZ
value: "Asia/Shanghai"
# Environment variables with prefix TAOS_ will be parsed and converted into corresponding parameter in taos.cfg. For example, serverPort in taos.cfg should be configured by TAOS_SERVER_PORT when using K8S to deploy
- name: TAOS_SERVER_PORT
value: "6030"
# Must set if you want a cluster.
- name: TAOS_FIRST_EP
value: "$(STS_NAME)-0.$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local:$(TAOS_SERVER_PORT)"
# TAOS_FQND should always be set in k8s env.
- name: TAOS_FQDN
value: "$(POD_NAME).$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local"
volumeMounts:
- name: taosdata
mountPath: /var/lib/taos
startupProbe:
exec:
command:
- taos-check
failureThreshold: 360
periodSeconds: 10
readinessProbe:
exec:
command:
- taos-check
initialDelaySeconds: 5
timeoutSeconds: 5000
livenessProbe:
exec:
command:
- taos-check
initialDelaySeconds: 15
periodSeconds: 20
volumeClaimTemplates:
- metadata:
name: taosdata
spec:
accessModes:
- "ReadWriteOnce"
storageClassName: "standard"
resources:
requests:
storage: "5Gi"
```
### Deploying TDengine Cluster Using kubectl Command
First, create the corresponding namespace `tdengine-test` and make sure the storage class `standard` has enough remaining space (the PVCs are created automatically from the volumeClaimTemplates with `storageClassName` set to `standard`). Then execute the following commands in sequence:
```shell
kubectl create namespace tdengine-test
kubectl apply -f taosd-service.yaml -n tdengine-test
kubectl apply -f tdengine.yaml -n tdengine-test
```
The above configuration will create a three-node TDengine cluster, with `dnode` automatically configured. You can use the `show dnodes` command to view the current cluster nodes:
```shell
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show dnodes"
kubectl exec -it tdengine-2 -n tdengine-test -- taos -s "show dnodes"
```
The output is as follows:
```shell
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
=============================================================================================================================================================================================================================================
1 | tdengine-0.ta... | 0 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-19 17:54:18.469 | | | |
2 | tdengine-1.ta... | 0 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-19 17:54:38.698 | | | |
3 | tdengine-2.ta... | 0 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-19 17:55:02.039 | | | |
Query OK, 3 row(s) in set (0.001853s)
```
View the current mnode:
```shell
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G"
taos> show mnodes\G
*************************** 1.row ***************************
id: 1
endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030
role: leader
status: ready
create_time: 2023-07-19 17:54:18.559
reboot_time: 2023-07-19 17:54:19.520
Query OK, 1 row(s) in set (0.001282s)
```
Create mnodes:
```shell
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "create mnode on dnode 2"
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "create mnode on dnode 3"
```
View mnodes:
```shell
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G"
taos> show mnodes\G
*************************** 1.row ***************************
id: 1
endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030
role: leader
status: ready
create_time: 2023-07-19 17:54:18.559
reboot_time: 2023-07-20 09:19:36.060
*************************** 2.row ***************************
id: 2
endpoint: tdengine-1.taosd.tdengine-test.svc.cluster.local:6030
role: follower
status: ready
create_time: 2023-07-20 09:22:05.600
reboot_time: 2023-07-20 09:22:12.838
*************************** 3.row ***************************
id: 3
endpoint: tdengine-2.taosd.tdengine-test.svc.cluster.local:6030
role: follower
status: ready
create_time: 2023-07-20 09:22:20.042
reboot_time: 2023-07-20 09:22:23.271
Query OK, 3 row(s) in set (0.003108s)
```
### Port Forwarding
The kubectl port forwarding feature allows applications to access the TDengine cluster running in the Kubernetes environment.
```shell
kubectl port-forward -n tdengine-test tdengine-0 6041:6041 &
```
Use the curl command to verify the TDengine REST API using port 6041.
```shell
curl -u root:taosdata -d "show databases" 127.0.0.1:6041/rest/sql
{"code":0,"column_meta":[["name","VARCHAR",64]],"data":[["information_schema"],["performance_schema"],["test"],["test1"]],"rows":4}
```
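The forwarded port can also be used to write some test data through the REST API; the database and table names below are illustrative:
```shell
curl -u root:taosdata -d "create database if not exists demo" 127.0.0.1:6041/rest/sql
curl -u root:taosdata -d "create table demo.t1 (ts timestamp, n int)" 127.0.0.1:6041/rest/sql
curl -u root:taosdata -d "insert into demo.t1 values (now, 1)" 127.0.0.1:6041/rest/sql
curl -u root:taosdata -d "select * from demo.t1" 127.0.0.1:6041/rest/sql
```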
### Cluster Expansion
TDengine supports cluster expansion:
```shell
kubectl scale statefulsets tdengine -n tdengine-test --replicas=4
```
The command-line argument `--replicas=4` indicates that the TDengine cluster is to be expanded to 4 nodes. After execution, first check the status of the Pods:
```shell
kubectl get pod -l app=tdengine -n tdengine-test -o wide
```
Output as follows:
```text
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
tdengine-0 1/1 Running 4 (6h26m ago) 6h53m 10.244.2.75 node86 <none> <none>
tdengine-1 1/1 Running 1 (6h39m ago) 6h53m 10.244.0.59 node84 <none> <none>
tdengine-2 1/1 Running 0 5h16m 10.244.1.224 node85 <none> <none>
tdengine-3 1/1 Running 0 3m24s 10.244.2.76 node86 <none> <none>
```
At this point, the Pod status is still Running. The dnode status in the TDengine cluster can be seen after the Pod status changes to ready:
```shell
kubectl exec -it tdengine-3 -n tdengine-test -- taos -s "show dnodes"
```
The dnode list of the four-node TDengine cluster after expansion:
```text
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
=============================================================================================================================================================================================================================================
1 | tdengine-0.ta... | 10 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 | | | |
2 | tdengine-1.ta... | 10 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 | | | |
3 | tdengine-2.ta... | 10 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 | | | |
4 | tdengine-3.ta... | 0 | 16 | ready | 2023-07-20 16:01:44.007 | 2023-07-20 16:01:44.889 | | | |
Query OK, 4 row(s) in set (0.003628s)
```
### Cleaning up the Cluster
**Warning**
When deleting PVCs, pay attention to the PV's persistentVolumeReclaimPolicy. It is recommended to set it to Delete, so that when a PVC is deleted, its PV is automatically cleaned up along with the underlying CSI storage resources. If the policy to automatically clean up PVs when deleting PVCs is not configured, then after the PVCs are deleted, manually cleaning up the PVs may not release the corresponding CSI storage resources.
To completely remove the TDengine cluster, you need to clean up the statefulset, svc, pvc, and finally delete the namespace.
```shell
kubectl delete statefulset -l app=tdengine -n tdengine-test
kubectl delete svc -l app=tdengine -n tdengine-test
kubectl delete pvc -l app=tdengine -n tdengine-test
kubectl delete namespace tdengine-test
```
### Cluster Disaster Recovery Capabilities
For high availability and reliability of TDengine in a Kubernetes environment, hardware failure and disaster recovery can be discussed on two levels:
- The disaster recovery capabilities of the underlying distributed block storage, which includes multiple replicas of block storage. Popular distributed block storage like Ceph has multi-replica capabilities, extending storage replicas to different racks, cabinets, rooms, and data centers (or directly using block storage services provided by public cloud vendors).
- TDengine's own disaster recovery: TDengine Enterprise inherently supports recovering a dnode's work by launching a new blank dnode when an existing dnode permanently goes offline (for example, due to physical disk damage and data loss).
## Deploy TDengine with Helm
Helm is the package manager for Kubernetes.
The previous section on deploying the TDengine cluster with Kubernetes was simple enough, but Helm can provide even more powerful capabilities.
### Installing Helm
```shell
curl -fsSL -o get_helm.sh \
https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod +x get_helm.sh
./get_helm.sh
```
Helm operates on Kubernetes through kubectl and the kubeconfig, which can be set up following the Rancher installation configuration for Kubernetes.
### Installing TDengine Chart
The TDengine Chart has not yet been released to the Helm repository; it can currently be downloaded directly from GitHub:
```shell
wget https://github.com/taosdata/TDengine-Operator/raw/3.0/helm/tdengine-enterprise-3.5.0.tgz
```
Note that it's for the enterprise edition, and the community edition is not yet available.
Follow the steps below to install the TDengine Chart:
```shell
# Edit the values.yaml file to set the topology of the cluster
vim values.yaml
helm install tdengine tdengine-enterprise-3.5.0.tgz -f values.yaml
```
#### Case 1: Simple 1-node Deployment
The following is a simple example of deploying a single-node TDengine cluster using Helm.
```yaml
# This example is a simple deployment with one server replica.
name: "tdengine"
image:
repository: image.cloud.taosdata.com/ # Leave a trailing slash for the repository, or "" for no repository
server: taosx/integrated:3.3.5.1-b0a54bdd
# Set timezone here, not in taoscfg
timezone: "Asia/Shanghai"
labels:
app: "tdengine"
# Add more labels as needed.
services:
server:
type: ClusterIP
replica: 1
ports:
# TCP range required
tcp: [6041, 6030, 6060]
# UDP range, optional
udp:
volumes:
- name: data
mountPath: /var/lib/taos
spec:
storageClassName: "local-path"
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: "10Gi"
- name: log
mountPath: /var/log/taos/
spec:
storageClassName: "local-path"
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: "10Gi"
files:
- name: cfg # must be lower case.
mountPath: /etc/taos/taos.cfg
content: |
dataDir /var/lib/taos/
logDir /var/log/taos/
```
Let's explain the above configuration:
- name: The name of the deployment, here it is "tdengine".
- image:
- repository: The image repository address, remember to leave a trailing slash for the repository, or set it to an empty string to use docker.io.
- server: The specific name and tag of the server image. You need to ask your business partner for the TDengine Enterprise image.
- timezone: Set the timezone, here it is "Asia/Shanghai".
- labels: Add labels to the deployment, here is an app label with the value "tdengine", more labels can be added as needed.
- services:
- server: Configure the server service.
- type: The service type, here it is **ClusterIP**.
- replica: The number of replicas, here it is 1.
- ports: Configure the ports of the service.
- tcp: The required TCP port range, here it is [6041, 6030, 6060].
- udp: The optional UDP port range, which is not configured here.
- volumes: Configure the volumes.
- name: The name of the volume, here there are two volumes, data and log.
- mountPath: The mount path of the volume.
- spec: The specification of the volume.
- storageClassName: The storage class name, here it is **local-path**.
- accessModes: The access mode, here it is **ReadWriteOnce**.
- resources.requests.storage: The requested storage size, here it is **10Gi**.
- files: Configure the files to mount in TDengine server.
- name: The name of the file, here it is **cfg**.
- mountPath: The mount path of the file, which is **taos.cfg**.
- content: The content of the file, here the **dataDir** and **logDir** are configured.
After configuring the values.yaml file, use the following command to install the TDengine Chart:
```shell
helm install simple tdengine-enterprise-3.5.0.tgz -f values.yaml
```
After installation, the chart prints instructions for checking the status of the TDengine cluster:
```shell
NAME: simple
LAST DEPLOYED: Sun Feb 9 13:40:00 2025
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get first POD name:
export POD_NAME=$(kubectl get pods --namespace default \
-l "app.kubernetes.io/name=tdengine,app.kubernetes.io/instance=simple" -o jsonpath="{.items[0].metadata.name}")
2. Show dnodes/mnodes:
kubectl --namespace default exec $POD_NAME -- taos -s "show dnodes; show mnodes"
3. Run into taos shell:
kubectl --namespace default exec -it $POD_NAME -- taos
```
Follow the instructions to check the status of the TDengine cluster:
```shell
root@u1-58:/data1/projects/helm# kubectl --namespace default exec $POD_NAME -- taos -s "show dnodes; show mnodes"
Welcome to the TDengine Command Line Interface, Client Version:3.3.5.1
Copyright (c) 2023 by TDengine, all rights reserved.
taos> show dnodes; show mnodes
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | machine_id |
==========================================================================================================================================================================================================
1 | simple-tdengine-0.simple-td... | 0 | 85 | ready | 2025-02-07 21:17:34.903 | 2025-02-08 15:52:34.781 | | BWhWyPiEBrWZrQCSqTSc2a/H |
Query OK, 1 row(s) in set (0.005133s)
id | endpoint | role | status | create_time | role_time |
==================================================================================================================================
1 | simple-tdengine-0.simple-td... | leader | ready | 2025-02-07 21:17:34.906 | 2025-02-08 15:52:34.878 |
Query OK, 1 row(s) in set (0.004299s)
```
To clean up the TDengine cluster, use the following command:
```shell
helm uninstall simple
kubectl delete pvc -l app.kubernetes.io/instance=simple
```
#### Case 2: Tiered-Storage Deployment
The following is an example of deploying a TDengine cluster with tiered storage using Helm.
```yaml
# This is an example of a 3-tiered storage deployment with one server replica.
name: "tdengine"
image:
repository: image.cloud.taosdata.com/ # Leave a trailing slash for the repository, or "" for no repository
server: taosx/integrated:3.3.5.1-b0a54bdd
# Set timezone here, not in taoscfg
timezone: "Asia/Shanghai"
labels:
# Add more labels as needed.
services:
server:
type: ClusterIP
replica: 1
ports:
# TCP range required
tcp: [6041, 6030, 6060]
# UDP range, optional
udp:
volumes:
- name: tier0
mountPath: /data/taos0/
spec:
storageClassName: "local-path"
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: "10Gi"
- name: tier1
mountPath: /data/taos1/
spec:
storageClassName: "local-path"
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: "10Gi"
- name: tier2
mountPath: /data/taos2/
spec:
storageClassName: "local-path"
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: "10Gi"
- name: log
mountPath: /var/log/taos/
spec:
storageClassName: "local-path"
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: "10Gi"
environment:
TAOS_DEBUG_FLAG: "131"
files:
- name: cfg # must be lower case.
mountPath: /etc/taos/taos.cfg
content: |
dataDir /data/taos0/ 0 1
dataDir /data/taos1/ 1 0
dataDir /data/taos2/ 2 0
```
You can see that the configuration is similar to the previous one, with the addition of the tiered storage configuration. The dataDir configuration in the taos.cfg file is also modified to support tiered storage.
After configuring the values.yaml file, use the following command to install the TDengine Chart:
```shell
helm install tiered tdengine-enterprise-3.5.0.tgz -f values.yaml
```
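After installation, you can confirm that all three tiers are mounted inside the pod. A sketch, assuming the chart applies its standard labels to the release named `tiered`:
```shell
export POD_NAME=$(kubectl get pods --namespace default \
  -l "app.kubernetes.io/name=tdengine,app.kubernetes.io/instance=tiered" \
  -o jsonpath="{.items[0].metadata.name}")
# The three dataDir tiers configured in taos.cfg should all be present
kubectl --namespace default exec $POD_NAME -- ls /data/taos0 /data/taos1 /data/taos2
```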
#### Case 3: 2-replica Deployment
TDengine supports 2-replica deployments with an arbitrator, which can be configured as follows:
```yaml
# This example shows how to deploy a 2-replica TDengine cluster with an arbitrator.
name: "tdengine"
image:
repository: image.cloud.taosdata.com/ # Leave a trailing slash for the repository, or "" for no repository
server: taosx/integrated:3.3.5.1-b0a54bdd
# Set timezone here, not in taoscfg
timezone: "Asia/Shanghai"
labels:
my-app: "tdengine"
# Add more labels as needed.
services:
arbitrator:
type: ClusterIP
volumes:
- name: arb-data
mountPath: /var/lib/taos
spec:
storageClassName: "local-path"
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: "10Gi"
- name: arb-log
mountPath: /var/log/taos/
spec:
storageClassName: "local-path"
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: "10Gi"
server:
type: ClusterIP
replica: 2
ports:
# TCP range required
tcp: [6041, 6030, 6060]
# UDP range, optional
udp:
volumes:
- name: data
mountPath: /var/lib/taos
spec:
storageClassName: "local-path"
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: "10Gi"
- name: log
mountPath: /var/log/taos/
spec:
storageClassName: "local-path"
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: "10Gi"
```
You can see that the configuration is similar to the first one, with the addition of the arbitrator configuration. The arbitrator service uses the same storage configuration as the server service, and the server service is configured with 2 replicas (the arbitrator always runs as exactly 1 replica; this cannot be changed).
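Installation follows the same pattern as the other cases; the release name `replica2` below is illustrative:
```shell
helm install replica2 tdengine-enterprise-3.5.0.tgz -f values.yaml
```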
#### Case 4: 3-replica Deployment with Single taosX
```yaml
# This example shows how to deploy a 3-replica TDengine cluster with separate taosx/explorer service.
# Users should know that the explorer/taosx service is not cluster-ready, so it is recommended to deploy it separately.
name: "tdengine"
image:
repository: image.cloud.taosdata.com/ # Leave a trailing slash for the repository, or "" for no repository
server: taosx/integrated:3.3.5.1-b0a54bdd
# Set timezone here, not in taoscfg
timezone: "Asia/Shanghai"
labels:
# Add more labels as needed.
services:
server:
type: ClusterIP
replica: 3
ports:
# TCP range required
tcp: [6041, 6030]
# UDP range, optional
udp:
volumes:
- name: data
mountPath: /var/lib/taos
spec:
storageClassName: "local-path"
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: "10Gi"
- name: log
mountPath: /var/log/taos/
spec:
storageClassName: "local-path"
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: "10Gi"
environment:
ENABLE_TAOSX: "0" # Disable taosx in server replicas.
taosx:
type: ClusterIP
volumes:
- name: taosx-data
mountPath: /var/lib/taos
spec:
storageClassName: "local-path"
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: "10Gi"
- name: taosx-log
mountPath: /var/log/taos/
spec:
storageClassName: "local-path"
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: "10Gi"
files:
- name: taosx
mountPath: /etc/taos/taosx.toml
content: |-
# TAOSX configuration in TOML format.
[monitor]
# FQDN of taosKeeper service, no default value
fqdn = "localhost"
# How often to send metrics to taosKeeper, default every 10 seconds. Only value from 1 to 10 is valid.
interval = 10
# log configuration
[log]
# All log files are stored in this directory
#
#path = "/var/log/taos" # on linux/macOS
# log filter level
#
#level = "info"
# Compress archived log files or not
#
#compress = false
# The number of log files retained by the current explorer server instance in the `path` directory
#
#rotationCount = 30
# Rotate when the log file reaches this size
#
#rotationSize = "1GB"
# Log downgrade when the remaining disk space reaches this size, only logging `ERROR` level logs
#
#reservedDiskSize = "1GB"
# The number of days log files are retained
#
#keepDays = 30
# Watching the configuration file for log.loggers changes, default to true.
#
#watching = true
# Customize the log output level of modules, and changes will be applied after modifying the file when log.watching is enabled
#
# ## Examples:
#
# crate = "error"
# crate::mod1::mod2 = "info"
# crate::span[field=value] = "warn"
#
[log.loggers]
#"actix_server::accept" = "warn"
#"taos::query" = "warn"
```
You can see that the configuration is similar to the first one, with the addition of the taosx configuration. The taosx service uses a storage configuration similar to the server service's, and the server service is configured with 3 replicas. Since the taosx service is not cluster-ready, it is recommended to deploy it separately.
After configuring the values.yaml file, use the following command to install the TDengine Chart:
```shell
helm install replica3 tdengine-enterprise-3.5.0.tgz -f values.yaml
```
You can use the following command to expose the explorer service to the outside world with ingress:
```shell
tee replica3-ingress.yaml <<EOF
# This is a helm chart example for deploying 3 replicas of TDengine Explorer
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: replica3-ingress
namespace: default
spec:
rules:
- host: replica3.local.tdengine.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: replica3-tdengine-taosx
port:
number: 6060
EOF
kubectl apply -f replica3-ingress.yaml
```
Use `kubectl get ingress` to view the ingress service.
```shell
root@server:/data1/projects/helm# kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
replica3-ingress nginx replica3.local.tdengine.com 192.168.1.58 80 48m
```
You can configure the domain name resolution to point to the ingress service's external IP address. For example, add the following line to the hosts file:
```conf
192.168.1.58 replica3.local.tdengine.com
```
Now you can access the explorer service through the domain name `replica3.local.tdengine.com`.
```shell
curl http://replica3.local.tdengine.com
```

View File

@ -0,0 +1,11 @@
---
title: Deploying Your Cluster
slug: /operations-and-maintenance/deploy-your-cluster
---
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
You can deploy a TDengine cluster manually, by using Docker, or by using Kubernetes. For Kubernetes deployments, you can use kubectl or Helm.
<DocCardList items={useCurrentSidebarCategory().items}/>

View File

@ -17,7 +17,9 @@ TDengine is designed for various writing scenarios, and many of these scenarios
```sql
COMPACT DATABASE db_name [start with 'XXXX'] [end with 'YYYY'];
SHOW COMPACT [compact_id];
COMPACT [db_name.]VGROUPS IN (vgroup_id1, vgroup_id2, ...) [start with 'XXXX'] [end with 'YYYY'];
SHOW COMPACTS;
SHOW COMPACT compact_id;
KILL COMPACT compact_id;
```
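For example, a sketch of compacting two vgroups of a database over a time range and then monitoring the task from the TDengine CLI (the vgroup ids, time range, and compact id are illustrative):
```shell
taos> COMPACT db1.VGROUPS IN (3, 4) start with '2024-01-01 00:00:00' end with '2024-06-30 23:59:59';
taos> SHOW COMPACTS;
taos> KILL COMPACT 1;
```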

View File

@ -1,14 +1,19 @@
---
title: Data Backup and Restoration
slug: /operations-and-maintenance/back-up-and-restore-data
slug: /operations-and-maintenance/data-backup-and-restoration
---
To prevent data loss and accidental deletions, TDengine provides comprehensive features such as data backup, restoration, fault tolerance, and real-time synchronization of remote data to ensure the security of data storage. This section briefly explains the backup and restoration functions.
import Image from '@theme/IdealImage';
import imgBackup from '../assets/data-backup-01.png';
You can back up the data in your TDengine cluster and restore it in the event that data is lost or damaged.
## Data Backup and Restoration Using taosdump
taosdump is an open-source tool that supports backing up data from a running TDengine cluster and restoring the backed-up data to the same or another running TDengine cluster. taosdump can back up the database as a logical data unit or back up data records within a specified time period in the database. When using taosdump, you can specify the directory path for data backup. If no directory path is specified, taosdump will default to backing up the data in the current directory.
### Back Up Data with taosdump
Below is an example of using taosdump to perform data backup.
```shell
@ -19,6 +24,8 @@ After executing the above command, taosdump will connect to the TDengine cluster
When using taosdump, if the specified storage path already contains data files, taosdump will prompt the user and exit immediately to avoid data overwriting. This means the same storage path can only be used for one backup. If you see related prompts, please operate carefully to avoid accidental data loss.
### Restore Data with taosdump
To restore data files from a specified local file path to a running TDengine cluster, you can execute the taosdump command by specifying command-line parameters and the data file path. Below is an example code for taosdump performing data restoration.
```shell
@ -27,25 +34,62 @@ taosdump -i /file/path -h localhost -P 6030
After executing the above command, taosdump will connect to the TDengine cluster at localhost:6030 and restore the data files from /file/path to the TDengine cluster.
## Data Backup and Restoration Based on TDengine Enterprise
## Data Backup and Restoration in TDengine Enterprise
TDengine Enterprise provides an efficient incremental backup feature, with the following process.
TDengine Enterprise implements incremental backup and recovery of data by using data subscription. The backup and recovery functions of TDengine Enterprise include the following concepts:
Step 1, access the taosExplorer service through a browser, usually at the port 6060 of the IP address where the TDengine cluster is located, such as `http://localhost:6060`.
1. Incremental data backup: based on TDengine's data subscription function, all data changes of **the backup object** (including addition, modification, deletion, and metadata changes) are recorded to generate a backup file.
2. Data recovery: use the backup files generated by incremental data backup to restore **the backup object** to a specified point in time.
3. Backup object: the object that the user backs up, which can be a **database** or a **supertable**.
4. Backup plan: a periodic backup task created by the user for a backup object. The backup plan starts at a specified time point and periodically executes a backup task at intervals of **the backup cycle**. Each backup task generates a **backup point**.
5. Backup point: each time a backup task is executed, a set of backup files is generated; they correspond to a time point called **a backup point**. The first backup point is called **the initial backup point**.
6. Restore task: the user selects a backup point in the backup plan and creates a restore task. The restore task starts from **the initial backup point** and replays the data changes in **the backup files** one by one until the specified backup point is reached.
Step 2, in the "System Management - Backup" page of the taosExplorer service, add a new data backup task, fill in the database name and backup storage file path in the task configuration information, and start the data backup after completing the task creation. Three parameters can be configured on the data backup configuration page:
### Incremental Backup Example
- Backup cycle: Required, configure the time interval for each data backup execution, which can be selected from a dropdown menu to execute once every day, every 7 days, or every 30 days. After configuration, a data backup task will be initiated at 0:00 of the corresponding backup cycle;
- Database: Required, configure the name of the database to be backed up (the database's wal_retention_period parameter must be greater than 0);
- Directory: Required, configure the path in the running environment of taosX where the data will be backed up, such as `/root/data_backup`;
<figure>
<Image img={imgBackup} alt="Incremental backup process"/>
<figcaption>Figure 1. Incremental backup process</figcaption>
</figure>
Step 3, after the data backup task is completed, find the created data backup task in the list of created tasks on the same page, and directly perform one-click restoration to restore the data to TDengine.
1. The user creates a backup plan to execute the backup task every 1 day starting from 2024-08-27 00:00:00.
2. The first backup task is executed at 2024-08-27 00:00:00, generating the initial backup point.
3. After that, the backup task is executed every 1 day, generating multiple backup points.
4. Users can select a backup point and create a restore task.
5. The restore task starts from the initial backup point, applies the backup points one by one, and restores to the specified backup point.
Compared with taosdump, which always performs full backups, TDengine Enterprise is not only highly efficient but also incremental, so backing up the same data repeatedly to the specified storage path completes quickly. In scenarios with large data volumes, TDengine Enterprise can therefore significantly reduce system overhead and is more convenient.
### Back Up Data in TDengine Enterprise
**Common Error Troubleshooting**
1. In a web browser, open the taosExplorer interface for TDengine. This interface is located on port 6060 on the hostname or IP address running TDengine.
2. In the main menu on the left, click **Management** and open the **Backup** tab.
3. Under **Backup Plan**, click **Create New Backup** to define your backup plan.
1. **Database:** Select the database that you want to back up.
2. **Super Table:** (Optional) Select the supertable that you want to back up. If you do not select a supertable, all data in the database is backed up.
3. **Next execution time:** Enter the date and time when you want to perform the initial backup for this backup plan. If you specify a date and time in the past, the initial backup is performed immediately.
4. **Backup Cycle:** Specify how often you want to perform incremental backups. The value of this field must be less than the value of `WAL_RETENTION_PERIOD` for the specified database.
5. **Retry times:** Enter how many times you want to retry a backup task that has failed, provided that the specific failure might be resolved by retrying.
6. **Retry interval:** Enter the delay in seconds between retry attempts.
7. **Directory:** Enter the full path of the directory in which you want to store backup files.
8. **Backup file max size:** Enter the maximum size of a single backup file. If the total size of your backup exceeds this number, the backup is split into multiple files.
9. **Compression level:** Select **fastest** for the fastest performance but lowest compression ratio, **best** for the highest compression ratio but slowest performance, or **balanced** for a combination of performance and compression.
1. If the task fails to start and reports the following error:
4. Click **Confirm** to create the backup plan.
You can view your backup plans and modify, clone, or delete them using the buttons in the **Operation** column. Click **Refresh** to update the status of your plans. Note that you must stop a backup plan before you can delete it. You can also click **View** in the **Backup File** column to view the backup record points and files created by each plan.
### Restore Data in TDengine Enterprise
1. Locate the backup plan containing data that you want to restore and click **View** in the **Backup File** column.
2. Determine the backup record point to which you want to restore and click the Restore icon in the **Operation** column.
3. Select the backup file timestamp and target database and click **Confirm**.
## Troubleshooting
### Port Access Exception
A port access exception is indicated by the following error:
```text
Error: tmq to td task exec error
@ -54,9 +98,11 @@ Caused by:
[0x000B] Unable to establish connection
```
The cause is an abnormal connection to the data source port, check whether the data source FQDN is connected and whether port 6030 is accessible.
If you encounter this error, check whether the data source FQDN is reachable and whether port 6030 is listening and accessible.
2. If using a WebSocket connection, the task fails to start and reports the following error:
### Connection Issues
A connection issue is indicated by the task failing to start and reporting the following error:
```text
Error: tmq to td task exec error
@ -67,15 +113,16 @@ Caused by:
2: failed to lookup address information: Temporary failure in name resolution
```
When using a WebSocket connection, you may encounter various types of errors, which can be seen after "Caused by". Here are some possible errors:
The following are some possible errors for WebSocket connections:
- "Temporary failure in name resolution": DNS resolution error. Check whether the specified IP address or FQDN can be accessed normally.
- "IO error: Connection refused (os error 111)": Port access failed. Check whether the port is configured correctly and is enabled and accessible.
- "IO error: received corrupt message": Message parsing failed. This may be because SSL was enabled using the WSS method, but the source port is not supported.
- "HTTP error: *": Confirm that you are connecting to the correct taosAdapter port and that your LSB/Nginx/Proxy has been configured correctly.
- "WebSocket protocol error: Handshake not finished": WebSocket connection error. This is typically caused by an incorrectly configured port.
- "Temporary failure in name resolution": DNS resolution error, check if the IP or FQDN can be accessed normally.
- "IO error: Connection refused (os error 111)": Port access failure, check if the port is configured correctly or if it is open and accessible.
- "IO error: received corrupt message": Message parsing failed, possibly because SSL was enabled using wss, but the source port does not support it.
- "HTTP error: *": Possibly connected to the wrong taosAdapter port or incorrect LSB/Nginx/Proxy configuration.
- "WebSocket protocol error: Handshake not finished": WebSocket connection error, usually because the configured port is incorrect.
### WAL Configuration
3. If the task fails to start and reports the following error:
A WAL configuration issue is indicated by the task failing to start and reporting the following error:
```text
Error: tmq to td task exec error
@ -84,11 +131,8 @@ Caused by:
[0x038C] WAL retention period is zero
```
This is due to incorrect WAL configuration in the source database, preventing subscription.
Solution:
Modify the data WAL configuration:
To resolve this error, modify the WAL retention period for the affected database:
```sql
alter database test wal_retention_period 3600;
ALTER DATABASE test WAL_RETENTION_PERIOD 3600;
```
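To verify the change, you can query the database options. A minimal sketch using the taos CLI; the column name is taken from `information_schema` in TDengine 3.0 and may vary by version:
```shell
# Show the WAL retention period (in seconds) for each database.
taos -s "SELECT name, wal_retention_period FROM information_schema.ins_databases;"
```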

View File

@ -26,6 +26,7 @@ Flink Connector supports all platforms that can run Flink 1.19 and above version
| Flink Connector Version | Major Changes | TDengine Version|
|-------------------------| ------------------------------------ | ---------------- |
| 2.0.2 | The Table Sink supports types such as RowKind.UPDATE_BEFORE, RowKind.UPDATE_AFTER, and RowKind.DELETE.| - |
| 2.0.1 | Sink supports writing types from Rowdata implementations.| - |
| 2.0.0 | 1. Support SQL queries on data in TDengine database. <br/> 2. Support CDC subscription to data in TDengine database.<br/> 3. Supports reading and writing to TDengine database using Table SQL. | 3.3.5.1 and higher|
| 1.0.0 | Supports Sink functionality for writing data from other sources to TDengine.| 3.3.2.0 and higher|
@ -115,7 +116,7 @@ If using Maven to manage a project, simply add the following dependencies in pom
<dependency>
<groupId>com.taosdata.flink</groupId>
<artifactId>flink-connector-tdengine</artifactId>
<version>2.0.1</version>
<version>2.0.2</version>
</dependency>
```

View File

@ -13,45 +13,108 @@ import Icinga2 from "../../assets/resources/_icinga2.mdx"
import TCollector from "../../assets/resources/_tcollector.mdx"
taosAdapter is a companion tool for TDengine, serving as a bridge and adapter between the TDengine cluster and applications. It provides an easy and efficient way to ingest data directly from data collection agents (such as Telegraf, StatsD, collectd, etc.). It also offers InfluxDB/OpenTSDB compatible data ingestion interfaces, allowing InfluxDB/OpenTSDB applications to be seamlessly ported to TDengine.
TDengine's connectors for various languages communicate with TDengine through the WebSocket interface, so taosAdapter must be installed.
taosAdapter offers the following features:
- RESTful interface
- Compatible with InfluxDB v1 write interface
- Compatible with OpenTSDB JSON and telnet format writing
- Seamless connection to Telegraf
- Seamless connection to collectd
- Seamless connection to StatsD
- Supports Prometheus remote_read and remote_write
- Retrieves the VGroup ID of the virtual node group (VGroup) where the table is located
## taosAdapter Architecture Diagram
The architecture diagram is as follows:
<figure>
<Image img={imgAdapter} alt="taosAdapter architecture"/>
<figcaption>Figure 1. taosAdapter architecture</figcaption>
</figure>
## Deployment Methods for taosAdapter
## Feature List
### Installing taosAdapter
The taosAdapter provides the following features:
- WebSocket Interface:
Supports executing SQL, schemaless writing, parameter binding, and data subscription through the WebSocket protocol.
- Compatible with InfluxDB v1 write interface:
[https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/](https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/)
- Compatible with OpenTSDB JSON and telnet format writing:
- [http://opentsdb.net/docs/build/html/api_http/put.html](http://opentsdb.net/docs/build/html/api_http/put.html)
- [http://opentsdb.net/docs/build/html/api_telnet/put.html](http://opentsdb.net/docs/build/html/api_telnet/put.html)
- collectd data writing:
collectd is a system statistics collection daemon, visit [https://collectd.org/](https://collectd.org/) for more information.
- StatsD data writing:
StatsD is a simple yet powerful daemon for gathering statistics. Visit [https://github.com/statsd/statsd](https://github.com/statsd/statsd) for more information.
- icinga2 OpenTSDB writer data writing:
icinga2 is a software for collecting check results metrics and performance data. Visit [https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer](https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer) for more information.
- TCollector data writing:
TCollector is a client process that collects data from local collectors and pushes it to OpenTSDB. Visit [http://opentsdb.net/docs/build/html/user_guide/utilities/tcollector.html](http://opentsdb.net/docs/build/html/user_guide/utilities/tcollector.html) for more information.
- node_exporter data collection and writing:
node_exporter is an exporter of machine metrics. Visit [https://github.com/prometheus/node_exporter](https://github.com/prometheus/node_exporter) for more information.
- Supports Prometheus remote_read and remote_write:
remote_read and remote_write are Prometheus's data read-write separation cluster solutions. Visit [https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis](https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis) for more information.
- RESTful API:
[RESTful API](../../client-libraries/rest-api/)
### WebSocket Interface
Through the WebSocket interface of taosAdapter, connectors in various languages can achieve SQL execution, schemaless writing, parameter binding, and data subscription functionalities. Refer to the [Development Guide](../../../developer-guide/connecting-to-tdengine/#websocket-connection) for more details.
### Compatible with InfluxDB v1 write interface
You can use any client that supports the HTTP protocol to write data in InfluxDB compatible format to TDengine by accessing the RESTful interface URL `http://<fqdn>:6041/influxdb/v1/write`.
Supported InfluxDB parameters are as follows:
- `db` specifies the database name used by TDengine
- `precision` the time precision used by TDengine
- `u` TDengine username
- `p` TDengine password
- `ttl` the lifespan of automatically created subtables, determined by the TTL parameter of the first data entry in the subtable, which cannot be updated. For more information, please refer to the TTL parameter in the [table creation document](../../sql-manual/manage-tables/).
Note: Currently, InfluxDB's token authentication method is not supported; only Basic authentication and query parameter verification are supported.
Example: `curl --request POST http://127.0.0.1:6041/influxdb/v1/write?db=test --user "root:taosdata" --data-binary "measurement,host=host1 field1=2i,field2=2.0 1577836800000000000"`
### Compatible with OpenTSDB JSON and telnet format writing
You can use any client that supports the HTTP protocol to write data in OpenTSDB compatible format to TDengine by accessing the RESTful interface URL `http://<fqdn>:6041/<APIEndPoint>`. The available endpoints are as follows:
```text
/opentsdb/v1/put/json/<db>
/opentsdb/v1/put/telnet/<db>
```
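For illustration, hedged curl sketches for both formats, assuming a database named `test` and the default credentials:
```shell
# JSON format: the body follows the standard OpenTSDB /api/put JSON schema.
curl -u root:taosdata -X POST "http://127.0.0.1:6041/opentsdb/v1/put/json/test" \
  -d '{"metric":"sys.cpu.nice","timestamp":1577836800,"value":18,"tags":{"host":"web01"}}'

# Telnet format: one "<metric> <timestamp> <value> <tagk=tagv> ..." line per record.
curl -u root:taosdata -X POST "http://127.0.0.1:6041/opentsdb/v1/put/telnet/test" \
  -d 'sys.cpu.nice 1577836800 18 host=web01'
```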
### collectd data writing
<CollectD />
### StatsD data writing
<StatsD />
### icinga2 OpenTSDB writer data writing
<Icinga2 />
### TCollector data writing
<TCollector />
### node_exporter data collection and writing
An exporter used by Prometheus that exposes hardware and operating system metrics from \*NIX kernels
- Enable configuration of taosAdapter node_exporter.enable
- Set the relevant configuration for node_exporter
- Restart taosAdapter
### Supports Prometheus remote_read and remote_write
<Prometheus />
### RESTful API
You can use any client that supports the HTTP protocol to write data to TDengine or query data from TDengine by accessing the RESTful interface URL `http://<fqdn>:6041/rest/sql`. For details, please refer to the [REST API documentation](../../client-libraries/rest-api/).
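For example, with the default credentials:
```shell
# Execute a query through the RESTful interface; the SQL statement is the request body.
curl -u root:taosdata -d "SHOW DATABASES;" http://127.0.0.1:6041/rest/sql
```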
## Installation
taosAdapter is part of the TDengine server software. If you are using TDengine server, you do not need any additional steps to install taosAdapter. If you need to deploy taosAdapter separately from the TDengine server, install the complete TDengine package on that server to obtain taosAdapter. If you need to compile taosAdapter from source code, you can refer to the [Build taosAdapter](https://github.com/taosdata/taosadapter/blob/3.0/BUILD.md) document.
### Starting/Stopping taosAdapter
After the installation is complete, you can start the taosAdapter service using the command `systemctl start taosadapter`.
On Linux systems, the taosAdapter service is managed by default by systemd. Use the command `systemctl start taosadapter` to start the taosAdapter service. Use the command `systemctl stop taosadapter` to stop the taosAdapter service.
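For reference:
```shell
systemctl start taosadapter    # start the service
systemctl stop taosadapter     # stop the service
systemctl status taosadapter   # check the current status
```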
### Removing taosAdapter
Use the command `rmtaos` to remove the TDengine server software, including taosAdapter.
### Upgrading taosAdapter
taosAdapter and TDengine server need to use the same version. Please upgrade taosAdapter by upgrading the TDengine server.
A taosAdapter instance deployed separately from taosd must be upgraded by upgrading the TDengine server installation on its host.
## taosAdapter Parameter List
## Configuration
taosAdapter supports configuration through command-line parameters, environment variables, and configuration files. The default configuration file is `/etc/taos/taosadapter.toml`.
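A hedged sketch of the three mechanisms; the `--port` and `-c` flags are assumptions based on the usage output and should be verified with `taosadapter --help`:
```shell
# 1. Command-line parameter
taosadapter --port 6041

# 2. Environment variable (Env names are listed in the parameter table below)
TAOS_ADAPTER_PORT=6041 taosadapter

# 3. Configuration file
taosadapter -c /etc/taos/taosadapter.toml
```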
@ -80,6 +143,7 @@ Usage of taosAdapter:
--instanceId int instance ID. Env "TAOS_ADAPTER_INSTANCE_ID" (default 32)
--log.compress whether to compress old log. Env "TAOS_ADAPTER_LOG_COMPRESS"
--log.enableRecordHttpSql whether to record http sql. Env "TAOS_ADAPTER_LOG_ENABLE_RECORD_HTTP_SQL"
--log.keepDays uint log retention days, must be a positive integer. Env "TAOS_ADAPTER_LOG_KEEP_DAYS" (default 30)
--log.level string log level (trace debug info warning error). Env "TAOS_ADAPTER_LOG_LEVEL" (default "info")
--log.path string log path. Env "TAOS_ADAPTER_LOG_PATH" (default "/var/log/taos")
--log.reservedDiskSize string reserved disk size for log dir (KB MB GB), must be a positive integer. Env "TAOS_ADAPTER_LOG_RESERVED_DISK_SIZE" (default "1GB")
@ -90,6 +154,8 @@ Usage of taosAdapter:
--log.sqlRotationSize string record sql log rotation size(KB MB GB), must be a positive integer. Env "TAOS_ADAPTER_LOG_SQL_ROTATION_SIZE" (default "1GB")
--log.sqlRotationTime duration record sql log rotation time. Env "TAOS_ADAPTER_LOG_SQL_ROTATION_TIME" (default 24h0m0s)
--logLevel string log level (trace debug info warning error). Env "TAOS_ADAPTER_LOG_LEVEL" (default "info")
--maxAsyncConcurrentLimit int The maximum number of concurrent calls allowed for the C asynchronous method. 0 means use CPU core count. Env "TAOS_ADAPTER_MAX_ASYNC_CONCURRENT_LIMIT"
--maxSyncConcurrentLimit int The maximum number of concurrent calls allowed for the C synchronized method. 0 means use CPU core count. Env "TAOS_ADAPTER_MAX_SYNC_CONCURRENT_LIMIT"
--monitor.collectDuration duration Set monitor duration. Env "TAOS_ADAPTER_MONITOR_COLLECT_DURATION" (default 3s)
--monitor.disable Whether to disable monitoring. Env "TAOS_ADAPTER_MONITOR_DISABLE" (default true)
--monitor.identity string The identity of the current instance, or 'hostname:port' if it is empty. Env "TAOS_ADAPTER_MONITOR_IDENTITY"
@ -118,7 +184,7 @@ Usage of taosAdapter:
--opentsdb_telnet.flushInterval duration opentsdb_telnet flush interval (0s means not valid) . Env "TAOS_ADAPTER_OPENTSDB_TELNET_FLUSH_INTERVAL"
--opentsdb_telnet.maxTCPConnections int max tcp connections. Env "TAOS_ADAPTER_OPENTSDB_TELNET_MAX_TCP_CONNECTIONS" (default 250)
--opentsdb_telnet.password string opentsdb_telnet password. Env "TAOS_ADAPTER_OPENTSDB_TELNET_PASSWORD" (default "taosdata")
--opentsdb_telnet.ports ints opentsdb_telnet tcp port. Env "TAOS_ADAPTER_OPENTSDB_TELNET_PORTS" (default [6046,6047,6048,6049])
--opentsdb_telnet.ports ints opentsdb telnet tcp port. Env "TAOS_ADAPTER_OPENTSDB_TELNET_PORTS" (default [6046,6047,6048,6049])
--opentsdb_telnet.tcpKeepAlive enable tcp keep alive. Env "TAOS_ADAPTER_OPENTSDB_TELNET_TCP_KEEP_ALIVE"
--opentsdb_telnet.ttl int opentsdb_telnet data ttl. Env "TAOS_ADAPTER_OPENTSDB_TELNET_TTL"
--opentsdb_telnet.user string opentsdb_telnet user. Env "TAOS_ADAPTER_OPENTSDB_TELNET_USER" (default "root")
@ -131,6 +197,9 @@ Usage of taosAdapter:
--prometheus.enable enable prometheus. Env "TAOS_ADAPTER_PROMETHEUS_ENABLE" (default true)
--restfulRowLimit int restful returns the maximum number of rows (-1 means no limit). Env "TAOS_ADAPTER_RESTFUL_ROW_LIMIT" (default -1)
--smlAutoCreateDB Whether to automatically create db when writing with schemaless. Env "TAOS_ADAPTER_SML_AUTO_CREATE_DB"
--ssl.certFile string ssl cert file path. Env "TAOS_ADAPTER_SSL_CERT_FILE"
--ssl.enable enable ssl. Env "TAOS_ADAPTER_SSL_ENABLE"
--ssl.keyFile string ssl key file path. Env "TAOS_ADAPTER_SSL_KEY_FILE"
--statsd.allowPendingMessages int statsd allow pending messages. Env "TAOS_ADAPTER_STATSD_ALLOW_PENDING_MESSAGES" (default 50000)
--statsd.db string statsd db name. Env "TAOS_ADAPTER_STATSD_DB" (default "statsd")
--statsd.deleteCounters statsd delete counter cache after gather. Env "TAOS_ADAPTER_STATSD_DELETE_COUNTERS" (default true)
@ -157,27 +226,44 @@ Usage of taosAdapter:
-V, --version Print the version and exit
```
Note:
When using a browser to make API calls, please set the following Cross-Origin Resource Sharing (CORS) parameters according to the actual situation:
See the example configuration file at [example/config/taosadapter.toml](https://github.com/taosdata/taosadapter/blob/3.0/example/config/taosadapter.toml).
```text
AllowAllOrigins
AllowOrigins
AllowHeaders
ExposeHeaders
AllowCredentials
AllowWebSockets
```
### Cross-Origin Configuration
When making API calls from the browser, please configure the following Cross-Origin Resource Sharing (CORS) parameters based on your actual situation:
- **`cors.allowAllOrigins`**: Whether to allow all origins to access, default is true.
- **`cors.allowOrigins`**: A comma-separated list of origins allowed to access. Multiple origins can be specified.
- **`cors.allowHeaders`**: A comma-separated list of request headers allowed for cross-origin access. Multiple headers can be specified.
- **`cors.exposeHeaders`**: A comma-separated list of response headers exposed for cross-origin access. Multiple headers can be specified.
- **`cors.allowCredentials`**: Whether to allow cross-origin requests to include user credentials, such as cookies, HTTP authentication information, or client SSL certificates.
- **`cors.allowWebSockets`**: Whether to allow WebSockets connections.
If you are not making API calls through a browser, you do not need to worry about these configurations.
The above configurations take effect for the following interfaces:
* RESTful API requests
* WebSocket API requests
* InfluxDB v1 write interface
* OpenTSDB HTTP write interface
For details about the CORS protocol, please refer to: [https://www.w3.org/wiki/CORS_Enabled](https://www.w3.org/wiki/CORS_Enabled) or [https://developer.mozilla.org/docs/Web/HTTP/CORS](https://developer.mozilla.org/docs/Web/HTTP/CORS).
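As an illustration, a hedged sketch that restricts cross-origin access to a single origin; the Env names are assumed to follow the `TAOS_ADAPTER_*` pattern from the parameter table above:
```shell
# Allow only https://example.com, with credentials, then start taosAdapter.
TAOS_ADAPTER_CORS_ALLOW_ALL_ORIGINS=false \
TAOS_ADAPTER_CORS_ALLOW_ORIGINS="https://example.com" \
TAOS_ADAPTER_CORS_ALLOW_CREDENTIALS=true \
taosadapter
```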
See the example configuration file at [example/config/taosadapter.toml](https://github.com/taosdata/taosadapter/blob/3.0/example/config/taosadapter.toml).
### Connection Pool Configuration
### Connection Pool Parameters Description
taosAdapter uses a connection pool to manage connections to TDengine, improving concurrency performance and resource utilization. The connection pool configuration applies to the following interfaces, and these interfaces share a single connection pool:
When using the RESTful API, the system will manage TDengine connections through a connection pool. The connection pool can be configured with the following parameters:
* RESTful API requests
* InfluxDB v1 write interface
* OpenTSDB JSON and telnet format writing
* Telegraf data writing
* collectd data writing
* StatsD data writing
* node_exporter data collection writing
* Prometheus remote_read and remote_write
The configuration parameters for the connection pool are as follows:
- **`pool.maxConnect`**: The maximum number of connections allowed in the pool, default is twice the number of CPU cores. It is recommended to keep the default setting.
- **`pool.maxIdle`**: The maximum number of idle connections in the pool, default is the same as `pool.maxConnect`. It is recommended to keep the default setting.
@ -185,153 +271,136 @@ When using the RESTful API, the system will manage TDengine connections through
- **`pool.waitTimeout`**: Timeout for obtaining a connection from the pool, default is set to 60 seconds. If a connection is not obtained within the timeout period, HTTP status code 503 will be returned. This parameter is available starting from version 3.3.3.0.
- **`pool.maxWait`**: The maximum number of requests waiting to get a connection in the pool, default is 0, which means no limit. When the number of queued requests exceeds this value, new requests will return HTTP status code 503. This parameter is available starting from version 3.3.3.0.
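A hedged example of overriding the pool defaults on the command line, using flag names derived from the parameter keys above:
```shell
# Cap the pool at 16 connections and fail fast after a 30-second wait.
taosadapter --pool.maxConnect 16 --pool.maxIdle 16 --pool.waitTimeout 30
```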
## Feature List
### HTTP Response Code Configuration
- RESTful API
[RESTful API](../../client-libraries/rest-api/)
- Compatible with InfluxDB v1 write interface
[https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/](https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/)
- Compatible with OpenTSDB JSON and telnet format writing
- [http://opentsdb.net/docs/build/html/api_http/put.html](http://opentsdb.net/docs/build/html/api_http/put.html)
- [http://opentsdb.net/docs/build/html/api_telnet/put.html](http://opentsdb.net/docs/build/html/api_telnet/put.html)
- Seamless connection with collectd.
collectd is a system statistics collection daemon, visit [https://collectd.org/](https://collectd.org/) for more information.
- Seamless connection with StatsD.
StatsD is a simple yet powerful daemon for gathering statistics. Visit [https://github.com/statsd/statsd](https://github.com/statsd/statsd) for more information.
- Seamless connection with icinga2.
icinga2 is a software for collecting check results metrics and performance data. Visit [https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer](https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer) for more information.
- Seamless connection with tcollector.
TCollector is a client process that collects data from local collectors and pushes it to OpenTSDB. Visit [http://opentsdb.net/docs/build/html/user_guide/utilities/tcollector.html](http://opentsdb.net/docs/build/html/user_guide/utilities/tcollector.html) for more information.
- Seamless connection with node_exporter.
node_exporter is an exporter of machine metrics. Visit [https://github.com/prometheus/node_exporter](https://github.com/prometheus/node_exporter) for more information.
- Supports Prometheus remote_read and remote_write.
remote_read and remote_write are Prometheus's data read-write separation cluster solutions. Visit [https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis](https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis) for more information.
- Get the VGroup ID of the virtual node group (VGroup) where the table is located.
taosAdapter uses the parameter `httpCodeServerError` to set whether to return a non-200 HTTP status code when the C interface returns an error. When set to true, it will return different HTTP status codes based on the error code returned by C. See [HTTP Response Codes](../../client-libraries/rest-api/) for details.
## Interface
This configuration only affects the **RESTful interface**.
### TDengine RESTful Interface
**Parameter Description**
You can use any client that supports the HTTP protocol to write data to TDengine or query data from TDengine by accessing the RESTful interface URL `http://<fqdn>:6041/rest/sql`. For details, please refer to the [REST API documentation](../../client-libraries/rest-api/).
- **`httpCodeServerError`**:
- **When set to `true`**: Map the error code returned by the C interface to the corresponding HTTP status code.
- **When set to `false`**: Regardless of the error returned by the C interface, always return the HTTP status code `200` (default value).
### InfluxDB
### Memory limit configuration
You can use any client that supports the HTTP protocol to write data in InfluxDB compatible format to TDengine by accessing the Restful interface URL `http://<fqdn>:6041/influxdb/v1/write`.
taosAdapter monitors its memory usage during operation and adjusts its behavior based on two thresholds. The valid value range is an integer from 1 to 100, expressed as a percentage of system physical memory.
Supported InfluxDB parameters are as follows:
This configuration only affects the following interfaces:
- `db` specifies the database name used by TDengine
- `precision` the time precision used by TDengine
- `u` TDengine username
- `p` TDengine password
- `ttl` the lifespan of automatically created subtables, determined by the TTL parameter of the first data entry in the subtable, which cannot be updated. For more information, please refer to the TTL parameter in the [table creation document](../../sql-manual/manage-tables/).
* RESTful interface request
* InfluxDB v1 write interface
* OpenTSDB HTTP write interface
* Prometheus remote_read and remote_write interfaces
Note: Currently, InfluxDB's token authentication method is not supported, only Basic authentication and query parameter verification are supported.
Example: curl --request POST `http://127.0.0.1:6041/influxdb/v1/write?db=test` --user "root:taosdata" --data-binary "measurement,host=host1 field1=2i,field2=2.0 1577836800000000000"
**Parameter Description**
### OpenTSDB
- **`pauseQueryMemoryThreshold`**:
- When memory usage exceeds this threshold, taosAdapter will stop processing query requests.
- Default value: `70` (i.e. 70% of system physical memory).
- **`pauseAllMemoryThreshold`**:
- When memory usage exceeds this threshold, taosAdapter will stop processing all requests (including writes and queries).
- Default value: `80` (i.e. 80% of system physical memory).
You can use any client that supports the HTTP protocol to write data in OpenTSDB compatible format to TDengine by accessing the Restful interface URL `http://<fqdn>:6041/<APIEndPoint>`. EndPoint as follows:
When memory usage falls below the threshold, taosAdapter will automatically resume the corresponding function.
```text
/opentsdb/v1/put/json/<db>
/opentsdb/v1/put/telnet/<db>
```
**HTTP return content:**
### collectd
- **When `pauseQueryMemoryThreshold` is exceeded**:
- HTTP status code: `503`
- Return content: `"query memory exceeds threshold"`
<CollectD />
- **When `pauseAllMemoryThreshold` is exceeded**:
- HTTP status code: `503`
- Return content: `"memory exceeds threshold"`
### StatsD
**Status check interface:**
<StatsD />
The memory status of taosAdapter can be checked through the following interface:
- **Normal status**: `http://<fqdn>:6041/-/ping` returns `code 200`.
- **Memory exceeds threshold**:
- If the memory exceeds `pauseAllMemoryThreshold`, `code 503` is returned.
- If the memory exceeds `pauseQueryMemoryThreshold` and the request parameter contains `action=query`, `code 503` is returned.
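For example:
```shell
# Overall liveness: returns 200 normally, 503 above pauseAllMemoryThreshold.
curl -i "http://127.0.0.1:6041/-/ping"

# Query availability: also returns 503 above pauseQueryMemoryThreshold.
curl -i "http://127.0.0.1:6041/-/ping?action=query"
```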
### icinga2 OpenTSDB writer
**Related configuration parameters:**
<Icinga2 />
- **`monitor.collectDuration`**: memory monitoring interval, default value is `3s`, environment variable is `TAOS_MONITOR_COLLECT_DURATION`.
- **`monitor.incgroup`**: whether to run in a container (set to `true` for running in a container), default value is `false`, environment variable is `TAOS_MONITOR_INCGROUP`.
- **`monitor.pauseQueryMemoryThreshold`**: memory threshold (percentage) for query request pause, default value is `70`, environment variable is `TAOS_MONITOR_PAUSE_QUERY_MEMORY_THRESHOLD`.
- **`monitor.pauseAllMemoryThreshold`**: memory threshold (percentage) for query and write request pause, default value is `80`, environment variable is `TAOS_MONITOR_PAUSE_ALL_MEMORY_THRESHOLD`.
### TCollector
You can make corresponding adjustments based on the specific project application scenario and operation strategy, and it is recommended to use operation monitoring software to monitor the system memory status in a timely manner. The load balancer can also check the operation status of taosAdapter through this interface.
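For example, to lower both thresholds on a memory-constrained host, using the environment variables documented above (values are percentages of physical memory):
```shell
# Pause queries at 60% and all requests at 70% of physical memory.
TAOS_MONITOR_PAUSE_QUERY_MEMORY_THRESHOLD=60 \
TAOS_MONITOR_PAUSE_ALL_MEMORY_THRESHOLD=70 \
taosadapter
```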
<TCollector />
### Schemaless write create DB configuration
### node_exporter
Starting from **version 3.0.4.0**, taosAdapter provides the parameter `smlAutoCreateDB` to control whether to automatically create a database (DB) when writing via the schemaless protocol.
An exporter used by Prometheus that exposes hardware and operating system metrics from \*NIX kernels
The `smlAutoCreateDB` parameter only affects the following interfaces:
- Enable configuration of taosAdapter node_exporter.enable
- Set the relevant configuration for node_exporter
- Restart taosAdapter
- InfluxDB v1 write interface
- OpenTSDB JSON and telnet format writing
- Telegraf data writing
- collectd data writing
- StatsD data writing
- node_exporter data writing
### prometheus
**Parameter Description**
<Prometheus />
- **`smlAutoCreateDB`**:
- **When set to `true`**: When writing via the schemaless protocol, if the target database does not exist, taosAdapter will automatically create the database.
- **When set to `false`**: The user needs to manually create the database, otherwise the write will fail (default value).
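For example, with `smlAutoCreateDB` enabled, an InfluxDB v1 write to a database that does not yet exist (the hypothetical `newdb`) succeeds instead of failing:
```shell
curl --request POST "http://127.0.0.1:6041/influxdb/v1/write?db=newdb" \
  --user "root:taosdata" \
  --data-binary "measurement,host=host1 field1=2i,field2=2.0 1577836800000000000"
```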
### Getting the VGroup ID of a table
### Number of results returned configuration
You can send a POST request to the HTTP interface `http://<fqdn>:<port>/rest/sql/<db>/vgid` to get the VGroup ID of a table.
The body should be a JSON array of table names.
taosAdapter provides the parameter `restfulRowLimit` to control the number of results returned by the HTTP interface.
Example: Get the VGroup ID for the database power and tables d_bind_1 and d_bind_2.
The `restfulRowLimit` parameter only affects the return results of the following interfaces:
- RESTful interface
- Prometheus remote_read interface
**Parameter Description**
- **`restfulRowLimit`**:
- **When set to a positive integer**: The number of results returned by the interface will not exceed this value.
- **When set to `-1`**: The number of results returned by the interface is unlimited (default value).
### Log configuration
1. You can set the taosAdapter log output detail level by setting the --log.level parameter or the environment variable TAOS_ADAPTER_LOG_LEVEL. Valid values include: panic, fatal, error, warn, warning, info, debug, and trace.
2. Starting from version **3.3.5.0**, taosAdapter supports dynamically modifying the log level through an HTTP interface. Send an HTTP PUT request to the `/config` interface to adjust the log level. This interface uses the same authentication as the `/rest/sql` interface, and the configuration key-value pairs must be passed in the request body in JSON format.
The following is an example of setting the log level to debug through the curl command:
```shell
curl --location 'http://127.0.0.1:6041/rest/sql/power/vgid' \
--user 'root:taosdata' \
--data '["d_bind_1","d_bind_2"]'
curl --location --request PUT 'http://127.0.0.1:6041/config' \
-u root:taosdata \
--data '{"log.level": "debug"}'
```
response:
## Service Management
```json
{"code":0,"vgIDs":[153,152]}
```
### Starting/Stopping taosAdapter
## Memory Usage Optimization Methods
On Linux systems, the taosAdapter service is managed by default by systemd. Use the command `systemctl start taosadapter` to start the taosAdapter service. Use the command `systemctl stop taosadapter` to stop the taosAdapter service.
taosAdapter will monitor its memory usage during operation and adjust it through two thresholds. Valid values range from -1 to 100 as a percentage of system physical memory.
### Upgrading taosAdapter
- pauseQueryMemoryThreshold
- pauseAllMemoryThreshold
taosAdapter and TDengine server need to use the same version. Please upgrade taosAdapter by upgrading the TDengine server.
taosAdapter deployed separately from taosd must be upgraded by upgrading the TDengine server on its server.
When the pauseQueryMemoryThreshold threshold is exceeded, it stops processing query requests.
### Removing taosAdapter
HTTP return content:
Use the command `rmtaos` to remove the TDengine server software, including taosAdapter.
- code 503
- body "query memory exceeds threshold"
## Monitoring Metrics
When the pauseAllMemoryThreshold threshold is exceeded, it stops processing all write and query requests.
Currently, taosAdapter only collects monitoring indicators for RESTful/WebSocket related requests. There are no monitoring indicators for other interfaces.
HTTP return content:
taosAdapter reports monitoring indicators to taosKeeper, which writes them to the monitoring database. The default is the `log` database, which can be modified in the taosKeeper configuration file. The following is a detailed introduction to these monitoring indicators.
- code 503
- body "memory exceeds threshold"
When memory falls below the threshold, the corresponding functions are resumed.
Status check interface `http://<fqdn>:6041/-/ping`
- Normally returns `code 200`
- Without parameters: if memory exceeds pauseAllMemoryThreshold, it will return `code 503`
- With the request parameter `action=query`: if memory exceeds either pauseQueryMemoryThreshold or pauseAllMemoryThreshold, it will return `code 503`
Corresponding configuration parameters
```text
monitor.collectDuration Monitoring interval Environment variable "TAOS_MONITOR_COLLECT_DURATION" (default value 3s)
monitor.incgroup Whether it is running in cgroup (set to true in containers) Environment variable "TAOS_MONITOR_INCGROUP"
monitor.pauseAllMemoryThreshold Memory threshold for stopping inserts and queries Environment variable "TAOS_MONITOR_PAUSE_ALL_MEMORY_THRESHOLD" (default value 80)
monitor.pauseQueryMemoryThreshold Memory threshold for stopping queries Environment variable "TAOS_MONITOR_PAUSE_QUERY_MEMORY_THRESHOLD" (default value 70)
```
You can adjust according to the specific project application scenario and operational strategy, and it is recommended to use operational monitoring software to monitor the system memory status in real time. Load balancers can also check the running status of taosAdapter through this interface.
## taosAdapter Monitoring Metrics
taosAdapter collects monitoring metrics related to REST/WebSocket requests and reports them to taosKeeper, which writes them into the monitoring database (the `log` database by default; this can be changed in the taosKeeper configuration file). Below is a detailed introduction to these monitoring metrics.
### adapter_requests table
`adapter_requests` records taosadapter monitoring data.
The `adapter_requests` table records taosAdapter monitoring data, and the fields are as follows:
| field | type | is_tag | comment |
| :--------------- | :----------- | :----- | :---------------------------------------- |
@ -354,32 +423,10 @@ taosAdapter collects monitoring metrics related to REST/WebSocket requests. Thes
| endpoint | VARCHAR | | request endpoint |
| req_type | NCHAR | tag | request type: 0 for REST, 1 for WebSocket |
## Result Return Limit
taosAdapter controls the number of results returned through the parameter `restfulRowLimit`; -1 means no limit, which is the default.
## Changes after upgrading httpd to taosAdapter
This parameter controls the return of the following interfaces
- `http://<fqdn>:6041/rest/sql`
- `http://<fqdn>:6041/prometheus/v1/remote_read/:db`
## Configure HTTP Return Codes
taosAdapter uses the parameter `httpCodeServerError` to set whether to return a non-200 HTTP status code when the C interface returns an error. When set to true, it will return different HTTP status codes based on the error code returned by C. See [HTTP Response Codes](../../client-libraries/rest-api/) for details.
## Configure Automatic DB Creation for Schemaless Writes
Starting from version 3.0.4.0, taosAdapter provides the parameter `smlAutoCreateDB` to control whether to automatically create a DB when writing via the schemaless protocol. The default value is false, which does not automatically create a DB, and requires the user to manually create a DB before performing schemaless writes.
## Troubleshooting
You can check the running status of taosAdapter with the command `systemctl status taosadapter`.
You can also adjust the detail level of taosAdapter log output by setting the --logLevel parameter or the environment variable TAOS_ADAPTER_LOG_LEVEL. Valid values include: panic, fatal, error, warn, warning, info, debug, and trace.
## How to Migrate from Older Versions of TDengine to taosAdapter
In TDengine server version 2.2.x.x or earlier, the taosd process included an embedded HTTP service. As mentioned earlier, taosAdapter is a standalone software managed by systemd, having its own process. Moreover, there are some differences in configuration parameters and behaviors between the two, as shown in the table below:
In TDengine server version 2.2.x.x or earlier, the taosd process included an embedded HTTP service (httpd). As mentioned earlier, taosAdapter is a standalone software managed by systemd, having its own process. Moreover, there are some differences in configuration parameters and behaviors between the two, as shown in the table below:
| **#** | **embedded httpd** | **taosAdapter** | **comment** |
| ----- | ------------------- | ---------------------------------------------------------- | ------------------------------------------------------------ |

View File

@ -70,6 +70,7 @@ Metric details (from top to bottom, left to right):
- **Databases** - Number of databases.
- **Connections** - Current number of connections.
- **DNodes/MNodes/VGroups/VNodes**: Total and alive count of each resource.
- **Classified Connection Counts**: The current number of active connections, classified by user, application, and IP.
- **DNodes/MNodes/VGroups/VNodes Alive Percent**: The ratio of alive to total for each resource. Alert rules are enabled and trigger when the resource survival rate (average healthy resource ratio within 1 minute) is less than 100%.
- **Measuring Points Used**: Number of measuring points used with alert rules enabled (no data for community edition, healthy by default).
@ -183,22 +184,22 @@ After importing, click on "Alert rules" on the left side of the Grafana interfac
The specific configuration of the 14 alert rules is as follows:
| alert rule| Rule threshold| Behavior when no data | Data scanning interval |Duration | SQL |
| ------ | --------- | ---------------- | ----------- |------- |----------------------|
|CPU load of dnode node|average > 80%|Trigger alert|5 minutes|5 minutes |`select now(), dnode_id, last(cpu_system) as cup_use from log.taosd_dnodes_info where _ts >= (now- 5m) and _ts < now partition by dnode_id having first(_ts) > 0 `|
|Memory of dnode node |average > 60%|Trigger alert|5 minutes|5 minutes|`select now(), dnode_id, last(mem_engine) / last(mem_total) * 100 as taosd from log.taosd_dnodes_info where _ts >= (now- 5m) and _ts <now partition by dnode_id`|
|Disk capacity occupancy of dnode nodes | > 80%|Trigger alert|5 minutes|5 minutes|`select now(), dnode_id, data_dir_level, data_dir_name, last(used) / last(total) * 100 as used from log.taosd_dnodes_data_dirs where _ts >= (now - 5m) and _ts < now partition by dnode_id, data_dir_level, data_dir_name`|
|Authorization expires |< 60 days|Trigger alert|1 day|0 seconds|`select now(), cluster_id, last(grants_expire_time) / 86400 as expire_time from log.taosd_cluster_info where _ts >= (now - 24h) and _ts < now partition by cluster_id having first(_ts) > 0 `|
|The number of used measurement points has reached the authorized number|>= 90%|Trigger alert|1 day|0 seconds|`select now(), cluster_id, CASE WHEN max(grants_timeseries_total) > 0.0 THEN max(grants_timeseries_used) /max(grants_timeseries_total) * 100.0 ELSE 0.0 END AS result from log.taosd_cluster_info where _ts >= (now - 30s) and _ts < now partition by cluster_id having timetruncate(first(_ts), 1m) > 0`|
|Number of concurrent query requests | > 100|Do not trigger alert|1 minute|0 seconds|`select now() as ts, count(*) as slow_count from performance_schema.perf_queries`|
|Maximum time for slow query execution (no time window) |> 300 seconds|Do not trigger alert|1 minute|0 seconds|`select now() as ts, count(*) as slow_count from performance_schema.perf_queries where exec_usec>300000000`|
|dnode offline |total != alive|Trigger alert|30 seconds|0 seconds|`select now(), cluster_id, last(dnodes_total) - last(dnodes_alive) as dnode_offline from log.taosd_cluster_info where _ts >= (now -30s) and _ts < now partition by cluster_id having first(_ts) > 0`|
|vnode offline |total != alive|Trigger alert|30 seconds|0 seconds|`select now(), cluster_id, last(vnodes_total) - last(vnodes_alive) as vnode_offline from log.taosd_cluster_info where _ts >= (now - 30s) and _ts < now partition by cluster_id having first(_ts) > 0 `|
|Number of data deletion requests |> 0|Do not trigger alert|30 seconds|0 seconds|``select now(), count(`count`) as `delete_count` from log.taos_sql_req where sql_type = 'delete' and _ts >= (now -30s) and _ts < now``|
|Adapter RESTful request fail |> 5|Do not trigger alert|30 seconds|0 seconds|``select now(), sum(`fail`) as `Failed` from log.adapter_requests where req_type=0 and ts >= (now -30s) and ts < now``|
|Adapter WebSocket request fail |> 5|Do not trigger alert|30 seconds|0 seconds|``select now(), sum(`fail`) as `Failed` from log.adapter_requests where req_type=1 and ts >= (now -30s) and ts < now``|
|Dnode data reporting is missing |< 3|Trigger alert|180 seconds|0 seconds|`select now(), cluster_id, count(*) as dnode_report from log.taosd_cluster_info where _ts >= (now -180s) and _ts < now partition by cluster_id having timetruncate(first(_ts), 1h) > 0`|
|Restart dnode |max(update_time) > last(update_time)|Trigger alert|90 seconds|0 seconds|`select now(), dnode_id, max(uptime) - last(uptime) as dnode_restart from log.taosd_dnodes_info where _ts >= (now - 90s) and _ts < now partition by dnode_id`|
| alert rule | Rule threshold | Behavior when no data | Data scanning interval | Duration | SQL |
| ------------------------------------------------------------- | ------------------------------------ | --------------------- | ---------------------- | ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| CPU load of dnode node | average > 80% | Trigger alert | 5 minutes | 5 minutes | `select now(), dnode_id, last(cpu_system) as cup_use from log.taosd_dnodes_info where _ts >= (now- 5m) and _ts < now partition by dnode_id having first(_ts) > 0 ` |
| Memory of dnode node | average > 60% | Trigger alert | 5 minutes | 5 minutes | `select now(), dnode_id, last(mem_engine) / last(mem_total) * 100 as taosd from log.taosd_dnodes_info where _ts >= (now- 5m) and _ts <now partition by dnode_id` |
| Disk capacity occupancy of dnode nodes | > 80% | Trigger alert | 5 minutes | 5 minutes | `select now(), dnode_id, data_dir_level, data_dir_name, last(used) / last(total) * 100 as used from log.taosd_dnodes_data_dirs where _ts >= (now - 5m) and _ts < now partition by dnode_id, data_dir_level, data_dir_name` |
| Authorization expires | < 60 days | Trigger alert | 1 day | 0 seconds | `select now(), cluster_id, last(grants_expire_time) / 86400 as expire_time from log.taosd_cluster_info where _ts >= (now - 24h) and _ts < now partition by cluster_id having first(_ts) > 0 ` |
| The number of used measurement points has reached the authorized number | >= 90% | Trigger alert | 1 day | 0 seconds | `select now(), cluster_id, CASE WHEN max(grants_timeseries_total) > 0.0 THEN max(grants_timeseries_used) /max(grants_timeseries_total) * 100.0 ELSE 0.0 END AS result from log.taosd_cluster_info where _ts >= (now - 30s) and _ts < now partition by cluster_id having timetruncate(first(_ts), 1m) > 0` |
| Number of concurrent query requests | > 100 | Do not trigger alert | 1 minute | 0 seconds | `select now() as ts, count(*) as slow_count from performance_schema.perf_queries` |
| Maximum time for slow query execution (no time window) | > 300 seconds | Do not trigger alert | 1 minute | 0 seconds | `select now() as ts, count(*) as slow_count from performance_schema.perf_queries where exec_usec>300000000` |
| dnode offline | total != alive | Trigger alert | 30 seconds | 0 seconds | `select now(), cluster_id, last(dnodes_total) - last(dnodes_alive) as dnode_offline from log.taosd_cluster_info where _ts >= (now -30s) and _ts < now partition by cluster_id having first(_ts) > 0` |
| vnode offline | total != alive | Trigger alert | 30 seconds | 0 seconds | `select now(), cluster_id, last(vnodes_total) - last(vnodes_alive) as vnode_offline from log.taosd_cluster_info where _ts >= (now - 30s) and _ts < now partition by cluster_id having first(_ts) > 0 ` |
| Number of data deletion requests | > 0 | Do not trigger alert | 30 seconds | 0 seconds | ``select now(), count(`count`) as `delete_count` from log.taos_sql_req where sql_type = 'delete' and _ts >= (now -30s) and _ts < now`` |
| Adapter RESTful request fail | > 5 | Do not trigger alert | 30 seconds | 0 seconds | ``select now(), sum(`fail`) as `Failed` from log.adapter_requests where req_type=0 and ts >= (now -30s) and ts < now`` |
| Adapter WebSocket request fail | > 5 | Do not trigger alert | 30 seconds | 0 seconds | ``select now(), sum(`fail`) as `Failed` from log.adapter_requests where req_type=1 and ts >= (now -30s) and ts < now`` |
| Dnode data reporting is missing | < 3 | Trigger alert | 180 seconds | 0 seconds | `select now(), cluster_id, count(*) as dnode_report from log.taosd_cluster_info where _ts >= (now -180s) and _ts < now partition by cluster_id having timetruncate(first(_ts), 1h) > 0` |
| Restart dnode | max(update_time) > last(update_time) | Trigger alert | 90 seconds | 0 seconds | `select now(), dnode_id, max(uptime) - last(uptime) as dnode_restart from log.taosd_dnodes_info where _ts >= (now - 90s) and _ts < now partition by dnode_id` |
TDengine users can modify and improve these alert rules according to their own business needs. In Grafana 7.5 and earlier versions, the Dashboard and Alert rules functions were combined, while in later versions the two functions are separated. To be compatible with Grafana 7.5 and earlier versions, an Alert Used Only panel has been added to the TDinsight panel; it is required only for Grafana 7.5 and earlier versions.
@ -258,19 +259,19 @@ Install and configure TDinsight dashboard in Grafana on Ubuntu 18.04/20.04 syste
Most command line options can also be achieved through environment variables.
| Short Option | Long Option | Environment Variable | Description |
| ------------ | ------------------------------- | ------------------------------ | -------------------------------------------------------- |
| -v | --plugin-version | TDENGINE_PLUGIN_VERSION | TDengine datasource plugin version, default is latest. |
| -P | --grafana-provisioning-dir | GF_PROVISIONING_DIR | Grafana provisioning directory, default is `/etc/grafana/provisioning/` |
| -G | --grafana-plugins-dir | GF_PLUGINS_DIR | Grafana plugins directory, default is `/var/lib/grafana/plugins`. |
| -O | --grafana-org-id | GF_ORG_ID | Grafana organization ID, default is 1. |
| -n | --tdengine-ds-name | TDENGINE_DS_NAME | TDengine datasource name, default is TDengine. |
| -a | --tdengine-api | TDENGINE_API | TDengine REST API endpoint. Default is `http://127.0.0.1:6041`. |
| -u | --tdengine-user | TDENGINE_USER | TDengine user name. [default: root] |
| -p | --tdengine-password | TDENGINE_PASSWORD | TDengine password. [default: taosdata] |
| -i | --tdinsight-uid | TDINSIGHT_DASHBOARD_UID | TDinsight dashboard `uid`. [default: tdinsight] |
| -t | --tdinsight-title | TDINSIGHT_DASHBOARD_TITLE | TDinsight dashboard title. [default: TDinsight] |
| -e | --tdinsight-editable | TDINSIGHT_DASHBOARD_EDITABLE | If the provisioning dashboard could be editable. [default: false] |
| Short Option | Long Option | Environment Variable | Description |
| ------------ | -------------------------- | ---------------------------- | ----------------------------------------------------------------------- |
| -v | --plugin-version | TDENGINE_PLUGIN_VERSION | TDengine datasource plugin version, default is latest. |
| -P | --grafana-provisioning-dir | GF_PROVISIONING_DIR | Grafana provisioning directory, default is `/etc/grafana/provisioning/` |
| -G | --grafana-plugins-dir | GF_PLUGINS_DIR | Grafana plugins directory, default is `/var/lib/grafana/plugins`. |
| -O | --grafana-org-id | GF_ORG_ID | Grafana organization ID, default is 1. |
| -n | --tdengine-ds-name | TDENGINE_DS_NAME | TDengine datasource name, default is TDengine. |
| -a | --tdengine-api | TDENGINE_API | TDengine REST API endpoint. Default is `http://127.0.0.1:6041`. |
| -u | --tdengine-user | TDENGINE_USER | TDengine user name. [default: root] |
| -p | --tdengine-password | TDENGINE_PASSWORD | TDengine password. [default: taosdata] |
| -i | --tdinsight-uid | TDINSIGHT_DASHBOARD_UID | TDinsight dashboard `uid`. [default: tdinsight] |
| -t | --tdinsight-title | TDINSIGHT_DASHBOARD_TITLE | TDinsight dashboard title. [default: TDinsight] |
| -e | --tdinsight-editable | TDINSIGHT_DASHBOARD_EDITABLE | If the provisioning dashboard could be editable. [default: false] |
:::note
The new version of the plugin uses the Grafana unified alerting feature, the `-E` option is no longer supported.

View File

@ -10,11 +10,7 @@ slug: /tdengine-reference/tools/taosdump
## Installation
Taosdump provides two installation methods:
- Taosdump is the default installation component in the TDengine installation package, which can be used after installing TDengine. For how to install TDengine, please refer to [TDengine Installation](../../../get-started/)
- Compile and install taos tools separately, refer to [taos tools](https://github.com/taosdata/taos-tools) .
taosdump is the default installation component in the TDengine installation package, which can be used after installing TDengine. For how to install TDengine, please refer to [TDengine Installation](../../../get-started/)
## Common Use Cases

View File

@ -8,11 +8,7 @@ TaosBenchmark is a performance benchmarking tool for TDengine products, providin
## Installation
taosBenchmark provides two installation methods:
- taosBenchmark is the default installation component in the TDengine installation package, which can be used after installing TDengine. For how to install TDengine, please refer to [TDengine Installation](../../../get-started/)
- Compile and install taos tools separately, refer to [taos tools](https://github.com/taosdata/taos-tools).
taosBenchmark is the default installation component in the TDengine installation package, which can be used after installing TDengine. For how to install TDengine, please refer to [TDengine Installation](../../../get-started/)
## Operation
@ -63,7 +59,7 @@ taosBenchmark -f <json file>
<summary>insert.json</summary>
```json
{{#include /taos-tools/example/insert.json}}
{{#include /TDengine/tools/taos-tools/example/insert.json}}
```
</details>
@ -74,7 +70,7 @@ taosBenchmark -f <json file>
<summary>query.json</summary>
```json
{{#include /taos-tools/example/query.json}}
{{#include /TDengine/tools/taos-tools/example/query.json}}
```
</details>
@ -85,7 +81,7 @@ taosBenchmark -f <json file>
<summary>tmq.json</summary>
```json
{{#include /taos-tools/example/tmq.json}}
{{#include /TDengine/tools/taos-tools/example/tmq.json}}
```
</details>
@ -250,7 +246,7 @@ INFO: Spend 26.9530 second completed total queries: 30000, the QPS of all thread
```
- The first line represents the percentile distribution of query execution and query request delay for each of the three threads executing 10000 queries. The SQL command is the test query statement.
- The second line indicates that the total query time is 26.9653 seconds, the total queries is 10000 * 3 = 30000, and the query rate per second (QPS) is 1113.049 times/second
- The second line indicates that the total query time is 26.9653 seconds, and the query rate per second (QPS) is 1113.049 times/second
- If the `continue_if_fail` option is set to `yes` in the query, the last line will output the number of failed requests and error rate, the format like "error + number of failed requests (error rate)"
- QPS = number of successful requests / time spent (in seconds)
- Error rate = number of failed requests / (number of successful requests + number of failed requests)
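Applying these formulas to the sample output above:
```text
QPS = 30000 successful queries / 26.9530 seconds ≈ 1113.05 queries/second
```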
@ -463,7 +459,7 @@ For other common parameters, see Common Configuration Parameters.
Configuration parameters for querying specified tables (can specify supertables, subtables, or regular tables) are set in `specified_table_query`.
- `General Query`: Each SQL in `sqls` starts `threads` threads to query this SQL, Each thread exits after executing the `query_times` queries, and only after all threads executing this SQL have completed can the next SQL be executed.
`General Query`: Each SQL in `sqls` starts `threads` threads to query this SQL. Each thread exits after executing `query_times` queries, and only after all threads executing this SQL have completed can the next SQL be executed.
The total number of queries (`General Query`) = the number of `sqls` * `query_times` * `threads`
- `Mixed Query`: All SQL statements in `sqls` are divided into `threads` groups, with each thread executing one group. Each SQL statement needs to execute `query_times` queries.
The total number of queries (`Mixed Query`) = the number of `sqls` * `query_times`; see the worked example below.
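For illustration, assume 2 statements in `sqls`, `query_times = 100`, and `threads = 3` (hypothetical numbers):

$$
\text{General Query: } 2 \times 100 \times 3 = 600 \text{ queries} \qquad \text{Mixed Query: } 2 \times 100 = 200 \text{ queries}
$$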
@ -505,7 +501,7 @@ Configuration parameters for subscribing to specified tables (can specify supert
- **sqls** :
- **sql** : The SQL command to execute, required.
#### Data Type Writing Comparison Table in Configuration File
### Data Type Writing Comparison Table in Configuration File
| # | **Engine** | **taosBenchmark**
| --- | :----------------: | :---------------:

View File

@ -61,8 +61,7 @@ window_clause: {
| COUNT_WINDOW(count_val[, sliding_val])
interp_clause:
RANGE(ts_val [, ts_val]) EVERY(every_val) FILL(fill_mod_and_val)
| RANGE(ts_val, surrounding_time_val) FILL(fill_mod_and_val)
RANGE(ts_val [, ts_val] [, surrounding_time_val]) EVERY(every_val) FILL(fill_mod_and_val)
partition_by_clause:
PARTITION BY partition_by_expr [, partition_by_expr] ...

View File

@ -1967,7 +1967,7 @@ ignore_null_values: {
- For queries on tables with composite primary keys, if there are data with the same timestamp, only the data with the smallest composite primary key participates in the calculation.
- INTERP query supports NEAR FILL mode, i.e., when FILL is needed, it uses the data closest to the current time point for interpolation. When the timestamps before and after are equally close to the current time slice, FILL the previous row's value. This mode is not supported in stream computing and window queries. For example: SELECT INTERP(col) FROM tb RANGE('2023-01-01 00:00:00', '2023-01-01 00:10:00') FILL(NEAR).(Supported from version 3.3.4.9).
- INTERP can only use the pseudocolumn `_irowts_origin` when using FILL PREV/NEXT/NEAR modes. `_irowts_origin` is supported from version 3.3.4.9.
- INTERP `RANGE` clause supports the expansion of the time range (supported from version 3.3.4.9), such as `RANGE('2023-01-01 00:00:00', 10s)` means to find data 10s before and after the time point '2023-01-01 00:00:00' for interpolation, FILL PREV/NEXT/NEAR respectively means to look for data forward/backward/around the time point, if there is no data around the time point, then use the value specified by FILL for interpolation, therefore the FILL clause must specify a value at this time. For example: SELECT INTERP(col) FROM tb RANGE('2023-01-01 00:00:00', 10s) FILL(PREV, 1). Currently, only the combination of time point and time range is supported, not the combination of time interval and time range, i.e., RANGE('2023-01-01 00:00:00', '2023-02-01 00:00:00', 1h) is not supported. The specified time range rules are similar to EVERY, the unit cannot be year or month, the value cannot be 0, and cannot have quotes. When using this extension, other FILL modes except FILL PREV/NEXT/NEAR are not supported, and the EVERY clause cannot be specified.
- INTERP `RANGE` clause supports the expansion of the time range (supported from version 3.3.4.9). For example, `RANGE('2023-01-01 00:00:00', 10s)` means that only data within 10s around the time point '2023-01-01 00:00:00' can be used for interpolation. `FILL PREV/NEXT/NEAR` respectively means to look for data forward/backward/around the time point. If there is no data around the time point, the default value specified by `FILL` is used for interpolation; therefore, the `FILL` clause must also specify a default value, for example: SELECT INTERP(col) FROM tb RANGE('2023-01-01 00:00:00', 10s) FILL(PREV, 1). Starting from version 3.3.6.0, the combination of a time period and a time range is supported: when interpolating for each point within the time period, the time range requirement must be met; prior versions only supported a single time point and its time range. The available values for the time range are similar to `EVERY`: the unit cannot be year or month, the value must be greater than 0, and it cannot be in quotes. When using this extension, `FILL` modes other than `PREV/NEXT/NEAR` are not supported.
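For instance, a minimal sketch of the 3.3.6.0-style combination of a time period and a time range (reusing the `tb`/`col` names from the examples above):

```sql
-- Interpolate a value every 1 minute across a one-hour period; for each point,
-- only data within 10s around it may be used, falling back to 1 when none exists.
SELECT INTERP(col) FROM tb
  RANGE('2023-01-01 00:00:00', '2023-01-01 01:00:00', 10s)
  EVERY(1m)
  FILL(NEAR, 1);
```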
### LAST

View File

@ -11,7 +11,7 @@ import imgStream from './assets/stream-processing-01.png';
```sql
CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name[(field1_name, field2_name [PRIMARY KEY], ...)] [TAGS (create_definition [, create_definition] ...)] SUBTABLE(expression) AS subquery
stream_options: {
TRIGGER [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time]
TRIGGER [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time | FORCE_WINDOW_CLOSE]
WATERMARK time
IGNORE EXPIRED [0|1]
DELETE_MARK time
@ -58,6 +58,10 @@ window_clause: {
Where SESSION is a session window, tol_val is the maximum range of the time interval. All data within the tol_val time interval belong to the same window. If the time between two consecutive data points exceeds tol_val, the next window automatically starts. The window's `_wend` equals the last data point's time plus tol_val.
STATE_WINDOW is a state window. The col is used to identify the state value. Values with the same state value belong to the same state window. When the value of col changes, the current window ends and the next window is automatically opened.
INTERVAL is a time window, which can be further divided into sliding time windows and tumbling time windows. The INTERVAL clause is used to specify the equal time period of the window, and the SLIDING clause is used to specify the time by which the window slides forward. When the value of interval_val is equal to the value of sliding_val, the time window is a tumbling time window; otherwise, it is a sliding time window. Note: The value of sliding_val must be less than or equal to the value of interval_val.
EVENT_WINDOW is an event window, defined by start and end conditions. The window starts when the start_trigger_condition is met and closes when the end_trigger_condition is met. start_trigger_condition and end_trigger_condition can be any condition expression supported by TDengine and can include different columns.
COUNT_WINDOW is a count window, dividing the window by a fixed number of data rows. count_val is a constant, a positive integer, must be at least 2 and less than 2147483648. count_val represents the maximum number of data rows each COUNT_WINDOW contains. If the total number of data rows is not divisible by count_val, the last window will have fewer rows than count_val. sliding_val is a constant, representing the number of rows the window slides, similar to the SLIDING in INTERVAL.
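As an illustrative sketch (the supertable `meters` and its `voltage` column are assumptions for the example, not part of the syntax):

```sql
-- A count-window stream: per subtable, every 10 rows form a window,
-- sliding forward 5 rows at a time (COUNT_WINDOW(10, 5)).
CREATE STREAM IF NOT EXISTS avg_vol_stream INTO avg_vol AS
  SELECT _wstart, _wend, AVG(voltage) FROM meters
  PARTITION BY tbname COUNT_WINDOW(10, 5);
```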

View File

@ -33,6 +33,7 @@ The JDBC driver implementation for TDengine strives to be consistent with relati
| taos-jdbcdriver Version | Major Changes | TDengine Version |
| ----------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------ |
| 3.5.3 | Support unsigned data types in WebSocket connections. | - |
| 3.5.2 | Fixed WebSocket result set free bug. | - |
| 3.5.1 | Fixed the getObject issue in data subscription. | - |
| 3.5.0 | 1. Optimized the performance of WebSocket connection parameter binding, supporting parameter binding queries using binary data. <br/> 2. Optimized the performance of small queries in WebSocket connection. <br/> 3. Added support for setting time zone and app info on WebSocket connection. | 3.3.5.0 and higher |
@ -128,24 +129,27 @@ Please refer to the specific error codes:
TDengine currently supports timestamp, numeric, character, boolean types, and the corresponding Java type conversions are as follows:
| TDengine DataType | JDBCType |
| ----------------- | ------------------ |
| TIMESTAMP | java.sql.Timestamp |
| INT | java.lang.Integer |
| BIGINT | java.lang.Long |
| FLOAT | java.lang.Float |
| DOUBLE | java.lang.Double |
| SMALLINT | java.lang.Short |
| TINYINT | java.lang.Byte |
| BOOL | java.lang.Boolean |
| BINARY | byte array |
| NCHAR | java.lang.String |
| JSON | java.lang.String |
| VARBINARY | byte[] |
| GEOMETRY | byte[] |
| TDengine DataType | JDBCType | Remark |
| ----------------- | -------------------- | --------------------------------------- |
| TIMESTAMP | java.sql.Timestamp | |
| BOOL | java.lang.Boolean | |
| TINYINT | java.lang.Byte | |
| TINYINT UNSIGNED | java.lang.Short | only supported in WebSocket connections |
| SMALLINT | java.lang.Short | |
| SMALLINT UNSIGNED | java.lang.Integer | only supported in WebSocket connections |
| INT | java.lang.Integer | |
| INT UNSIGNED | java.lang.Long | only supported in WebSocket connections |
| BIGINT | java.lang.Long | |
| BIGINT UNSIGNED | java.math.BigInteger | only supported in WebSocket connections |
| FLOAT | java.lang.Float | |
| DOUBLE | java.lang.Double | |
| BINARY | byte array | |
| NCHAR | java.lang.String | |
| JSON | java.lang.String | only supported in tags |
| VARBINARY | byte[] | |
| GEOMETRY | byte[] | |
**Note**: JSON type is only supported in tags.
Due to historical reasons, the BINARY type in TDengine is not truly binary data and is no longer recommended. Please use VARBINARY type instead.
**Note**: Due to historical reasons, the BINARY type in TDengine is not truly binary data and is no longer recommended. Please use VARBINARY type instead.
GEOMETRY type is binary data in little endian byte order, complying with the WKB standard. For more details, please refer to [Data Types](../../sql-manual/data-types/)
For the WKB standard, please refer to [Well-Known Binary (WKB)](https://libgeos.org/specifications/wkb/)
For the Java connector, you can use the jts library to conveniently create GEOMETRY type objects, serialize them, and write to TDengine. Here is an example [Geometry Example](https://github.com/taosdata/TDengine/blob/main/docs/examples/java/src/main/java/com/taos/example/GeometryDemo.java)

View File

@ -25,6 +25,8 @@ Support all platforms that can run Node.js.
| Node.js Connector Version | Major Changes | TDengine Version |
| ------------------------- | ------------------------------------------------------------------------ | --------------------------- |
| 3.1.4 | Modified the readme.| - |
| 3.1.3 | Upgraded the es5-ext version to address vulnerabilities in the lower version. | - |
| 3.1.2 | Optimized data protocol and parsing, significantly improved performance. | - |
| 3.1.1 | Optimized data transmission performance. | 3.3.2.0 and higher versions |
| 3.1.0 | New release, supports WebSocket connection. | 3.2.0.0 and higher versions |
@ -132,16 +134,20 @@ Node.js connector (`@tdengine/websocket`), which connects to a TDengine instance
In addition to obtaining a connection through a specified URL, you can also use WSConfig to specify parameters when establishing a connection.
```js
try {
let url = 'ws://127.0.0.1:6041'
let conf = WsSql.NewConfig(url)
conf.setUser('root')
conf.setPwd('taosdata')
conf.setDb('db')
conf.setTimeOut(500)
let wsSql = await WsSql.open(conf);
} catch (e) {
console.error(e);
const taos = require("@tdengine/websocket");
async function createConnect() {
try {
let url = 'ws://127.0.0.1:6041'
let conf = new taos.WSConfig(url)
conf.setUser('root')
conf.setPwd('taosdata')
conf.setDb('db')
conf.setTimeOut(500)
let wsSql = await taos.sqlConnect(conf)
} catch (e) {
console.error(e);
}
}
```

View File

@ -4,7 +4,7 @@ title: taosTools Release History and Download Links
slug: /release-history/taostools
---
Download links for various versions of taosTools are as follows:
Starting from version 3.0.6.0, taosTools has been integrated into the TDengine installation package and is no longer provided separately. Download links for various versions of taosTools (corresponding to TDengine 3.0.5.2 and earlier) are as follows:
For other historical versions, please visit [here](https://tdengine.com/downloads/historical)

View File

@ -3,13 +3,9 @@ title: Release Notes
slug: /release-history/release-notes
---
[3.3.5.0](./3-3-5-0/)
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
[3.3.5.2](./3.3.5.2)
[3.3.4.8](./3-3-4-8/)
[3.3.4.3](./3-3-4-3/)
[3.3.3.0](./3-3-3-0/)
[3.3.2.0](./3-3-2-0/)
<DocCardList items={useCurrentSidebarCategory().items}/>
```

Binary file not shown.

View File

@ -1,84 +1,84 @@
### Configuring taosAdapter
#### Configuring taosAdapter
Method to configure taosAdapter to receive collectd data:
- Enable the configuration item in the taosAdapter configuration file (default location is /etc/taos/taosadapter.toml)
```
...
[opentsdb_telnet]
enable = true
maxTCPConnections = 250
tcpKeepAlive = false
dbs = ["opentsdb_telnet", "collectd", "icinga2", "tcollector"]
ports = [6046, 6047, 6048, 6049]
user = "root"
password = "taosdata"
...
```
```toml
...
[opentsdb_telnet]
enable = true
maxTCPConnections = 250
tcpKeepAlive = false
dbs = ["opentsdb_telnet", "collectd", "icinga2", "tcollector"]
ports = [6046, 6047, 6048, 6049]
user = "root"
password = "taosdata"
...
```
The default database name written by taosAdapter is `collectd`, but you can also modify the dbs item in the taosAdapter configuration file to specify a different name. Fill in user and password with the actual TDengine configuration values. After modifying the configuration file, taosAdapter needs to be restarted.
The default database name written by taosAdapter is `collectd`, but you can also modify the dbs item in the taosAdapter configuration file to specify a different name. Fill in user and password with the actual TDengine configuration values. After modifying the configuration file, taosAdapter needs to be restarted.
- You can also use taosAdapter command line parameters or set environment variables to start, to enable taosAdapter to receive collectd data, for more details please refer to the taosAdapter reference manual.
### Configuring collectd
#### Configuring collectd
collectd uses a plugin mechanism that can write the collected monitoring data to different data storage software in various forms. TDengine supports direct collection plugins and write_tsdb plugins.
#### Configuring to receive direct collection plugin data
1. **Configuring to receive direct collection plugin data**
Modify the related configuration items in the collectd configuration file (default location /etc/collectd/collectd.conf).
Modify the related configuration items in the collectd configuration file (default location /etc/collectd/collectd.conf).
```text
LoadPlugin network
<Plugin network>
Server "<taosAdapter's host>" "<port for collectd direct>"
</Plugin>
```
```xml
LoadPlugin network
<Plugin network>
Server "<taosAdapter's host>" "<port for collectd direct>"
</Plugin>
```
Where \<taosAdapter's host> should be filled with the domain name or IP address of the server running taosAdapter. \<port for collectd direct> should be filled with the port used by taosAdapter to receive collectd data (default is 6045).
Where \<taosAdapter's host> should be filled with the domain name or IP address of the server running taosAdapter. \<port for collectd direct> should be filled with the port used by taosAdapter to receive collectd data (default is 6045).
Example as follows:
Example as follows:
```text
LoadPlugin network
<Plugin network>
Server "127.0.0.1" "6045"
</Plugin>
```
```xml
LoadPlugin network
<Plugin network>
Server "127.0.0.1" "6045"
</Plugin>
```
#### Configuring write_tsdb plugin data
2. **Configuring write_tsdb plugin data**
Modify the related configuration items in the collectd configuration file (default location /etc/collectd/collectd.conf).
Modify the related configuration items in the collectd configuration file (default location /etc/collectd/collectd.conf).
```text
LoadPlugin write_tsdb
<Plugin write_tsdb>
<Node>
Host "<taosAdapter's host>"
Port "<port for collectd write_tsdb plugin>"
...
</Node>
</Plugin>
```
```xml
LoadPlugin write_tsdb
<Plugin write_tsdb>
<Node>
Host "<taosAdapter's host>"
Port "<port for collectd write_tsdb plugin>"
...
</Node>
</Plugin>
```
Where \<taosAdapter's host> should be filled with the domain name or IP address of the server running taosAdapter. \<port for collectd write_tsdb plugin> should be filled with the port used by taosAdapter to receive collectd write_tsdb plugin data (default is 6047).
Where \<taosAdapter's host> should be filled with the domain name or IP address of the server running taosAdapter. \<port for collectd write_tsdb plugin> should be filled with the port used by taosAdapter to receive collectd write_tsdb plugin data (default is 6047).
```text
LoadPlugin write_tsdb
<Plugin write_tsdb>
<Node>
Host "127.0.0.1"
Port "6047"
HostTags "status=production"
StoreRates false
AlwaysAppendDS false
</Node>
</Plugin>
```
```xml
LoadPlugin write_tsdb
<Plugin write_tsdb>
<Node>
Host "127.0.0.1"
Port "6047"
HostTags "status=production"
StoreRates false
AlwaysAppendDS false
</Node>
</Plugin>
```
Then restart collectd:
Then restart collectd:
```
systemctl restart collectd
```
```shell
systemctl restart collectd
```

View File

@ -1,43 +1,43 @@
### Configuring taosAdapter
#### Configuring taosAdapter
Method to configure taosAdapter to receive icinga2 data:
- Enable the configuration item in the taosAdapter configuration file (default location /etc/taos/taosadapter.toml)
```
...
[opentsdb_telnet]
enable = true
maxTCPConnections = 250
tcpKeepAlive = false
dbs = ["opentsdb_telnet", "collectd", "icinga2", "tcollector"]
ports = [6046, 6047, 6048, 6049]
user = "root"
password = "taosdata"
...
```
```toml
...
[opentsdb_telnet]
enable = true
maxTCPConnections = 250
tcpKeepAlive = false
dbs = ["opentsdb_telnet", "collectd", "icinga2", "tcollector"]
ports = [6046, 6047, 6048, 6049]
user = "root"
password = "taosdata"
...
```
The default database name written by taosAdapter is `icinga2`, but you can also modify the dbs item in the taosAdapter configuration file to specify a different name. Fill in user and password with the actual TDengine configuration values. taosAdapter needs to be restarted after modifications.
The default database name written by taosAdapter is `icinga2`, but you can also modify the dbs item in the taosAdapter configuration file to specify a different name. Fill in user and password with the actual TDengine configuration values. taosAdapter needs to be restarted after modifications.
- You can also use taosAdapter command line parameters or set environment variables to enable taosAdapter to receive icinga2 data, for more details please refer to the taosAdapter reference manual
### Configuring icinga2
#### Configuring icinga2
- Enable icinga2's opentsdb-writer (reference link https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer)
- Modify the configuration file `/etc/icinga2/features-enabled/opentsdb.conf` filling in \<taosAdapter's host> with the domain name or IP address of the server running taosAdapter, \<port for icinga2> with the corresponding port supported by taosAdapter for receiving icinga2 data (default is 6048)
```
object OpenTsdbWriter "opentsdb" {
host = "<taosAdapter's host>"
port = <port for icinga2>
}
```
```c
object OpenTsdbWriter "opentsdb" {
host = "<taosAdapter's host>"
port = <port for icinga2>
}
```
Example file:
Example file:
```
object OpenTsdbWriter "opentsdb" {
host = "127.0.0.1"
port = 6048
}
```
```c
object OpenTsdbWriter "opentsdb" {
host = "127.0.0.1"
port = 6048
}
```

View File

@ -1,18 +1,18 @@
Configuring Prometheus is done by editing the Prometheus configuration file `prometheus.yml` (default location `/etc/prometheus/prometheus.yml`).
### Configure Third-Party Database Address
#### Configure Third-Party Database Address
Set the `remote_read url` and `remote_write url` to point to the domain name or IP address of the server running the taosAdapter service, the REST service port (taosAdapter defaults to 6041), and the name of the database you want to write to in TDengine, ensuring the URLs are formatted as follows:
- remote_read url: `http://<taosAdapter's host>:<REST service port>/prometheus/v1/remote_read/<database name>`
- remote_write url: `http://<taosAdapter's host>:<REST service port>/prometheus/v1/remote_write/<database name>`
### Configure Basic Authentication
#### Configure Basic Authentication
- username: \<TDengine's username>
- password: \<TDengine's password>
### Example configuration of remote_write and remote_read in the prometheus.yml file
#### Example configuration of remote_write and remote_read in the prometheus.yml file
```yaml
remote_write:
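  # Sketch continuation (assumptions: taosAdapter on localhost:6041 and a
  # TDengine database named prometheus_data; adjust to your deployment).
  - url: "http://localhost:6041/prometheus/v1/remote_write/prometheus_data"
    basic_auth:
      username: root
      password: taosdata

remote_read:
  - url: "http://localhost:6041/prometheus/v1/remote_read/prometheus_data"
    basic_auth:
      username: root
      password: taosdata
    remote_timeout: 10s
    read_recent: true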

View File

@ -1,46 +1,46 @@
### Configure taosAdapter
#### Configure taosAdapter
Method to configure taosAdapter to receive StatsD data:
- Enable the configuration item in the taosAdapter configuration file (default location /etc/taos/taosadapter.toml)
```
...
[statsd]
enable = true
port = 6044
db = "statsd"
user = "root"
password = "taosdata"
worker = 10
gatherInterval = "5s"
protocol = "udp"
maxTCPConnections = 250
tcpKeepAlive = false
allowPendingMessages = 50000
deleteCounters = true
deleteGauges = true
deleteSets = true
deleteTimings = true
...
```
```toml
...
[statsd]
enable = true
port = 6044
db = "statsd"
user = "root"
password = "taosdata"
worker = 10
gatherInterval = "5s"
protocol = "udp"
maxTCPConnections = 250
tcpKeepAlive = false
allowPendingMessages = 50000
deleteCounters = true
deleteGauges = true
deleteSets = true
deleteTimings = true
...
```
The default database name written by taosAdapter is `statsd`, but you can also modify the db item in the taosAdapter configuration file to specify a different name. Fill in the user and password with the actual TDengine configuration values. After modifying the configuration file, taosAdapter needs to be restarted.
The default database name written by taosAdapter is `statsd`, but you can also modify the db item in the taosAdapter configuration file to specify a different name. Fill in the user and password with the actual TDengine configuration values. After modifying the configuration file, taosAdapter needs to be restarted.
- You can also use taosAdapter command line arguments or set environment variables to enable the taosAdapter to receive StatsD data. For more details, please refer to the taosAdapter reference manual.
### Configure StatsD
#### Configure StatsD
To use StatsD, download its [source code](https://github.com/statsd/statsd). Modify its configuration file according to the example file `exampleConfig.js` found in the root directory of the local source code download. Replace \<taosAdapter's host> with the domain name or IP address of the server running taosAdapter, and \<port for StatsD> with the port that taosAdapter uses to receive StatsD data (default is 6044).
```
```text
Add to the backends section "./backends/repeater"
Add to the repeater section { host:'<taosAdapter's host>', port: <port for StatsD>}
```
Example configuration file:
```
```js
{
port: 8125
, backends: ["./backends/repeater"]
@ -50,7 +50,7 @@ port: 8125
After adding the following content, start StatsD (assuming the configuration file is modified to config.js).
```
```shell
npm install
node stats.js config.js &
```

View File

@ -1,27 +1,27 @@
### Configuring taosAdapter
#### Configuring taosAdapter
To configure taosAdapter to receive data from TCollector:
- Enable the configuration in the taosAdapter configuration file (default location /etc/taos/taosadapter.toml)
```
...
[opentsdb_telnet]
enable = true
maxTCPConnections = 250
tcpKeepAlive = false
dbs = ["opentsdb_telnet", "collectd", "icinga2", "tcollector"]
ports = [6046, 6047, 6048, 6049]
user = "root"
password = "taosdata"
...
```
```toml
...
[opentsdb_telnet]
enable = true
maxTCPConnections = 250
tcpKeepAlive = false
dbs = ["opentsdb_telnet", "collectd", "icinga2", "tcollector"]
ports = [6046, 6047, 6048, 6049]
user = "root"
password = "taosdata"
...
```
The default database name that taosAdapter writes to is `tcollector`, but you can specify a different name by modifying the dbs option in the taosAdapter configuration file. Fill in the user and password with the actual values configured in TDengine. After modifying the configuration file, taosAdapter needs to be restarted.
The default database name that taosAdapter writes to is `tcollector`, but you can specify a different name by modifying the dbs option in the taosAdapter configuration file. Fill in the user and password with the actual values configured in TDengine. After modifying the configuration file, taosAdapter needs to be restarted.
- You can also use taosAdapter command line arguments or set environment variables to enable the taosAdapter to receive tcollector data. For more details, please refer to the taosAdapter reference manual.
### Configuring TCollector
#### Configuring TCollector
To use TCollector, download its [source code](https://github.com/OpenTSDB/tcollector). Its configuration options are in its source code. Note: There are significant differences between different versions of TCollector; this only refers to the latest code in the current master branch (git commit: 37ae920).
@ -29,7 +29,7 @@ Modify the contents of `collectors/etc/config.py` and `tcollector.py`. Change th
Example of git diff output for source code modifications:
```
```diff
index e7e7a1c..ec3e23c 100644
--- a/collectors/etc/config.py
+++ b/collectors/etc/config.py

View File

@ -19,7 +19,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.2</version>
<version>3.5.3</version>
</dependency>
<dependency>
<groupId>org.locationtech.jts</groupId>

View File

@ -47,7 +47,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.2</version>
<version>3.5.3</version>
</dependency>
</dependencies>

View File

@ -18,7 +18,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.2</version>
<version>3.5.3</version>
</dependency>
<!-- druid -->
<dependency>

View File

@ -17,7 +17,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.2</version>
<version>3.5.3</version>
</dependency>
<dependency>
<groupId>com.google.guava</groupId>

View File

@ -5,7 +5,7 @@
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.4.0</version>
<version>2.7.18</version>
<relativePath/> <!-- lookup parent from repository -->
</parent>
<groupId>com.taosdata.example</groupId>
@ -18,6 +18,18 @@
<java.version>1.8</java.version>
</properties>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>com.baomidou</groupId>
<artifactId>mybatis-plus-bom</artifactId>
<version>3.5.10.1</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
@ -28,14 +40,21 @@
<artifactId>lombok</artifactId>
<optional>true</optional>
</dependency>
<!-- spring boot2 引入可选模块 -->
<dependency>
<groupId>com.baomidou</groupId>
<artifactId>mybatis-plus-boot-starter</artifactId>
<version>3.1.2</version>
</dependency>
<!-- jdk 8+ 引入可选模块 -->
<dependency>
<groupId>com.baomidou</groupId>
<artifactId>mybatis-plus-jsqlparser-4.9</artifactId>
</dependency>
<dependency>
<groupId>com.h2database</groupId>
<artifactId>h2</artifactId>
<version>2.3.232</version>
<scope>runtime</scope>
</dependency>
<dependency>
@ -47,7 +66,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.2</version>
<version>3.5.3</version>
</dependency>
<dependency>

View File

@ -1,34 +1,26 @@
package com.taosdata.example.mybatisplusdemo.config;
import com.baomidou.mybatisplus.extension.plugins.PaginationInterceptor;
import com.baomidou.mybatisplus.annotation.DbType;
import com.baomidou.mybatisplus.extension.plugins.MybatisPlusInterceptor;
import com.baomidou.mybatisplus.extension.plugins.inner.PaginationInnerInterceptor;
import org.mybatis.spring.annotation.MapperScan;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.annotation.EnableTransactionManagement;
@EnableTransactionManagement
@Configuration
@MapperScan("com.taosdata.example.mybatisplusdemo.mapper")
public class MybatisPlusConfig {
/** mybatis 3.4.1 pagination config start ***/
// @Bean
// public MybatisPlusInterceptor mybatisPlusInterceptor() {
// MybatisPlusInterceptor interceptor = new MybatisPlusInterceptor();
// interceptor.addInnerInterceptor(new PaginationInnerInterceptor());
// return interceptor;
// }
// @Bean
// public ConfigurationCustomizer configurationCustomizer() {
// return configuration -> configuration.setUseDeprecatedExecutor(false);
// }
/**
* 添加分页插件
*/
@Bean
public PaginationInterceptor paginationInterceptor() {
// return new PaginationInterceptor();
PaginationInterceptor paginationInterceptor = new PaginationInterceptor();
//TODO: mybatis-plus do not support TDengine, use postgresql Dialect
paginationInterceptor.setDialectType("postgresql");
return paginationInterceptor;
public MybatisPlusInterceptor mybatisPlusInterceptor() {
MybatisPlusInterceptor interceptor = new MybatisPlusInterceptor();
interceptor.addInnerInterceptor(new PaginationInnerInterceptor(DbType.MYSQL));
return interceptor;
}
}

View File

@ -5,6 +5,7 @@ import com.taosdata.example.mybatisplusdemo.domain.Meters;
import org.apache.ibatis.annotations.Insert;
import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.annotations.Update;
import org.apache.ibatis.executor.BatchResult;
import java.util.List;
@ -15,17 +16,6 @@ public interface MetersMapper extends BaseMapper<Meters> {
@Insert("insert into meters (tbname, ts, groupid, location, current, voltage, phase) values(#{tbname}, #{ts}, #{groupid}, #{location}, #{current}, #{voltage}, #{phase})")
int insertOne(Meters one);
@Insert({
"<script>",
"insert into meters (tbname, ts, groupid, location, current, voltage, phase) values ",
"<foreach collection='list' item='item' index='index' separator=','>",
"(#{item.tbname}, #{item.ts}, #{item.groupid}, #{item.location}, #{item.current}, #{item.voltage}, #{item.phase})",
"</foreach>",
"</script>"
})
int insertBatch(@Param("list") List<Meters> metersList);
@Update("drop stable if exists meters")
void dropTable();
}

View File

@ -11,9 +11,6 @@ public interface TemperatureMapper extends BaseMapper<Temperature> {
@Update("CREATE TABLE if not exists temperature(ts timestamp, temperature float) tags(location nchar(64), tbIndex int)")
int createSuperTable();
@Update("create table #{tbName} using temperature tags( #{location}, #{tbindex})")
int createTable(@Param("tbName") String tbName, @Param("location") String location, @Param("tbindex") int tbindex);
@Update("drop table if exists temperature")
void dropSuperTable();

View File

@ -10,7 +10,7 @@ public interface WeatherMapper extends BaseMapper<Weather> {
@Update("CREATE TABLE if not exists weather(ts timestamp, temperature float, humidity int, location nchar(100))")
int createTable();
@Insert("insert into weather (ts, temperature, humidity, location) values(#{ts}, #{temperature}, #{humidity}, #{location})")
@Insert("insert into weather (ts, temperature, humidity, location) values(#{ts}, #{temperature}, #{humidity}, #{location, jdbcType=NCHAR})")
int insertOne(Weather one);
@Update("drop table if exists weather")

View File

@ -0,0 +1,19 @@
package com.taosdata.example.mybatisplusdemo.service;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.SQLException;
@Service
public class DatabaseConnectionService {
@Autowired
private DataSource dataSource;
public Connection getConnection() throws SQLException {
return dataSource.getConnection();
}
}

View File

@ -0,0 +1,23 @@
package com.taosdata.example.mybatisplusdemo.service;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
@Service
public class TemperatureService {
@Autowired
private DatabaseConnectionService databaseConnectionService;
public void createTable(String tableName, String location, int tbIndex) throws SQLException {
try (Connection connection = databaseConnectionService.getConnection();
Statement statement = connection.createStatement()) {
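// Table names cannot be supplied as JDBC bind parameters, so the DDL is
// assembled as a plain SQL string and executed through a Statement here.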
statement.executeUpdate("create table " + tableName + " using temperature tags( '" + location +"', " + tbIndex + ")");
}
}
}

View File

@ -5,6 +5,7 @@ import com.baomidou.mybatisplus.core.metadata.IPage;
import com.baomidou.mybatisplus.extension.plugins.pagination.Page;
import com.taosdata.example.mybatisplusdemo.domain.Meters;
import com.taosdata.example.mybatisplusdemo.domain.Weather;
import org.apache.ibatis.executor.BatchResult;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;
@ -18,6 +19,8 @@ import java.util.LinkedList;
import java.util.List;
import java.util.Random;
import static java.sql.Statement.SUCCESS_NO_INFO;
@RunWith(SpringJUnit4ClassRunner.class)
@SpringBootTest
public class MetersMapperTest {
@ -63,8 +66,19 @@ public class MetersMapperTest {
metersList.add(one);
}
int affectRows = mapper.insertBatch(metersList);
Assert.assertEquals(100, affectRows);
List<BatchResult> affectRowsList = mapper.insert(metersList, 10000);
long totalAffectedRows = 0;
for (BatchResult batchResult : affectRowsList) {
int[] updateCounts = batchResult.getUpdateCounts();
for (int status : updateCounts) {
if (status == SUCCESS_NO_INFO) {
totalAffectedRows++;
}
}
}
Assert.assertEquals(100, totalAffectedRows);
}
@Test
@ -93,7 +107,7 @@ public class MetersMapperTest {
@Test
public void testSelectCount() {
int count = mapper.selectCount(null);
long count = mapper.selectCount(null);
// Assert.assertEquals(5, count);
System.out.println(count);
}

View File

@ -4,6 +4,7 @@ import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
import com.baomidou.mybatisplus.core.metadata.IPage;
import com.baomidou.mybatisplus.extension.plugins.pagination.Page;
import com.taosdata.example.mybatisplusdemo.domain.Temperature;
import com.taosdata.example.mybatisplusdemo.service.TemperatureService;
import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
@ -13,6 +14,8 @@ import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.util.HashMap;
import java.util.List;
@ -22,18 +25,20 @@ import java.util.Random;
@RunWith(SpringJUnit4ClassRunner.class)
@SpringBootTest
public class TemperatureMapperTest {
@Autowired
private TemperatureService temperatureService;
private static Random random = new Random(System.currentTimeMillis());
private static String[] locations = {"北京", "上海", "深圳", "广州", "杭州"};
@Before
public void before() {
public void before() throws SQLException {
mapper.dropSuperTable();
// create table temperature
mapper.createSuperTable();
// create table t_X using temperature
for (int i = 0; i < 10; i++) {
mapper.createTable("t" + i, locations[random.nextInt(locations.length)], i);
temperatureService.createTable("t" + i, locations[i % locations.length], i);
}
// insert into table
int affectRows = 0;
@ -107,7 +112,7 @@ public class TemperatureMapperTest {
* **/
@Test
public void testSelectCount() {
int count = mapper.selectCount(null);
long count = mapper.selectCount(null);
Assert.assertEquals(10, count);
}

View File

@ -52,7 +52,7 @@ public class WeatherMapperTest {
one.setTemperature(random.nextFloat() * 50);
one.setHumidity(random.nextInt(100));
one.setLocation("望京");
int affectRows = mapper.insert(one);
int affectRows = mapper.insertOne(one);
Assert.assertEquals(1, affectRows);
}
@ -82,7 +82,7 @@ public class WeatherMapperTest {
@Test
public void testSelectCount() {
int count = mapper.selectCount(null);
long count = mapper.selectCount(null);
// Assert.assertEquals(5, count);
System.out.println(count);
}

View File

@ -5,7 +5,7 @@
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.6.15</version>
<version>2.7.18</version>
<relativePath/> <!-- lookup parent from repository -->
</parent>
<groupId>com.taosdata.example</groupId>
@ -34,7 +34,7 @@
<dependency>
<groupId>org.mybatis.spring.boot</groupId>
<artifactId>mybatis-spring-boot-starter</artifactId>
<version>2.1.1</version>
<version>2.3.2</version>
</dependency>
<dependency>
@ -70,7 +70,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.2</version>
<version>3.5.3</version>
</dependency>
<dependency>

View File

@ -50,13 +50,6 @@
), groupId int)
</update>
<update id="createTable" parameterType="com.taosdata.example.springbootdemo.domain.Weather">
create table if not exists test.t#{groupId} using test.weather tags
(
#{location},
#{groupId}
)
</update>
<select id="select" resultMap="BaseResultMap">
select * from test.weather order by ts desc
@ -69,8 +62,8 @@
</select>
<insert id="insert" parameterType="com.taosdata.example.springbootdemo.domain.Weather">
insert into test.t#{groupId} (ts, temperature, humidity, note, bytes)
values (#{ts}, ${temperature}, ${humidity}, #{note}, #{bytes})
insert into test.t${groupId} (ts, temperature, humidity, note, bytes)
values (#{ts}, #{temperature}, #{humidity}, #{note}, #{bytes})
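<!-- Sketch note: #{...} binds a JDBC PreparedStatement parameter, which cannot
     appear in a table-name position, so ${groupId} is substituted as literal
     text while the column values keep #{...} binding. -->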
</insert>
<select id="getSubTables" resultType="String">

View File

@ -0,0 +1,19 @@
package com.taosdata.example.springbootdemo.service;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.SQLException;
@Service
public class DatabaseConnectionService {
@Autowired
private DataSource dataSource;
public Connection getConnection() throws SQLException {
return dataSource.getConnection();
}
}

View File

@ -6,6 +6,9 @@ import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.nio.charset.StandardCharsets;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.sql.Timestamp;
import java.util.List;
import java.util.Map;
@ -16,6 +19,9 @@ public class WeatherService {
@Autowired
private WeatherMapper weatherMapper;
@Autowired
private DatabaseConnectionService databaseConnectionService;
private Random random = new Random(System.currentTimeMillis());
private String[] locations = {"北京", "上海", "广州", "深圳", "天津"};
@ -32,7 +38,7 @@ public class WeatherService {
weather.setGroupId(i % locations.length);
weather.setNote("note-" + i);
weather.setBytes(locations[random.nextInt(locations.length)].getBytes(StandardCharsets.UTF_8));
weatherMapper.createTable(weather);
createTable(weather);
count += weatherMapper.insert(weather);
}
return count;
@ -78,4 +84,14 @@ public class WeatherService {
weather.setLocation(location);
return weather;
}
public void createTable(Weather weather) {
try (Connection connection = databaseConnectionService.getConnection();
Statement statement = connection.createStatement()) {
String tableName = "t" + weather.getGroupId();
statement.executeUpdate("create table if not exists " + tableName + " using test.weather tags( '" + weather.getLocation() +"', " + weather.getGroupId() + ")");
} catch (SQLException e) {
throw new RuntimeException(e);
}
}
}

View File

@ -4,8 +4,8 @@
#spring.datasource.username=root
#spring.datasource.password=taosdata
# datasource config - JDBC-RESTful
spring.datasource.driver-class-name=com.taosdata.jdbc.rs.RestfulDriver
spring.datasource.url=jdbc:TAOS-RS://localhost:6041/test
spring.datasource.driver-class-name=com.taosdata.jdbc.ws.WebSocketDriver
spring.datasource.url=jdbc:TAOS-WS://localhost:6041/test
spring.datasource.username=root
spring.datasource.password=taosdata
spring.datasource.druid.initial-size=5

View File

@ -67,7 +67,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.2</version>
<version>3.5.3</version>
<!-- <scope>system</scope>-->
<!-- <systemPath>${project.basedir}/src/main/resources/lib/taos-jdbcdriver-2.0.15-dist.jar</systemPath>-->
</dependency>

View File

@ -22,7 +22,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.2</version>
<version>3.5.3</version>
</dependency>
<!-- ANCHOR_END: dep-->

View File

@ -2,6 +2,7 @@ package com.taos.example;
import com.taosdata.jdbc.ws.TSWSPreparedStatement;
import java.math.BigInteger;
import java.sql.*;
import java.util.Random;
@ -26,7 +27,12 @@ public class WSParameterBindingFullDemo {
"binary_col BINARY(100), " +
"nchar_col NCHAR(100), " +
"varbinary_col VARBINARY(100), " +
"geometry_col GEOMETRY(100)) " +
"geometry_col GEOMETRY(100)," +
"utinyint_col tinyint unsigned," +
"usmallint_col smallint unsigned," +
"uint_col int unsigned," +
"ubigint_col bigint unsigned" +
") " +
"tags (" +
"int_tag INT, " +
"double_tag DOUBLE, " +
@ -34,7 +40,12 @@ public class WSParameterBindingFullDemo {
"binary_tag BINARY(100), " +
"nchar_tag NCHAR(100), " +
"varbinary_tag VARBINARY(100), " +
"geometry_tag GEOMETRY(100))"
"geometry_tag GEOMETRY(100)," +
"utinyint_tag tinyint unsigned," +
"usmallint_tag smallint unsigned," +
"uint_tag int unsigned," +
"ubigint_tag bigint unsigned" +
")"
};
private static final int numOfSubTable = 10, numOfRow = 10;
@ -79,7 +90,7 @@ public class WSParameterBindingFullDemo {
// set table name
pstmt.setTableName("ntb_json_" + i);
// set tags
pstmt.setTagJson(1, "{\"device\":\"device_" + i + "\"}");
pstmt.setTagJson(0, "{\"device\":\"device_" + i + "\"}");
// set columns
long current = System.currentTimeMillis();
for (int j = 0; j < numOfRow; j++) {
@ -94,25 +105,29 @@ public class WSParameterBindingFullDemo {
}
private static void stmtAll(Connection conn) throws SQLException {
String sql = "INSERT INTO ? using stb tags(?,?,?,?,?,?,?) VALUES (?,?,?,?,?,?,?,?)";
String sql = "INSERT INTO ? using stb tags(?,?,?,?,?,?,?,?,?,?,?) VALUES (?,?,?,?,?,?,?,?,?,?,?,?)";
try (TSWSPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSWSPreparedStatement.class)) {
// set table name
pstmt.setTableName("ntb");
// set tags
pstmt.setTagInt(1, 1);
pstmt.setTagDouble(2, 1.1);
pstmt.setTagBoolean(3, true);
pstmt.setTagString(4, "binary_value");
pstmt.setTagNString(5, "nchar_value");
pstmt.setTagVarbinary(6, new byte[] { (byte) 0x98, (byte) 0xf4, 0x6e });
pstmt.setTagGeometry(7, new byte[] {
pstmt.setTagInt(0, 1);
pstmt.setTagDouble(1, 1.1);
pstmt.setTagBoolean(2, true);
pstmt.setTagString(3, "binary_value");
pstmt.setTagNString(4, "nchar_value");
pstmt.setTagVarbinary(5, new byte[] { (byte) 0x98, (byte) 0xf4, 0x6e });
pstmt.setTagGeometry(6, new byte[] {
0x01, 0x01, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x59,
0x40, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x59, 0x40 });
pstmt.setTagShort(7, (short)255);
pstmt.setTagInt(8, 65535);
pstmt.setTagLong(9, 4294967295L);
pstmt.setTagBigInteger(10, new BigInteger("18446744073709551615"));
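// Note: in this demo the tag setters use 0-based indices (0-10 for the
// 11 tag placeholders), while the column setters below use the standard
// 1-based JDBC parameter indices (1-12).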
long current = System.currentTimeMillis();
@ -129,6 +144,10 @@ public class WSParameterBindingFullDemo {
0x00, 0x00, 0x00, 0x59,
0x40, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x59, 0x40 });
pstmt.setShort(9, (short)255);
pstmt.setInt(10, 65535);
pstmt.setLong(11, 4294967295L);
pstmt.setObject(12, new BigInteger("18446744073709551615"));
pstmt.addBatch();
pstmt.executeBatch();
System.out.println("Successfully inserted rows to example_all_type_stmt.ntb");

View File

@ -23,7 +23,7 @@ CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name
SUBTABLE(expression) AS subquery
stream_options: {
TRIGGER [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time]
TRIGGER [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time | FORCE_WINDOW_CLOSE]
WATERMARK time
IGNORE EXPIRED [0|1]
DELETE_MARK time
@ -52,13 +52,17 @@ window_cluse: {
}
```
subquery 支持会话窗口、状态窗口与滑动窗口。其中,会话窗口与状态窗口搭配超级表时必须与 partition by tbname 一起使用。
subquery 支持会话窗口、状态窗口、时间窗口、事件窗口与计数窗口。其中,状态窗口、事件窗口与计数窗口搭配超级表时必须与 partition by tbname 一起使用。
1. 其中SESSION 是会话窗口tol_val 是时间间隔的最大范围。在 tol_val 时间间隔范围内的数据都属于同一个窗口,如果连续的两条数据的时间间隔超过 tol_val则自动开启下一个窗口。
2. EVENT_WINDOW 是事件窗口,根据开始条件和结束条件来划定窗口。当 start_trigger_condition 满足时则窗口开始,直到 end_trigger_condition 满足时窗口关闭。 start_trigger_condition 和 end_trigger_condition 可以是任意 TDengine 支持的条件表达式,且可以包含不同的列
2. STATE_WINDOW 是状态窗口col 用来标识状态量相同的状态量数值则归属于同一个状态窗口col 数值改变后则当前窗口结束,自动开启下一个窗口
3. COUNT_WINDOW 是计数窗口,按固定的数据行数来划分窗口。 count_val 是常量,是正整数,必须大于等于 2小于 2147483648。 count_val 表示每个 COUNT_WINDOW 包含的最大数据行数,总数据行数不能整除 count_val 时,最后一个窗口的行数会小于 count_val 。 sliding_val 是常量,表示窗口滑动的数量,类似于 INTERVAL 的 SLIDING 。
3. INTERVAL 是时间窗口又可分为滑动时间窗口和翻转时间窗口。INTERVAL 子句用于指定窗口相等时间周期SLIDING 字句用于指定窗口向前滑动的时间。当 interval_val 与 sliding_val 相等的时候时间窗口即为翻转时间窗口否则为滑动时间窗口注意sliding_val 必须小于等于 interval_val。
4. EVENT_WINDOW 是事件窗口,根据开始条件和结束条件来划定窗口。当 start_trigger_condition 满足时则窗口开始,直到 end_trigger_condition 满足时窗口关闭。 start_trigger_condition 和 end_trigger_condition 可以是任意 TDengine 支持的条件表达式,且可以包含不同的列。
5. COUNT_WINDOW 是计数窗口,按固定的数据行数来划分窗口。 count_val 是常量,是正整数,必须大于等于 2小于 2147483648。 count_val 表示每个 COUNT_WINDOW 包含的最大数据行数,总数据行数不能整除 count_val 时,最后一个窗口的行数会小于 count_val 。 sliding_val 是常量,表示窗口滑动的数量,类似于 INTERVAL 的 SLIDING 。
窗口的定义与时序数据窗口查询中的定义完全相同,具体可参考 TDengine 窗口函数部分。

View File

@ -89,7 +89,7 @@ TDengine 提供了丰富的应用程序开发接口,为了便于用户快速
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.2</version>
<version>3.5.3</version>
</dependency>
```

View File

@ -30,12 +30,12 @@ TDengine 消费者的概念跟 Kafka 类似,消费者通过订阅主题来接
| 参数名称 | 类型 | 参数说明 | 备注 |
| :-----------------------: | :-----: | ------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `td.connect.ip` | string | 服务端的 IP 地址 | |
| `td.connect.ip` | string | 服务端的 FQDN | 可以是 IP 或者 host name |
| `td.connect.user` | string | 用户名 | |
| `td.connect.pass` | string | 密码 | |
| `td.connect.port` | integer | 服务端的端口号 | |
| `group.id` | string | 消费组 ID同一消费组共享消费进度 | <br />**必填项**。最大长度192。<br />每个topic最多可建立 100 个 consumer group |
| `client.id` | string | 客户端 ID | 最大长度:192 |
| `group.id` | string | 消费组 ID同一消费组共享消费进度 | <br />**必填项**。最大长度192,超长将截断<br />每个topic最多可建立 100 个 consumer group |
| `client.id` | string | 客户端 ID | 最大长度255超长将截断。 |
| `auto.offset.reset` | enum | 消费组订阅的初始位置 | <br />`earliest`: default(version < 3.2.0.0);从头开始订阅; <br/>`latest`: default(version >= 3.2.0.0);仅从最新数据开始订阅; <br/>`none`: 没有提交的 offset 无法订阅 |
| `enable.auto.commit` | boolean | 是否启用消费位点自动提交true: 自动提交客户端应用无需commitfalse客户端应用需要自行commit | 默认值为 true |
| `auto.commit.interval.ms` | integer | 消费记录自动提交消费位点时间间隔,单位为毫秒 | 默认值为 5000 |

View File

@ -17,10 +17,11 @@ TDengine 面向多种写入场景而很多写入场景下TDengine 的存
### 语法
```SQL
COMPACT DATABASE db_name [start with 'XXXX'] [end with 'YYYY']
COMPACT [db_name.]VGROUPS IN (vgroup_id1, vgroup_id2, ...) [start with 'XXXX'] [end with 'YYYY']
SHOW COMPACT [compact_id]
KILL COMPACT compact_id
COMPACT DATABASE db_name [start with 'XXXX'] [end with 'YYYY'];
COMPACT [db_name.]VGROUPS IN (vgroup_id1, vgroup_id2, ...) [start with 'XXXX'] [end with 'YYYY'];
SHOW COMPACTS;
SHOW COMPACT compact_id;
KILL COMPACT compact_id;
```
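下面是一个示意(库名与时间范围均为假设值):

```sql
-- 示意:整理 test 库指定时间范围内的数据,并查看整理任务进度
COMPACT DATABASE test start with '2024-01-01 00:00:00' end with '2024-02-01 00:00:00';
SHOW COMPACTS;
```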
### 效果

View File

@ -69,7 +69,7 @@ taosExplorer 服务页面中,进入“系统管理 - 备份”页面,在“
1. 数据库:需要备份的数据库名称。一个备份计划只能备份一个数据库/超级表。
2. 超级表:需要备份的超级表名称。如果不填写,则备份整个数据库。
3. 下次执行时间:首次执行备份任务的日期时间。
4. 备份周期:备份点之间的时间间隔。注意:备份周期必须于数据库的 WAL_RETENTION_PERIOD 参数值。
4. 备份周期:备份点之间的时间间隔。注意:备份周期必须于数据库的 WAL_RETENTION_PERIOD 参数值。
5. 错误重试次数:对于可通过重试解决的错误,系统会按照此次数进行重试。
6. 错误重试间隔:每次重试之间的时间间隔。
7. 目录:存储备份文件的目录。
@ -152,4 +152,4 @@ Caused by:
```sql
alter
database test wal_retention_period 3600;
```
```

View File

@ -24,6 +24,7 @@ Flink Connector 支持所有能运行 Flink 1.19 及以上版本的平台。
## 版本历史
| Flink Connector 版本 | 主要变化 | TDengine 版本 |
| ------------------| ------------------------------------ | ---------------- |
| 2.0.2 | Table Sink 支持 RowKind.UPDATE_BEFORE、RowKind.UPDATE_AFTER 和 RowKind.DELETE 类型| - |
| 2.0.1 | Sink 支持对所有继承自 RowData 并已实现的类型进行数据写入| - |
| 2.0.0 | 1. 支持 SQL 查询 TDengine 数据库中的数据<br/> 2. 支持 CDC 订阅 TDengine 数据库中的数据<br/> 3. 支持 Table SQL 方式读取和写入 TDengine 数据库| 3.3.5.1 及以上版本 |
| 1.0.0 | 支持 Sink 功能,将来自其他数据源的数据写入到 TDengine| 3.3.2.0 及以上版本|
@ -112,7 +113,7 @@ env.getCheckpointConfig().setCheckpointingMode(CheckpointingMode.AT_LEAST_ONCE);
<dependency>
<groupId>com.taosdata.flink</groupId>
<artifactId>flink-connector-tdengine</artifactId>
<version>2.0.1</version>
<version>2.0.2</version>
</dependency>
```

View File

@ -0,0 +1,35 @@
---
sidebar_label: Tableau
title: 与 Tableau 集成
---
Tableau 是一款知名的商业智能工具,它支持多种数据源,可方便地连接、导入和整合数据。并且可以通过直观的操作界面,让用户创建丰富多样的可视化图表,并具备强大的分析和筛选功能,为数据决策提供有力支持。
## 前置条件
准备以下环境:
- TDengine 3.3.5.4以上版本集群已部署并正常运行(企业及社区版均可)
- taosAdapter 能够正常运行。详细参考 [taosAdapter 使用手册](../../../reference/components/taosadapter)
- Tableau 桌面版安装并运行(如未安装,请下载并安装 Windows 操作系统 32/64 位 [Tableau 桌面版](https://www.tableau.com/products/desktop/download) )。安装 Tableau 桌面版请参考 [官方文档](https://www.tableau.com)。
- ODBC 驱动安装成功。详细参考[安装 ODBC 驱动](../../../reference/connector/odbc/#安装)
- ODBC 数据源配置成功。详细参考[配置ODBC数据源](../../../reference/connector/odbc/#配置数据源)
## 加载和分析 TDengine 数据
**第 1 步**,在 Windows 系统环境下启动 Tableau之后在其连接页面中搜索 “ODBC”并选择 “其他数据库 (ODBC)”。
**第 2 步**,点击 DSN 单选框,接着选择已配置好的数据源(MyTDengine),然后点击连接按钮。待连接成功后,删除字符串附加部分的内容,最后点击登录按钮即可。
![tableau-odbc](./tableau/tableau-odbc.jpg)
**第 3 步**,在弹出的工作簿页面中,会显示已连接的数据源。点击数据库的下拉列表,会显示需要进行数据分析的数据库。在此基础上,点击表选项中的查找按钮,即可将该数据库下的所有表显示出来。然后,拖动需要分析的表到右侧区域,即可显示出表结构。
![tableau-workbook](./tableau/tableau-table.jpg)
**第 4 步**,点击下方的"立即更新"按钮,即可将表中的数据展示出来。
![tableau-workbook](./tableau/tableau-data.jpg)
**第 5 步**,点击窗口下方的"工作表",弹出数据分析窗口, 并展示分析表的所有字段,将字段拖动到行列即可展示出图表。
![tableau-workbook](./tableau/tableau-analysis.jpg)

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

File diff suppressed because it is too large

View File

@ -9,102 +9,504 @@ TDengine 客户端驱动提供了应用编程所需要的全部 API并且在
## 配置参数
### 连接相关
|参数名称|支持版本|动态修改|参数含义|
|----------------------|----------|-------------------------|-------------|
|firstEp | |支持动态修改 立即生效 |启动时,主动连接的集群中首个 dnode 的 endpoint缺省值hostname:6030若无法获取该服务器的 hostname则赋值为 localhost|
|secondEp | |支持动态修改 立即生效 |启动时,如果 firstEp 连接不上,尝试连接集群中第二个 dnode 的 endpoint没有缺省值|
|serverPort | |支持动态修改 立即生效 |taosd 监听的端口,默认值 6030|
|compressMsgSize | |支持动态修改 立即生效 |是否对 RPC 消息进行压缩;-1所有消息都不压缩0所有消息都压缩N (N>0):只有大于 N 个字节的消息才压缩;缺省值 -1|
|shellActivityTimer | |不支持动态修改 |客户端向 mnode 发送心跳的时长,单位为秒,取值范围 1-120默认值 3|
|numOfRpcSessions | |支持动态修改 立即生效 |RPC 支持的最大连接数,取值范围 100-100000缺省值 30000|
|numOfRpcThreads | |不支持动态修改 |RPC 收发数据线程数目取值范围1-50,默认值为 CPU 核数的一半|
|numOfTaskQueueThreads | |不支持动态修改 |客户端处理 RPC消息的线程数, 范围4-16,默认值为 CPU 核数的一半|
|timeToGetAvailableConn| 3.3.4.*之后取消 |不支持动态修改 |获得可用连接的最长等待时间,取值范围 10-50000000单位为毫秒缺省值 500000|
|useAdapter | |支持动态修改 立即生效 |内部参数,是否使用 taosadapter影响 CSV 文件导入|
|shareConnLimit |3.3.4.0 新增|不支持动态修改 |内部参数,一个链接可以共享的查询数目,取值范围 1-256默认值 10|
|readTimeout |3.3.4.0 新增|不支持动态修改 |内部参数,最小超时时间,取值范围 64-604800单位为秒默认值 900|
#### firstEp
- 说明:启动时,主动连接的集群中首个 dnode 的 endpoint
- 默认值hostname:6030若无法获取该服务器的 hostname则赋值为 localhost
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.0.0.0 版本开始引入
#### secondEp
- 说明:启动时,如果 firstEp 连接不上,尝试连接集群中第二个 dnode 的 endpoint
- 默认值:无
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.0.0.0 版本开始引入
#### serverPort
- 说明taosd 监听的端口
- 默认值6030
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.0.0.0 版本开始引入
#### compressMsgSize
- 说明:是否对 RPC 消息进行压缩
- 类型:整数;-1所有消息都不压缩0所有消息都压缩N (N>0):只有大于 N 个字节的消息才压缩
- 单位bytes
- 默认值:-1
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.0.0.0 版本开始引入
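对于上述标注为“支持通过 SQL 修改”的客户端参数,可以在客户端执行 `ALTER LOCAL` 语句动态调整,示意如下(参数取值仅为举例):

```sql
-- 示意:动态修改客户端参数,立即生效
ALTER LOCAL 'compressMsgSize' '10240';
```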
#### shellActivityTimer
- 说明:客户端向 mnode 发送心跳的时长
- 类型:整数
- 单位:秒
- 默认值3
- 最小值1
- 最大值120
- 动态修改:不支持
- 支持版本:从 v3.0.0.0 版本开始引入
#### numOfRpcSessions
- 说明RPC 支持的最大连接数
- 类型:整数
- 默认值30000
- 最小值100
- 最大值100000
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.0.0.0 版本开始引入
#### numOfRpcThreads
- 说明RPC 收发数据线程数目
- 类型:整数
- 默认值CPU 核数的一半
- 最小值1
- 最大值50
- 动态修改:不支持
- 支持版本:从 v3.0.0.0 版本开始引入
#### numOfTaskQueueThreads
- 说明:客户端处理 RPC消息的线程数
- 类型:整数
- 默认值CPU 核数的一半
- 最小值4
- 最大值16
- 动态修改:不支持
- 支持版本:从 v3.0.0.0 版本开始引入
#### timeToGetAvailableConn
- 说明:获得可用连接的最长等待时间
- 类型:整数
- 单位:毫秒
- 默认值500000
- 最小值10
- 最大值50000000
- 动态修改:不支持
- 支持版本3.3.4.*之后取消
#### useAdapter
- 说明:是否使用 taosadapter影响 CSV 文件导入 `内部参数`
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.0.0.0 版本开始引入
#### shareConnLimit
- 说明:一个链接可以共享的查询数目 `内部参数`
- 最小值1
- 最大值256
- 默认值10
- 动态修改:不支持
- 支持版本:从 v3.3.4.0 版本开始引入
#### readTimeout
- 说明:最小超时时间 `内部参数`
- 单位:秒
- 最小值64
- 最大值604800
- 默认值900
- 动态修改:不支持
- 支持版本:从 v3.3.4.0 版本开始引入
### 查询相关
|参数名称|支持版本|动态修改|参数含义|
|----------------------|----------|-------------------------|-------------|
|countAlwaysReturnValue | |支持动态修改 立即生效 |count/hyperloglog 函数在输入数据为空或者 NULL 的情况下是否返回值0返回空行1返回默认值 1该参数设置为 1 时,如果查询中含有 INTERVAL 子句或者该查询使用了 TSMA 时,且相应的组或窗口内数据为空或者 NULL对应的组或窗口将不返回查询结果注意此参数客户端和服务端值应保持一致|
|keepColumnName | |支持动态修改 立即生效 |Last、First、LastRow 函数查询且未指定别名时,自动设置别名为列名(不含函数名),因此 order by 子句如果引用了该列名将自动引用该列对应的函数1表示自动设置别名为列名(不包含函数名)0表示不自动设置别名缺省值0|
|multiResultFunctionStarReturnTags|3.3.3.0 后|支持动态修改 立即生效 |查询超级表时last(\*)/last_row(\*)/first(\*) 是否返回标签列查询普通表、子表时不受该参数影响0不返回标签列1返回标签列缺省值0该参数设置为 0 时last(\*)/last_row(\*)/first(\*) 只返回超级表的普通列;为 1 时,返回超级表的普通列和标签列|
|metaCacheMaxSize | |支持动态修改 立即生效 |指定单个客户端元数据缓存大小的最大值,单位 MB缺省值 -1表示无限制|
|maxTsmaCalcDelay | |支持动态修改 立即生效 |查询时客户端可允许的 tsma 计算延迟,若 tsma 的计算延迟大于配置值,则该 TSMA 将不会被使用;取值范围 600s - 86400s即 10 分钟 - 1 小时缺省值600 秒|
|tsmaDataDeleteMark | |支持动态修改 立即生效 |TSMA 计算的历史数据中间结果保存时间,单位为毫秒;取值范围 >= 3600000即大于等于1h缺省值86400000即 1d |
|queryPolicy | |支持动态修改 立即生效 |查询语句的执行策略1只使用 vnode不使用 qnode2没有扫描算子的子任务在 qnode 执行,带扫描算子的子任务在 vnode 执行3vnode 只运行扫描算子,其余算子均在 qnode 执行缺省值1|
|queryTableNotExistAsEmpty | |支持动态修改 立即生效 |查询表不存在时是否返回空结果集false返回错误true返回空结果集缺省值 false|
|querySmaOptimize | |支持动态修改 立即生效 |sma index 的优化策略0表示不使用 sma index永远从原始数据进行查询1表示使用 sma index对符合的语句直接从预计算的结果进行查询缺省值0|
|queryPlannerTrace | |支持动态修改 立即生效 |内部参数,查询计划是否输出详细日志|
|queryNodeChunkSize | |支持动态修改 立即生效 |内部参数,查询计划的块大小|
|queryUseNodeAllocator | |支持动态修改 立即生效 |内部参数,查询计划的分配方法|
|queryMaxConcurrentTables | |不支持动态修改 |内部参数,查询计划的并发数目|
|enableQueryHb | |支持动态修改 立即生效 |内部参数,是否发送查询心跳消息|
|minSlidingTime | |支持动态修改 立即生效 |内部参数sliding 的最小允许值|
|minIntervalTime | |支持动态修改 立即生效 |内部参数interval 的最小允许值|
#### countAlwaysReturnValue
- 说明count/hyperloglog 函数在输入数据为空或者 NULL 的情况下是否返回值;该参数设置为 1 时,如果查询中含有 INTERVAL 子句或者该查询使用了 TSMA 时,且相应的组或窗口内数据为空或者 NULL对应的组或窗口将不返回查询结果注意此参数客户端和服务端值应保持一致
- 类型整数0返回空行1返回
- 最小值0
- 最大值1
- 默认值1
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.0.0.0 版本开始引入
#### keepColumnName
- 说明Last、First、LastRow 函数查询且未指定别名时,自动设置别名为列名(不含函数名),因此 order by 子句如果引用了该列名将自动引用该列对应的函数
- 类型整数1表示自动设置别名为列名(不包含函数名)0表示不自动设置别名
- 最小值0
- 最大值1
- 默认值0
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.0.0.0 版本开始引入
#### multiResultFunctionStarReturnTags
- 说明查询超级表时last(\*)/last_row(\*)/first(\*) 是否返回标签列;查询普通表、子表时,不受该参数影响;
- 类型整数0不返回标签列1返回标签列该参数设置为 0 时last(\*)/last_row(\*)/first(\*) 只返回超级表的普通列;为 1 时,返回超级表的普通列和标签列
- 最小值0
- 最大值1
- 默认值0
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.3.3.0 版本开始引入
#### metaCacheMaxSize
- 说明:指定单个客户端元数据缓存大小的最大值
- 类型:整数
- 单位MB
- 最小值:-1
- 最大值2147483647
- 默认值:-1 表示无限制
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.0.0.0 版本开始引入
#### maxTsmaCalcDelay
- 说明:查询时客户端可允许的 tsma 计算延迟,若 tsma 的计算延迟大于配置值,则该 TSMA 将不会被使用
- 类型:整数
- 单位:秒
- 最小值600
- 最大值86400
- 默认值600
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.0.0.0 版本开始引入
#### tsmaDataDeleteMark
- 说明TSMA 计算的历史数据中间结果保存时间
- 类型:整数
- 单位:毫秒
- 最小值3600000
- 最大值9223372036854775807
- 默认值86400000
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.0.0.0 版本开始引入
#### queryPolicy
- 说明:查询语句的执行策略
- 类型整数1只使用 vnode不使用 qnode2没有扫描算子的子任务在 qnode 执行,带扫描算子的子任务在 vnode 执行3vnode 只运行扫描算子,其余算子均在 qnode 执行;
- 默认值1
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.0.0.0 版本开始引入
#### queryTableNotExistAsEmpty
- 说明:查询表不存在时是否返回空结果集
- 类型布尔false返回错误true返回空结果集
- 默认值false
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.0.0.0 版本开始引入
#### querySmaOptimize
- 说明sma index 的优化策略
- 类型整数0表示不使用 sma index永远从原始数据进行查询1表示使用 sma index对符合的语句直接从预计算的结果进行查询
- 默认值0
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.0.0.0 版本开始引入
#### queryPlannerTrace
- 说明:查询计划是否输出详细日志 `内部参数`
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.0.0.0 版本开始引入
#### queryNodeChunkSize
- 说明:查询计划的块大小 `内部参数`
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.0.0.0 版本开始引入
#### queryUseNodeAllocator
- 说明:查询计划的分配方法 `内部参数`
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.0.0.0 版本开始引入
#### queryMaxConcurrentTables
- 说明:查询计划的并发数目 `内部参数`
- 动态修改:不支持
- 支持版本:从 v3.0.0.0 版本开始引入
#### enableQueryHb
- 说明:是否发送查询心跳消息 `内部参数`
- 动态修改:不支持
- 支持版本:从 v3.0.0.0 版本开始引入
#### minSlidingTime
- 说明sliding 的最小允许值 `内部参数`
- 动态修改:不支持
- 支持版本:从 v3.0.0.0 版本开始引入
#### minIntervalTime
- 说明interval 的最小允许值 `内部参数`
- 动态修改:不支持
- 支持版本:从 v3.0.0.0 版本开始引入
### 写入相关
|参数名称|支持版本|动态修改|参数含义|
|----------------------|----------|-------------------------|-------------|
|smlChildTableName | |支持动态修改 立即生效 |schemaless 自定义的子表名的 key无缺省值|
|smlAutoChildTableNameDelimiter| |支持动态修改 立即生效 |schemaless tag 之间的连接符,连起来作为子表名,无缺省值|
|smlTagName | |支持动态修改 立即生效 |schemaless tag 为空时默认的 tag 名字,缺省值 "_tag_null"|
|smlTsDefaultName | |支持动态修改 立即生效 |schemaless 自动建表的时间列名字通过该配置设置,缺省值 "_ts"|
|smlDot2Underline | |支持动态修改 立即生效 |schemaless 把超级表名中的 dot 转成下划线|
|maxInsertBatchRows | |支持动态修改 立即生效 |内部参数,一批写入的最大条数|
#### smlChildTableName
- 说明schemaless 自定义的子表名的 key
- 默认值:无
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.0.0.0 版本开始引入
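例如(示意),当该参数配置为 `tname` 时,下面这行无模式写入数据将以 `tname` 对应的值 `cpu001` 作为子表名:

```text
st,tname=cpu001,t1=3 c1=3i64 1626006833639000000
```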
#### smlAutoChildTableNameDelimiter
- 说明schemaless tag 之间的连接符,连起来作为子表名
- 默认值:无
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.0.0.0 版本开始引入
#### smlTagName
- 说明schemaless tag 为空时默认的 tag 名字
- 默认值:"_tag_null"
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.0.0.0 版本开始引入
#### smlTsDefaultName
- 说明schemaless 自动建表的时间列名字通过该配置设置
- 默认值:"_ts"
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.0.0.0 版本开始引入
#### smlDot2Underline
- 说明schemaless 把超级表名中的 dot 转成下划线
- 默认值true
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.0.0.0 版本开始引入
#### maxInsertBatchRows
- 说明:一批写入的最大条数
- 默认值1000000
- 最小值1
- 最大值2147483647
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.0.0.0 版本开始引入
### 区域相关
|参数名称|支持版本|动态修改|参数含义|
|----------------------|----------|-------------------------|-------------|
|timezone | |支持动态修改 立即生效 |时区;缺省从系统中动态获取当前的时区设置|
|locale | |支持动态修改 立即生效 |系统区位信息及编码格式,缺省从系统中获取|
|charset | |支持动态修改 立即生效 |字符集编码,缺省从系统中获取|
#### timezone
- 说明:时区
- 默认值:从系统中动态获取当前的时区设置
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.1.0.0 版本开始引入
#### locale
- 说明:系统区位信息及编码格式
- 默认值:从系统中获取
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.1.0.0 版本开始引入
#### charset
- 说明:字符集编码
- 默认值:从系统中获取
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.1.0.0 版本开始引入
### 存储相关
|参数名称|支持版本|动态修改|参数含义|
|----------------------|----------|-------------------------|-------------|
|tempDir | |支持动态修改 立即生效 |指定所有运行过程中的临时文件生成的目录Linux 平台默认值为 /tmp|
|minimalTmpDirGB | |支持动态修改 立即生效 |tempDir 所指定的临时文件目录所需要保留的最小空间,单位 GB缺省值1|
#### tempDir
- 说明:指定所有运行过程中的临时文件生成的目录
- 默认值Linux 平台默认值为 /tmp
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.1.0.0 版本开始引入
#### minimalTmpDirGB
- 说明tempDir 所指定的临时文件目录所需要保留的最小空间
- 类型:浮点数
- 单位GB
- 默认值1
- 最小值0.001
- 最大值10000000
- 动态修改:不支持
- 支持版本:从 v3.1.0.0 版本开始引入
### 日志相关
|参数名称|支持版本|动态修改|参数含义|
|----------------------|----------|-------------------------|-------------|
|logDir | |不支持动态修改 |日志文件目录,运行日志将写入该目录,缺省值:/var/log/taos|
|minimalLogDirGB | |支持动态修改 立即生效 |日志文件夹所在磁盘可用空间大小小于该值时,停止写日志,单位 GB缺省值1|
|numOfLogLines | |支持动态修改 立即生效 |单个日志文件允许的最大行数缺省值10,000,000|
|asyncLog | |支持动态修改 立即生效 |日志写入模式0同步1异步缺省值1|
|logKeepDays | |支持动态修改 立即生效 |日志文件的最长保存时间单位缺省值0意味着无限保存日志文件不会被重命名也不会有新的日志文件滚动产生但日志文件的内容有可能会不断滚动取决于日志文件大小的设置当设置为大于 0 的值时,当日志文件大小达到设置的上限时会被重命名为 taoslogx.yyy其中 yyy 为日志文件最后修改的时间戳,并滚动产生新的日志文件|
|debugFlag | |支持动态修改 立即生效 |运行日志开关131输出错误和警告日志135输出错误、警告和调试日志143输出错误、警告、调试和跟踪日志默认值 131 或 135 (取决于不同模块)|
|tmrDebugFlag | |支持动态修改 立即生效 |定时器模块的日志开关,取值范围同上|
|uDebugFlag | |支持动态修改 立即生效 |共用功能模块的日志开关,取值范围同上|
|rpcDebugFlag | |支持动态修改 立即生效 |rpc 模块的日志开关,取值范围同上|
|jniDebugFlag | |支持动态修改 立即生效 |jni 模块的日志开关,取值范围同上|
|qDebugFlag | |支持动态修改 立即生效 |query 模块的日志开关,取值范围同上|
|cDebugFlag | |支持动态修改 立即生效 |客户端模块的日志开关,取值范围同上|
|simDebugFlag | |支持动态修改 立即生效 |内部参数,测试工具的日志开关,取值范围同上|
|tqClientDebugFlag|3.3.4.3 后|支持动态修改 立即生效 |客户端模块的日志开关,取值范围同上|
#### logDir
- 说明:日志文件目录,运行日志将写入该目录
- 类型:字符串
- 默认值:/var/log/taos
- 动态修改:不支持
- 支持版本:从 v3.1.0.0 版本开始引入
#### minimalLogDirGB
- 说明:日志文件夹所在磁盘可用空间大小小于该值时,停止写日志
- 类型:浮点数
- 单位GB
- 默认值1
- 最小值0.001
- 最大值10000000
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.1.0.0 版本开始引入
#### numOfLogLines
- 说明:单个日志文件允许的最大行数
- 类型:整数
- 默认值10,000,000
- 最小值1000
- 最大值2000000000
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.1.0.0 版本开始引入
#### asyncLog
- 说明:日志写入模式
- 类型整数0同步1异步
- 默认值1
- 最小值0
- 最大值1
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.1.0.0 版本开始引入
#### logKeepDays
- 说明日志文件的最长保存时间小于等于0意味着只有两个日志文件相互切换保存日志超过两个文件保存数量的日志会被删除当设置为大于 0 的值时,当日志文件大小达到设置的上限时会被重命名为 taosdlog.yyy其中 yyy 为日志文件最后修改的时间戳,并滚动产生新的日志文件
- 类型:整数
- 单位:天
- 默认值0
- 最小值:-365000
- 最大值365000
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.1.0.0 版本开始引入
#### debugFlag
- 说明:运行日志开关,该参数的设置会影响所有模块的开关,后设置的参数起效
- 类型:整数
- 取值范围131输出错误和警告日志135输出错误、警告和调试日志143输出错误、警告、调试和跟踪日志
- 默认值131 或 135 (取决于不同模块)
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.1.0.0 版本开始引入
#### tmrDebugFlag
- 说明:定时器模块的日志开关
- 类型:整数
- 取值范围:同上
- 默认值131
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.1.0.0 版本开始引入
#### uDebugFlag
- 说明:共用功能模块的日志开关
- 类型:整数
- 取值范围:同上
- 默认值131
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.1.0.0 版本开始引入
#### rpcDebugFlag
- 说明rpc 模块的日志开关
- 类型:整数
- 取值范围:同上
- 默认值131
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.1.0.0 版本开始引入
#### jniDebugFlag
- 说明jni 模块的日志开关
- 类型:整数
- 取值范围:同上
- 默认值131
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.1.0.0 版本开始引入
#### qDebugFlag
- 说明query 模块的日志开关
- 类型:整数
- 取值范围:同上
- 默认值131
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.1.0.0 版本开始引入
#### cDebugFlag
- 说明:客户端模块的日志开关
- 类型:整数
- 取值范围:同上
- 默认值131
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.1.0.0 版本开始引入
#### simDebugFlag
- 说明:测试工具的日志开关 `内部参数`
- 类型:整数
- 取值范围:同上
- 默认值131
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.1.0.0 版本开始引入
#### tqClientDebugFlag
- 说明tq 客户端模块的日志开关
- 类型:整数
- 取值范围:同上
- 默认值131
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.3.4.3 版本开始引入
### 调试相关
|参数名称|支持版本|动态修改|参数含义|
|----------------------|----------|-------------------------|-------------|
|crashReporting | |支持动态修改 立即生效 |是否上传 crash 到 telemetry0不上传1上传缺省值1|
|enableCoreFile | |支持动态修改 立即生效 |crash 时是否生成 core 文件0不生成1生成缺省值1|
|assert | |不支持动态修改 |断言控制开关缺省值0|
|configDir | |不支持动态修改 |配置文件所在目录|
|scriptDir | |不支持动态修改 |内部参数,测试用例的目录|
|randErrorChance |3.3.3.0 后|不支持动态修改 |内部参数,用于随机失败测试|
|randErrorDivisor |3.3.3.0 后|不支持动态修改 |内部参数,用于随机失败测试|
|randErrorScope |3.3.3.0 后|不支持动态修改 |内部参数,用于随机失败测试|
|safetyCheckLevel |3.3.3.0 后|不支持动态修改 |内部参数,用于随机失败测试|
|simdEnable |3.3.4.3 后|不支持动态修改 |内部参数,用于测试 SIMD 加速|
|AVX512Enable |3.3.4.3 后|不支持动态修改 |内部参数,用于测试 AVX512 加速|
|bypassFlag |3.3.4.5 后|支持动态修改 立即生效 |内部参数用于短路测试0正常写入1写入消息在 taos 客户端发送 RPC 消息前返回2写入消息在 taosd 服务端收到 RPC 消息后返回4写入消息在 taosd 服务端写入内存缓存前返回8写入消息在 taosd 服务端数据落盘前返回缺省值0|
#### crashReporting
- 说明:是否上传 crash 到 telemetry
- 类型整数0不上传1上传
- 默认值1
- 最小值0
- 最大值1
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.1.0.0 版本开始引入
#### enableCoreFile
- 说明crash 时是否生成 core 文件
- 类型整数0不生成1生成
- 默认值1
- 最小值0
- 最大值1
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.1.0.0 版本开始引入
#### assert
- 说明:断言控制开关
- 类型整数0关闭1开启
- 默认值0
- 最小值0
- 最大值1
- 动态修改:不支持
- 支持版本:从 v3.1.0.0 版本开始引入
#### configDir
- 说明:配置文件所在目录
- 类型:字符串
- 动态修改:不支持
- 支持版本:从 v3.1.0.0 版本开始引入
#### scriptDir
- 说明:测试工具的脚本目录 `内部参数`
- 类型:字符串
- 动态修改:不支持
- 支持版本:从 v3.3.3.0 版本开始引入
#### randErrorChance
- 说明:用于随机失败测试 `内部参数`
- 动态修改:不支持
- 支持版本:从 v3.3.3.0 版本开始引入
#### randErrorDivisor
- 说明:用于随机失败测试 `内部参数`
- 动态修改:不支持
- 支持版本:从 v3.3.3.0 版本开始引入
#### randErrorScope
- 说明:用于随机失败测试 `内部参数`
- 动态修改:不支持
- 支持版本:从 v3.3.3.0 版本开始引入
#### safetyCheckLevel
- 说明:用于随机失败测试 `内部参数`
- 动态修改:不支持
- 支持版本:从 v3.3.3.0 版本开始引入
#### simdEnable
- 说明:用于测试 SIMD 加速 `内部参数`
- 动态修改:不支持
- 支持版本:从 v3.3.4.3 版本开始引入
#### AVX512Enable
- 说明:用于测试 AVX512 加速 `内部参数`
- 动态修改:不支持
- 支持版本:从 v3.3.4.3 版本开始引入
#### bypassFlag
- 说明:内部参数,用于短路测试
- 类型:整数
- 取值范围0正常写入1写入消息在 taos 客户端发送 RPC 消息前返回2写入消息在 taosd 服务端收到 RPC 消息后返回4写入消息在 taosd 服务端写入内存缓存前返回8写入消息在 taosd 服务端数据落盘前返回
- 默认值0
- 动态修改:支持通过 SQL 修改,立即生效
- 支持版本:从 v3.3.4.5 版本开始引入
### SHELL 相关
|参数名称|支持版本|动态修改|参数含义|
|----------------------|----------|-------------------------|-------------|
|enableScience | |不支持动态修改 |是否开启科学计数法显示浮点数0不开启1开启缺省值1|
#### enableScience
- 说明:是否开启科学计数法显示浮点数
- 类型整数0不开启1开启
- 默认值1
- 最小值0
- 最大值1
- 动态修改:不支持
- 支持版本:从 v3.1.0.0 版本开始引入
## API

View File

@ -11,42 +11,109 @@ import Icinga2 from "./_icinga2.mdx"
import TCollector from "./_tcollector.mdx"
taosAdapter 是一个 TDengine 的配套工具,是 TDengine 集群和应用程序之间的桥梁和适配器。它提供了一种易于使用和高效的方式来直接从数据收集代理软件(如 Telegraf、StatsD、collectd 等)摄取数据。它还提供了 InfluxDB/OpenTSDB 兼容的数据摄取接口,允许 InfluxDB/OpenTSDB 应用程序无缝移植到 TDengine。
TDengine 的各语言连接器通过 WebSocket 接口与 TDengine 进行通信,因此必须安装 taosAdapter。
taosAdapter 提供以下功能:
- Websocket/RESTful 接口
- 兼容 InfluxDB v1 写接口
- 兼容 OpenTSDB JSON 和 telnet 格式写入
- 无缝连接到 Telegraf
- 无缝连接到 collectd
- 无缝连接到 StatsD
- 支持 Prometheus remote_read 和 remote_write
- 获取 table 所在的虚拟节点组VGroup的 VGroup ID
## taosAdapter 架构图
架构图如下:
![TDengine Database taosAdapter Architecture](taosAdapter-architecture.webp)
## taosAdapter 部署方法
## 功能列表
### 安装 taosAdapter
taosAdapter 提供了以下功能:
- WebSocket 接口:
支持通过 WebSocket 协议执行 SQL、无模式数据写入、参数绑定和数据订阅功能。
- 兼容 InfluxDB v1 写接口:
[https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/](https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/)
- 兼容 OpenTSDB JSON 和 telnet 格式写入:
- [http://opentsdb.net/docs/build/html/api_http/put.html](http://opentsdb.net/docs/build/html/api_http/put.html)
- [http://opentsdb.net/docs/build/html/api_telnet/put.html](http://opentsdb.net/docs/build/html/api_telnet/put.html)
- collectd 数据写入:
collectd 是一个系统统计收集守护程序,请访问 [https://collectd.org/](https://collectd.org/) 了解更多信息。
- StatsD 数据写入:
StatsD 是一个简单而强大的统计信息汇总的守护程序。请访问 [https://github.com/statsd/statsd](https://github.com/statsd/statsd) 了解更多信息。
- icinga2 OpenTSDB writer 数据写入:
icinga2 是一个收集检查结果指标和性能数据的软件。请访问 [https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer](https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer) 了解更多信息。
- TCollector 数据写入:
TCollector 是一个客户端进程,从本地收集器收集数据并将数据推送到 OpenTSDB。请访问 [http://opentsdb.net/docs/build/html/user_guide/utilities/tcollector.html](http://opentsdb.net/docs/build/html/user_guide/utilities/tcollector.html) 了解更多信息。
- node_exporter 采集写入:
node_export 是一个机器指标的导出器。请访问 [https://github.com/prometheus/node_exporter](https://github.com/prometheus/node_exporter) 了解更多信息。
- Prometheus remote_read 和 remote_write
remote_read 和 remote_write 是 Prometheus 数据读写分离的集群方案。请访问[https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis](https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis) 了解更多信息。
- RESTful 接口:
[RESTful API](../../connector/rest-api)
### WebSocket 接口
各语言连接器通过 taosAdapter 的 WebSocket 接口,能够实现 SQL 执行、无模式写入、参数绑定和数据订阅功能。参考[开发指南](../../../develop/connect/#websocket-连接)。
### 兼容 InfluxDB v1 写接口
您可以使用任何支持 HTTP 协议的客户端访问 Restful 接口地址 `http://<fqdn>:6041/influxdb/v1/write` 来写入 InfluxDB 兼容格式的数据到 TDengine。
支持 InfluxDB 参数如下:
- `db` 指定 TDengine 使用的数据库名
- `precision` TDengine 使用的时间精度
- `u` TDengine 用户名
- `p` TDengine 密码
- `ttl` 自动创建的子表生命周期,以子表的第一条数据的 TTL 参数为准,不可更新。更多信息请参考[创建表文档](../../taos-sql/table/#创建表)的 TTL 参数。
注意: 目前不支持 InfluxDB 的 token 验证方式,仅支持 Basic 验证和查询参数验证。
示例:
```shell
curl --request POST http://127.0.0.1:6041/influxdb/v1/write?db=test --user "root:taosdata" --data-binary "measurement,host=host1 field1=2i,field2=2.0 1577836800000000000"
```
### 兼容 OpenTSDB JSON 和 telnet 格式写入
您可以使用任何支持 HTTP 协议的客户端访问 Restful 接口地址 `http://<fqdn>:6041/<APIEndPoint>` 来写入 OpenTSDB 兼容格式的数据到 TDengine。EndPoint 如下:
```text
/opentsdb/v1/put/json/<db>
/opentsdb/v1/put/telnet/<db>
```
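以下为写入 OpenTSDB telnet 格式数据的 curl 示例(示意,数据库名与指标数据均为假设):

```shell
curl --request POST http://127.0.0.1:6041/opentsdb/v1/put/telnet/test --user "root:taosdata" --data "sys.cpu.usage 1577836800 20.5 host=host1"
```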
### collectd 数据写入
<CollectD />
### StatsD 数据写入
<StatsD />
### icinga2 OpenTSDB writer 数据写入
<Icinga2 />
### TCollector 数据写入
<TCollector />
### node_exporter 采集写入
node_exporter 是 Prometheus 使用的、由 \*NIX 内核暴露的硬件和操作系统指标的导出器。使用方法如下:
- 启用 taosAdapter 的配置 node_exporter.enable
- 设置 node_exporter 的相关配置
- 重新启动 taosAdapter
### Prometheus remote_read 和 remote_write
<Prometheus />
### RESTful 接口
您可以使用任何支持 HTTP 协议的客户端通过访问 RESTful 接口地址 `http://<fqdn>:6041/rest/sql` 来写入数据到 TDengine 或从 TDengine 中查询数据。细节请参考[REST API 文档](../../connector/rest-api/)。
## 安装
taosAdapter 是 TDengine 服务端软件的一部分,如果您使用 TDengine server 您不需要任何额外的步骤来安装 taosAdapter。您可以从[涛思数据官方网站](https://docs.taosdata.com/releases/tdengine/)下载 TDengine server 安装包。如果需要将 taosAdapter 分离部署在 TDengine server 之外的服务器上,则应该在该服务器上安装完整的 TDengine 来安装 taosAdapter。如果您需要使用源代码编译生成 taosAdapter您可以参考[构建 taosAdapter](https://github.com/taosdata/taosadapter/blob/3.0/BUILD-CN.md)文档。
### 启动/停止 taosAdapter
安装完成后使用命令 `systemctl start taosadapter` 可以启动 taosAdapter 服务。
在 Linux 系统上 taosAdapter 服务默认由 systemd 管理。使用命令 `systemctl start taosadapter` 可以启动 taosAdapter 服务。使用命令 `systemctl stop taosadapter` 可以停止 taosAdapter 服务。
### 移除 taosAdapter
使用命令 rmtaos 可以移除包括 taosAdapter 在内的 TDengine server 软件。
### 升级 taosAdapter
taosAdapter 和 TDengine server 需要使用相同版本。请通过升级 TDengine server 来升级 taosAdapter。
与 taosd 分离部署的 taosAdapter 必须通过升级其所在服务器的 TDengine server 才能得到升级。
## taosAdapter 参数列表
## 配置
taosAdapter 支持通过命令行参数、环境变量和配置文件来进行配置。默认配置文件是 /etc/taos/taosadapter.toml。
@ -75,6 +142,7 @@ Usage of taosAdapter:
--instanceId int instance ID. Env "TAOS_ADAPTER_INSTANCE_ID" (default 32)
--log.compress whether to compress old log. Env "TAOS_ADAPTER_LOG_COMPRESS"
--log.enableRecordHttpSql whether to record http sql. Env "TAOS_ADAPTER_LOG_ENABLE_RECORD_HTTP_SQL"
--log.keepDays uint log retention days, must be a positive integer. Env "TAOS_ADAPTER_LOG_KEEP_DAYS" (default 30)
--log.level string log level (trace debug info warning error). Env "TAOS_ADAPTER_LOG_LEVEL" (default "info")
--log.path string log path. Env "TAOS_ADAPTER_LOG_PATH" (default "/var/log/taos")
--log.reservedDiskSize string reserved disk size for log dir (KB MB GB), must be a positive integer. Env "TAOS_ADAPTER_LOG_RESERVED_DISK_SIZE" (default "1GB")
@ -85,6 +153,8 @@ Usage of taosAdapter:
--log.sqlRotationSize string record sql log rotation size(KB MB GB), must be a positive integer. Env "TAOS_ADAPTER_LOG_SQL_ROTATION_SIZE" (default "1GB")
--log.sqlRotationTime duration record sql log rotation time. Env "TAOS_ADAPTER_LOG_SQL_ROTATION_TIME" (default 24h0m0s)
--logLevel string log level (trace debug info warning error). Env "TAOS_ADAPTER_LOG_LEVEL" (default "info")
--maxAsyncConcurrentLimit int The maximum number of concurrent calls allowed for the C asynchronous method. 0 means use CPU core count. Env "TAOS_ADAPTER_MAX_ASYNC_CONCURRENT_LIMIT"
--maxSyncConcurrentLimit int The maximum number of concurrent calls allowed for the C synchronized method. 0 means use CPU core count. Env "TAOS_ADAPTER_MAX_SYNC_CONCURRENT_LIMIT"
--monitor.collectDuration duration Set monitor duration. Env "TAOS_ADAPTER_MONITOR_COLLECT_DURATION" (default 3s)
--monitor.disable Whether to disable monitoring. Env "TAOS_ADAPTER_MONITOR_DISABLE" (default true)
--monitor.identity string The identity of the current instance, or 'hostname:port' if it is empty. Env "TAOS_ADAPTER_MONITOR_IDENTITY"
@ -126,6 +196,9 @@ Usage of taosAdapter:
--prometheus.enable enable prometheus. Env "TAOS_ADAPTER_PROMETHEUS_ENABLE" (default true)
--restfulRowLimit int restful returns the maximum number of rows (-1 means no limit). Env "TAOS_ADAPTER_RESTFUL_ROW_LIMIT" (default -1)
--smlAutoCreateDB Whether to automatically create db when writing with schemaless. Env "TAOS_ADAPTER_SML_AUTO_CREATE_DB"
--ssl.certFile string ssl cert file path. Env "TAOS_ADAPTER_SSL_CERT_FILE"
--ssl.enable enable ssl. Env "TAOS_ADAPTER_SSL_ENABLE"
--ssl.keyFile string ssl key file path. Env "TAOS_ADAPTER_SSL_KEY_FILE"
--statsd.allowPendingMessages int statsd allow pending messages. Env "TAOS_ADAPTER_STATSD_ALLOW_PENDING_MESSAGES" (default 50000)
--statsd.db string statsd db name. Env "TAOS_ADAPTER_STATSD_DB" (default "statsd")
--statsd.deleteCounters statsd delete counter cache after gather. Env "TAOS_ADAPTER_STATSD_DELETE_COUNTERS" (default true)
@ -152,27 +225,44 @@ Usage of taosAdapter:
-V, --version Print the version and exit
```
备注:
使用浏览器进行接口调用请根据实际情况设置如下跨源资源共享CORS参数
示例配置文件参见 [example/config/taosadapter.toml](https://github.com/taosdata/taosadapter/blob/3.0/example/config/taosadapter.toml)。
```text
AllowAllOrigins
AllowOrigins
AllowHeaders
ExposeHeaders
AllowCredentials
AllowWebSockets
```
### 跨域配置
使用浏览器进行接口调用时请根据实际情况设置如下跨域CORS参数
- **`cors.allowAllOrigins`**:是否允许所有来源访问,默认为 `true`
- **`cors.allowOrigins`**:允许跨域访问的来源列表,支持多个来源,以逗号分隔。
- **`cors.allowHeaders`**:允许跨域访问的请求头列表,支持多个请求头,以逗号分隔。
- **`cors.exposeHeaders`**:允许跨域访问的响应头列表,支持多个响应头,以逗号分隔。
- **`cors.allowCredentials`**:是否允许跨域请求包含用户凭证,如 cookies、HTTP 认证信息或客户端 SSL 证书。
- **`cors.allowWebSockets`**:是否允许 WebSockets 连接。
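上述参数在 taosadapter.toml 中的配置形式大致如下(示意,段名与取值仅供参考,请以示例配置文件为准):

```toml
[cors]
allowAllOrigins = false
allowOrigins = ["http://example.com"]
allowHeaders = ["Authorization", "Content-Type"]
allowCredentials = true
allowWebSockets = true
```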
如果不通过浏览器进行接口调用,无需关心这几项配置。
以上配置对以下接口生效:
* RESTful 接口请求
* WebSocket 接口请求
* InfluxDB v1 写接口
* OpenTSDB HTTP 写入接口
关于 CORS 协议细节请参考:[https://www.w3.org/wiki/CORS_Enabled](https://www.w3.org/wiki/CORS_Enabled) 或 [https://developer.mozilla.org/zh-CN/docs/Web/HTTP/CORS](https://developer.mozilla.org/zh-CN/docs/Web/HTTP/CORS)。
示例配置文件参见 [example/config/taosadapter.toml](https://github.com/taosdata/taosadapter/blob/3.0/example/config/taosadapter.toml)。
### 连接池配置
### 连接池参数说明
taosAdapter 使用连接池管理与 TDengine 的连接,以提高并发性能和资源利用率。连接池配置对以下接口生效,且以下接口共享一个连接池:
在使用 RESTful 接口请求时,系统将通过连接池管理 TDengine 连接。连接池可通过以下参数进行配置:
* RESTful 接口请求
* InfluxDB v1 写接口
* OpenTSDB JSON 和 telnet 格式写入
* Telegraf 数据写入
* collectd 数据写入
* StatsD 数据写入
* 采集 node_exporter 数据写入
* Prometheus remote_read 和 remote_write
连接池的配置参数如下:
- **`pool.maxConnect`**:连接池允许的最大连接数,默认值为 2 倍 CPU 核心数。建议保持默认设置。
- **`pool.maxIdle`**:连接池中允许的最大空闲连接数,默认与 `pool.maxConnect` 相同。建议保持默认设置。
@ -180,205 +270,167 @@ AllowWebSockets
- **`pool.waitTimeout`**:从连接池获取连接的超时时间,默认设置为 60 秒。如果在超时时间内未能获取连接,将返回 HTTP 状态码 503。该参数从版本 3.3.3.0 开始提供。
- **`pool.maxWait`**:连接池中等待获取连接的请求数上限,默认值为 0表示不限制。当排队请求数超过此值时新的请求将返回 HTTP 状态码 503。该参数从版本 3.3.3.0 开始提供。
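上述连接池参数在 taosadapter.toml 中的配置形式大致如下(示意,取值仅为示例):

```toml
[pool]
maxConnect = 32   # 最大连接数,默认为 2 倍 CPU 核心数
maxIdle = 32      # 最大空闲连接数,默认与 maxConnect 相同
waitTimeout = 60  # 获取连接的超时时间,单位秒
maxWait = 0       # 等待获取连接的请求数上限0 表示不限制
```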
## 功能列表
### HTTP 返回码配置
- RESTful 接口
[RESTful API](../../connector/rest-api)
- 兼容 InfluxDB v1 写接口
[https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/](https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/)
- 兼容 OpenTSDB JSON 和 telnet 格式写入
- [http://opentsdb.net/docs/build/html/api_http/put.html](http://opentsdb.net/docs/build/html/api_http/put.html)
- [http://opentsdb.net/docs/build/html/api_telnet/put.html](http://opentsdb.net/docs/build/html/api_telnet/put.html)
- 与 collectd 无缝连接。
collectd 是一个系统统计收集守护程序,请访问 [https://collectd.org/](https://collectd.org/) 了解更多信息。
- Seamless connection with StatsD。
StatsD 是一个简单而强大的统计信息汇总的守护程序。请访问 [https://github.com/statsd/statsd](https://github.com/statsd/statsd) 了解更多信息。
- 与 icinga2 的无缝连接。
icinga2 是一个收集检查结果指标和性能数据的软件。请访问 [https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer](https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer) 了解更多信息。
- 与 tcollector 无缝连接。
TCollector是一个客户端进程从本地收集器收集数据并将数据推送到 OpenTSDB。请访问 [http://opentsdb.net/docs/build/html/user_guide/utilities/tcollector.html](http://opentsdb.net/docs/build/html/user_guide/utilities/tcollector.html) 了解更多信息。
- 无缝连接 node_exporter。
node_export 是一个机器指标的导出器。请访问 [https://github.com/prometheus/node_exporter](https://github.com/prometheus/node_exporter) 了解更多信息。
- 支持 Prometheus remote_read 和 remote_write。
remote_read 和 remote_write 是 Prometheus 数据读写分离的集群方案。请访问[https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis](https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis) 了解更多信息。
- 获取 table 所在的虚拟节点组VGroup的 VGroup ID。
taosAdapter 通过参数 `httpCodeServerError` 来控制当底层 C 接口返回错误时,是否在 RESTful 接口请求中返回非 200 的 HTTP 状态码。当设置为 `true`taosAdapter 会根据 C 接口返回的错误码映射为相应的 HTTP 状态码。具体映射规则请参考 [HTTP 响应码](../../connector/rest-api/#http-响应码)。
## 接口
该配置只会影响 **RESTful 接口**
### TDengine RESTful 接口
**参数说明**
您可以使用任何支持 http 协议的客户端通过访问 RESTful 接口地址 `http://<fqdn>:6041/rest/sql` 来写入数据到 TDengine 或从 TDengine 中查询数据。细节请参考[REST API 文档](../../connector/rest-api/)。
- **`httpCodeServerError`**
- **设置为 `true` 时**:根据 C 接口返回的错误码映射为相应的 HTTP 状态码。
- **设置为 `false` 时**:无论 C 接口返回什么错误,始终返回 HTTP 状态码 `200`(默认值)。
### InfluxDB
您可以使用任何支持 http 协议的客户端访问 Restful 接口地址 `http://<fqdn>:6041/influxdb/v1/write` 来写入 InfluxDB 兼容格式的数据到 TDengine。
### 内存限制配置
支持 InfluxDB 参数如下:
taosAdapter 将监测自身运行过程中内存使用率并通过两个阈值进行调节。有效值范围为 1 到 100 的整数,单位为系统物理内存的百分比。
- `db` 指定 TDengine 使用的数据库名
- `precision` TDengine 使用的时间精度
- `u` TDengine 用户名
- `p` TDengine 密码
- `ttl` 自动创建的子表生命周期,以子表的第一条数据的 TTL 参数为准,不可更新。更多信息请参考[创建表文档](../../taos-sql/table/#创建表)的 TTL 参数。
该配置只会影响以下接口:
注意: 目前不支持 InfluxDB 的 token 验证方式,仅支持 Basic 验证和查询参数验证。
示例: curl --request POST http://127.0.0.1:6041/influxdb/v1/write?db=test --user "root:taosdata" --data-binary "measurement,host=host1 field1=2i,field2=2.0 1577836800000000000"
### OpenTSDB
* RESTful 接口请求
* InfluxDB v1 写接口
* OpenTSDB HTTP 写入接口
* Prometheus remote_read 和 remote_write 接口
您可以使用任何支持 http 协议的客户端访问 Restful 接口地址 `http://<fqdn>:6041/<APIEndPoint>` 来写入 OpenTSDB 兼容格式的数据到 TDengine。EndPoint 如下:
**参数说明**
```text
/opentsdb/v1/put/json/<db>
/opentsdb/v1/put/telnet/<db>
```
- **`pauseQueryMemoryThreshold`**
- 当内存使用超过此阈值时taosAdapter 将停止处理查询请求。
- 默认值:`70`(即 70% 的系统物理内存)。
- **`pauseAllMemoryThreshold`**
- 当内存使用超过此阈值时taosAdapter 将停止处理所有请求(包括写入和查询)。
- 默认值:`80`(即 80% 的系统物理内存)。
### collectd
当内存使用回落到阈值以下时taosAdapter 会自动恢复相应功能。
<CollectD />
**HTTP 返回内容:**
### StatsD
- **超过 `pauseQueryMemoryThreshold` 时**
- HTTP 状态码:`503`
- 返回内容:`"query memory exceeds threshold"`
- **超过 `pauseAllMemoryThreshold` 时**
- HTTP 状态码:`503`
- 返回内容:`"memory exceeds threshold"`
<StatsD />
**状态检查接口:**
### icinga2 OpenTSDB writer
可以通过以下接口检查 taosAdapter 的内存状态:
- **正常状态**`http://<fqdn>:6041/-/ping` 返回 `code 200`
- **内存超过阈值**
- 如果内存超过 `pauseAllMemoryThreshold`,返回 `code 503`
- 如果内存超过 `pauseQueryMemoryThreshold`,且请求参数包含 `action=query`,返回 `code 503`
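例如(示意),可用 curl 检查内存状态:

```shell
# 内存超过 pauseQueryMemoryThreshold 或 pauseAllMemoryThreshold 时返回 503
curl -i "http://127.0.0.1:6041/-/ping?action=query"
```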
<Icinga2 />
**相关配置参数:**
### TCollector
<TCollector />
### node_exporter
Prometheus 使用的由 \*NIX 内核暴露的硬件和操作系统指标的输出器
- 启用 taosAdapter 的配置 node_exporter.enable
- 设置 node_exporter 的相关配置
- 重新启动 taosAdapter
### prometheus
<Prometheus />
### 获取 table 的 VGroup ID
可以通过 POST 请求 HTTP 接口 `http://<fqdn>:<port>/rest/sql/<db>/vgid` 获取表所在的 VGroup ID请求 body 为多个表名组成的 JSON 数组。
样例:获取数据库为 power表名为 d_bind_1 和 d_bind_2 的 VGroup ID
```shell
curl --location 'http://127.0.0.1:6041/rest/sql/power/vgid' \
--user 'root:taosdata' \
--data '["d_bind_1","d_bind_2"]'
```
响应:
```json
{"code":0,"vgIDs":[153,152]}
```
## 内存使用优化方法
taosAdapter 将监测自身运行过程中内存使用率并通过两个阈值进行调节。有效值范围为 -1 到 100 的整数,单位为系统物理内存的百分比。
- pauseQueryMemoryThreshold
- pauseAllMemoryThreshold
当超过 pauseQueryMemoryThreshold 阈值时时停止处理查询请求。
http 返回内容:
- code 503
- body "query memory exceeds threshold"
当超过 pauseAllMemoryThreshold 阈值时停止处理所有写入和查询请求。
http 返回内容:
- code 503
- body "memory exceeds threshold"
当内存回落到阈值之下时恢复对应功能。
状态检查接口 `http://<fqdn>:6041/-/ping`
- 正常返回 `code 200`
- 无参数 如果内存超过 pauseAllMemoryThreshold 将返回 `code 503`
- 请求参数 `action=query` 如果内存超过 pauseQueryMemoryThreshold 或 pauseAllMemoryThreshold 将返回 `code 503`
对应配置参数
```text
monitor.collectDuration 监测间隔 环境变量 "TAOS_MONITOR_COLLECT_DURATION" (默认值 3s)
monitor.incgroup 是否是cgroup中运行(容器中运行设置为 true) 环境变量 "TAOS_MONITOR_INCGROUP"
monitor.pauseAllMemoryThreshold 不再进行插入和查询的内存阈值 环境变量 "TAOS_MONITOR_PAUSE_ALL_MEMORY_THRESHOLD" (默认值 80)
monitor.pauseQueryMemoryThreshold 不再进行查询的内存阈值 环境变量 "TAOS_MONITOR_PAUSE_QUERY_MEMORY_THRESHOLD" (默认值 70)
```
- **`monitor.collectDuration`**:内存监控间隔,默认值为 `3s`,环境变量为 `TAOS_MONITOR_COLLECT_DURATION`
- **`monitor.incgroup`**:是否在容器中运行(容器中运行设置为 `true`),默认值为 `false`,环境变量为 `TAOS_MONITOR_INCGROUP`
- **`monitor.pauseQueryMemoryThreshold`**:查询请求暂停的内存阈值(百分比),默认值为 `70`,环境变量为 `TAOS_MONITOR_PAUSE_QUERY_MEMORY_THRESHOLD`
- **`monitor.pauseAllMemoryThreshold`**:查询和写入请求暂停的内存阈值(百分比),默认值为 `80`,环境变量为 `TAOS_MONITOR_PAUSE_ALL_MEMORY_THRESHOLD`
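对应的 taosadapter.toml 配置片段大致如下(示意):

```toml
[monitor]
collectDuration = "3s"          # 内存监控间隔
incgroup = false                # 是否运行在容器cgroup
pauseQueryMemoryThreshold = 70  # 暂停查询的内存阈值(百分比)
pauseAllMemoryThreshold = 80    # 暂停查询和写入的内存阈值(百分比)
```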
您可以根据具体项目应用场景和运营策略进行相应调整,并建议使用运营监控软件及时进行系统内存状态监控。负载均衡器也可以通过这个接口检查 taosAdapter 运行状态。
## taosAdapter 监控指标
### 无模式写入创建 DB 配置
taosAdapter 采集 REST/WebSocket 相关请求的监控指标。将监控指标上报给 taosKeeper这些监控指标会被 taosKeeper 写入监控数据库,默认是 `log` 库,可以在 taoskeeper 配置文件中修改。以下是这些监控指标的详细介绍。
**3.0.4.0 版本** 开始taosAdapter 提供了参数 `smlAutoCreateDB`,用于控制在 schemaless 协议写入时是否自动创建数据库DB
#### adapter\_requests 表
`smlAutoCreateDB` 参数只会影响以下接口:
`adapter_requests` 记录 taosadapter 监控数据。
- InfluxDB v1 写接口
- OpenTSDB JSON 和 telnet 格式写入
- Telegraf 数据写入
- collectd 数据写入
- StatsD 数据写入
- node_exporter 数据写入
| field | type | is\_tag | comment |
| :----------------- | :----------- | :------ | :---------------------------------- |
| ts | TIMESTAMP | | timestamp |
| total | INT UNSIGNED | | 总请求数 |
| query | INT UNSIGNED | | 查询请求数 |
| write | INT UNSIGNED | | 写入请求数 |
| other | INT UNSIGNED | | 其他请求数 |
| in\_process | INT UNSIGNED | | 正在处理请求数 |
| success | INT UNSIGNED | | 成功请求数 |
| fail | INT UNSIGNED | | 失败请求数 |
| query\_success | INT UNSIGNED | | 查询成功请求数 |
| query\_fail | INT UNSIGNED | | 查询失败请求数 |
| write\_success | INT UNSIGNED | | 写入成功请求数 |
| write\_fail | INT UNSIGNED | | 写入失败请求数 |
| other\_success | INT UNSIGNED | | 其他成功请求数 |
| other\_fail | INT UNSIGNED | | 其他失败请求数 |
| query\_in\_process | INT UNSIGNED | | 正在处理查询请求数 |
| write\_in\_process | INT UNSIGNED | | 正在处理写入请求数 |
| endpoint | VARCHAR | | 请求端点 |
**参数说明**
- **`smlAutoCreateDB`**
- **设置为 `true` 时**:在 schemaless 协议写入时如果目标数据库不存在taosAdapter 会自动创建该数据库。
- **设置为 `false` 时**:用户需要手动创建数据库,否则写入会失败(默认值)。
### 结果返回条数配置
taosAdapter 提供了参数 `restfulRowLimit`,用于控制 HTTP 接口返回的结果条数。
`restfulRowLimit` 参数只会影响以下接口的返回结果:
- RESTful 接口
- Prometheus remote_read 接口
**参数说明**
- **`restfulRowLimit`**
- **设置为正整数时**:接口返回的结果条数将不超过该值。
- **设置为 `-1` 时**:接口返回的结果条数无限制(默认值)。
### 日志配置
1. 可以通过设置 --log.level 参数或者环境变量 TAOS_ADAPTER_LOG_LEVEL 来设置 taosAdapter 日志输出详细程度。有效值包括panic、fatal、error、warn、warning、info、debug 以及 trace。
2. 从 **3.3.5.0 版本** 开始taosAdapter 支持通过 HTTP 接口动态修改日志级别。用户可以通过发送 HTTP PUT 请求到 /config 接口,动态调整日志级别。该接口的验证方式与 /rest/sql 接口相同,请求体中需传入 JSON 格式的配置项键值对。
以下是通过 curl 命令将日志级别设置为 debug 的示例:
```shell
curl --location --request PUT 'http://127.0.0.1:6041/config' \
-u root:taosdata \
--data '{"log.level": "debug"}'
```
## 服务管理
### 启动/停止 taosAdapter
在 Linux 系统上 taosAdapter 服务默认由 systemd 管理。使用命令 `systemctl start taosadapter` 可以启动 taosAdapter 服务。使用命令 `systemctl stop taosadapter` 可以停止 taosAdapter 服务。使用命令 `systemctl status taosadapter` 来检查 taosAdapter 运行状态。
### 升级 taosAdapter
taosAdapter 和 TDengine server 需要使用相同版本。请通过升级 TDengine server 来升级 taosAdapter。
与 taosd 分离部署的 taosAdapter 必须通过升级其所在服务器的 TDengine server 才能得到升级。
### 移除 taosAdapter
使用命令 rmtaos 可以移除包括 taosAdapter 在内的 TDengine server 软件。
## 监控指标
taosAdapter 目前仅采集 RESTful/WebSocket 相关请求的监控指标,其他接口暂无监控指标。
taosAdapter 将监控指标上报给 taosKeeper这些监控指标会被 taosKeeper 写入监控数据库,默认是 `log` 库,可以在 taoskeeper 配置文件中修改。以下是这些监控指标的详细介绍。
`adapter_requests` 表记录 taosAdapter 监控数据,字段如下:
| field | type | is\_tag | comment |
|:-------------------|:-------------|:--------|:----------------------------|
| ts | TIMESTAMP | | timestamp |
| total | INT UNSIGNED | | 总请求数 |
| query | INT UNSIGNED | | 查询请求数 |
| write | INT UNSIGNED | | 写入请求数 |
| other | INT UNSIGNED | | 其他请求数 |
| in\_process | INT UNSIGNED | | 正在处理请求数 |
| success | INT UNSIGNED | | 成功请求数 |
| fail | INT UNSIGNED | | 失败请求数 |
| query\_success | INT UNSIGNED | | 查询成功请求数 |
| query\_fail | INT UNSIGNED | | 查询失败请求数 |
| write\_success | INT UNSIGNED | | 写入成功请求数 |
| write\_fail | INT UNSIGNED | | 写入失败请求数 |
| other\_success | INT UNSIGNED | | 其他成功请求数 |
| other\_fail | INT UNSIGNED | | 其他失败请求数 |
| query\_in\_process | INT UNSIGNED | | 正在处理查询请求数 |
| write\_in\_process | INT UNSIGNED | | 正在处理写入请求数 |
| endpoint | VARCHAR | | 请求端点 |
| req\_type | NCHAR | TAG | 请求类型0 为 REST1 为 WebSocket |
## 结果返回条数限制
## httpd 升级为 taosAdapter 的变化
taosAdapter 通过参数 `restfulRowLimit` 来控制结果的返回条数,-1 代表无限制,默认无限制。
在 TDengine server 2.2.x.x 或更早期版本中taosd 进程包含一个内嵌的 http 服务httpd。如前面所述taosAdapter 是一个使用 systemd 管理的独立软件,拥有自己的进程。并且两者有一些配置参数和行为是不同的,请见下表:
该参数控制以下接口返回
- `http://<fqdn>:6041/rest/sql`
- `http://<fqdn>:6041/prometheus/v1/remote_read/:db`
## 配置 http 返回码
taosAdapter 通过参数 `httpCodeServerError` 来设置当 C 接口返回错误时是否返回非 200 的 http 状态码。当设置为 true 时将根据 C 返回的错误码返回不同 http 状态码。具体见 [HTTP 响应码](../../connector/rest-api/#http-响应码)。
## 配置 schemaless 写入是否自动创建 DB
taosAdapter 从 3.0.4.0 版本开始,提供参数 `smlAutoCreateDB` 来控制在 schemaless 协议写入时是否自动创建 DB。默认值为 false 不自动创建 DB需要用户手动创建 DB 后进行 schemaless 写入。
## 故障解决
您可以通过命令 `systemctl status taosadapter` 来检查 taosAdapter 运行状态。
您也可以通过设置 --logLevel 参数或者环境变量 TAOS_ADAPTER_LOG_LEVEL 来调节 taosAdapter 日志输出详细程度。有效值包括: panic、fatal、error、warn、warning、info、debug 以及 trace。
## 如何从旧版本 TDengine 迁移到 taosAdapter
在 TDengine server 2.2.x.x 或更早期版本中taosd 进程包含一个内嵌的 http 服务。如前面所述taosAdapter 是一个使用 systemd 管理的独立软件,拥有自己的进程。并且两者有一些配置参数和行为是不同的,请见下表:
| **#** | **embedded httpd** | **taosAdapter** | **comment** |
| ----- | ------------------- | ------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------ |
| 1 | httpEnableRecordSql | --logLevel=debug | |
| 2 | httpMaxThreads | n/a | taosAdapter 自动管理线程池,无需此参数 |
| **#** | **embedded httpd** | **taosAdapter** | **comment** |
|-------|---------------------|-------------------------------|------------------------------------------------------------------------------------------------|
| 1 | httpEnableRecordSql | --logLevel=debug | |
| 2 | httpMaxThreads | n/a | taosAdapter 自动管理线程池,无需此参数 |
| 3 | telegrafUseFieldNum | 请参考 taosAdapter telegraf 配置方法 | |
| 4 | restfulRowLimit | restfulRowLimit | 内嵌 httpd 默认输出 10240 行数据,最大允许值为 102400。taosAdapter 也提供 restfulRowLimit 但是默认不做限制。您可以根据实际场景需求进行配置 |
| 5 | httpDebugFlag | 不适用 | httpdDebugFlag 对 taosAdapter 不起作用 |
| 6 | httpDBNameMandatory | 不适用 | taosAdapter 要求 URL 中必须指定数据库名 |

Binary file not shown.


View File

@ -60,6 +60,7 @@ TDinsight 仪表盘旨在提供 TDengine 相关资源的使用情况和状态,
- **Databases** - 数据库个数。
- **Connections** - 当前连接个数。
- **DNodes/MNodes/VGroups/VNodes**:每种资源的总数和存活数。
- **Classified Connection Counts**:当前活跃连接数,按用户、应用和 ip 分类。
- **DNodes/MNodes/VGroups/VNodes Alive Percent**:每种资源的存活数/总数的比例启用告警规则并在资源存活率1 分钟内平均健康资源比例)不足 100%时触发。
- **Measuring Points Used**:启用告警规则的测点数用量(社区版无数据,默认情况下是健康的)。

View File

@ -1,85 +1,85 @@
### 配置 taosAdapter
#### 配置 taosAdapter
配置 taosAdapter 接收 collectd 数据的方法:
- 在 taosAdapter 配置文件(默认位置为 /etc/taos/taosadapter.toml中使能配置项
```
...
[opentsdb_telnet]
enable = true
maxTCPConnections = 250
tcpKeepAlive = false
dbs = ["opentsdb_telnet", "collectd", "icinga2", "tcollector"]
ports = [6046, 6047, 6048, 6049]
user = "root"
password = "taosdata"
...
```
```toml
...
[opentsdb_telnet]
enable = true
maxTCPConnections = 250
tcpKeepAlive = false
dbs = ["opentsdb_telnet", "collectd", "icinga2", "tcollector"]
ports = [6046, 6047, 6048, 6049]
user = "root"
password = "taosdata"
...
```
其中 taosAdapter 默认写入的数据库名称为 `collectd`,也可以修改 taosAdapter 配置文件 dbs 项来指定不同的名称。user 和 password 填写实际 TDengine 配置的值。修改过配置文件 taosAdapter 需重新启动。
- 也可以使用 taosAdapter 命令行参数或设置环境变量启动的方式,使能 taosAdapter 接收 collectd 数据功能,具体细节请参考 taosAdapter 的参考手册
- 使用 taosAdapter 命令行参数或设置环境变量启动的方式,使能 taosAdapter 接收 collectd 数据功能,具体细节请参考 taosAdapter 的参考手册。
#### 配置 collectd
### 配置 collectd
collectd 使用插件机制可以以多种形式将采集到的监控数据写入到不同的数据存储软件。TDengine 支持直接采集插件和 write_tsdb 插件。
#### 配置接收直接采集插件数据
1. **配置直接采集插件**
修改 collectd 配置文件(默认位置 /etc/collectd/collectd.conf相关配置项。
```text
LoadPlugin network
<Plugin network>
Server "<taosAdapter's host>" "<port for collectd direct>"
</Plugin>
```
```xml
LoadPlugin network
<Plugin network>
Server "<taosAdapter's host>" "<port for collectd direct>"
</Plugin>
```
其中 \<taosAdapter's host> 填写运行 taosAdapter 的服务器域名或 IP 地址。\<port for collectd direct> 填写 taosAdapter 用于接收 collectd 数据的端口(默认为 6045
示例如下:
```text
LoadPlugin network
<Plugin network>
Server "127.0.0.1" "6045"
</Plugin>
```
```xml
LoadPlugin network
<Plugin network>
Server "127.0.0.1" "6045"
</Plugin>
```
#### 配置 write_tsdb 插件数据
2. **配置 write_tsdb 插件**
修改 collectd 配置文件(默认位置 /etc/collectd/collectd.conf相关配置项。
```text
LoadPlugin write_tsdb
<Plugin write_tsdb>
<Node>
Host "<taosAdapter's host>"
Port "<port for collectd write_tsdb plugin>"
...
</Node>
</Plugin>
```
```xml
LoadPlugin write_tsdb
<Plugin write_tsdb>
<Node>
Host "<taosAdapter's host>"
Port "<port for collectd write_tsdb plugin>"
...
</Node>
</Plugin>
```
其中 \<taosAdapter's host> 填写运行 taosAdapter 的服务器域名或 IP 地址。\<port for collectd write_tsdb plugin> 填写 taosAdapter 用于接收 collectd write_tsdb 插件的数据(默认为 6047
```text
LoadPlugin write_tsdb
<Plugin write_tsdb>
<Node>
Host "127.0.0.1"
Port "6047"
HostTags "status=production"
StoreRates false
AlwaysAppendDS false
</Node>
</Plugin>
```
```xml
LoadPlugin write_tsdb
<Plugin write_tsdb>
<Node>
Host "127.0.0.1"
Port "6047"
HostTags "status=production"
StoreRates false
AlwaysAppendDS false
</Node>
</Plugin>
```
然后重启 collectd
```
systemctl restart collectd
```
```shell
systemctl restart collectd
```

View File

@ -1,44 +1,44 @@
### 配置 taosAdapter
#### 配置 taosAdapter
配置 taosAdapter 接收 icinga2 数据的方法:
- 在 taosAdapter 配置文件(默认位置 /etc/taos/taosadapter.toml中使能配置项
```
...
[opentsdb_telnet]
enable = true
maxTCPConnections = 250
tcpKeepAlive = false
dbs = ["opentsdb_telnet", "collectd", "icinga2", "tcollector"]
ports = [6046, 6047, 6048, 6049]
user = "root"
password = "taosdata"
...
```
```toml
...
[opentsdb_telnet]
enable = true
maxTCPConnections = 250
tcpKeepAlive = false
dbs = ["opentsdb_telnet", "collectd", "icinga2", "tcollector"]
ports = [6046, 6047, 6048, 6049]
user = "root"
password = "taosdata"
...
```
其中 taosAdapter 默认写入的数据库名称为 `icinga2`,也可以修改 taosAdapter 配置文件 dbs 项来指定不同的名称。user 和 password 填写实际 TDengine 配置的值。修改过 taosAdapter 需重新启动。
- 也可以使用 taosAdapter 命令行参数或设置环境变量启动的方式,使能 taosAdapter 接收 icinga2 数据功能,具体细节请参考 taosAdapter 的参考手册
- 使用 taosAdapter 命令行参数或设置环境变量启动的方式,使能 taosAdapter 接收 icinga2 数据功能,具体细节请参考 taosAdapter 的参考手册
### 配置 icinga2
#### 配置 icinga2
- 使能 icinga2 的 opentsdb-writer参考链接 https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer
- 修改配置文件 `/etc/icinga2/features-enabled/opentsdb.conf` 填写 \<taosAdapter's host> 为运行 taosAdapter 的服务器的域名或 IP 地址,\<port for icinga2> 填写 taosAdapter 支持接收 icinga2 数据的相应端口(默认为 6048
```
object OpenTsdbWriter "opentsdb" {
host = "<taosAdapter's host>"
port = <port for icinga2>
}
```
```c
object OpenTsdbWriter "opentsdb" {
host = "<taosAdapter's host>"
port = <port for icinga2>
}
```
示例文件:
```
object OpenTsdbWriter "opentsdb" {
host = "127.0.0.1"
port = 6048
}
```
```c
object OpenTsdbWriter "opentsdb" {
host = "127.0.0.1"
port = 6048
}
```

View File

@ -1,18 +1,18 @@
配置 Prometheus 是通过编辑 Prometheus 配置文件 prometheus.yml (默认位置 /etc/prometheus/prometheus.yml完成的。
### 配置第三方数据库地址
#### 配置第三方数据库地址
将其中的 remote_read url 和 remote_write url 指向运行 taosAdapter 服务的服务器域名或 IP 地址REST 服务端口taosAdapter 默认使用 6041以及希望写入 TDengine 的数据库名称,并确保相应的 URL 形式如下:
- remote_read url : `http://<taosAdapter's host>:<REST service port>/prometheus/v1/remote_read/<database name>`
- remote_write url : `http://<taosAdapter's host>:<REST service port>/prometheus/v1/remote_write/<database name>`
### 配置 Basic 验证
#### 配置 Basic 验证
- username \<TDengine's username>
- password \<TDengine's password>
### prometheus.yml 文件中 remote_write 和 remote_read 相关部分配置示例
#### prometheus.yml 文件中 remote_write 和 remote_read 相关部分配置示例
```yaml
remote_write:

View File

@ -1,35 +1,35 @@
### 配置 taosAdapter
#### 配置 taosAdapter
配置 taosAdapter 接收 StatsD 数据的方法:
- 在 taosAdapter 配置文件(默认位置 /etc/taos/taosadapter.toml中使能配置项
```
...
[statsd]
enable = true
port = 6044
db = "statsd"
user = "root"
password = "taosdata"
worker = 10
gatherInterval = "5s"
protocol = "udp"
maxTCPConnections = 250
tcpKeepAlive = false
allowPendingMessages = 50000
deleteCounters = true
deleteGauges = true
deleteSets = true
deleteTimings = true
...
```
```toml
...
[statsd]
enable = true
port = 6044
db = "statsd"
user = "root"
password = "taosdata"
worker = 10
gatherInterval = "5s"
protocol = "udp"
maxTCPConnections = 250
tcpKeepAlive = false
allowPendingMessages = 50000
deleteCounters = true
deleteGauges = true
deleteSets = true
deleteTimings = true
...
```
其中 taosAdapter 默认写入的数据库名称为 `statsd`,也可以修改 taosAdapter 配置文件 db 项来指定不同的名称。user 和 password 填写实际 TDengine 配置的值。修改过配置文件 taosAdapter 需重新启动。
- 也可以使用 taosAdapter 命令行参数或设置环境变量启动的方式,使能 taosAdapter 接收 StatsD 数据功能,具体细节请参考 taosAdapter 的参考手册
- 使用 taosAdapter 命令行参数或设置环境变量启动的方式,使能 taosAdapter 接收 StatsD 数据功能,具体细节请参考 taosAdapter 的参考手册
### 配置 StatsD
#### 配置 StatsD
使用 StatsD 需要下载其[源代码](https://github.com/statsd/statsd)。其配置文件请参考其源代码下载到本地的根目录下的示例文件 `exampleConfig.js` 进行修改。其中 \<taosAdapter's host> 填写运行 taosAdapter 的服务器域名或 IP 地址,\<port for StatsD> 请填写 taosAdapter 接收 StatsD 数据的端口(默认为 6044
@ -40,7 +40,7 @@ repeater 部分添加 { host:'<taosAdapter's host>', port: <port for StatsD>}
示例配置文件:
```
```js
{
port: 8125
, backends: ["./backends/repeater"]
@ -50,7 +50,7 @@ port: 8125
增加如下内容后启动 StatsD假设配置文件修改为 config.js
```
```shell
npm install
node stats.js config.js &
```

View File

@ -1,28 +1,28 @@
### 配置 taosAdapter
#### 配置 taosAdapter
配置 taosAdapter 接收 TCollector 数据的方法:
- 在 taosAdapter 配置文件(默认位置 /etc/taos/taosadapter.toml中使能配置项
```
...
[opentsdb_telnet]
enable = true
maxTCPConnections = 250
tcpKeepAlive = false
dbs = ["opentsdb_telnet", "collectd", "icinga2", "tcollector"]
ports = [6046, 6047, 6048, 6049]
user = "root"
password = "taosdata"
...
```
```toml
...
[opentsdb_telnet]
enable = true
maxTCPConnections = 250
tcpKeepAlive = false
dbs = ["opentsdb_telnet", "collectd", "icinga2", "tcollector"]
ports = [6046, 6047, 6048, 6049]
user = "root"
password = "taosdata"
...
```
其中 taosAdapter 默认写入的数据库名称为 `tcollector`,也可以修改 taosAdapter 配置文件 dbs 项来指定不同的名称。user 和 password 填写实际 TDengine 配置的值。修改过配置文件 taosAdapter 需重新启动。
- 也可以使用 taosAdapter 命令行参数或设置环境变量启动的方式,使能 taosAdapter 接收 tcollector 数据功能,具体细节请参考 taosAdapter 的参考手册
- 使用 taosAdapter 命令行参数或设置环境变量启动的方式,使能 taosAdapter 接收 tcollector 数据功能,具体细节请参考 taosAdapter 的参考手册
### 配置 TCollector
#### 配置 TCollector
使用 TCollector 需下载其[源代码](https://github.com/OpenTSDB/tcollector)。其配置项在其源代码中。注意TCollector 各个版本区别较大,这里仅以当前 master 分支最新代码 (git commit: 37ae920) 为例。
@ -30,7 +30,7 @@ password = "taosdata"
示例为源代码修改内容的 git diff 输出:
```
```diff
index e7e7a1c..ec3e23c 100644
--- a/collectors/etc/config.py
+++ b/collectors/etc/config.py

View File

@ -4,17 +4,21 @@ sidebar_label: taos
toc_max_heading_level: 4
---
TDengine 命令行程序(以下简称 TDengine CLI是用户操作 TDengine 实例并与之交互最简洁常用工具。 使用前需要安装 TDengine Server 安装包或 TDengine Client 安装包。
TDengine 命令行程序(以下简称 TDengine CLI是用户操作 TDengine 实例并与之交互最简洁常用工具。
## 启动
## 工具获取
要进入 TDengine CLI您在终端执行 `taos` 即可。
TDengine CLI 是 TDengine 服务器及客户端安装包中默认安装组件,安装后即可使用,参考 [TDengine 安装](../../../get-started/)
## 运行
进入 TDengine CLI 交互执行模式,在终端命令行执行:
```bash
taos
```
如果连接服务成功,将会打印出欢迎消息和版本信息。如果失败,则会打印错误消息。
如果连接服务成功,将会打印出欢迎消息和版本信息。若失败,打印错误消息。
TDengine CLI 的提示符号如下:
@ -22,42 +26,24 @@ TDengine CLI 的提示符号如下:
taos>
```
进入 TDengine CLI 后,可执行各种 SQL 语句,包括插入、查询以及各种管理命令。
退出 TDengine CLI 执行 `q``quit``exit` 回车即可
```shell
taos> quit
```
## 执行 SQL 脚本
在 TDengine CLI 里可以通过 `source` 命令来运行脚本文件中的多条 SQL 命令。
```sql
taos> source <filename>;
```
## 在线修改显示字符宽度
可以在 TDengine CLI 里使用如下命令调整字符显示宽度
```sql
taos> SET MAX_BINARY_DISPLAY_WIDTH <nn>;
```
如显示的内容后面以 ... 结尾时,表示该内容已被截断,可通过本命令修改显示字符宽度以显示完整的内容。
## 命令行参数
您可通过配置命令行参数来改变 TDengine CLI 的行为。以下为常用的几个命令行参数:
### 常用参数
可通过配置命令行参数来改变 TDengine CLI 的行为。以下为常用的几个命令行参数:
- -h HOST: 要连接的 TDengine 服务端所在服务器的 FQDN, 默认为连接本地服务
- -P PORT: 指定服务端所用端口号
- -u USER: 连接时使用的用户名
- -p PASSWORD: 连接服务端时使用的密码,特殊字符如 `! & ( ) < > ; |` 需使用字符 `\` 进行转义处理
- -h HOST: 要连接的 TDengine 服务端所在服务器的 FQDN, 默认值: 127.0.0.1
- -P PORT: 指定服务端所用端口号默认值6030
- -u USER: 连接时使用的用户名默认值root
- -p PASSWORD: 连接服务端时使用的密码,特殊字符如 `! & ( ) < > ; |` 需使用字符 `\` 进行转义处理, 默认值taosdata
- -?, --help: 打印出所有命令行参数
还有更多其他参数:
### 更多参数
- -a AUTHSTR: 连接服务端的授权信息
- -A: 通过用户名和密码计算授权信息
@ -79,27 +65,58 @@ taos> SET MAX_BINARY_DISPLAY_WIDTH <nn>;
- -z TIMEZONE: 指定时区,默认为本地时区
- -V: 打印出当前版本号
示例:
### 非交互式执行
使用 `-s` 参数可进行非交互式执行 SQL执行完成后退出此模式适合在自动化脚本中使用。
如以下命令连接到服务器 h1.taos.com, 执行 -s 指定的 SQL:
```bash
taos -h h1.taos.com -s "use db; show tables;"
```
## 配置文件
### taosc 配置文件
也可以通过配置文件中的参数设置来控制 TDengine CLI 的行为。可用配置参数请参考[客户端配置](../../components/taosc)
使用 `-c` 参数改变 `taosc` 客户端加载配置文件的位置,客户端配置参数参考 [客户端配置](../../components/taosc)
以下命令指定了 `taosc` 客户端加载 `/root/cfg/` 下的 `taos.cfg` 配置文件
```bash
taos -c /root/cfg/
```
## 错误代码表
在 TDengine 3.3.4.8 版本后 TDengine CLI 在返回错误信息中返回了具体错误码,用户可到 TDengine 官网错误码页面查找具体原因及解决措施,见:[错误码参考表](https://docs.taosdata.com/reference/error-code/)
## 执行 SQL 脚本
## TDengine CLI TAB 键补全
在 TDengine CLI 里可以通过 `source` 命令来运行脚本文件中的多条 SQL 命令。
```sql
taos> source <filename>;
```
## 数据导入/导出
### 导出查询结果
- 可以使用符号 “>>” 导出查询结果到某个文件中,语法为sql 查询语句 >> '输出文件名';输出文件如果不写路径的话,将输出至当前目录下。如 select * from d0 >> '/root/d0.csv'; 将把查询结果输出到 /root/d0.csv 中。
### 数据从文件导入
- 可以使用 insert into table_name file '输入文件名',把上一步中导出的数据文件再导入到指定表中。如 insert into d0 file '/root/d0.csv'; 表示把上面导出的数据全部再导入至 d0 表中,完整示例见下文。
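将上述导出与导入两步整理为可在 TDengine CLI 中直接执行的语句如下(表名与路径为示意):

```sql
select * from d0 >> '/root/d0.csv';
insert into d0 file '/root/d0.csv';
```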
## 设置字符类型显示宽度
可以在 TDengine CLI 里使用如下命令调整字符显示宽度
```sql
taos> SET MAX_BINARY_DISPLAY_WIDTH <nn>;
```
如显示的内容后面以 ... 结尾时,表示该内容已被截断,可通过本命令修改显示字符宽度以显示完整的内容。
## TAB 键自动补全
- TAB 键前为空命令状态下按 TAB 键,会列出 TDengine CLI 支持的所有命令
- TAB 键前为空格状态下按 TAB 键,会显示此位置可以出现的所有命令词的第一个,再次按 TAB 键切为下一个
- TAB 键前为字符串,会搜索与此字符串前缀匹配的所有可出现命令词,并显示第一个,再次按 TAB 键切为下一个
- 输入反斜杠 `\` + TAB 键, 会自动补全为列显示模式命令词 `\G;`
## TDengine CLI 小技巧
## 使用小技巧
- 可以使用上下光标键查看历史输入的指令
- 在 TDengine CLI 中使用 `alter user` 命令可以修改用户密码,缺省密码为 `taosdata`
@ -107,10 +124,5 @@ taos -h h1.taos.com -s "use db; show tables;"
- 执行 `RESET QUERY CACHE` 可清除本地表 Schema 的缓存
- 批量执行 SQL 语句。可以将一系列的 TDengine CLI 命令(以英文 ; 结尾,每个 SQL 语句为一行)按行存放在文件里,在 TDengine CLI 里执行命令 `source <file-name>` 自动执行该文件里所有的 SQL 语句
## TDengine CLI 导出查询结果到文件中
- 可以使用符号 “>>” 导出查询结果到某个文件中,语法为sql 查询语句 >> '输出文件名';输出文件如果不写路径的话,将输出至当前目录下。如 select * from d0 >> '/root/d0.csv'; 将把查询结果输出到 /root/d0.csv 中。
## TDengine CLI 导入文件中的数据到表中
- 可以使用 insert into table_name file '输入文件名',把上一步中导出的数据文件再导入到指定表中。如 insert into d0 file '/root/d0.csv'; 表示把上面导出的数据全部再导入至 d0 表中。
## 错误代码表
在 TDengine 3.3.4.8 版本后 TDengine CLI 在返回错误信息中返回了具体错误码,用户可到 TDengine 官网错误码页面查找具体原因及解决措施,见:[错误码参考表](https://docs.taosdata.com/reference/error-code/)

View File

@ -6,48 +6,24 @@ toc_max_heading_level: 4
taosdump 是为开源用户提供的 TDengine 数据备份/恢复工具,备份数据文件采用标准 [ Apache AVRO ](https://avro.apache.org/) 格式方便与外界生态交换数据。taosdump 提供多种数据备份及恢复选项来满足不同需求,可通过 --help 查看支持的全部选项。
## 工具获取
## 安装
taosdump 是 TDengine 服务器及客户端安装包中默认安装组件,安装后即可使用,参考 [TDengine 安装](../../../get-started/)
taosdump 提供两种安装方式:
## 运行
taosdump 需在命令行终端中运行,运行时必须带参数,指明是备份操作或还原操作,如:
``` bash
taosdump -h dev126 -D test -o /root/test/
```
以上命令表示备份主机名为 `dev126` 机器上的 `test` 数据库到 `/root/test/` 目录下
- taosdump 是 TDengine 安装包中默认安装组件,安装 TDengine 后即可使用,可参考[TDengine 安装](../../../get-started/)
- 单独编译 taos-tools 并安装, 参考 [taos-tools](https://github.com/taosdata/taos-tools) 仓库。
``` bash
taosdump -h dev126 -i /root/test/
```
以上命令表示把 `/root/test/` 目录下之前备份的数据文件恢复到主机名为 `dev126` 的主机上
## 常用使用场景
### taosdump 备份数据
1. 备份所有数据库:指定 `-A``--all-databases` 参数;
2. 备份多个指定数据库:使用 `-D db1,db2,...` 参数;
3. 备份指定数据库中某些超级表或普通表:使用 `dbname stbname1 stbname2 tbname1 tbname2 ...` 参数,注意这种输入序列第一个参数为数据库名称,且只支持一个数据库,第二个和之后的参数为该数据库中的超级表或普通表名称,中间以空格分隔;
4. 备份系统 log 库TDengine 集群通常会包含一个系统数据库,名为 `log`,这个数据库内的数据为 TDengine 自我运行的数据taosdump 默认不会对 log 库进行备份。如果有特定需求对 log 库进行备份,可以使用 `-a``--allow-sys` 命令行参数。
5. “宽容”模式备份taosdump 1.4.1 之后的版本提供 `-n` 参数和 `-L` 参数,用于备份数据时不使用转义字符和“宽容”模式,可以在表名、列名、标签名没使用转义字符的情况下减少备份数据时间和备份数据占用空间。如果不确定符合使用 `-n``-L` 条件时请使用默认参数进行“严格”模式进行备份。转义字符的说明请参考[官方文档](../../taos-sql/escape)。
6. `-o` 参数指定的目录下如果已存在备份文件为防止数据被覆盖taosdump 会报错并退出,请更换其它空目录或清空原来数据后再备份。
7. 目前 taosdump 不支持数据断点续备功能,一旦数据备份中断,需要从头开始。如果备份需要很长时间,建议使用 -S -E 选项指定开始/结束时间进行分段备份的方法。
:::tip
- taosdump 1.4.1 之后的版本提供 `-I` 参数,用于解析 avro 文件 schema 和数据,如果指定 `-s` 参数将只解析 schema。
- taosdump 1.4.2 之后的备份使用 `-B` 参数指定的批次数,默认值为 16384如果在某些环境下由于网络速度或磁盘性能不足导致 "Error actual dump .. batch .." 可以通过 `-B` 参数调整为更小的值进行尝试。
- taosdump 的导出不支持中断恢复,所以当进程意外终止后,正确的处理方式是删除当前已导出或生成的所有相关文件。
- taosdump 的导入支持中断恢复,但是当进程重新启动时,会收到一些“表已经存在”的提示,可以忽视。
:::
### taosdump 恢复数据
- 恢复指定路径下的数据文件:使用 `-i` 参数加上数据文件所在路径。如前面提及,不应该使用同一个目录备份不同数据集合,也不应该在同一路径多次备份同一数据集,否则备份数据会造成覆盖或多次备份。
- taosdump 支持数据恢复至新数据库名下,参数是 -W, 详细见命令行参数说明。
:::tip
taosdump 内部使用 TDengine stmt binding API 进行恢复数据的写入,为提高数据恢复性能,目前使用 16384 为一次写入批次。如果备份数据中有比较多列数据,可能会导致产生 "WAL size exceeds limit" 错误,此时可以通过使用 `-B` 参数调整为一个更小的值进行尝试。
:::
## 详细命令行参数列表
## 命令行参数
以下为 taosdump 详细命令行参数列表:
@ -123,3 +99,34 @@ for any corresponding short options.
Report bugs to <support@taosdata.com>.
```
## 常用使用场景
### taosdump 备份数据
1. 备份所有数据库:指定 `-A``--all-databases` 参数;
2. 备份多个指定数据库:使用 `-D db1,db2,...` 参数;
3. 备份指定数据库中某些超级表或普通表:使用 `dbname stbname1 stbname2 tbname1 tbname2 ...` 参数,注意这种输入序列第一个参数为数据库名称,且只支持一个数据库,第二个和之后的参数为该数据库中的超级表或普通表名称,中间以空格分隔;
4. 备份系统 log 库TDengine 集群通常会包含一个系统数据库,名为 `log`,这个数据库内的数据为 TDengine 自我运行的数据taosdump 默认不会对 log 库进行备份。如果有特定需求对 log 库进行备份,可以使用 `-a``--allow-sys` 命令行参数。
5. “宽容”模式备份taosdump 1.4.1 之后的版本提供 `-n` 参数和 `-L` 参数,用于备份数据时不使用转义字符和“宽容”模式,可以在表名、列名、标签名没使用转义字符的情况下减少备份数据时间和备份数据占用空间。如果不确定符合使用 `-n``-L` 条件时请使用默认参数进行“严格”模式进行备份。转义字符的说明请参考[官方文档](../../taos-sql/escape)。
6. `-o` 参数指定的目录下如果已存在备份文件为防止数据被覆盖taosdump 会报错并退出,请更换其它空目录或清空原来数据后再备份。
7. 目前 taosdump 不支持数据断点续备功能,一旦数据备份中断,需要从头开始。如果备份需要很长时间,建议使用 -S -E 选项指定开始/结束时间进行分段备份的方法。
:::tip
- taosdump 1.4.1 之后的版本提供 `-I` 参数,用于解析 avro 文件 schema 和数据,如果指定 `-s` 参数将只解析 schema。
- taosdump 1.4.2 之后的备份使用 `-B` 参数指定的批次数,默认值为 16384如果在某些环境下由于网络速度或磁盘性能不足导致 "Error actual dump .. batch .." 可以通过 `-B` 参数调整为更小的值进行尝试。
- taosdump 的导出不支持中断恢复,所以当进程意外终止后,正确的处理方式是删除当前已导出或生成的所有相关文件。
- taosdump 的导入支持中断恢复,但是当进程重新启动时,会收到一些“表已经存在”的提示,可以忽视。
:::
### taosdump 恢复数据
- 恢复指定路径下的数据文件:使用 `-i` 参数加上数据文件所在路径。如前面提及,不应该使用同一个目录备份不同数据集合,也不应该在同一路径多次备份同一数据集,否则备份数据会造成覆盖或多次备份。
- taosdump 支持数据恢复至新数据库名下,参数是 -W, 详细见命令行参数说明。
:::tip
taosdump 内部使用 TDengine stmt binding API 进行恢复数据的写入,为提高数据恢复性能,目前使用 16384 为一次写入批次。如果备份数据中有比较多列数据,可能会导致产生 "WAL size exceeds limit" 错误,此时可以通过使用 `-B` 参数调整为一个更小的值进行尝试。
:::

View File

@ -6,13 +6,9 @@ toc_max_heading_level: 4
taosBenchmark 是 TDengine 产品性能基准测试工具,提供对 TDengine 产品写入、查询及订阅性能测试,输出性能指标。
## 安装
## 工具获取
taosBenchmark 提供两种安装方式:
- taosBenchmark 是 TDengine 安装包中默认安装组件,安装 TDengine 后即可使用,参考 [TDengine 安装](../../../get-started/)
- 单独编译 taos-tools 并安装, 参考 [taos-tools](https://github.com/taosdata/taos-tools) 仓库。
taosBenchmark 是 TDengine 服务器及客户端安装包中默认安装组件,安装后即可使用,参考 [TDengine 安装](../../../get-started/)
## 运行
@ -62,7 +58,7 @@ taosBenchmark -f <json file>
<summary>insert.json</summary>
```json
{{#include /taos-tools/example/insert.json}}
{{#include /TDengine/tools/taos-tools/example/insert.json}}
```
</details>
@ -73,7 +69,7 @@ taosBenchmark -f <json file>
<summary>query.json</summary>
```json
{{#include /taos-tools/example/query.json}}
{{#include /TDengine/tools/taos-tools/example/query.json}}
```
</details>
@ -84,14 +80,14 @@ taosBenchmark -f <json file>
<summary>tmq.json</summary>
```json
{{#include /taos-tools/example/tmq.json}}
{{#include /TDengine/tools/taos-tools/example/tmq.json}}
```
</details>
查看更多 json 配置文件示例可 [点击这里](https://github.com/taosdata/taos-tools/tree/main/example)
查看更多 json 配置文件示例可 [点击这里](https://github.com/taosdata/TDengine/tree/main/tools/taos-tools/example)
## 命令行参数详解
## 命令行参数
| 命令行参数 | 功能说明 |
| ---------------------------- | ----------------------------------------------- |
| -f/--file \<json file> | 要使用的 JSON 配置文件,由该文件指定所有参数,本参数与命令行其他参数不能同时使用。没有默认值 |
@ -163,12 +159,10 @@ SUCC: insert delay, min: 19.6780ms, avg: 64.9390ms, p90: 94.6900ms, p95: 105.187
查询性能测试主要输出查询请求速度 QPS 指标, 输出格式如下:
``` bash
complete query with 3 threads and 10000 query delay avg: 0.002686s min: 0.001182s max: 0.012189s p90: 0.002977s p95: 0.003493s p99: 0.004645s SQL command: select ...
INFO: Total specified queries: 30000
INFO: Spend 26.9530 second completed total queries: 30000, the QPS of all threads: 1113.049
```
- 第一行表示 3 个线程每个线程执行 10000 次查询及查询请求延时百分位分布情况,`SQL command` 为测试的查询语句
- 第二行表示总共完成了 10000 * 3 = 30000 次查询总数
- 第三行表示查询总耗时为 26.9530 秒,每秒查询率QPS为 1113.049 次/秒
- 如果在查询中设置了 `continue_if_fail` 选项为 `yes`,在最后一行中会输出失败请求个数及错误率,格式 error + 失败请求个数 (错误率)
- QPS = 成功请求数量 / 花费时间(单位秒)
- 错误率 = 失败请求数量 /(成功请求数量 + 失败请求数量)
@ -189,7 +183,7 @@ INFO: Consumed total msgs: 3000, total rows: 30000000
- 4 ~ 6 行是测试完成后每个消费者总体统计,统计共消费了多少条消息,共计多少行
- 第 7 行所有消费者总体统计,`msgs` 表示共消费了多少条消息, `rows` 表示共消费了多少行数据
## 配置文件参数详解
## 配置文件参数
### 通用配置参数
@ -221,7 +215,7 @@ INFO: Consumed total msgs: 3000, total rows: 30000000
“continue_if_fail”: “yes”, 失败 taosBenchmark 警告用户,并继续写入
“continue_if_fail”: “smart”, 如果子表不存在失败taosBenchmark 会建立子表并继续写入
#### 数据库相关配置参数
#### 数据库相关
创建数据库时的相关参数在 json 配置文件中的 `dbinfo` 中配置,个别具体参数如下。其余参数均与 TDengine 中 `create database` 时所指定的数据库参数相对应,详见[../../taos-sql/database]
@ -229,23 +223,7 @@ INFO: Consumed total msgs: 3000, total rows: 30000000
- **drop** : 数据库已存在时是否删除,可选项为 "yes" 或 "no", 默认为 “yes”
#### 流式计算相关配置参数
创建流式计算的相关参数在 json 配置文件中的 `stream` 中配置,具体参数如下。
- **stream_name** : 流式计算的名称,必填项。
- **stream_stb** : 流式计算对应的超级表名称,必填项。
- **stream_sql** : 流式计算的sql语句必填项。
- **trigger_mode** : 流式计算的触发模式,可选项。
- **watermark** : 流式计算的水印,可选项。
- **drop** : 是否创建流式计算,可选项为 "yes" 或者 "no", 为 "no" 时不创建。
#### 超级表相关配置参数
#### 超级表相关
创建超级表时的相关参数在 json 配置文件中的 `super_tables` 中配置,具体参数如下。
@ -307,7 +285,7 @@ INFO: Consumed total msgs: 3000, total rows: 30000000
- **sqls** : 字符串数组类型,指定超级表创建成功后要执行的 sql 数组sql 中指定表名前面要带数据库名,否则会报未指定数据库错误
#### 标签列与数据列配置参数
#### 标签列与数据列
指定超级表标签列与数据列的配置参数分别在 `super_tables` 中的 `columns``tag` 中。
@ -342,11 +320,11 @@ INFO: Consumed total msgs: 3000, total rows: 30000000
- **fillNull**: 字符串类型,指定此列是否随机插入 NULL 值,可指定为 “true” 或 "false", 只有当 generate_row_rule 为 2 时有效
#### 插入行为配置参数
#### 插入行为相关
- **thread_count** : 插入数据的线程数量,默认为 8。
- **thread_bind_vgroup** : 写入时 vgroup 是否和写入线程绑定,绑定后可提升写入速度, 取值为 "yes" 或 "no",默认值为 “no”, 设置为 “no” 后与原来行为一致。 当设为 “yes” 时,如果 thread_count 大于写入数据库 vgroups 数量, thread_count 自动调整为 vgroups 数量;如果 thread_count 小于 vgroups 数量,写入线程数量不做调整,一个线程写完一个 vgroup 数据后再写下一个,同时保持一个 vgroup 同时只能由一个线程写入的规则。
- **create_table_thread_count** : 建表的线程数量,默认为 8。
@ -378,7 +356,7 @@ interval 控制休眠时间,避免持续查询慢查询消耗 CPU ,单位为
其它通用参数详见[通用配置参数](#通用配置参数)。
#### 执行指定查询语句的配置参数
#### 执行指定查询语句
查询指定表(可以指定超级表、子表或普通表)的配置参数在 `specified_table_query` 中设置。
@ -389,7 +367,7 @@ interval 控制休眠时间,避免持续查询慢查询消耗 CPU ,单位为
`查询总次数` = `sqls` 个数 * `query_times` * `threads`
`混合查询``sqls` 中所有 sql 分成 `threads` 个组,每个线程执行一组, 每个 sql 都需执行 `query_times` 次查询
`查询总次数` = `sqls` 个数 * `query_times`
- **query_interval** : 查询时间间隔,单位: millisecond默认值为 0。
@ -399,7 +377,7 @@ interval 控制休眠时间,避免持续查询慢查询消耗 CPU ,单位为
- **sql**: 执行的 SQL 命令,必填。
- **result**: 保存查询结果的文件,未指定则不保存。
#### 查询超级表的配置参数
#### 查询超级表
查询超级表的配置参数在 `super_table_query` 中设置。
超级表查询的线程模式与上面介绍的指定查询语句查询的 `正常查询` 模式相同,不同之处是本 `sqls` 使用所有子表填充。
@ -419,8 +397,6 @@ interval 控制休眠时间,避免持续查询慢查询消耗 CPU ,单位为
订阅场景下 `filetype` 必须设置为 `subscribe`,该参数及其它通用参数详见[通用配置参数](#通用配置参数)
#### 执行指定订阅语句的配置参数
订阅指定表(可以指定超级表、子表或者普通表)的配置参数在 `specified_table_query` 中设置。
- **threads/concurrent** : 执行 SQL 的线程数,默认为 1。
@ -429,7 +405,7 @@ interval 控制休眠时间,避免持续查询慢查询消耗 CPU ,单位为
- **sql** : 执行的 SQL 命令,必填。
#### 配置文件中数据类型书写对照表
### 配置文件中数据类型书写对照表
| # | **引擎** | **taosBenchmark**
| --- | :----------------: | :---------------:

View File

@ -215,7 +215,7 @@ SHOW db_name.ALIVE;
查询数据库 db_name 的可用状态,返回值 0不可用 1完全可用 2部分可用即数据库包含的 VNODE 部分节点可用,部分节点不可用)
## 查看DB 的磁盘空间占用
## 查看 DB 的磁盘空间占用
```sql
select * from INFORMATION_SCHEMA.INS_DISK_USAGE where db_name = 'db_name'

View File

@ -44,7 +44,7 @@ table_option: {
1. 表(列)名命名规则参见[名称命名规则](./19-limit.md#名称命名规则)。
2. 表名最大长度为 192。
3. 表的第一个字段必须是 TIMESTAMP并且系统自动将其设为主键。
4. 除时间戳主键列之外,还可以通过 PRIMARY KEY 关键字指定第二列为额外的主键列。被指定为主键列的第二列必须为整型或字符串类型VARCHAR
4. 除时间戳主键列之外,还可以通过 PRIMARY KEY 关键字指定第二列为额外的主键列,该列与时间戳列共同组成复合主键当设置了复合主键时,两条记录的时间戳列与 PRIMARY KEY 列都相同,才会被认为是重复记录,数据库只保留最新的一条;否则视为两条记录,全部保留。注意:被指定为主键列的第二列必须为整型或字符串类型VARCHAR
5. 表的每行长度不能超过 48KB从 3.0.5.0 版本开始为 64KB;(注意:每个 VARCHAR/NCHAR/GEOMETRY 类型的列还会额外占用 2 个字节的存储位置)。
6. 使用数据类型 VARCHAR/NCHAR/GEOMETRY需指定其最长的字节数如 VARCHAR(20),表示 20 字节。
7. 关于 `ENCODE``COMPRESS` 的使用,请参考[按列压缩](../compress)
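针对上述第 4 条的复合主键,下面给出一个建表示例(示意,表名与列名均为假设):

```sql
CREATE TABLE d1 (ts TIMESTAMP, location VARCHAR(16) PRIMARY KEY, v1 INT);
```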

View File

@ -26,7 +26,7 @@ table_option: {
**使用说明**
1. 超级表中列的最大个数为 4096需要注意这里的 4096 是包含 TAG 列在内的,最小个数为 3包含一个时间戳主键、一个 TAG 列和一个数据列。
2. 除时间戳主键列之外,还可以通过 PRIMARY KEY 关键字指定第二列为额外的主键列。被指定为主键列的第二列必须为整型或字符串类型varchar
2. 除时间戳主键列之外,还可以通过 PRIMARY KEY 关键字指定第二列为额外的主键列,该列与时间戳列共同组成复合主键当设置了复合主键时,两条记录的时间戳列与 PRIMARY KEY 列都相同,才会被认为是重复记录,数据库只保留最新的一条;否则视为两条记录,全部保留。注意:被指定为主键列的第二列必须为整型或字符串类型varchar
3. TAGS语法指定超级表的标签列标签列需要遵循以下约定
- TAGS 中的 TIMESTAMP 列写入数据时需要提供给定值,而暂不支持四则运算,例如 NOW + 10s 这类表达式。
- TAGS 列名不能与其他列名相同。

View File

@ -62,8 +62,7 @@ window_clause: {
| COUNT_WINDOW(count_val[, sliding_val])
interp_clause:
RANGE(ts_val [, ts_val]) EVERY(every_val) FILL(fill_mod_and_val)
| RANGE(ts_val, surrounding_time_val) FILL(fill_mod_and_val)
RANGE(ts_val [, ts_val] [, surrounding_time_val]) EVERY(every_val) FILL(fill_mod_and_val)
partition_by_clause:
PARTITION BY partition_by_expr [, partition_by_expr] ...

File diff suppressed because it is too large

View File

@ -10,7 +10,7 @@ description: 流式计算的相关 SQL 的详细语法
```sql
CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name[(field1_name, field2_name [PRIMARY KEY], ...)] [TAGS (create_definition [, create_definition] ...)] SUBTABLE(expression) AS subquery
stream_options: {
TRIGGER [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time]
TRIGGER [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time | FORCE_WINDOW_CLOSE]
WATERMARK time
IGNORE EXPIRED [0|1]
DELETE_MARK time
@ -56,6 +56,10 @@ window_clause: {
其中SESSION 是会话窗口tol_val 是时间间隔的最大范围。在 tol_val 时间间隔范围内的数据都属于同一个窗口,如果连续的两条数据的时间超过 tol_val则自动开启下一个窗口。该窗口的 _wend 等于最后一条数据的时间加上 tol_val。
STATE_WINDOW 是状态窗口col 用来标识状态量相同的状态量数值则归属于同一个状态窗口col 数值改变后则当前窗口结束,自动开启下一个窗口。
INTERVAL 是时间窗口又可分为滑动时间窗口和翻转时间窗口。INTERVAL 子句用于指定窗口相等时间周期SLIDING 字句用于指定窗口向前滑动的时间。当 interval_val 与 sliding_val 相等的时候时间窗口即为翻转时间窗口否则为滑动时间窗口注意sliding_val 必须小于等于 interval_val。
EVENT_WINDOW 是事件窗口,根据开始条件和结束条件来划定窗口。当 start_trigger_condition 满足时则窗口开始,直到 end_trigger_condition 满足时窗口关闭。 start_trigger_condition 和 end_trigger_condition 可以是任意 TDengine 支持的条件表达式,且可以包含不同的列。
COUNT_WINDOW 是计数窗口,按固定的数据行数来划分窗口。 count_val 是常量是正整数必须大于等于2小于2147483648。 count_val 表示每个 COUNT_WINDOW 包含的最大数据行数,总数据行数不能整除 count_val 时,最后一个窗口的行数会小于 count_val 。 sliding_val 是常量,表示窗口滑动的数量,类似于 INTERVAL 的 SLIDING 。
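结合上述窗口说明,下面给出一个使用滑动时间窗口的建流示例(示意,库表名均为假设):

```sql
CREATE STREAM IF NOT EXISTS avg_vol_s TRIGGER WINDOW_CLOSE INTO avg_vol AS
  SELECT _wstart, count(*), avg(voltage) FROM meters PARTITION BY tbname INTERVAL(1m) SLIDING(30s);
```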

View File

@ -33,6 +33,7 @@ TDengine 的 JDBC 驱动实现尽可能与关系型数据库驱动保持一致
| taos-jdbcdriver 版本 | 主要变化 | TDengine 版本 |
| ------------------| ---------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------- |
| 3.5.3 | 在 WebSocket 连接上支持无符号数据类型 | - |
| 3.5.2 | 解决了 WebSocket 查询结果集释放 bug | - |
| 3.5.1 | 解决了数据订阅获取时间戳对象类型问题 | - |
| 3.5.0 | 1. 优化了 WebSocket 连接参数绑定性能,支持参数绑定查询使用二进制数据 <br/> 2. 优化了 WebSocket 连接在小查询上的性能 <br/> 3. WebSocket 连接上支持设置时区和应用信息 | 3.3.5.0 及更高版本 |
@ -128,24 +129,27 @@ JDBC 连接器可能报错的错误码包括 4 种:
TDengine 目前支持时间戳、数字、字符、布尔类型,与 Java 对应类型转换如下:
| TDengine DataType | JDBCType |
| ----------------- | ------------------ |
| TIMESTAMP | java.sql.Timestamp |
| INT | java.lang.Integer |
| BIGINT | java.lang.Long |
| FLOAT | java.lang.Float |
| DOUBLE | java.lang.Double |
| SMALLINT | java.lang.Short |
| TINYINT | java.lang.Byte |
| BOOL | java.lang.Boolean |
| BINARY | byte array |
| NCHAR | java.lang.String |
| JSON | java.lang.String |
| VARBINARY | byte[] |
| GEOMETRY | byte[] |
| TDengine DataType | JDBCType             | Remarks |
| ----------------- | -------------------- | ------- |
| TIMESTAMP         | java.sql.Timestamp   | |
| BOOL              | java.lang.Boolean    | |
| TINYINT           | java.lang.Byte       | |
| TINYINT UNSIGNED  | java.lang.Short      | Supported only over WebSocket connections |
| SMALLINT          | java.lang.Short      | |
| SMALLINT UNSIGNED | java.lang.Integer    | Supported only over WebSocket connections |
| INT               | java.lang.Integer    | |
| INT UNSIGNED      | java.lang.Long       | Supported only over WebSocket connections |
| BIGINT            | java.lang.Long       | |
| BIGINT UNSIGNED   | java.math.BigInteger | Supported only over WebSocket connections |
| FLOAT             | java.lang.Float      | |
| DOUBLE            | java.lang.Double     | |
| BINARY            | byte array           | |
| NCHAR             | java.lang.String     | |
| JSON              | java.lang.String     | Supported only in tags |
| VARBINARY         | byte[]               | |
| GEOMETRY          | byte[]               | |
**Note**: The JSON type is supported only in tags.
For historical reasons, BINARY in TDengine is not true binary data underneath and is no longer recommended; use the VARBINARY type instead.
**Note**: For historical reasons, BINARY in TDengine is not true binary data underneath and is no longer recommended; use the VARBINARY type instead.
The GEOMETRY type is binary data in little-endian byte order that conforms to the WKB specification. For details, see [Data Types](../../taos-sql/data-type/#数据类型).
For the WKB specification, see [Well-Known Binary (WKB)](https://libgeos.org/specifications/wkb/).
With the Java connector, the JTS library can be used to conveniently create GEOMETRY objects and serialize them for writing to TDengine; a sample is available here: [Geometry example](https://github.com/taosdata/TDengine/blob/main/docs/examples/java/src/main/java/com/taos/example/GeometryDemo.java)
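As a sketch of the WebSocket-only mappings in the table above, using an illustrative table definition:

```sql
-- Each unsigned column is read back as the next wider Java type
-- when queried over a WebSocket connection (see the mapping table above).
CREATE TABLE sensor_data (
  ts TIMESTAMP,
  u1 TINYINT UNSIGNED,   -- java.lang.Short
  u2 SMALLINT UNSIGNED,  -- java.lang.Integer
  u4 INT UNSIGNED,       -- java.lang.Long
  u8 BIGINT UNSIGNED     -- java.math.BigInteger
);
```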

View File

@ -24,6 +24,8 @@ The Node.js connector source code is hosted on [GitHub](https://github.com/taosdata/taos-conne
| Node.js connector version | Major changes | TDengine version |
| ------------------------- | ------------- | ---------------- |
| 3.1.4 | Updated the readme | - |
| 3.1.3 | Upgraded the es5-ext version to fix a vulnerability in the older version | - |
| 3.1.2 | Optimized the data protocol and parsing, greatly improving performance | - |
| 3.1.1 | Improved data transfer performance | 3.3.2.0 and later |
| 3.1.0 | New release with WebSocket connection support | 3.2.0.0 and later |
@ -130,16 +132,20 @@ The Node.js connector (`@tdengine/websocket`), which uses the taosAdapter-provided We
Besides obtaining a connection from a given URL, you can also use WSConfig to specify parameters for establishing the connection.
```js
try {
let url = 'ws://127.0.0.1:6041'
let conf = WsSql.NewConfig(url)
conf.setUser('root')
conf.setPwd('taosdata')
conf.setDb('db')
conf.setTimeOut(500)
let wsSql = await WsSql.open(conf);
} catch (e) {
console.error(e);
const taos = require("@tdengine/websocket");
async function createConnect() {
try {
let url = 'ws://127.0.0.1:6041'
let conf = new taos.WSConfig(url)
conf.setUser('root')
conf.setPwd('taosdata')
conf.setDb('db')
conf.setTimeOut(500)
let wsSql = await taos.sqlConnect(conf)
} catch (e) {
console.error(e);
}
}
```

View File

@ -191,7 +191,7 @@ The data stored in TDengine includes collected time-series data as well as database- and table-related meta
When managing massive volumes of data, horizontal scaling usually calls for data sharding and data partitioning strategies. TDengine implements sharding through vnodes and partitions time-series data by splitting data files by time range.
vnodes handle not only the writing, querying, and computation of time-series data but also play important roles in load balancing, data recovery, and support for heterogeneous environments. To achieve this, TDengine splits each dnode into multiple vnodes according to its compute and storage resources. The management of these vnodes is fully transparent to applications and is handled automatically by TDengine.
vnodes handle not only the writing, querying, and computation of time-series data but also play important roles in load balancing, data recovery, and support for heterogeneous environments. To achieve this, TDengine splits each dnode into multiple vnodes according to its compute and storage resources. The management of these vnodes is fully transparent to applications and is handled automatically by TDengine.
For a single data collection point, however large its data volume, a vnode has ample compute and storage resources to handle it (for example, at one 16 B record per second, the raw data generated in a year is still under 0.5 GB). TDengine therefore stores all the data of a table (one data collection point) in a single vnode, never spreading a collection point's data across two or more dnodes. A vnode can in turn store data for multiple data collection points (tables), up to a maximum of 1 million tables. By design, all tables in a vnode belong to the same database.
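As a sketch of how these two mechanisms surface at database creation time (the database name and values below are illustrative):

```sql
-- VGROUPS fixes the number of vnodes (shards) serving the database;
-- DURATION sets the time span covered by each data file (time partitioning).
CREATE DATABASE power VGROUPS 10 DURATION 10d;
```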
@ -371,4 +371,4 @@ alter dnode 1 "/mnt/disk2/taos 1";
3. Multi-level storage does not currently support removing a disk that has already been mounted.
4. Level 0 storage must have at least one mount point whose disable_create_new_file is 0; level 1 and level 2 storage have no such restriction.
:::
:::

View File

@ -26,7 +26,7 @@ TDengine automatically creates indexes for WAL files to support fast random access. Through
### Consumers
Consumers are responsible for fetching data from topics. After subscribing to a topic, a consumer can consume all the data in the vnodes assigned to it. To fetch data efficiently and in order, consumers combine push and poll.
Consumers are responsible for fetching data from topics. After subscribing to a topic, a consumer can consume all the data in the vnodes assigned to it. To fetch data efficiently and in order, consumers use a combination of push and poll.
When a vnode holds a large amount of unconsumed data, the consumer sends push requests to the vnode in sequence so that large batches of data can be pulled at once. The consumer also records its consumption position for each vnode locally, ensuring that all data is pushed in order.
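The push/poll mechanics above operate on subscribed topics; a minimal sketch of defining one (the topic name and query are illustrative):

```sql
-- Consumers subscribing to this topic consume matching rows
-- from every vnode that holds data of `meters`.
CREATE TOPIC IF NOT EXISTS power_topic AS SELECT ts, current FROM meters;
```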

View File

@ -310,5 +310,17 @@ The data shown in the TDinsight plugin is collected through the taosKeeper and taosAdapter services
### 34 Which is faster: querying a subtable's data through a supertable with a TAG filter, or querying the subtable directly?
Querying the subtable directly is faster. Querying subtable data through a supertable with a TAG filter exists for query convenience and lets you filter data across multiple subtables at once; if the goal is performance and the target subtable is already known, querying the subtable directly performs better.
### 35 How do I view data compression ratio metrics?
TDengine currently provides the compression ratio only per table; database-wide and overall figures are not yet available. To view it, run `SHOW TABLE DISTRIBUTED table_name;` in the TDengine CLI (taos), where table_name is the table whose compression ratio you want to inspect and may be a supertable, regular table, or subtable. For details, [see here](https://docs.taosdata.com/reference/taos-sql/show/#show-table-distributed)
### 35 How do I view a database's compression ratio and disk usage metrics?
Versions of TDengine before 3.3.5.0 provide the compression ratio only per table; database-wide and overall figures are not available. To view it, run `SHOW TABLE DISTRIBUTED table_name;` in the TDengine CLI (taos), where table_name is the table whose compression ratio you want to inspect and may be a supertable, regular table, or subtable. For details, [see here](https://docs.taosdata.com/reference/taos-sql/show/#show-table-distributed)
TDengine 3.3.5.0 and later also provide database-wide compression ratio and disk usage statistics. To view a database's overall compression ratio and disk usage, run `SHOW db_name.disk_info;`; to view the disk usage of each module of the database, run `SELECT * FROM INFORMATION_SCHEMA.INS_DISK_USAGE WHERE db_name='db_name';`, where db_name is the database to inspect. For details, [see here](https://docs.taosdata.com/reference/taos-sql/database/#%E6%9F%A5%E7%9C%8B-db-%E7%9A%84%E7%A3%81%E7%9B%98%E7%A9%BA%E9%97%B4%E5%8D%A0%E7%94%A8)
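Taken together, a sketch of the three commands, assuming an illustrative database `power` with a supertable `meters`:

```sql
SHOW power.disk_info;                  -- database-wide compression ratio and disk usage (3.3.5.0+)
SELECT * FROM INFORMATION_SCHEMA.INS_DISK_USAGE
  WHERE db_name = 'power';             -- per-module disk usage for the database (3.3.5.0+)
SHOW TABLE DISTRIBUTED power.meters;   -- per-table compression statistics (all versions)
```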
### 36 Restarting taosd via systemd more than a certain number of times within a short period fails with the error: start-limit-hit.
Problem description:
In TDengine 3.3.5.1 and later, the StartLimitInterval parameter in the systemd unit file for taosd.service was raised from 60 seconds to 900 seconds. If the taosd service restarts 3 times within 900 seconds, subsequent attempts to start it via systemd fail, and `systemctl status taosd.service` shows the error: Failed with result 'start-limit-hit'.
Cause:
In versions before TDengine 3.3.5.1, StartLimitInterval was 60 seconds. If 3 restarts could not complete within 60 seconds (for example, when startup was slow because a large amount of data had to be recovered from the WAL (write-ahead log)), the restarts in the next 60-second window were counted from zero again, so the system would keep restarting the taosd service indefinitely. To avoid this endless restarting, StartLimitInterval was raised from 60 to 900 seconds; as a result, the start-limit-hit error is more likely when taosd is started repeatedly via systemd within a short period.
Solution:
(1) When restarting the taosd service via systemd: the recommended approach is to first run `systemctl reset-failed taosd.service` to reset the failure counter and then restart with `systemctl restart taosd.service`. For a long-term adjustment, edit /etc/systemd/system/taosd.service manually to lower StartLimitInterval or raise StartLimitBurst (note: reinstalling taosd resets these parameters, so they must be changed again), run `systemctl daemon-reload` to reload the configuration, and then restart. (2) Alternatively, restart taosd directly with the taosd command instead of going through systemd; in that case the StartLimitInterval and StartLimitBurst limits do not apply.

View File

@ -4,9 +4,9 @@ title: taosTools release history and download links
description: Release history, release notes, and download links for taosTools
---
Download links for the taosTools packages of each version are as follows:
Starting with 3.0.6.0, taosTools is bundled into the TDengine installation package and is no longer released separately. Download links for the taosTools packages of each version (for TDengine 3.0.5.2 and earlier) are as follows:
For packages of other historical versions, visit [here](https://www.taosdata.com/all-downloads)
For packages of the historical 2.6 versions, visit [here](https://www.taosdata.com/all-downloads)
import Release from "/components/ReleaseV3";

View File

@ -4,9 +4,9 @@ sidebar_label: Release notes
description: Release notes for each version
---
[3.3.5.2](./3.3.5.2)
[3.3.5.0](./3.3.5.0)
[3.3.4.8](./3.3.4.8)
[3.3.4.3](./3.3.4.3)
[3.3.3.0](./3.3.3.0)
[3.3.2.0](./3.3.2.0)
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```

View File

@ -86,7 +86,7 @@ int32_t taosAnalBufWriteDataEnd(SAnalyticBuf *pBuf);
int32_t taosAnalBufClose(SAnalyticBuf *pBuf);
void taosAnalBufDestroy(SAnalyticBuf *pBuf);
const char *taosAnalAlgoStr(EAnalAlgoType algoType);
const char *taosAnalysisAlgoType(EAnalAlgoType algoType);
EAnalAlgoType taosAnalAlgoInt(const char *algoName);
const char *taosAnalAlgoUrlStr(EAnalAlgoType algoType);

View File

@ -326,7 +326,6 @@ int32_t tDeserializeSConfigArray(SDecoder *pDecoder, SArray *array);
int32_t setAllConfigs(SConfig *pCfg);
void printConfigNotMatch(SArray *array);
int32_t compareSConfigItemArrays(SArray *mArray, const SArray *dArray, SArray *diffArray);
bool isConifgItemLazyMode(SConfigItem *item);
int32_t taosUpdateTfsItemDisable(SConfig *pCfg, const char *value, void *pTfs);

View File

@ -2616,6 +2616,8 @@ typedef struct {
int8_t assignedAcked;
SMArbUpdateGroupAssigned assignedLeader;
int64_t version;
int32_t code;
int64_t updateTimeMs;
} SMArbUpdateGroup;
typedef struct {

Some files were not shown because too many files have changed in this diff