@@ -10,6 +10,7 @@ on:
       - 'docs/**'
       - 'packaging/**'
       - 'tests/**'
+      - '*.md'
 
 concurrency:
   group: ${{ github.workflow }}-${{ github.ref }}
@@ -17,8 +18,17 @@ concurrency:
 
 jobs:
   build:
-    runs-on: ubuntu-latest
-    name: Build and test
+    name: Build and test on ${{ matrix.os }}
+    runs-on: ${{ matrix.os }}
+    strategy:
+      matrix:
+        os:
+          - ubuntu-20.04
+          - ubuntu-22.04
+          - ubuntu-24.04
+          - macos-13
+          - macos-14
+          - macos-15
 
     steps:
       - name: Checkout the repository
@@ -29,12 +39,19 @@ jobs:
         with:
           go-version: 1.18
 
-      - name: Install system dependencies
+      - name: Install dependencies on Linux
+        if: runner.os == 'Linux'
         run: |
           sudo apt update -y
           sudo apt install -y build-essential cmake \
             libgeos-dev libjansson-dev libsnappy-dev liblzma-dev libz-dev \
-            zlib1g pkg-config libssl-dev gawk
+            zlib1g-dev pkg-config libssl-dev gawk
+
+      - name: Install dependencies on macOS
+        if: runner.os == 'macOS'
+        run: |
+          brew update
+          brew install argp-standalone gflags pkg-config snappy zlib geos jansson gawk openssl
 
       - name: Build and install TDengine
         run: |
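Read as a whole, the workflow change above fans one build job out across six runner images and splits dependency installation by `runner.os`. A quick local sketch of that per-OS split (the two package lists are taken from the workflow hunks; the loop itself is my own illustration, not part of the workflow):

```shell
# Enumerate the two dependency sets the updated workflow installs,
# keyed by the runner.os value that selects each install step.
for os in Linux macOS; do
  case "$os" in
    Linux)  deps="build-essential cmake libgeos-dev libjansson-dev libsnappy-dev liblzma-dev libz-dev zlib1g-dev pkg-config libssl-dev gawk" ;;
    macOS)  deps="argp-standalone gflags pkg-config snappy zlib geos jansson gawk openssl" ;;
  esac
  echo "$os -> $deps"
done
```

The same split is what the `if: runner.os == 'Linux'` / `if: runner.os == 'macOS'` guards encode, so each of the six matrix jobs runs exactly one of the two install steps.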
README-CN.md (415 lines changed)

@@ -10,7 +10,36 @@
 
 简体中文 | [English](README.md) | [TDengine 云服务](https://cloud.taosdata.com/?utm_medium=cn&utm_source=github) | 很多职位正在热招中,请看[这里](https://www.taosdata.com/careers/)
 
-# TDengine 简介
+# 目录
 
+1. [TDengine 简介](#1-tdengine-简介)
+1. [文档](#2-文档)
+1. [必备工具](#3-必备工具)
+   - [3.1 Linux预备](#31-linux系统)
+   - [3.2 macOS预备](#32-macos系统)
+   - [3.3 Windows预备](#33-windows系统)
+   - [3.4 克隆仓库](#34-克隆仓库)
+1. [构建](#4-构建)
+   - [4.1 Linux系统上构建](#41-linux系统上构建)
+   - [4.2 macOS系统上构建](#42-macos系统上构建)
+   - [4.3 Windows系统上构建](#43-windows系统上构建)
+1. [打包](#5-打包)
+1. [安装](#6-安装)
+   - [6.1 Linux系统上安装](#61-linux系统上安装)
+   - [6.2 macOS系统上安装](#62-macos系统上安装)
+   - [6.3 Windows系统上安装](#63-windows系统上安装)
+1. [快速运行](#7-快速运行)
+   - [7.1 Linux系统上运行](#71-linux系统上运行)
+   - [7.2 macOS系统上运行](#72-macos系统上运行)
+   - [7.3 Windows系统上运行](#73-windows系统上运行)
+1. [测试](#8-测试)
+1. [版本发布](#9-版本发布)
+1. [工作流](#10-工作流)
+1. [覆盖率](#11-覆盖率)
+1. [成为社区贡献者](#12-成为社区贡献者)
+
+# 1. 简介
+
 TDengine 是一款开源、高性能、云原生的时序数据库 (Time-Series Database, TSDB)。TDengine 能被广泛运用于物联网、工业互联网、车联网、IT 运维、金融等领域。除核心的时序数据库功能外,TDengine 还提供缓存、数据订阅、流式计算等功能,是一极简的时序数据处理平台,最大程度的减小系统设计的复杂度,降低研发和运营成本。与其他时序数据库相比,TDengine 的主要优势如下:
 
@@ -26,323 +55,335 @@ TDengine 是一款开源、高性能、云原生的时序数据库 (Time-Series
 
 - **核心开源**:TDengine 的核心代码包括集群功能全部开源,截止到2022年8月1日,全球超过 135.9k 个运行实例,GitHub Star 18.7k,Fork 4.4k,社区活跃。
 
-# 文档
+了解TDengine高级功能的完整列表,请 [点击](https://tdengine.com/tdengine/)。体验TDengine最简单的方式是通过[TDengine云平台](https://cloud.tdengine.com)。
 
-关于完整的使用手册,系统架构和更多细节,请参考 [TDengine 文档](https://docs.taosdata.com) 或者 [TDengine Documentation](https://docs.tdengine.com)。
+# 2. 文档
 
-# 构建
+关于完整的使用手册,系统架构和更多细节,请参考 [TDengine](https://www.taosdata.com/) 或者 [TDengine 官方文档](https://docs.taosdata.com)。
 
+用户可根据需求选择通过[容器](https://docs.taosdata.com/get-started/docker/)、[安装包](https://docs.taosdata.com/get-started/package/)、[Kubernetes](https://docs.taosdata.com/deployment/k8s/)来安装或直接使用无需安装部署的[云服务](https://cloud.taosdata.com/)。本快速指南是面向想自己编译、打包、测试的开发者的。
+
+如果想编译或测试TDengine连接器,请访问以下仓库: [JDBC连接器](https://github.com/taosdata/taos-connector-jdbc), [Go连接器](https://github.com/taosdata/driver-go), [Python连接器](https://github.com/taosdata/taos-connector-python), [Node.js连接器](https://github.com/taosdata/taos-connector-node), [C#连接器](https://github.com/taosdata/taos-connector-dotnet), [Rust连接器](https://github.com/taosdata/taos-connector-rust).
+
+# 3. 前置条件
+
 TDengine 目前可以在 Linux、 Windows、macOS 等平台上安装和运行。任何 OS 的应用也可以选择 taosAdapter 的 RESTful 接口连接服务端 taosd。CPU 支持 X64/ARM64,后续会支持 MIPS64、Alpha64、ARM32、RISC-V 等 CPU 架构。目前不支持使用交叉编译器构建。
 
-用户可根据需求选择通过源码、[容器](https://docs.taosdata.com/get-started/docker/)、[安装包](https://docs.taosdata.com/get-started/package/)或[Kubernetes](https://docs.taosdata.com/deployment/k8s/)来安装。本快速指南仅适用于通过源码安装。
+## 3.1 Linux系统
 
-TDengine 还提供一组辅助工具软件 taosTools,目前它包含 taosBenchmark(曾命名为 taosdemo)和 taosdump 两个软件。默认 TDengine 编译不包含 taosTools, 您可以在编译 TDengine 时使用`cmake .. -DBUILD_TOOLS=true` 来同时编译 taosTools。
+<details>
 
-为了构建TDengine, 请使用 [CMake](https://cmake.org/) 3.13.0 或者更高版本。
+<summary>安装Linux必备工具</summary>
 
-## 安装工具
+### Ubuntu 18.04、20.04、22.04
 
-### Ubuntu 18.04 及以上版本 & Debian:
-
 ```bash
-sudo apt-get install -y gcc cmake build-essential git libssl-dev libgflags2.2 libgflags-dev
+sudo apt-get udpate
+sudo apt-get install -y gcc cmake build-essential git libjansson-dev \
+  libsnappy-dev liblzma-dev zlib1g-dev pkg-config
 ```
 
-#### 为 taos-tools 安装编译需要的软件
+### CentOS 8
 
-为了在 Ubuntu/Debian 系统上编译 [taos-tools](https://github.com/taosdata/taos-tools) 需要安装如下软件:
-
 ```bash
-sudo apt install build-essential libjansson-dev libsnappy-dev liblzma-dev libz-dev zlib1g pkg-config
-```
-
-### CentOS 7.9
-
-```bash
-sudo yum install epel-release
 sudo yum update
-sudo yum install -y gcc gcc-c++ make cmake3 gflags git openssl-devel
-sudo ln -sf /usr/bin/cmake3 /usr/bin/cmake
+yum install -y epel-release gcc gcc-c++ make cmake git perl dnf-plugins-core
+yum config-manager --set-enabled powertools
+yum install -y zlib-static xz-devel snappy-devel jansson-devel pkgconfig libatomic-static libstdc++-static
 ```
 
-### CentOS 8/Fedora/Rocky Linux
+</details>
 
+## 3.2 macOS系统
+
+<details>
+
+<summary>安装macOS必备工具</summary>
+
+根据提示安装依赖工具 [brew](https://brew.sh/).
+
 ```bash
-sudo dnf install -y gcc gcc-c++ gflags make cmake epel-release git openssl-devel
-```
-
-#### 在 CentOS 上构建 taosTools 安装依赖软件
-
-#### CentOS 7.9
-
-```
-sudo yum install -y zlib-devel zlib-static xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libatomic-static libstdc++-static openssl-devel
-```
-
-#### CentOS 8/Fedora/Rocky Linux
-
-```
-sudo yum install -y epel-release
-sudo yum install -y dnf-plugins-core
-sudo yum config-manager --set-enabled powertools
-sudo yum install -y zlib-devel zlib-static xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libatomic-static libstdc++-static openssl-devel
-```
-
-注意:由于 snappy 缺乏 pkg-config 支持(参考 [链接](https://github.com/google/snappy/pull/86)),会导致 cmake 提示无法发现 libsnappy,实际上工作正常。
-
-若 powertools 安装失败,可以尝试改用:
-```
-sudo yum config-manager --set-enabled powertools
-```
-
-#### CentOS + devtoolset
-
-除上述编译依赖包,需要执行以下命令:
-
-```
-sudo yum install centos-release-scl
-sudo yum install devtoolset-9 devtoolset-9-libatomic-devel
-scl enable devtoolset-9 -- bash
-```
-
-### macOS
-
-```
 brew install argp-standalone gflags pkgconfig
 ```
 
-### 设置 golang 开发环境
+</details>
 
-TDengine 包含数个使用 Go 语言开发的组件,比如taosAdapter, 请参考 golang.org 官方文档设置 go 开发环境。
+## 3.3 Windows系统
 
-请使用 1.20 及以上版本。对于中国用户,我们建议使用代理来加速软件包下载。
+<details>
 
-```
-go env -w GO111MODULE=on
-go env -w GOPROXY=https://goproxy.cn,direct
-```
+<summary>安装Windows必备工具</summary>
 
-缺省是不会构建 taosAdapter, 但您可以使用以下命令选择构建 taosAdapter 作为 RESTful 接口的服务。
+进行中。
 
-```
-cmake .. -DBUILD_HTTP=false
-```
+</details>
 
-### 设置 rust 开发环境
+## 3.4 克隆仓库
 
-TDengine 包含数个使用 Rust 语言开发的组件. 请参考 rust-lang.org 官方文档设置 rust 开发环境。
+通过如下命令将TDengine仓库克隆到指定计算机:
 
-## 获取源码
-
-首先,你需要从 GitHub 克隆源码:
-
 ```bash
 git clone https://github.com/taosdata/TDengine.git
 cd TDengine
 ```
-如果使用 https 协议下载比较慢,可以通过修改 ~/.gitconfig 文件添加以下两行设置使用 ssh 协议下载。需要首先上传 ssh 密钥到 GitHub,详细方法请参考 GitHub 官方文档。
 
-```
-[url "git@github.com:"]
-insteadOf = https://github.com/
-```
-## 特别说明
+# 4. 构建
 
-[JDBC 连接器](https://github.com/taosdata/taos-connector-jdbc), [Go 连接器](https://github.com/taosdata/driver-go),[Python 连接器](https://github.com/taosdata/taos-connector-python),[Node.js 连接器](https://github.com/taosdata/taos-connector-node),[C# 连接器](https://github.com/taosdata/taos-connector-dotnet) ,[Rust 连接器](https://github.com/taosdata/taos-connector-rust) 和 [Grafana 插件](https://github.com/taosdata/grafanaplugin)已移到独立仓库。
+TDengine 还提供一组辅助工具软件 taosTools,目前它包含 taosBenchmark(曾命名为 taosdemo)和 taosdump 两个软件。默认 TDengine 编译不包含 taosTools, 您可以在编译 TDengine 时使用`cmake .. -DBUILD_TOOLS=true` 来同时编译 taosTools。
 
+为了构建TDengine, 请使用 [CMake](https://cmake.org/) 3.13.0 或者更高版本。
+
-## 构建 TDengine
+## 4.1 Linux系统上构建
 
-### Linux 系统
+<details>
 
-可以运行代码仓库中的 `build.sh` 脚本编译出 TDengine 和 taosTools(包含 taosBenchmark 和 taosdump)。
+<summary>Linux系统上构建步骤</summary>
 
+可以通过以下命令使用脚本 `build.sh` 编译TDengine和taosTools,包括taosBenchmark和taosdump:
+
 ```bash
 ./build.sh
 ```
 
-这个脚本等价于执行如下命令:
+也可以通过以下命令进行构建:
 
 ```bash
-mkdir debug
-cd debug
+mkdir debug && cd debug
 cmake .. -DBUILD_TOOLS=true -DBUILD_CONTRIB=true
 make
 ```
 
-您也可以选择使用 jemalloc 作为内存分配器,替代默认的 glibc:
+可以使用Jemalloc作为内存分配器,而不是使用glibc:
 
 ```bash
-apt install autoconf
 cmake .. -DJEMALLOC_ENABLED=true
 ```
 
-在 X86-64、X86、arm64 平台上,TDengine 生成脚本可以自动检测机器架构。也可以手动配置 CPUTYPE 参数来指定 CPU 类型,如 aarch64 等。
+TDengine构建脚本可以自动检测x86、x86-64、arm64平台上主机的体系结构。
+您也可以通过CPUTYPE选项手动指定架构:
 
-aarch64:
-
 ```bash
 cmake .. -DCPUTYPE=aarch64 && cmake --build .
 ```
 
-### Windows 系统
+</details>
 
-如果你使用的是 Visual Studio 2013 版本:
+## 4.2 macOS系统上构建
 
-打开 cmd.exe,执行 vcvarsall.bat 时,为 64 位操作系统指定“x86_amd64”,为 32 位操作系统指定“x86”。
+<details>
 
-```bash
+<summary>macOS系统上构建步骤</summary>
+
+请安装XCode命令行工具和cmake。使用XCode 11.4+在Catalina和Big Sur上完成验证。
+
+```shell
 mkdir debug && cd debug
-"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat" < x86_amd64 | x86 >
+cmake .. && cmake --build .
+```
+
+</details>
+
+## 4.3 Windows系统上构建
+
+<details>
+
+<summary>Windows系统上构建步骤</summary>
+
+如果您使用的是Visual Studio 2013,请执行“cmd.exe”打开命令窗口执行如下命令。
+执行vcvarsall.bat时,64位的Windows请指定“amd64”,32位的Windows请指定“x86”。
+
+```cmd
+mkdir debug && cd debug
+"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat" < amd64 | x86 >
 cmake .. -G "NMake Makefiles"
 nmake
 ```
 
-如果你使用的是 Visual Studio 2019 或 2017 版本:
+如果您使用Visual Studio 2019或2017:
 
-打开 cmd.exe,执行 vcvarsall.bat 时,为 64 位操作系统指定“x64”,为 32 位操作系统指定“x86”。
+请执行“cmd.exe”打开命令窗口执行如下命令。
+执行vcvarsall.bat时,64位的Windows请指定“x64”,32位的Windows请指定“x86”。
 
-```bash
+```cmd
 mkdir debug && cd debug
 "c:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\Build\vcvarsall.bat" < x64 | x86 >
 cmake .. -G "NMake Makefiles"
 nmake
 ```
 
-你也可以从开始菜单中找到"Visual Studio < 2019 | 2017 >"菜单项,根据你的系统选择"x64 Native Tools Command Prompt for VS < 2019 | 2017 >"或"x86 Native Tools Command Prompt for VS < 2019 | 2017 >",打开命令行窗口,执行:
+或者,您可以通过点击Windows开始菜单打开命令窗口->“Visual Studio < 2019 | 2017 >”文件夹->“x64原生工具命令提示符VS < 2019 | 2017 >”或“x86原生工具命令提示符VS < 2019 | 2017 >”取决于你的Windows是什么架构,然后执行命令如下:
 
-```bash
+```cmd
 mkdir debug && cd debug
 cmake .. -G "NMake Makefiles"
 nmake
 ```
+
+</details>
 
-### macOS 系统
+# 5. 打包
 
-安装 XCode 命令行工具和 cmake. 在 Catalina 和 Big Sur 操作系统上,需要安装 XCode 11.4+ 版本。
+由于一些组件依赖关系,TDengine社区安装程序不能仅由该存储库创建。我们仍在努力改进。
 
+# 6. 安装
+
+## 6.1 Linux系统上安装
+
+<details>
+
+<summary>Linux系统上安装详细步骤</summary>
+
+构建成功后,TDengine可以通过以下命令进行安装:
+
 ```bash
-mkdir debug && cd debug
-cmake .. && cmake --build .
+sudo make install
 ```
+
+从源代码安装还将为TDengine配置服务管理。用户也可以使用[TDengine安装包](https://docs.taosdata.com/get-started/package/)进行安装。
 
-# 安装
+</details>
 
-## Linux 系统
+## 6.2 macOS系统上安装
 
-生成完成后,安装 TDengine:
+<details>
 
+<summary>macOS系统上安装详细步骤</summary>
+
+构建成功后,TDengine可以通过以下命令进行安装:
+
 ```bash
 sudo make install
 ```
 
-用户可以在[文件目录结构](https://docs.taosdata.com/reference/directory/)中了解更多在操作系统中生成的目录或文件。
+</details>
 
-从源代码安装也会为 TDengine 配置服务管理 ,用户也可以选择[从安装包中安装](https://docs.taosdata.com/get-started/package/)。
+## 6.3 Windows系统上安装
 
-安装成功后,在终端中启动 TDengine 服务:
+<details>
 
-```bash
-sudo systemctl start taosd
-```
+<summary>Windows系统上安装详细步骤</summary>
 
-用户可以使用 TDengine CLI 来连接 TDengine 服务,在终端中,输入:
+构建成功后,TDengine可以通过以下命令进行安装:
 
-```bash
-taos
-```
-
-如果 TDengine CLI 连接服务成功,将会打印出欢迎消息和版本信息。如果失败,则会打印出错误消息。
-
-## Windows 系统
-
-生成完成后,安装 TDengine:
-
 ```cmd
 nmake install
 ```
 
-## macOS 系统
+</details>
 
-生成完成后,安装 TDengine:
+# 7. 快速运行
 
+## 7.1 Linux系统上运行
+
+<details>
+
+<summary>Linux系统上运行详细步骤</summary>
+
+在Linux系统上安装TDengine完成后,在终端运行如下命令启动服务:
+
 ```bash
-sudo make install
+sudo systemctl start taosd
 ```
+
+然后用户可以通过如下命令使用TDengine命令行连接TDengine服务:
 
-用户可以在[文件目录结构](https://docs.taosdata.com/reference/directory/)中了解更多在操作系统中生成的目录或文件。
-
-从源代码安装也会为 TDengine 配置服务管理 ,用户也可以选择[从安装包中安装](https://docs.taosdata.com/get-started/package/)。
-
-安装成功后,可以在应用程序中双击 TDengine 图标启动服务,或者在终端中启动 TDengine 服务:
-
-```bash
-sudo launchctl start com.tdengine.taosd
-```
-
-用户可以使用 TDengine CLI 来连接 TDengine 服务,在终端中,输入:
-
 ```bash
 taos
 ```
 
-如果 TDengine CLI 连接服务成功,将会打印出欢迎消息和版本信息。如果失败,则会打印出错误消息。
+如果TDengine 命令行连接服务器成功,系统将打印欢迎信息和版本信息。否则,将显示连接错误信息。
 
-## 快速运行
+如果您不想将TDengine作为服务运行,您可以在当前终端中运行它。例如,要在构建完成后快速启动TDengine服务器,在终端中运行以下命令:(我们以Linux为例,Windows上的命令为 `taosd.exe`)
 
-如果不希望以服务方式运行 TDengine,也可以在终端中直接运行它。也即在生成完成后,执行以下命令(在 Windows 下,生成的可执行文件会带有 .exe 后缀,例如会名为 taosd.exe ):
-
 ```bash
 ./build/bin/taosd -c test/cfg
 ```
 
-在另一个终端,使用 TDengine CLI 连接服务器:
+在另一个终端上,使用TDengine命令行连接服务器:
 
 ```bash
 ./build/bin/taos -c test/cfg
 ```
 
-"-c test/cfg"指定系统配置文件所在目录。
+选项 `-c test/cfg` 指定系统配置文件的目录。
 
-# 体验 TDengine
+</details>
 
-在 TDengine 终端中,用户可以通过 SQL 命令来创建/删除数据库、表等,并进行插入查询操作。
+## 7.2 macOS系统上运行
 
-```sql
-CREATE DATABASE demo;
-USE demo;
-CREATE TABLE t (ts TIMESTAMP, speed INT);
-INSERT INTO t VALUES('2019-07-15 00:00:00', 10);
-INSERT INTO t VALUES('2019-07-15 01:00:00', 20);
-SELECT * FROM t;
-ts | speed |
-===================================
-19-07-15 00:00:00.000| 10|
-19-07-15 01:00:00.000| 20|
-Query OK, 2 row(s) in set (0.001700s)
+<details>
+
+<summary>macOS系统上运行详细步骤</summary>
+
+在macOS上安装完成后启动服务,双击/applications/TDengine启动程序,或者在终端中执行如下命令:
+
+```bash
+sudo launchctl start com.tdengine.taosd
 ```
 
-# 应用开发
+然后在终端中使用如下命令通过TDengine命令行连接TDengine服务器:
 
-## 官方连接器
+```bash
+taos
+```
 
-TDengine 提供了丰富的应用程序开发接口,其中包括 C/C++、Java、Python、Go、Node.js、C# 、RESTful 等,便于用户快速开发应用:
+如果TDengine命令行连接服务器成功,系统将打印欢迎信息和版本信息。否则,将显示错误信息。
 
-- [Java](https://docs.taosdata.com/reference/connector/java/)
-- [C/C++](https://docs.taosdata.com/reference/connector/cpp/)
-- [Python](https://docs.taosdata.com/reference/connector/python/)
-- [Go](https://docs.taosdata.com/reference/connector/go/)
-- [Node.js](https://docs.taosdata.com/reference/connector/node/)
-- [Rust](https://docs.taosdata.com/reference/connector/rust/)
-- [C#](https://docs.taosdata.com/reference/connector/csharp/)
-- [RESTful API](https://docs.taosdata.com/reference/connector/rest-api/)
-
-# 成为社区贡献者
+</details>
 
+## 7.3 Windows系统上运行
+
+<details>
+
+<summary>Windows系统上运行详细步骤</summary>
+
+您可以使用以下命令在Windows平台上启动TDengine服务器:
+
+```cmd
+.\build\bin\taosd.exe -c test\cfg
+```
+
+在另一个终端上,使用TDengine命令行连接服务器:
+
+```cmd
+.\build\bin\taos.exe -c test\cfg
+```
+
+选项 `-c test/cfg` 指定系统配置文件的目录。
+
+</details>
+
+# 8. 测试
+
+有关如何在TDengine上运行不同类型的测试,请参考 [TDengine测试](./tests/README-CN.md)
+
+# 9. 版本发布
+
+TDengine发布版本的完整列表,请参考 [版本列表](https://github.com/taosdata/TDengine/releases)
+
+# 10. 工作流
+
+TDengine构建检查工作流可以在参考 [Github Action](https://github.com/taosdata/TDengine/actions/workflows/taosd-ci-build.yml), 更多的工作流正在创建中,将很快可用。
+
+# 11. 覆盖率
+
+最新的TDengine测试覆盖率报告可参考 [coveralls.io](https://coveralls.io/github/taosdata/TDengine)
+
+<details>
+
+<summary>如何在本地运行测试覆盖率报告?</summary>
+
+在本地创建测试覆盖率报告(HTML格式),请运行以下命令:
+
+```bash
+cd tests
+bash setup-lcov.sh -v 1.16 && ./run_local_coverage.sh -b main -c task
+# on main branch and run cases in longtimeruning_cases.task
+# for more infomation about options please refer to ./run_local_coverage.sh -h
+```
+> **注意:**
+> 请注意,-b和-i选项将使用-DCOVER=true选项重新编译TDengine,这可能需要花费一些时间。
+
+</details>
+
+# 12. 成为社区贡献者
+
 点击 [这里](https://www.taosdata.com/contributor),了解如何成为 TDengine 的贡献者。
-
-# 加入技术交流群
-
-TDengine 官方社群「物联网大数据群」对外开放,欢迎您加入讨论。搜索微信号 "tdengine",加小 T 为好友,即可入群。
README.md (28 lines changed)

@@ -10,10 +10,10 @@
 
 [](https://github.com/taosdata/TDengine/actions/workflows/taosd-ci-build.yml)
 [](https://coveralls.io/github/taosdata/TDengine?branch=3.0)
-
+[](https://github.com/feici02/TDengine/commits/main/)
 <br />
-
-
+[](https://github.com/taosdata/TDengine/releases)
+[](https://github.com/taosdata/TDengine/blob/main/LICENSE)
 [](https://bestpractices.coreinfrastructure.org/projects/4201)
 <br />
 [](https://twitter.com/tdenginedb)
@@ -74,8 +74,14 @@ For a full list of TDengine competitive advantages, please [check here](https://
 
 For user manual, system design and architecture, please refer to [TDengine Documentation](https://docs.tdengine.com) ([TDengine 文档](https://docs.taosdata.com))
 
+You can choose to install TDengine via [container](https://docs.tdengine.com/get-started/deploy-in-docker/), [installation package](https://docs.tdengine.com/get-started/deploy-from-package/), [Kubernetes](https://docs.tdengine.com/operations-and-maintenance/deploy-your-cluster/#kubernetes-deployment) or try [fully managed service](https://cloud.tdengine.com/) without installation. This quick guide is for developers who want to contribute, build, release and test TDengine by themselves.
+
+For contributing/building/testing TDengine Connectors, please check the following repositories: [JDBC Connector](https://github.com/taosdata/taos-connector-jdbc), [Go Connector](https://github.com/taosdata/driver-go), [Python Connector](https://github.com/taosdata/taos-connector-python), [Node.js Connector](https://github.com/taosdata/taos-connector-node), [C# Connector](https://github.com/taosdata/taos-connector-dotnet), [Rust Connector](https://github.com/taosdata/taos-connector-rust).
+
 # 3. Prerequisites
 
+At the moment, TDengine server supports running on Linux/Windows/MacOS systems. Any application can also choose the RESTful interface provided by taosAdapter to connect the taosd service. TDengine supports X64/ARM64 CPU, and it will support MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures in the future. Right now we don't support build with cross-compiling environment.
+
 ## 3.1 On Linux
 
 <details>
@@ -85,7 +91,7 @@ For user manual, system design and architecture, please refer to [TDengine Docum
 ### For Ubuntu 18.04、20.04、22.04
 
 ```bash
-sudo apt-get udpate
+sudo apt-get update
 sudo apt-get install -y gcc cmake build-essential git libjansson-dev \
   libsnappy-dev liblzma-dev zlib1g-dev pkg-config
 ```
@@ -127,10 +133,6 @@ Work in Progress.
 
 ## 3.4 Clone the repo
 
-<details>
-
-<summary>Clone the repo</summary>
-
 Clone the repository to the target machine:
 
 ```bash
@@ -138,21 +140,13 @@ git clone https://github.com/taosdata/TDengine.git
 cd TDengine
 ```
 
-> **NOTE:**
-> TDengine Connectors can be found in following repositories: [JDBC Connector](https://github.com/taosdata/taos-connector-jdbc), [Go Connector](https://github.com/taosdata/driver-go), [Python Connector](https://github.com/taosdata/taos-connector-python), [Node.js Connector](https://github.com/taosdata/taos-connector-node), [C# Connector](https://github.com/taosdata/taos-connector-dotnet), [Rust Connector](https://github.com/taosdata/taos-connector-rust).
-
 </details>
 
 # 4. Building
 
-At the moment, TDengine server supports running on Linux/Windows/MacOS systems. Any application can also choose the RESTful interface provided by taosAdapter to connect the taosd service. TDengine supports X64/ARM64 CPU, and it will support MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures in the future. Right now we don't support build with cross-compiling environment.
-
-You can choose to install through source code, [container](https://docs.tdengine.com/get-started/deploy-in-docker/), [installation package](https://docs.tdengine.com/get-started/deploy-from-package/) or [Kubernetes](https://docs.tdengine.com/operations-and-maintenance/deploy-your-cluster/#kubernetes-deployment). This quick guide only applies to install from source.
-
 TDengine provide a few useful tools such as taosBenchmark (was named taosdemo) and taosdump. They were part of TDengine. By default, TDengine compiling does not include taosTools. You can use `cmake .. -DBUILD_TOOLS=true` to make them be compiled with TDengine.
 
-To build TDengine, use [CMake](https://cmake.org/) 3.13.0 or higher versions in the project directory.
+TDengine requires [GCC](https://gcc.gnu.org/) 9.3.1 or higher and [CMake](https://cmake.org/) 3.13.0 or higher for building.
 
 ## 4.1 Build on Linux
@@ -166,6 +166,10 @@ IF(${BUILD_WITH_ANALYSIS})
   set(BUILD_WITH_S3 ON)
 ENDIF()
 
+IF(${TD_LINUX})
+  set(BUILD_WITH_ANALYSIS ON)
+ENDIF()
+
 IF(${BUILD_S3})
 
 IF(${BUILD_WITH_S3})
@@ -205,13 +209,6 @@ option(
     off
 )
 
-option(
-    BUILD_WITH_NURAFT
-    "If build with NuRaft"
-    OFF
-)
-
 option(
     BUILD_WITH_UV
     "If build with libuv"
@@ -12,7 +12,7 @@ ExternalProject_Add(curl2
     BUILD_IN_SOURCE TRUE
     BUILD_ALWAYS 1
     UPDATE_COMMAND ""
-    CONFIGURE_COMMAND ${CONTRIB_CONFIG_ENV} ./configure --prefix=$ENV{HOME}/.cos-local.2 --with-ssl=$ENV{HOME}/.cos-local.2 --enable-websockets --enable-shared=no --disable-ldap --disable-ldaps --without-brotli --without-zstd --without-libidn2 --without-nghttp2 --without-libpsl #--enable-debug
+    CONFIGURE_COMMAND ${CONTRIB_CONFIG_ENV} ./configure --prefix=$ENV{HOME}/.cos-local.2 --with-ssl=$ENV{HOME}/.cos-local.2 --enable-websockets --enable-shared=no --disable-ldap --disable-ldaps --without-brotli --without-zstd --without-libidn2 --without-nghttp2 --without-libpsl --without-librtmp #--enable-debug
     BUILD_COMMAND make -j
     INSTALL_COMMAND make install
     TEST_COMMAND ""
@@ -1,13 +0,0 @@
-
-# taos-tools
-ExternalProject_Add(taos-tools
-    GIT_REPOSITORY https://github.com/taosdata/taos-tools.git
-    GIT_TAG 3.0
-    SOURCE_DIR "${TD_SOURCE_DIR}/tools/taos-tools"
-    BINARY_DIR ""
-    #BUILD_IN_SOURCE TRUE
-    CONFIGURE_COMMAND ""
-    BUILD_COMMAND ""
-    INSTALL_COMMAND ""
-    TEST_COMMAND ""
-)
@@ -42,11 +42,6 @@ endif()
 set(CONTRIB_TMP_FILE "${CMAKE_BINARY_DIR}/deps_tmp_CMakeLists.txt.in")
 configure_file("${TD_SUPPORT_DIR}/deps_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
 
-# taos-tools
-if(${BUILD_TOOLS})
-    cat("${TD_SUPPORT_DIR}/taostools_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
-endif()
-
 # taosws-rs
 if(${WEBSOCKET})
     cat("${TD_SUPPORT_DIR}/taosws_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
@@ -210,9 +205,18 @@ ENDIF()
 # download dependencies
 configure_file(${CONTRIB_TMP_FILE} "${TD_CONTRIB_DIR}/deps-download/CMakeLists.txt")
 execute_process(COMMAND "${CMAKE_COMMAND}" -G "${CMAKE_GENERATOR}" .
-    WORKING_DIRECTORY "${TD_CONTRIB_DIR}/deps-download")
+    WORKING_DIRECTORY "${TD_CONTRIB_DIR}/deps-download"
+    RESULT_VARIABLE result)
+IF(NOT result EQUAL "0")
+    message(FATAL_ERROR "CMake step for dowloading dependencies failed: ${result}")
+ENDIF()
 
 execute_process(COMMAND "${CMAKE_COMMAND}" --build .
-    WORKING_DIRECTORY "${TD_CONTRIB_DIR}/deps-download")
+    WORKING_DIRECTORY "${TD_CONTRIB_DIR}/deps-download"
+    RESULT_VARIABLE result)
+IF(NOT result EQUAL "0")
+    message(FATAL_ERROR "CMake step for building dependencies failed: ${result}")
+ENDIF()
 
 # ================================================================================================
 # Build
@@ -20,14 +20,6 @@ if(${BUILD_WITH_SQLITE})
     add_subdirectory(sqlite)
 endif(${BUILD_WITH_SQLITE})
 
-if(${BUILD_WITH_CRAFT})
-    add_subdirectory(craft)
-endif(${BUILD_WITH_CRAFT})
-
-if(${BUILD_WITH_TRAFT})
-    # add_subdirectory(traft)
-endif(${BUILD_WITH_TRAFT})
-
 if(${BUILD_S3})
     add_subdirectory(azure)
 endif()
@@ -26,7 +26,7 @@ CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name
 SUBTABLE(expression) AS subquery
 
 stream_options: {
- TRIGGER        [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time]
+ TRIGGER        [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time | FORCE_WINDOW_CLOSE]
 WATERMARK      time
 IGNORE EXPIRED [0|1]
 DELETE_MARK    time
@@ -56,13 +56,17 @@ window_clause: {
 }
 ```
 
-The subquery supports session windows, state windows, and sliding windows. When used with supertables, session windows and state windows must be used together with `partition by tbname`.
+The subquery supports session windows, state windows, time windows, event windows, and count windows. When used with supertables, state windows, event windows, and count windows must be used together with `partition by tbname`.
 
 1. SESSION is a session window, where tol_val is the maximum range of the time interval. All data within the tol_val time interval belong to the same window. If the time interval between two consecutive data points exceeds tol_val, the next window automatically starts.
 
-2. EVENT_WINDOW is an event window, defined by start and end conditions. The window starts when the start_trigger_condition is met and closes when the end_trigger_condition is met. start_trigger_condition and end_trigger_condition can be any condition expressions supported by TDengine and can include different columns.
+2. STATE_WINDOW is a state window. The col is used to identify the state value. Values with the same state value belong to the same state window. When the value of col changes, the current window closes and the next window automatically opens.
 
-3. COUNT_WINDOW is a counting window, divided by a fixed number of data rows. count_val is a constant, a positive integer, and must be at least 2 and less than 2147483648. count_val represents the maximum number of data rows in each COUNT_WINDOW. If the total number of data rows cannot be evenly divided by count_val, the last window will have fewer rows than count_val. sliding_val is a constant, representing the number of rows the window slides, similar to the SLIDING in INTERVAL.
+3. INTERVAL is a time window, which can be further divided into sliding time windows and tumbling time windows. The INTERVAL clause specifies the equal time period of the window, and the SLIDING clause specifies the time by which the window slides forward. When the value of interval_val is equal to the value of sliding_val, the time window is a tumbling time window; otherwise, it is a sliding time window. Note: the value of sliding_val must be less than or equal to the value of interval_val.
 
+4. EVENT_WINDOW is an event window, defined by start and end conditions. The window starts when the start_trigger_condition is met and closes when the end_trigger_condition is met. start_trigger_condition and end_trigger_condition can be any condition expressions supported by TDengine and can include different columns.
 
+5. COUNT_WINDOW is a counting window, divided by a fixed number of data rows. count_val is a constant, a positive integer, and must be at least 2 and less than 2147483648. count_val represents the maximum number of data rows in each COUNT_WINDOW. If the total number of data rows cannot be evenly divided by count_val, the last window will have fewer rows than count_val. sliding_val is a constant representing the number of rows the window slides, similar to SLIDING in INTERVAL.
 
 The definition of a window is exactly the same as in the time-series data window query; for details, refer to the TDengine window functions section.
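As a hedged sketch of the window clauses described above (the stream, table, and column names `meters`, `avg_vol_s`, `cnt_s`, and `voltage` are hypothetical, not from this page), streams over a supertable might be created as:

```sql
-- Tumbling 1-minute time window (interval_val == sliding_val):
CREATE STREAM avg_vol_s TRIGGER WINDOW_CLOSE INTO avg_vol AS
  SELECT _wstart, AVG(voltage) FROM meters
  PARTITION BY tbname INTERVAL(1m) SLIDING(1m);

-- Count window of 10 rows sliding by 5; count windows on a supertable
-- require PARTITION BY tbname:
CREATE STREAM cnt_s INTO cnt_out AS
  SELECT _wstart, COUNT(*) FROM meters
  PARTITION BY tbname COUNT_WINDOW(10, 5);
```

These statements need a running TDengine instance with the assumed schema; they illustrate where the window clause sits in the subquery rather than a recommended configuration.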
@@ -31,12 +31,12 @@ There are many parameters for creating consumers, which flexibly support various
 
 | Parameter Name | Type | Description | Remarks |
 | :------------: | :--: | ----------- | ------- |
-| `td.connect.ip` | string | Server IP address | |
+| `td.connect.ip` | string | Server FQDN | IP address or hostname |
 | `td.connect.user` | string | Username | |
 | `td.connect.pass` | string | Password | |
 | `td.connect.port` | integer | Server port number | |
-| `group.id` | string | Consumer group ID; the same consumer group shares consumption progress | **Required**. Maximum length: 192.<br />Each topic can have up to 100 consumer groups |
+| `group.id` | string | Consumer group ID; the same consumer group shares consumption progress | **Required**. Maximum length: 192; excess length is truncated.<br />Each topic can have up to 100 consumer groups |
-| `client.id` | string | Client ID | Maximum length: 192 |
+| `client.id` | string | Client ID | Maximum length: 255; excess length is truncated |
 | `auto.offset.reset` | enum | Initial position of the consumer group subscription | `earliest`: default (version < 3.2.0.0), subscribe from the beginning;<br/>`latest`: default (version >= 3.2.0.0), subscribe only from the latest data;<br/>`none`: cannot subscribe without a committed offset |
 | `enable.auto.commit` | boolean | Whether to commit offsets automatically; true: automatic commit, the client application does not need to commit; false: the client application must commit manually | Default is true |
 | `auto.commit.interval.ms` | integer | Interval for automatic offset commits, in milliseconds | Default is 5000 |
@@ -1,882 +0,0 @@
---
title: Deploying Your Cluster
slug: /operations-and-maintenance/deploy-your-cluster
---

Since TDengine was designed with a distributed architecture from the beginning, it has powerful horizontal scaling capabilities to meet growing data processing needs. TDengine therefore supports clustering and has open-sourced this core functionality. Users can choose from four deployment methods according to their environment and needs: manual deployment, Docker deployment, Kubernetes deployment, and Helm deployment.

## Manual Deployment

### Deploying taosd

taosd is the most important service component in the TDengine cluster. This section describes the steps to manually deploy a taosd cluster.

#### 1. Clear Data

If the physical nodes for setting up the cluster contain previous test data or have had other versions of TDengine installed (such as 1.x/2.x), please delete them and clear all data first.

#### 2. Check Environment

Before deploying the TDengine cluster, it is crucial to thoroughly check the network settings of all dnodes and of the physical nodes where the applications are located. Here are the steps:

- Step 1: Execute the `hostname -f` command on each physical node to confirm that all node hostnames are unique. This step can be omitted for nodes where only application drivers are located.
- Step 2: Execute the `ping host` command on each physical node, where host is the hostname of another physical node. This step detects network connectivity between the current node and the other physical nodes. If the ping fails, immediately check the network and DNS settings. For Linux operating systems, check the `/etc/hosts` file; for Windows operating systems, check the `C:\Windows\system32\drivers\etc\hosts` file. Network issues will prevent the formation of a cluster, so be sure to resolve them.
- Step 3: Repeat the above network checks on the physical nodes where the applications run. If the network is faulty, the application will not be able to connect to the taosd service. In that case, carefully check the DNS settings or hosts file of the application's physical node to ensure it is configured correctly.
- Step 4: Check ports to ensure that all hosts in the cluster can communicate over TCP on port 6030.

By following these steps, you can ensure that all nodes communicate smoothly at the network level, laying a solid foundation for the successful deployment of the TDengine cluster.
#### 3. Installation

To ensure consistency and stability within the cluster, install the same version of TDengine on all physical nodes.

#### 4. Modify Configuration

Modify the TDengine configuration file (the configuration files of all nodes need to be modified). Assuming the endpoint of the first dnode to be started is `h1.tdengine.com:6030`, the cluster-related parameters are as follows.

```shell
# firstEp is the first dnode that each dnode connects to after the initial startup
firstEp h1.tdengine.com:6030
# Must be configured to the FQDN of this dnode; if there is only one hostname on this machine, you can comment out or delete the following line
fqdn h1.tdengine.com
# Configure the port of this dnode, default is 6030
serverPort 6030
```

The parameters that must be modified are firstEp and fqdn. For each dnode, the firstEp configuration should remain consistent, but fqdn must be set to the FQDN of the dnode it is located on. Other parameters do not need to be modified unless you are clear on why they should be changed.

For dnodes wishing to join the cluster, it is essential to ensure that the cluster-related parameters listed in the table below are set identically on all of them. Any mismatch may prevent the dnode from successfully joining the cluster.

| Parameter Name   | Meaning                                                |
|:----------------:|:------------------------------------------------------:|
| statusInterval   | Interval at which dnode reports status to mnode        |
| timezone         | Time zone                                              |
| locale           | System locale information and encoding format          |
| charset          | Character set encoding                                 |
| ttlChangeOnWrite | Whether TTL expiration changes with table modification |
#### 5. Start

Start the first dnode, such as `h1.tdengine.com`, following the steps above. Then run taos in a terminal to start TDengine's CLI program, and execute the `show dnodes` command in it to view all dnode information in the current cluster.

```shell
taos> show dnodes;
 id | endpoint             | vnodes | support_vnodes | status | create_time             | note |
===================================================================================
  1 | h1.tdengine.com:6030 |      0 |           1024 | ready  | 2022-07-16 10:50:42.673 |      |
```

You can see that the endpoint of the dnode that has just started is `h1.tdengine.com:6030`. This address is the firstEp of the new cluster.

#### 6. Adding dnode

Following the steps above, start taosd on each physical node. Each dnode needs its firstEp parameter in taos.cfg configured to the endpoint of the first node of the new cluster, in this case `h1.tdengine.com:6030`. On the machine where the first dnode is located, run taos in a terminal to open TDengine's CLI, log into the TDengine cluster, and execute the following SQL.

```shell
create dnode "h2.tdengine.com:6030"
```

This adds the new dnode's endpoint to the cluster's endpoint list. You need to put `fqdn:port` in double quotes, otherwise running it will cause an error. Replace the example h2.tdengine.com:6030 with the endpoint of the new dnode. Then execute the following SQL to see whether the new node has joined successfully. If the dnode you want to add is currently offline, please refer to the "Common Issues" section later in this chapter for a solution.

```shell
show dnodes;
```

In the output, confirm that the fqdn and port of the listed dnode are consistent with the endpoint you just added; if not, correct it to the right endpoint. By following the steps above, you can keep adding new dnodes to the cluster one by one, expanding the cluster and improving overall performance. Make sure to follow the correct process when adding new nodes; this helps maintain the stability and reliability of the cluster.

**Tips**

- Any dnode that has joined the cluster can serve as the firstEp for nodes added later. The firstEp parameter only takes effect when a dnode first joins the cluster; after joining, the dnode saves the latest mnode endpoint list and no longer depends on this parameter. The firstEp parameter in the configuration file is otherwise mainly used for client connections: if no parameters are set for TDengine's CLI, it defaults to connecting to the node specified by firstEp.
- Two dnodes that have not configured the firstEp parameter will run independently after starting. It is then not possible to join one dnode to the other to form a cluster.
- TDengine does not allow merging two independent clusters into a new cluster.

#### 7. Adding mnode

When creating a TDengine cluster, the first dnode automatically becomes the cluster's mnode, responsible for managing and coordinating the cluster. To achieve high availability of the mnode, subsequent dnodes need to create mnodes manually. Note that a cluster can have at most 3 mnodes, and only one mnode can be created on each dnode. When the number of dnodes in the cluster reaches or exceeds 3, you can create mnodes for the existing cluster. On the first dnode, log into TDengine through the CLI program taos, then execute the following SQL.

```shell
create mnode on dnode <dnodeId>
```

Replace the dnodeId in the example above with the serial number of the newly created dnode (obtained by executing the `show dnodes` command). Finally, execute `show mnodes` to see whether the newly created mnode has successfully joined the cluster.

**Tips**

While setting up a TDengine cluster, if a new node always shows as offline after the create dnode command, follow these steps to troubleshoot.

- Step 1: Check whether the taosd service on the new node has started normally. You can confirm this by checking the log files or using the ps command.
- Step 2: If the taosd service has started, check whether the new node's network connection is working and whether the firewall is turned off. Network issues or firewall settings may prevent the node from communicating with the rest of the cluster.
- Step 3: Use the taos -h fqdn command to connect to the new node, then execute the show dnodes command. This shows the new node's running status as an independent cluster. If the list displayed is inconsistent with that shown on the main node, the new node has likely formed a single-node cluster of its own. To resolve this: first, stop the taosd service on the new node; second, clear all files in the dataDir directory specified in the new node's taos.cfg, which deletes all data and configuration information related to that node; finally, restart the taosd service on the new node. This resets the new node to its initial state, ready to rejoin the main cluster.
### Deploying taosAdapter

This section discusses how to deploy taosAdapter, which provides RESTful and WebSocket access to the TDengine cluster and thus plays a very important role in it.

1. Installation

After the installation of TDengine Enterprise is complete, taosAdapter can be used. If you want to deploy taosAdapter on different servers, TDengine Enterprise needs to be installed on those servers.

2. Single Instance Deployment

Deploying a single instance of taosAdapter is very simple. For specific commands and configuration parameters, refer to the taosAdapter section in the manual.

3. Multiple Instances Deployment

The main purposes of deploying multiple instances of taosAdapter are as follows:

- To increase the throughput of the cluster and prevent taosAdapter from becoming a system bottleneck.
- To enhance the robustness and high availability of the cluster, so that requests entering the business system are automatically routed to other instances when one instance fails.

When deploying multiple instances of taosAdapter, load balancing must be addressed to avoid overloading some nodes while others remain idle. During deployment, the instances are deployed separately, and the deployment steps for each instance are exactly the same as for a single instance. The next critical part is configuring Nginx. Below is a verified best-practice configuration; you only need to replace the endpoints with the correct addresses for your environment. For the meanings of each parameter, refer to the official Nginx documentation.

```nginx
user root;
worker_processes auto;
error_log /var/log/nginx_error.log;


events {
    use epoll;
    worker_connections 1024;
}

http {

    access_log off;

    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    server {
        listen 6041;
        location ~* {
            proxy_pass http://dbserver;
            proxy_read_timeout 600s;
            proxy_send_timeout 600s;
            proxy_connect_timeout 600s;
            proxy_next_upstream error http_502 non_idempotent;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $http_connection;
        }
    }
    server {
        listen 6043;
        location ~* {
            proxy_pass http://keeper;
            proxy_read_timeout 60s;
            proxy_next_upstream error http_502 http_500 non_idempotent;
        }
    }

    server {
        listen 6060;
        location ~* {
            proxy_pass http://explorer;
            proxy_read_timeout 60s;
            proxy_next_upstream error http_502 http_500 non_idempotent;
        }
    }
    upstream dbserver {
        least_conn;
        server 172.16.214.201:6041 max_fails=0;
        server 172.16.214.202:6041 max_fails=0;
        server 172.16.214.203:6041 max_fails=0;
    }
    upstream keeper {
        ip_hash;
        server 172.16.214.201:6043;
        server 172.16.214.202:6043;
        server 172.16.214.203:6043;
    }
    upstream explorer {
        ip_hash;
        server 172.16.214.201:6060;
        server 172.16.214.202:6060;
        server 172.16.214.203:6060;
    }
}
```
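After placing the configuration, a hedged follow-up using standard nginx and curl commands (nothing TDengine-specific; `<nginx-host>` is a placeholder) validates and applies it:

```shell
sudo nginx -t                  # validate the configuration syntax
sudo systemctl reload nginx    # apply without dropping connections
# Smoke-test the REST path through the proxy:
curl -u root:taosdata -d "show databases" http://<nginx-host>:6041/rest/sql
```

A successful smoke test returns a JSON result set from whichever taosAdapter instance `least_conn` selected.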
### Deploying taosKeeper

To use the monitoring capabilities of TDengine, taosKeeper is an essential component. For monitoring, refer to [TDinsight](../../tdengine-reference/components/tdinsight); for details on deploying taosKeeper, refer to the [taosKeeper Reference Manual](../../tdengine-reference/components/taoskeeper).

### Deploying taosX

To use the data ingestion capabilities of TDengine, the taosX service must be deployed. For a detailed explanation and deployment instructions, refer to the enterprise edition reference manual.

### Deploying taosX-Agent

For some data sources such as Pi and OPC, network conditions and data source access restrictions prevent taosX from accessing the data sources directly. In such cases, a proxy service, taosX-Agent, needs to be deployed. For a detailed explanation and deployment instructions, refer to the enterprise edition reference manual.

### Deploying taos-Explorer

TDengine provides the capability to manage TDengine clusters visually. To use the graphical interface, the taos-Explorer service needs to be deployed. For a detailed explanation and deployment instructions, refer to the [taos-Explorer Reference Manual](../../tdengine-reference/components/taosexplorer/).
## Docker Deployment

This section explains how to start TDengine services in Docker containers and access them. You can use environment variables on the docker run command line or in a docker-compose file to control the behavior of the services in the container.

### Starting TDengine

The TDengine image is launched with the HTTP service activated by default. Use the following command to create a containerized TDengine environment with the HTTP service.

```shell
docker run -d --name tdengine \
-v ~/data/taos/dnode/data:/var/lib/taos \
-v ~/data/taos/dnode/log:/var/log/taos \
-p 6041:6041 tdengine/tdengine
```

Detailed parameter explanations:

- /var/lib/taos: TDengine's default data file directory, which can be changed through the configuration file.
- /var/log/taos: TDengine's default log file directory, which can be changed through the configuration file.

The command above starts a container named tdengine and maps the HTTP service's port 6041 to host port 6041. The following command verifies whether the HTTP service in the container is available.

```shell
curl -u root:taosdata -d "show databases" localhost:6041/rest/sql
```

Run the following command to access TDengine within the container.

```shell
$ docker exec -it tdengine taos

taos> show databases;
              name |
=================================
 information_schema |
 performance_schema |
Query OK, 2 rows in database (0.033802s)
```

Within the container, the TDengine CLI and the various connectors (such as JDBC-JNI) connect to the server via the container's hostname. Accessing TDengine inside the container from outside is more complex; using the RESTful/WebSocket connection methods is the simplest approach.

### Starting TDengine in host network mode

Run the following command to start TDengine in host network mode, which allows connections using the host's FQDN rather than the container's hostname.

```shell
docker run -d --name tdengine --network host tdengine/tdengine
```

This method has the same effect as starting TDengine on the host with the systemctl command. If the TDengine client is already installed on the host, you can access the TDengine service directly with the following command.

```shell
$ taos

taos> show dnodes;
 id | endpoint  | vnodes | support_vnodes | status | create_time             | note |
=================================================================================================================================================
  1 | vm98:6030 |      0 |             32 | ready  | 2022-08-19 14:50:05.337 |      |
Query OK, 1 rows in database (0.010654s)
```

### Start TDengine with a specified hostname and port

Use the following command, with the TAOS_FQDN environment variable or the fqdn configuration item in taos.cfg, to establish connections on a specified hostname. This approach provides greater flexibility for deploying TDengine.

```shell
docker run -d \
--name tdengine \
-e TAOS_FQDN=tdengine \
-p 6030:6030 \
-p 6041-6049:6041-6049 \
-p 6041-6049:6041-6049/udp \
tdengine/tdengine
```

First, the command above starts a TDengine service in the container listening on the hostname tdengine, maps the container's port 6030 to host port 6030, and maps the container's port range [6041, 6049] to the same host port range. If that port range on the host is already in use, modify the command to specify a free port range on the host.

Second, ensure that the hostname tdengine is resolvable in /etc/hosts. Use the following command to save the correct configuration to the hosts file.

```shell
echo 127.0.0.1 tdengine |sudo tee -a /etc/hosts
```

Finally, you can access the TDengine service with the TDengine CLI, using tdengine as the server address, as follows.

```shell
taos -h tdengine -P 6030
```

If TAOS_FQDN is set to the same value as the host's hostname, the effect is the same as starting TDengine in host network mode.
## Kubernetes Deployment

As a time-series database designed for cloud-native architectures, TDengine inherently supports Kubernetes deployment. This section introduces how to create a highly available TDengine cluster for production use, step by step, using YAML files, focusing on common operations of TDengine in a Kubernetes environment. It assumes readers have some understanding of Kubernetes, can run common kubectl commands, and understand concepts such as StatefulSet, Service, and PVC; readers unfamiliar with these concepts can consult the Kubernetes official website.

To meet high-availability requirements, the cluster needs to satisfy the following:

- 3 or more dnodes: Multiple vnodes in the same vgroup of TDengine must not be placed on the same dnode, so if you create a database with 3 replicas, the number of dnodes must be 3 or more.
- 3 mnodes: mnodes are responsible for managing the entire cluster; TDengine defaults to one mnode. If the dnode hosting that mnode goes offline, the entire cluster becomes unavailable.
- 3 replicas of the database: TDengine's replica configuration is at the database level, so 3 replicas ensure that the cluster remains operational even if any one of the 3 dnodes goes offline. If 2 dnodes go offline, the cluster becomes unavailable because RAFT cannot complete the election. (Enterprise edition: in disaster recovery scenarios, if the data files of any node are damaged, recovery can be achieved by restarting the dnode.)
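Once three dnodes are online, the high-availability requirements above translate into a few statements; the database name `power` is hypothetical, and the dnode IDs assume the output of `show dnodes`:

```sql
-- Raise the mnode count to 3 (at most one mnode per dnode):
CREATE MNODE ON DNODE 2;
CREATE MNODE ON DNODE 3;

-- Create a database whose vgroups keep 3 replicas across the 3 dnodes:
CREATE DATABASE power REPLICA 3;
```

`show mnodes` should then list one leader and two followers, confirming the cluster can survive the loss of any single dnode.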
### Prerequisites

To deploy and manage a TDengine cluster with Kubernetes, make the following preparations:

- This article applies to Kubernetes v1.19 and above.
- This article uses the `kubectl` tool for installation and deployment; install it in advance.
- Kubernetes is already installed and deployed, and can access or update the necessary container registries and other services.
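A quick sanity check of these prerequisites (the output depends on your cluster):

```shell
# confirm the client and server versions (the server should be v1.19 or above)
kubectl version
# confirm the cluster is reachable and its nodes are Ready
kubectl get nodes
```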
### Configure Service

Create a Service configuration file named taosd-service.yaml; the service name metadata.name (here "taosd") will be used in the next step. Add the ports used by TDengine, and in the selector set the label app to the value chosen for the cluster (here "tdengine").
```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: "taosd"
  labels:
    app: "tdengine"
spec:
  ports:
    - name: tcp6030
      protocol: "TCP"
      port: 6030
    - name: tcp6041
      protocol: "TCP"
      port: 6041
  selector:
    app: "tdengine"
```

### Stateful Services StatefulSet

According to Kubernetes' descriptions of the various deployment types, we use a StatefulSet as the deployment resource type for TDengine. Create the file tdengine.yaml, where replicas sets the number of cluster nodes to 3. The node timezone is set to China (Asia/Shanghai), and each node is allocated 5 Gi of standard-class storage; adjust these values to your actual conditions.

Pay special attention to the startupProbe configuration. After a dnode's Pod has been offline for a while and then restarts, the newly online dnode is temporarily unavailable. If the startupProbe window is too short, Kubernetes considers the Pod abnormal and restarts it, so the Pod restarts repeatedly and never reaches a normal state. With the configuration below (failureThreshold 360, periodSeconds 10), a Pod has up to an hour to start before it is restarted.
```yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: "tdengine"
  labels:
    app: "tdengine"
spec:
  serviceName: "taosd"
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: "tdengine"
  template:
    metadata:
      name: "tdengine"
      labels:
        app: "tdengine"
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - tdengine
                topologyKey: kubernetes.io/hostname
      containers:
        - name: "tdengine"
          image: "tdengine/tdengine:3.2.3.0"
          imagePullPolicy: "IfNotPresent"
          ports:
            - name: tcp6030
              protocol: "TCP"
              containerPort: 6030
            - name: tcp6041
              protocol: "TCP"
              containerPort: 6041
          env:
            # POD_NAME for FQDN config
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            # SERVICE_NAME and NAMESPACE for FQDN resolution
            - name: SERVICE_NAME
              value: "taosd"
            - name: STS_NAME
              value: "tdengine"
            - name: STS_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            # TZ for timezone settings, we recommend to always set it.
            - name: TZ
              value: "Asia/Shanghai"
            # Environment variables with the prefix TAOS_ are parsed and converted
            # into the corresponding parameters in taos.cfg. For example, serverPort
            # in taos.cfg is configured via TAOS_SERVER_PORT when deploying on K8s.
            - name: TAOS_SERVER_PORT
              value: "6030"
            # Must be set if you want a cluster.
            - name: TAOS_FIRST_EP
              value: "$(STS_NAME)-0.$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local:$(TAOS_SERVER_PORT)"
            # TAOS_FQDN should always be set in a K8s environment.
            - name: TAOS_FQDN
              value: "$(POD_NAME).$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local"
          volumeMounts:
            - name: taosdata
              mountPath: /var/lib/taos
          startupProbe:
            exec:
              command:
                - taos-check
            failureThreshold: 360
            periodSeconds: 10
          readinessProbe:
            exec:
              command:
                - taos-check
            initialDelaySeconds: 5
            timeoutSeconds: 5000
          livenessProbe:
            exec:
              command:
                - taos-check
            initialDelaySeconds: 15
            periodSeconds: 20
  volumeClaimTemplates:
    - metadata:
        name: taosdata
      spec:
        accessModes:
          - "ReadWriteOnce"
        storageClassName: "standard"
        resources:
          requests:
            storage: "5Gi"
```

### Deploying TDengine Cluster Using kubectl Command

First, create the corresponding namespace `tdengine-test`, and make sure that PVCs with `storageClassName` set to `standard` can be provisioned with enough free space. Then execute the following commands in sequence:
```shell
kubectl apply -f taosd-service.yaml -n tdengine-test
kubectl apply -f tdengine.yaml -n tdengine-test
```
The above configuration creates a three-node TDengine cluster, with the dnodes configured automatically. You can use the `show dnodes` command to view the current cluster nodes:
```shell
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show dnodes"
kubectl exec -it tdengine-2 -n tdengine-test -- taos -s "show dnodes"
```
The output is as follows:
```text
taos> show dnodes
 id |      endpoint    | vnodes | support_vnodes | status |       create_time       |       reboot_time       | note | active_code | c_active_code |
=============================================================================================================================================================================================================================================
  1 | tdengine-0.ta... |      0 |             16 | ready  | 2023-07-19 17:54:18.552 | 2023-07-19 17:54:18.469 |      |             |               |
  2 | tdengine-1.ta... |      0 |             16 | ready  | 2023-07-19 17:54:37.828 | 2023-07-19 17:54:38.698 |      |             |               |
  3 | tdengine-2.ta... |      0 |             16 | ready  | 2023-07-19 17:55:01.141 | 2023-07-19 17:55:02.039 |      |             |               |
Query OK, 3 row(s) in set (0.001853s)
```
View the current mnode:
```shell
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G"

taos> show mnodes\G
*************************** 1.row ***************************
         id: 1
   endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030
       role: leader
     status: ready
create_time: 2023-07-19 17:54:18.559
reboot_time: 2023-07-19 17:54:19.520
Query OK, 1 row(s) in set (0.001282s)
```
Create additional mnodes:
```shell
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "create mnode on dnode 2"
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "create mnode on dnode 3"
```
View the mnodes:
```shell
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G"

taos> show mnodes\G
*************************** 1.row ***************************
         id: 1
   endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030
       role: leader
     status: ready
create_time: 2023-07-19 17:54:18.559
reboot_time: 2023-07-20 09:19:36.060
*************************** 2.row ***************************
         id: 2
   endpoint: tdengine-1.taosd.tdengine-test.svc.cluster.local:6030
       role: follower
     status: ready
create_time: 2023-07-20 09:22:05.600
reboot_time: 2023-07-20 09:22:12.838
*************************** 3.row ***************************
         id: 3
   endpoint: tdengine-2.taosd.tdengine-test.svc.cluster.local:6030
       role: follower
     status: ready
create_time: 2023-07-20 09:22:20.042
reboot_time: 2023-07-20 09:22:23.271
Query OK, 3 row(s) in set (0.003108s)
```
### Port Forwarding

Using the kubectl port forwarding feature allows applications to access a TDengine cluster running in the Kubernetes environment.
```shell
kubectl port-forward -n tdengine-test tdengine-0 6041:6041 &
```
Use the curl command to verify the TDengine REST API on port 6041:
```shell
curl -u root:taosdata -d "show databases" 127.0.0.1:6041/rest/sql
{"code":0,"column_meta":[["name","VARCHAR",64]],"data":[["information_schema"],["performance_schema"],["test"],["test1"]],"rows":4}
```
### Cluster Expansion

TDengine supports cluster expansion:

```shell
kubectl scale statefulsets tdengine -n tdengine-test --replicas=4
```
The command-line argument `--replicas=4` expands the TDengine cluster to 4 nodes. After execution, first check the status of the Pods:
```shell
kubectl get pod -l app=tdengine -n tdengine-test -o wide
```
The output is as follows:
```text
NAME         READY   STATUS    RESTARTS        AGE     IP             NODE     NOMINATED NODE   READINESS GATES
tdengine-0   1/1     Running   4 (6h26m ago)   6h53m   10.244.2.75    node86   <none>           <none>
tdengine-1   1/1     Running   1 (6h39m ago)   6h53m   10.244.0.59    node84   <none>           <none>
tdengine-2   1/1     Running   0               5h16m   10.244.1.224   node85   <none>           <none>
tdengine-3   1/1     Running   0               3m24s   10.244.2.76    node86   <none>           <none>
```
At this point, the Pod status is still Running. The new dnode appears in the TDengine cluster only after the Pod status changes to Ready:
```shell
kubectl exec -it tdengine-3 -n tdengine-test -- taos -s "show dnodes"
```
The dnode list of the four-node TDengine cluster after expansion:
```text
taos> show dnodes
 id |      endpoint    | vnodes | support_vnodes | status |       create_time       |       reboot_time       | note | active_code | c_active_code |
=============================================================================================================================================================================================================================================
  1 | tdengine-0.ta... |     10 |             16 | ready  | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 |      |             |               |
  2 | tdengine-1.ta... |     10 |             16 | ready  | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 |      |             |               |
  3 | tdengine-2.ta... |     10 |             16 | ready  | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 |      |             |               |
  4 | tdengine-3.ta... |      0 |             16 | ready  | 2023-07-20 16:01:44.007 | 2023-07-20 16:01:44.889 |      |             |               |
Query OK, 4 row(s) in set (0.003628s)
```
### Cleaning up the Cluster

**Warning**

When deleting PVCs, pay attention to the PV's persistentVolumeReclaimPolicy. It is recommended to set it to Delete, so that when a PVC is deleted, the PV is automatically cleaned up along with the underlying CSI storage resources. If no policy to automatically clean up PVs on PVC deletion is configured, then after deleting the PVCs, manually cleaning up the PVs may still not release the corresponding CSI storage resources.

To completely remove the TDengine cluster, clean up the StatefulSet, Service, and PVCs, and finally delete the namespace.
```shell
kubectl delete statefulset -l app=tdengine -n tdengine-test
kubectl delete svc -l app=tdengine -n tdengine-test
kubectl delete pvc -l app=tdengine -n tdengine-test
kubectl delete namespace tdengine-test
```
### Cluster Disaster Recovery Capabilities

For high availability and reliability of TDengine in a Kubernetes environment, hardware damage and disaster recovery are addressed on two levels:

- The disaster recovery capabilities of the underlying distributed block storage, which include multiple replicas of block storage. Popular distributed block storage such as Ceph has multi-replica capability, extending storage replicas to different racks, cabinets, rooms, and data centers (or you can directly use block storage services provided by public cloud vendors).
- TDengine's own disaster recovery: TDengine Enterprise inherently supports recovering the work of a permanently offline dnode (for example, after physical disk damage and data loss) by launching a new blank dnode.
## Deploying TDengine Cluster with Helm

Helm is the package manager for Kubernetes.
Deploying a TDengine cluster with plain Kubernetes, as in the previous section, is already simple enough, but Helm provides even more powerful capabilities.
### Installing Helm
```shell
curl -fsSL -o get_helm.sh \
  https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod +x get_helm.sh
./get_helm.sh
```
Helm operates on Kubernetes through kubectl and the kubeconfig configuration, which can be set up following, for example, the Rancher installation guide for Kubernetes.
### Installing TDengine Chart

The TDengine Chart has not yet been released to the Helm repository; it can currently be downloaded directly from GitHub:
```shell
wget https://github.com/taosdata/TDengine-Operator/raw/3.0/helm/tdengine-3.0.2.tgz
```
Retrieve the current Kubernetes storage class:
```shell
kubectl get storageclass
```
In minikube, the default storage class is standard. Then, use the helm command to install:
```shell
helm install tdengine tdengine-3.0.2.tgz \
  --set storage.className=<your storage class name> \
  --set image.tag=3.2.3.0
```
In a minikube environment, you can set smaller capacities to avoid exceeding the available disk space:
```shell
helm install tdengine tdengine-3.0.2.tgz \
  --set storage.className=standard \
  --set storage.dataSize=2Gi \
  --set storage.logSize=10Mi \
  --set image.tag=3.2.3.0
```
After successful deployment, the TDengine Chart outputs instructions for operating TDengine:
```shell
export POD_NAME=$(kubectl get pods --namespace default \
  -l "app.kubernetes.io/name=tdengine,app.kubernetes.io/instance=tdengine" \
  -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default exec $POD_NAME -- taos -s "show dnodes; show mnodes"
kubectl --namespace default exec -it $POD_NAME -- taos
```
You can create a table for testing:
```shell
kubectl --namespace default exec $POD_NAME -- \
  taos -s "create database test;
           use test;
           create table t1 (ts timestamp, n int);
           insert into t1 values(now, 1)(now + 1s, 2);
           select * from t1;"
```
### Configuring values

TDengine supports customization through `values.yaml`.
You can obtain the complete list of values supported by the TDengine Chart with `helm show values`:
```shell
helm show values tdengine-3.0.2.tgz
```
You can save the output as `values.yaml`, modify the parameters in it (such as the number of replicas, storage class name, capacity, and TDengine configuration), and then install the TDengine cluster with the following command:
```shell
helm install tdengine tdengine-3.0.2.tgz -f values.yaml
```
All parameters are as follows:
```yaml
# Default values for tdengine.
# This is a YAML-formatted file.
# Declare variables to be passed into helm templates.

replicaCount: 1

image:
  prefix: tdengine/tdengine
  #pullPolicy: Always
  # Overrides the image tag whose default is the chart appVersion.
  # tag: "3.0.2.0"

service:
  # ClusterIP is the default service type, use NodeIP only if you know what you are doing.
  type: ClusterIP
  ports:
    # TCP range required
    tcp: [6030, 6041, 6042, 6043, 6044, 6046, 6047, 6048, 6049, 6060]
    # UDP range
    udp: [6044, 6045]

# Set timezone here, not in taoscfg
timezone: "Asia/Shanghai"

resources:
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

storage:
  # Set storageClassName for pvc. K8s use default storage class if not set.
  #
  className: ""
  dataSize: "100Gi"
  logSize: "10Gi"

nodeSelectors:
  taosd:
    # node selectors

clusterDomainSuffix: ""

# Config settings in taos.cfg file.
#
# The helm/k8s support will use environment variables for taos.cfg,
# converting an upper-snake-cased variable like `TAOS_DEBUG_FLAG`,
# to a camelCase taos config variable `debugFlag`.
#
# Note:
# 1. firstEp/secondEp: should not be set here, it's auto generated at scale-up.
# 2. serverPort: should not be set, we'll use the default 6030 in many places.
# 3. fqdn: will be auto generated in kubernetes, user should not care about it.
# 4. role: currently role is not supported - every node is able to be mnode and vnode.
#
# Btw, keep quotes "" around the value like below, even if the value is a number.
taoscfg:
  # Starts as cluster or not, must be 0 or 1.
  #   0: all pods will start as a separate TDengine server
  #   1: pods will start as TDengine server cluster. [default]
  CLUSTER: "1"

  # number of replications, for cluster only
  TAOS_REPLICA: "1"

  # TAOS_NUM_OF_RPC_THREADS: number of threads for RPC
  #TAOS_NUM_OF_RPC_THREADS: "2"

  # TAOS_NUM_OF_COMMIT_THREADS: number of threads to commit cache data
  #TAOS_NUM_OF_COMMIT_THREADS: "4"

  # enable/disable installation / usage report
  #TAOS_TELEMETRY_REPORTING: "1"

  # time interval of system monitor, seconds
  #TAOS_MONITOR_INTERVAL: "30"

  # time interval of dnode status reporting to mnode, seconds, for cluster only
  #TAOS_STATUS_INTERVAL: "1"

  # time interval of heart beat from shell to dnode, seconds
  #TAOS_SHELL_ACTIVITY_TIMER: "3"

  # minimum sliding window time, milliseconds
  #TAOS_MIN_SLIDING_TIME: "10"

  # minimum time window, milliseconds
  #TAOS_MIN_INTERVAL_TIME: "1"

  # the compressed rpc message, options:
  #   -1 (no compression)
  #    0 (all messages compressed)
  #   > 0 (rpc message bodies larger than this value will be compressed)
  #TAOS_COMPRESS_MSG_SIZE: "-1"

  # max number of connections allowed in dnode
  #TAOS_MAX_SHELL_CONNS: "50000"

  # stop writing logs when the disk size of the log folder is less than this value
  #TAOS_MINIMAL_LOG_DIR_G_B: "0.1"

  # stop writing temporary files when the disk size of the tmp folder is less than this value
  #TAOS_MINIMAL_TMP_DIR_G_B: "0.1"

  # if disk free space is less than this value, taosd service exits directly during startup
  #TAOS_MINIMAL_DATA_DIR_G_B: "0.1"

  # One mnode is equal to the number of vnodes consumed
  #TAOS_MNODE_EQUAL_VNODE_NUM: "4"

  # enable/disable http service
  #TAOS_HTTP: "1"

  # enable/disable system monitor
  #TAOS_MONITOR: "1"

  # enable/disable async log
  #TAOS_ASYNC_LOG: "1"

  # time of keeping log files, days
  #TAOS_LOG_KEEP_DAYS: "0"

  # The following parameters are used for debug purposes only.
  # debugFlag 8 bits mask: FILE-SCREEN-UNUSED-HeartBeat-DUMP-TRACE_WARN-ERROR
  #   131: output warning and error
  #   135: output debug, warning and error
  #   143: output trace, debug, warning and error to log
  #   199: output debug, warning and error to both screen and file
  #   207: output trace, debug, warning and error to both screen and file
  #
  # debug flag for all log types, takes effect when non-zero
  #TAOS_DEBUG_FLAG: "143"

  # generate core file when service crashes
  #TAOS_ENABLE_CORE_FILE: "1"
```

### Expansion

For expansion, refer to the explanation in the previous section; some additional operations are needed for the Helm deployment.
First, retrieve the name of the StatefulSet from the deployment:
```shell
export STS_NAME=$(kubectl get statefulset \
  -l "app.kubernetes.io/name=tdengine" \
  -o jsonpath="{.items[0].metadata.name}")
```
The expansion operation is extremely simple; just increase the replica count. The following command expands TDengine to three nodes:
```shell
kubectl scale --replicas 3 statefulset/$STS_NAME
```
Use the commands `show dnodes` and `show mnodes` to check whether the expansion was successful.
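For example, reusing the `$POD_NAME` variable exported earlier (the output depends on your cluster):

```shell
kubectl --namespace default exec $POD_NAME -- taos -s "show dnodes; show mnodes"
```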
### Cleaning up the Cluster

Under Helm management, the cleanup operation also becomes simple:
```shell
helm uninstall tdengine
```
However, Helm will not automatically remove PVCs; you need to retrieve and delete the PVCs manually.
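A sketch of that manual cleanup, assuming the chart's default labels and the default namespace (adjust to your release):

```shell
# list the PVCs left behind by the chart, then delete them
kubectl get pvc -l app.kubernetes.io/name=tdengine
kubectl delete pvc -l app.kubernetes.io/name=tdengine
```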
---
title: Manual Deployment
slug: /operations-and-maintenance/deploy-your-cluster/manual-deployment
---

You can deploy TDengine manually on a physical or virtual machine.
## Deploying taosd

taosd is the most important service component in a TDengine cluster. This section describes the steps to manually deploy a taosd cluster.
### 1. Clear Data

If the physical nodes used to set up the cluster contain previous test data or have had another version of TDengine installed (such as 1.x/2.x), uninstall it and clear all data first.
### 2. Check Environment

Before deploying the TDengine cluster, thoroughly check the network settings of all dnodes and of the physical nodes where the applications run:

- Step 1: Execute the `hostname -f` command on each physical node to confirm that all node hostnames are unique. This step can be omitted on nodes that only host application drivers.
- Step 2: Execute `ping host` on each physical node, where host is the hostname of another physical node, to check network connectivity between the current node and the other physical nodes. If a node cannot be pinged, immediately check the network and DNS settings: on Linux, check the `/etc/hosts` file; on Windows, check `C:\Windows\system32\drivers\etc\hosts`. Network problems will prevent the cluster from forming, so be sure to resolve them.
- Step 3: Repeat the network checks above on the physical nodes where the applications run. If the network is faulty, the applications will not be able to connect to the taosd service; in that case, carefully check the DNS settings or hosts file on the application nodes.
- Step 4: Check the ports to ensure that all hosts in the cluster can communicate over TCP on port 6030.

By following these steps, you ensure that all nodes can communicate at the network level, laying a solid foundation for a successful TDengine cluster deployment.
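A minimal sketch of the step 4 port check, assuming `h2.tdengine.com` is another node in the cluster and netcat (`nc`) is installed:

```shell
# exits with status 0 only if h2.tdengine.com accepts TCP connections on port 6030
nc -zv h2.tdengine.com 6030
```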
### 3. Installation

To ensure consistency and stability within the cluster, install the same version of TDengine on all physical nodes.
### 4. Modify Configuration

Modify the TDengine configuration file on all nodes. Assuming the endpoint of the first dnode to be started is `h1.tdengine.com:6030`, the cluster-related parameters are as follows.
```shell
# firstEp is the first dnode that each dnode connects to after the initial startup
firstEp h1.tdengine.com:6030
# Must be configured to the FQDN of this dnode; if the machine has only one hostname, you can comment out or delete the following line
fqdn h1.tdengine.com
# Configure the port of this dnode, default is 6030
serverPort 6030
```
The parameters that must be modified are firstEp and fqdn. For each dnode, the firstEp setting should be identical, but fqdn must be set to the FQDN of the dnode it is on. Other parameters need not be modified unless you are clear about why they should be changed.

For a dnode to join the cluster, the TDengine cluster-related parameters listed in the table below must be set identically on it; any mismatch may prevent the dnode from joining the cluster.
| Parameter Name   | Meaning                                                 |
|:----------------:|:-------------------------------------------------------:|
| statusInterval   | Interval at which a dnode reports its status to mnode   |
| timezone         | Time zone                                               |
| locale           | System locale information and encoding format           |
| charset          | Character set encoding                                  |
| ttlChangeOnWrite | Whether TTL expiration changes with table modification  |
### 5. Start

Start the first dnode, such as `h1.tdengine.com`, following the steps above. Then run taos in a terminal to start TDengine's CLI program, and execute the `show dnodes` command in it to view all dnode information in the current cluster.
```shell
taos> show dnodes;
 id |       endpoint       | vnodes | support_vnodes | status |       create_time       | note |
===================================================================================
  1 | h1.tdengine.com:6030 |      0 |           1024 | ready  | 2022-07-16 10:50:42.673 |      |
```
You can see that the endpoint of the dnode that has just started is `h1.tdengine.com:6030`. This address is the first Ep of the new cluster.
### 6. Adding dnode

Following the steps above, start taosd on each physical node. Each dnode needs its firstEp parameter in taos.cfg configured to the endpoint of the first node of the new cluster, in this case `h1.tdengine.com:6030`. On the machine hosting the first dnode, run taos in a terminal to open TDengine's CLI program, connect to the TDengine cluster, and execute the following SQL.
```shell
create dnode "h2.tdengine.com:6030"
```
This adds the new dnode's endpoint to the cluster's endpoint list. The `fqdn:port` must be wrapped in double quotes, otherwise the statement fails. Replace the example `h2.tdengine.com:6030` with the endpoint of your new dnode. Then execute the following SQL to see whether the new node has joined successfully. If the dnode you want to add stays offline, refer to the "Common Issues" section later in this chapter.
```shell
show dnodes;
```
In the output, confirm that the fqdn and port of the new dnode are consistent with the endpoint you just added; if not, correct it to the right endpoint. By repeating the steps above you can add new dnodes to the cluster one by one, expanding the cluster and improving overall performance. Following the correct process when adding new nodes helps maintain the stability and reliability of the cluster.
**Tips**

- Any dnode that has joined the cluster can serve as the firstEp of subsequently added nodes. The firstEp parameter takes effect only when a dnode first joins the cluster; after joining, the dnode saves the latest mnode endpoint list and no longer depends on this parameter. Afterwards, the firstEp parameter in the configuration file is mainly used for client connections: if TDengine's CLI is started without arguments, it connects to the node specified by firstEp by default.
- Two dnodes that have not configured the firstEp parameter run independently after starting; they cannot then be joined together into one cluster.
- TDengine does not allow merging two independent clusters into a new cluster.
### 7. Adding mnode

When a TDengine cluster is created, the first dnode automatically becomes an mnode of the cluster, responsible for managing and coordinating it. For mnode high availability, additional mnodes must be created manually. Note that a cluster can have at most 3 mnodes, and at most one mnode can be created on each dnode. Once the number of dnodes in the cluster reaches or exceeds 3, you can create additional mnodes for the existing cluster. On the first dnode, log into TDengine through the CLI program taos and execute the following SQL.
```shell
|
||||||
|
create mnode on dnode <dnodeId>
|
||||||
|
```
|
||||||
|
|
||||||
|
Please note to replace the dnodeId in the example above with the serial number of the newly created dnode (which can be obtained by executing the `show dnodes` command). Finally, execute the following `show mnodes` to see if the newly created mnode has successfully joined the cluster.
|
||||||
|
|
||||||
|
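The verification step can be sketched as the following statement, run in the taos CLI; a successfully added mnode should appear in the list with status `ready`:

```sql
show mnodes;
```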
**Tips**

While setting up a TDengine cluster, if a new node keeps showing as offline after you execute the create dnode command, troubleshoot as follows.

- Step 1: Check whether the taosd service on the new node has started normally. You can confirm this by inspecting the log files or using the ps command.
- Step 2: If the taosd service has started, check whether the new node's network connection is working and confirm that the firewall has been turned off. Network issues or firewall settings may prevent the node from communicating with other nodes in the cluster.
- Step 3: Use the `taos -h fqdn` command to connect to the new node, then execute the `show dnodes` command. This displays the new node's running status as an independent cluster. If the list shown differs from that on the main node, the new node has likely formed a single-node cluster of its own. To resolve this, follow these steps. First, stop the taosd service on the new node. Second, clear all files in the dataDir directory specified in the taos.cfg configuration file on the new node; this deletes all data and configuration information related to that node. Finally, restart the taosd service on the new node. This resets the new node to its initial state, ready to rejoin the main cluster.

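Step 3 above can be sketched as the following command sequence, to be run on the stray new node. The service name (`taosd` under systemd) and the data directory (`/var/lib/taos`, the default dataDir) are assumptions; always check the dataDir value in your taos.cfg before deleting anything:

```shell
# Stop the service, clear the node's data, and restart (destructive!)
sudo systemctl stop taosd
sudo rm -rf /var/lib/taos/*   # must match dataDir in taos.cfg
sudo systemctl start taosd
```

After the restart, re-run the create dnode command from the main cluster.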
## Deploying taosAdapter

This section discusses how to deploy taosAdapter, which provides RESTful and WebSocket access to the TDengine cluster and therefore plays a very important role in the cluster.

1. Installation

After the installation of TDengine Enterprise is complete, taosAdapter can be used. If you want to deploy taosAdapter on separate servers, TDengine Enterprise must be installed on those servers as well.

2. Single Instance Deployment

Deploying a single instance of taosAdapter is straightforward. For the specific commands and configuration parameters, refer to the taosAdapter section of the manual.

3. Multiple Instances Deployment

The main purposes of deploying multiple instances of taosAdapter are as follows:

- To increase the throughput of the cluster and prevent taosAdapter from becoming a system bottleneck.
- To enhance the robustness and high availability of the cluster, so that requests from the business system are automatically routed to other instances when one instance fails.

When deploying multiple instances of taosAdapter, load balancing must be addressed to avoid overloading some nodes while others remain idle. During deployment, each instance is deployed separately, and the deployment steps for each instance are exactly the same as for a single instance. The next critical part is configuring Nginx. Below is a verified best-practice configuration; you only need to replace the endpoints with the correct addresses for your environment. For the meaning of each parameter, refer to the official Nginx documentation.

```nginx
user root;
worker_processes auto;
error_log /var/log/nginx_error.log;


events {
    use epoll;
    worker_connections 1024;
}

http {

    access_log off;

    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    server {
        listen 6041;
        location ~* {
            proxy_pass http://dbserver;
            proxy_read_timeout 600s;
            proxy_send_timeout 600s;
            proxy_connect_timeout 600s;
            proxy_next_upstream error http_502 non_idempotent;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $http_connection;
        }
    }
    server {
        listen 6043;
        location ~* {
            proxy_pass http://keeper;
            proxy_read_timeout 60s;
            proxy_next_upstream error http_502 http_500 non_idempotent;
        }
    }

    server {
        listen 6060;
        location ~* {
            proxy_pass http://explorer;
            proxy_read_timeout 60s;
            proxy_next_upstream error http_502 http_500 non_idempotent;
        }
    }
    upstream dbserver {
        least_conn;
        server 172.16.214.201:6041 max_fails=0;
        server 172.16.214.202:6041 max_fails=0;
        server 172.16.214.203:6041 max_fails=0;
    }
    upstream keeper {
        ip_hash;
        server 172.16.214.201:6043;
        server 172.16.214.202:6043;
        server 172.16.214.203:6043;
    }
    upstream explorer {
        ip_hash;
        server 172.16.214.201:6060;
        server 172.16.214.202:6060;
        server 172.16.214.203:6060;
    }
}
```

## Deploying taosKeeper

To use the monitoring capabilities of TDengine, taosKeeper is an essential component. For monitoring, refer to [TDinsight](../../../tdengine-reference/components/tdinsight), and for details on deploying taosKeeper, refer to the [taosKeeper Reference Manual](../../../tdengine-reference/components/taoskeeper).

## Deploying taosX

To use the data ingestion capabilities of TDengine, you need to deploy the taosX service. For a detailed explanation and deployment instructions, refer to the enterprise edition reference manual.

## Deploying taosX-Agent

For some data sources, such as PI and OPC, taosX cannot access the data source directly due to network conditions and data source access restrictions. In these cases, a proxy service, taosX-Agent, needs to be deployed. For a detailed explanation and deployment instructions, refer to the enterprise edition reference manual.

## Deploying taos-Explorer

TDengine provides the ability to manage TDengine clusters visually. To use the graphical interface, the taos-Explorer service needs to be deployed. For a detailed explanation and deployment instructions, refer to the [taos-Explorer Reference Manual](../../../tdengine-reference/components/taosexplorer/).
---
title: Docker Deployment
slug: /operations-and-maintenance/deploy-your-cluster/docker-deployment
---

You can deploy TDengine services in Docker containers and use environment variables in the docker run command line or docker-compose file to control the behavior of the services in the container.

## Starting TDengine

The TDengine image starts with the HTTP service activated by default. Use the following command to create a containerized TDengine environment with the HTTP service.

```shell
docker run -d --name tdengine \
-v ~/data/taos/dnode/data:/var/lib/taos \
-v ~/data/taos/dnode/log:/var/log/taos \
-p 6041:6041 tdengine/tdengine
```

Detailed parameter explanations are as follows:

- /var/lib/taos: Default data file directory for TDengine, which can be changed through the configuration file.
- /var/log/taos: Default log file directory for TDengine, which can be changed through the configuration file.

The command above starts a container named tdengine and maps the HTTP service's port 6041 to host port 6041. The following command verifies whether the HTTP service in the container is available.

```shell
curl -u root:taosdata -d "show databases" localhost:6041/rest/sql
```

Run the following command to access TDengine within the container.

```shell
$ docker exec -it tdengine taos

taos> show databases;
              name               |
=================================
 information_schema              |
 performance_schema              |
Query OK, 2 rows in database (0.033802s)
```

Within the container, the TDengine CLI and the various connectors (such as JDBC-JNI) connect to the server via the container's hostname. Accessing TDengine inside the container from outside the container is more complex, and using RESTful/WebSocket connections is the simplest approach.
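The same single-container setup can also be expressed as a compose file. This is an illustrative sketch, not taken from the original text; the service name and file layout are arbitrary:

```yaml
# docker-compose.yml -- equivalent of the docker run command above
services:
  tdengine:
    image: tdengine/tdengine
    ports:
      - "6041:6041"
    volumes:
      - ~/data/taos/dnode/data:/var/lib/taos
      - ~/data/taos/dnode/log:/var/log/taos
```

Start it with `docker compose up -d`.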
## Starting TDengine in host network mode

Run the following command to start TDengine in host network mode, which allows connections to be established using the host's FQDN rather than the container's hostname.

```shell
docker run -d --name tdengine --network host tdengine/tdengine
```

This method has the same effect as starting TDengine on the host with the systemctl command. If the TDengine client is already installed on the host, you can access the TDengine service directly with the following command.

```shell
$ taos

taos> show dnodes;
 id | endpoint | vnodes | support_vnodes | status | create_time | note |
=================================================================================================================================================
 1 | vm98:6030 | 0 | 32 | ready | 2022-08-19 14:50:05.337 | |
Query OK, 1 rows in database (0.010654s)
```

## Start TDengine with a specified hostname and port

Use the TAOS_FQDN environment variable or the fqdn configuration item in taos.cfg to have TDengine establish connections on a specified hostname. This approach provides greater flexibility for deploying TDengine.

```shell
docker run -d \
   --name tdengine \
   -e TAOS_FQDN=tdengine \
   -p 6030:6030 \
   -p 6041-6049:6041-6049 \
   -p 6041-6049:6041-6049/udp \
   tdengine/tdengine
```

First, the above command starts a TDengine service in the container listening on the hostname tdengine, maps the container's port 6030 to host port 6030, and maps the container's port range [6041, 6049] to the same host port range. If that port range is already in use on the host, modify the command to specify a free port range on the host.

Secondly, ensure that the hostname tdengine is resolvable in /etc/hosts. Use the following command to save the correct configuration to the hosts file.

```shell
echo 127.0.0.1 tdengine | sudo tee -a /etc/hosts
```

Finally, you can access the TDengine service through the TDengine CLI with tdengine as the server address, as follows.

```shell
taos -h tdengine -P 6030
```

If TAOS_FQDN is set to the same value as the hostname of the host, the effect is the same as starting TDengine in host network mode.

---
title: Kubernetes Deployment
slug: /operations-and-maintenance/deploy-your-cluster/kubernetes-deployment
---

You can use kubectl or Helm to deploy TDengine in Kubernetes.

Note that Helm is only supported in TDengine Enterprise. To deploy TDengine OSS in Kubernetes, use kubectl.

## Deploy TDengine with kubectl

As a time-series database designed for cloud-native architectures, TDengine inherently supports Kubernetes deployment. This section introduces, step by step, how to create a highly available TDengine cluster for production use with YAML files, focusing on common operations for TDengine in a Kubernetes environment. This subsection assumes that readers have a basic understanding of Kubernetes, can run common kubectl commands, and understand concepts such as StatefulSet, Service, and PVC. Readers unfamiliar with these concepts can refer to the official Kubernetes website.

To meet the requirements of high availability, the cluster needs to meet the following requirements:

- 3 or more dnodes: Multiple vnodes in the same vgroup of TDengine must not be placed on the same dnode, so when creating a database with 3 replicas, the number of dnodes must be 3 or more.
- 3 mnodes: mnodes are responsible for managing the entire cluster; TDengine defaults to one mnode. If the dnode hosting that mnode goes offline, the entire cluster becomes unavailable.
- 3 replicas of the database: TDengine's replica configuration is at the database level, so 3 replicas ensure that the cluster remains operational even if any one of the 3 dnodes goes offline. If 2 dnodes go offline, the cluster becomes unavailable because RAFT cannot complete the leader election. (Enterprise edition: in disaster recovery scenarios, if the data files of any node are damaged, recovery can be achieved by restarting the dnode.)
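As a minimal illustration of the third point, a three-replica database can be created once three dnodes are online; the database name `test` here is only an example:

```sql
CREATE DATABASE test REPLICA 3;
```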
### Prerequisites

To deploy and manage a TDengine cluster using Kubernetes, the following preparations need to be made.

- This article applies to Kubernetes v1.19 and above.
- This article uses the kubectl tool for installation and deployment; please install the necessary software in advance.
- Kubernetes has been installed and deployed and can normally access or update the necessary container repositories and other services.

### Configure Service

Create a Service configuration file taosd-service.yaml; the service name metadata.name (here "taosd") will be used in the next step. First, add the ports used by TDengine, then set the predetermined label app (here "tdengine") in the selector.

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: "taosd"
  labels:
    app: "tdengine"
spec:
  ports:
    - name: tcp6030
      protocol: "TCP"
      port: 6030
    - name: tcp6041
      protocol: "TCP"
      port: 6041
  selector:
    app: "tdengine"
```

### Stateful Services StatefulSet

According to Kubernetes' descriptions of the various deployment types, we use a StatefulSet as the deployment resource type for TDengine. Create the file tdengine.yaml, in which replicas defines the number of cluster nodes as 3. The node timezone is set to China (Asia/Shanghai), and each node is allocated 5Gi of standard storage; you can modify this according to your actual conditions.

Please pay special attention to the startupProbe configuration. After a dnode's Pod has been offline for a period of time and then restarts, the newly online dnode is temporarily unavailable. If the startupProbe configuration is too small, Kubernetes will consider the Pod to be in an abnormal state and attempt to restart it, so the dnode's Pod will restart repeatedly and never reach a normal state.

```yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: "tdengine"
  labels:
    app: "tdengine"
spec:
  serviceName: "taosd"
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: "tdengine"
  template:
    metadata:
      name: "tdengine"
      labels:
        app: "tdengine"
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - tdengine
                topologyKey: kubernetes.io/hostname
      containers:
        - name: "tdengine"
          image: "tdengine/tdengine:3.2.3.0"
          imagePullPolicy: "IfNotPresent"
          ports:
            - name: tcp6030
              protocol: "TCP"
              containerPort: 6030
            - name: tcp6041
              protocol: "TCP"
              containerPort: 6041
          env:
            # POD_NAME for FQDN config
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            # SERVICE_NAME and NAMESPACE for fqdn resolve
            - name: SERVICE_NAME
              value: "taosd"
            - name: STS_NAME
              value: "tdengine"
            - name: STS_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            # TZ for timezone settings, we recommend to always set it.
            - name: TZ
              value: "Asia/Shanghai"
            # Environment variables with prefix TAOS_ are parsed and converted
            # into the corresponding parameter in taos.cfg. For example, serverPort
            # in taos.cfg is configured by TAOS_SERVER_PORT when deploying on Kubernetes.
            - name: TAOS_SERVER_PORT
              value: "6030"
            # Must set if you want a cluster.
            - name: TAOS_FIRST_EP
              value: "$(STS_NAME)-0.$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local:$(TAOS_SERVER_PORT)"
            # TAOS_FQDN should always be set in a Kubernetes environment.
            - name: TAOS_FQDN
              value: "$(POD_NAME).$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local"
          volumeMounts:
            - name: taosdata
              mountPath: /var/lib/taos
          startupProbe:
            exec:
              command:
                - taos-check
            failureThreshold: 360
            periodSeconds: 10
          readinessProbe:
            exec:
              command:
                - taos-check
            initialDelaySeconds: 5
            timeoutSeconds: 5000
          livenessProbe:
            exec:
              command:
                - taos-check
            initialDelaySeconds: 15
            periodSeconds: 20
  volumeClaimTemplates:
    - metadata:
        name: taosdata
      spec:
        accessModes:
          - "ReadWriteOnce"
        storageClassName: "standard"
        resources:
          requests:
            storage: "5Gi"
```

### Deploying TDengine Cluster Using kubectl Command

First, create the corresponding namespace `tdengine-test`, as well as the PVCs, ensuring that there is enough remaining space with `storageClassName` set to `standard`. Then execute the following commands in sequence:

```shell
kubectl apply -f taosd-service.yaml -n tdengine-test
kubectl apply -f tdengine.yaml -n tdengine-test
```

The above configuration creates a three-node TDengine cluster, with the dnodes configured automatically. You can use the `show dnodes` command to view the current cluster nodes:

```shell
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show dnodes"
kubectl exec -it tdengine-2 -n tdengine-test -- taos -s "show dnodes"
```

The output is as follows:

```shell
taos> show dnodes
 id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
=============================================================================================================================================================================================================================================
 1 | tdengine-0.ta... | 0 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-19 17:54:18.469 | | | |
 2 | tdengine-1.ta... | 0 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-19 17:54:38.698 | | | |
 3 | tdengine-2.ta... | 0 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-19 17:55:02.039 | | | |
Query OK, 3 row(s) in set (0.001853s)
```

View the current mnode:

```shell
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G"

taos> show mnodes\G
*************************** 1.row ***************************
         id: 1
   endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030
       role: leader
     status: ready
create_time: 2023-07-19 17:54:18.559
reboot_time: 2023-07-19 17:54:19.520
Query OK, 1 row(s) in set (0.001282s)
```

Create mnodes:

```shell
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "create mnode on dnode 2"
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "create mnode on dnode 3"
```

View the mnodes:

```shell
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G"

taos> show mnodes\G
*************************** 1.row ***************************
         id: 1
   endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030
       role: leader
     status: ready
create_time: 2023-07-19 17:54:18.559
reboot_time: 2023-07-20 09:19:36.060
*************************** 2.row ***************************
         id: 2
   endpoint: tdengine-1.taosd.tdengine-test.svc.cluster.local:6030
       role: follower
     status: ready
create_time: 2023-07-20 09:22:05.600
reboot_time: 2023-07-20 09:22:12.838
*************************** 3.row ***************************
         id: 3
   endpoint: tdengine-2.taosd.tdengine-test.svc.cluster.local:6030
       role: follower
     status: ready
create_time: 2023-07-20 09:22:20.042
reboot_time: 2023-07-20 09:22:23.271
Query OK, 3 row(s) in set (0.003108s)
```

### Port Forwarding

Using kubectl's port forwarding feature allows applications to access a TDengine cluster running in the Kubernetes environment.

```shell
kubectl port-forward -n tdengine-test tdengine-0 6041:6041 &
```

Use the curl command to verify the TDengine REST API on port 6041.

```shell
curl -u root:taosdata -d "show databases" 127.0.0.1:6041/rest/sql
{"code":0,"column_meta":[["name","VARCHAR",64]],"data":[["information_schema"],["performance_schema"],["test"],["test1"]],"rows":4}
```

### Cluster Expansion

TDengine supports cluster expansion:

```shell
kubectl scale statefulsets tdengine -n tdengine-test --replicas=4
```

The command-line argument `--replicas=4` indicates that the TDengine cluster is to be expanded to 4 nodes. After execution, first check the status of the Pods:

```shell
kubectl get pod -l app=tdengine -n tdengine-test -o wide
```

Output as follows:

```text
NAME         READY   STATUS    RESTARTS        AGE     IP             NODE     NOMINATED NODE   READINESS GATES
tdengine-0   1/1     Running   4 (6h26m ago)   6h53m   10.244.2.75    node86   <none>           <none>
tdengine-1   1/1     Running   1 (6h39m ago)   6h53m   10.244.0.59    node84   <none>           <none>
tdengine-2   1/1     Running   0               5h16m   10.244.1.224   node85   <none>           <none>
tdengine-3   1/1     Running   0               3m24s   10.244.2.76    node86   <none>           <none>
```

At this point, the Pod status is still Running. The dnode status in the TDengine cluster can be checked once the Pod status changes to ready:

```shell
kubectl exec -it tdengine-3 -n tdengine-test -- taos -s "show dnodes"
```

The dnode list of the four-node TDengine cluster after expansion:

```text
taos> show dnodes
 id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
=============================================================================================================================================================================================================================================
 1 | tdengine-0.ta... | 10 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 | | | |
 2 | tdengine-1.ta... | 10 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 | | | |
 3 | tdengine-2.ta... | 10 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 | | | |
 4 | tdengine-3.ta... | 0 | 16 | ready | 2023-07-20 16:01:44.007 | 2023-07-20 16:01:44.889 | | | |
Query OK, 4 row(s) in set (0.003628s)
```

### Cleaning up the Cluster

**Warning**

When deleting PVCs, pay attention to the PV persistentVolumeReclaimPolicy. It is recommended to set it to Delete, so that when a PVC is deleted, the PV is automatically cleaned up along with the underlying CSI storage resources. If no policy to automatically clean up PVs when PVCs are deleted is configured, then after deleting the PVCs, manually cleaning up the PVs may not release the corresponding CSI storage resources.

To completely remove the TDengine cluster, you need to clean up the StatefulSet, the Service, and the PVCs, and finally delete the namespace.

```shell
kubectl delete statefulset -l app=tdengine -n tdengine-test
kubectl delete svc -l app=tdengine -n tdengine-test
kubectl delete pvc -l app=tdengine -n tdengine-test
kubectl delete namespace tdengine-test
```

### Cluster Disaster Recovery Capabilities

For high availability and reliability of TDengine in a Kubernetes environment, hardware damage and disaster recovery are discussed on two levels:

- The disaster recovery capabilities of the underlying distributed block storage, which include multiple replicas of the block storage. Popular distributed block storage such as Ceph has multi-replica capability, extending storage replicas to different racks, cabinets, rooms, and data centers (alternatively, block storage services provided by public cloud vendors can be used directly).
- TDengine's own disaster recovery: in TDengine Enterprise, when an existing dnode goes offline permanently (due to physical disk damage and data loss), its work can be recovered by launching a new blank dnode.

## Deploy TDengine with Helm

Helm is the package manager for Kubernetes.
Deploying the TDengine cluster with kubectl, as in the previous section, is simple enough, but Helm provides even more powerful capabilities.

### Installing Helm

```shell
curl -fsSL -o get_helm.sh \
  https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod +x get_helm.sh
./get_helm.sh
```

Helm operates Kubernetes using kubectl and kubeconfig configurations, which can be set up following the Rancher installation configuration for Kubernetes.

### Installing TDengine Chart

The TDengine Chart has not yet been released to the Helm repository; it can currently be downloaded directly from GitHub:

```shell
wget https://github.com/taosdata/TDengine-Operator/raw/3.0/helm/tdengine-enterprise-3.5.0.tgz
```

Note that this chart is for the enterprise edition; a community edition chart is not yet available.

Follow the steps below to install the TDengine Chart:

```shell
# Edit the values.yaml file to set the topology of the cluster
vim values.yaml
helm install tdengine tdengine-enterprise-3.5.0.tgz -f values.yaml
```

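Once installed, the release can be inspected or removed with standard Helm commands; the release name `tdengine` matches the install command above:

```shell
helm status tdengine
helm uninstall tdengine
```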
#### Case 1: Simple 1-node Deployment
|
||||||
|
|
||||||
|
The following is a simple example of deploying a single-node TDengine cluster using Helm.
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
# This example is a simple deployment with one server replica.
|
||||||
|
name: "tdengine"
|
||||||
|
|
||||||
|
image:
|
||||||
|
repository: image.cloud.taosdata.com/ # Leave a trailing slash for the repository, or "" for no repository
|
||||||
|
server: taosx/integrated:3.3.5.1-b0a54bdd
|
||||||
|
|
||||||
|
# Set timezone here, not in taoscfg
|
||||||
|
timezone: "Asia/Shanghai"
|
||||||
|
|
||||||
|
labels:
|
||||||
|
app: "tdengine"
|
||||||
|
# Add more labels as needed.
|
||||||
|
|
||||||
|
services:
|
||||||
|
server:
|
||||||
|
type: ClusterIP
|
||||||
|
replica: 1
|
||||||
|
ports:
|
||||||
|
# TCP range required
|
||||||
|
tcp: [6041, 6030, 6060]
|
||||||
|
# UDP range, optional
|
||||||
|
udp:
|
||||||
|
volumes:
|
||||||
|
- name: data
|
||||||
|
mountPath: /var/lib/taos
|
||||||
|
spec:
|
||||||
|
storageClassName: "local-path"
|
||||||
|
accessModes: [ "ReadWriteOnce" ]
|
||||||
|
resources:
|
||||||
|
requests:
|
||||||
|
storage: "10Gi"
|
||||||
|
- name: log
|
||||||
|
mountPath: /var/log/taos/
|
||||||
|
spec:
|
||||||
|
storageClassName: "local-path"
|
||||||
|
accessModes: [ "ReadWriteOnce" ]
|
||||||
|
resources:
|
||||||
|
requests:
|
||||||
|
storage: "10Gi"
|
||||||
|
files:
|
||||||
|
- name: cfg # must be lower case.
|
||||||
|
mountPath: /etc/taos/taos.cfg
|
||||||
|
content: |
|
||||||
|
dataDir /var/lib/taos/
|
||||||
|
logDir /var/log/taos/
|
||||||
|
```

Let's explain the above configuration:

- name: The name of the deployment; here it is "tdengine".
- image:
  - repository: The image repository address. Remember to leave a trailing slash for the repository, or set it to an empty string to use docker.io.
  - server: The specific name and tag of the server image. You need to ask your business partner for the TDengine Enterprise image.
- timezone: The timezone; here it is "Asia/Shanghai".
- labels: Labels to add to the deployment. Here an app label is set to "tdengine"; more labels can be added as needed.
- services:
  - server: Configures the server service.
    - type: The service type; here it is **ClusterIP**.
    - replica: The number of replicas; here it is 1.
    - ports: Configures the ports of the service.
      - tcp: The required TCP port range; here it is [6041, 6030, 6060].
      - udp: The optional UDP port range, which is not configured here.
    - volumes: Configures the volumes.
      - name: The name of the volume; here there are two volumes, data and log.
      - mountPath: The mount path of the volume.
      - spec: The specification of the volume.
        - storageClassName: The storage class name; here it is **local-path**.
        - accessModes: The access mode; here it is **ReadWriteOnce**.
        - resources.requests.storage: The requested storage size; here it is **10Gi**.
    - files: Configures the files to mount in the TDengine server.
      - name: The name of the file; here it is **cfg**.
      - mountPath: The mount path of the file, which is **taos.cfg**.
      - content: The content of the file; here **dataDir** and **logDir** are configured.

After configuring the values.yaml file, use the following command to install the TDengine Chart:

```shell
helm install simple tdengine-enterprise-3.5.0.tgz -f values.yaml
```

After installation, Helm prints instructions for checking the status of the TDengine cluster:

```shell
NAME: simple
LAST DEPLOYED: Sun Feb 9 13:40:00 2025
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get first POD name:

export POD_NAME=$(kubectl get pods --namespace default \
  -l "app.kubernetes.io/name=tdengine,app.kubernetes.io/instance=simple" -o jsonpath="{.items[0].metadata.name}")

2. Show dnodes/mnodes:

kubectl --namespace default exec $POD_NAME -- taos -s "show dnodes; show mnodes"

3. Run into taos shell:

kubectl --namespace default exec -it $POD_NAME -- taos
```

Follow the instructions to check the status of the TDengine cluster:

```shell
root@u1-58:/data1/projects/helm# kubectl --namespace default exec $POD_NAME -- taos -s "show dnodes; show mnodes"
Welcome to the TDengine Command Line Interface, Client Version:3.3.5.1
Copyright (c) 2023 by TDengine, all rights reserved.

taos> show dnodes; show mnodes
 id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | machine_id |
==========================================================================================================================================================================================================
 1 | simple-tdengine-0.simple-td... | 0 | 85 | ready | 2025-02-07 21:17:34.903 | 2025-02-08 15:52:34.781 | | BWhWyPiEBrWZrQCSqTSc2a/H |
Query OK, 1 row(s) in set (0.005133s)

 id | endpoint | role | status | create_time | role_time |
==================================================================================================================================
 1 | simple-tdengine-0.simple-td... | leader | ready | 2025-02-07 21:17:34.906 | 2025-02-08 15:52:34.878 |
Query OK, 1 row(s) in set (0.004299s)
```

To clean up the TDengine cluster, use the following command:

```shell
helm uninstall simple
kubectl delete pvc -l app.kubernetes.io/instance=simple
```
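
If you later want to change a single value, you can layer a small override file on top of your values.yaml instead of editing it. The sketch below is only an illustration: the switch to `NodePort` is a hypothetical change, and the release and chart names follow the example above.

```shell
# Write a minimal override that changes only the service type; every other
# value still comes from values.yaml. NodePort here is purely illustrative.
cat > override.yaml <<'EOF'
services:
  server:
    type: NodePort
EOF
# Then apply it on top of the existing values (requires the chart tarball):
# helm upgrade simple tdengine-enterprise-3.5.0.tgz -f values.yaml -f override.yaml
cat override.yaml
```

With multiple `-f` flags, later files take precedence, so the override wins without touching the base values.yaml.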

#### Case 2: Tiered-Storage Deployment

The following is an example of deploying a TDengine cluster with tiered storage using Helm.

```yaml
# This is an example of a 3-tiered storage deployment with one server replica.
name: "tdengine"

image:
  repository: image.cloud.taosdata.com/ # Leave a trailing slash for the repository, or "" for no repository
  server: taosx/integrated:3.3.5.1-b0a54bdd

# Set timezone here, not in taoscfg
timezone: "Asia/Shanghai"

labels:
  # Add more labels as needed.

services:
  server:
    type: ClusterIP
    replica: 1
    ports:
      # TCP range required
      tcp: [6041, 6030, 6060]
      # UDP range, optional
      udp:
    volumes:
      - name: tier0
        mountPath: /data/taos0/
        spec:
          storageClassName: "local-path"
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: "10Gi"
      - name: tier1
        mountPath: /data/taos1/
        spec:
          storageClassName: "local-path"
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: "10Gi"
      - name: tier2
        mountPath: /data/taos2/
        spec:
          storageClassName: "local-path"
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: "10Gi"
      - name: log
        mountPath: /var/log/taos/
        spec:
          storageClassName: "local-path"
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: "10Gi"
    environment:
      TAOS_DEBUG_FLAG: "131"
    files:
      - name: cfg # must be lower case.
        mountPath: /etc/taos/taos.cfg
        content: |
          dataDir /data/taos0/ 0 1
          dataDir /data/taos1/ 1 0
          dataDir /data/taos2/ 2 0
```

You can see that the configuration is similar to the previous one, with the addition of the tiered-storage volumes. The dataDir configuration in the taos.cfg file is also modified to support tiered storage.

After configuring the values.yaml file, use the following command to install the TDengine Chart:

```shell
helm install tiered tdengine-enterprise-3.5.0.tgz -f values.yaml
```
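
The three `dataDir` lines above follow TDengine's multi-level storage syntax, `dataDir <path> <level> <primary>`: the second field is the storage tier (0 holds the most recent data) and the third marks exactly one directory as the primary. Annotated for reference:

```conf
dataDir /data/taos0/ 0 1   # tier 0 (most recent data), primary directory
dataDir /data/taos1/ 1 0   # tier 1
dataDir /data/taos2/ 2 0   # tier 2 (oldest data)
```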

#### Case 3: 2-replica Deployment

TDengine supports 2-replica deployment with an arbitrator, which can be configured as follows:

```yaml
# This example shows how to deploy a 2-replica TDengine cluster with an arbitrator.
name: "tdengine"

image:
  repository: image.cloud.taosdata.com/ # Leave a trailing slash for the repository, or "" for no repository
  server: taosx/integrated:3.3.5.1-b0a54bdd

# Set timezone here, not in taoscfg
timezone: "Asia/Shanghai"

labels:
  my-app: "tdengine"
  # Add more labels as needed.

services:
  arbitrator:
    type: ClusterIP
    volumes:
      - name: arb-data
        mountPath: /var/lib/taos
        spec:
          storageClassName: "local-path"
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: "10Gi"
      - name: arb-log
        mountPath: /var/log/taos/
        spec:
          storageClassName: "local-path"
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: "10Gi"
  server:
    type: ClusterIP
    replica: 2
    ports:
      # TCP range required
      tcp: [6041, 6030, 6060]
      # UDP range, optional
      udp:
    volumes:
      - name: data
        mountPath: /var/lib/taos
        spec:
          storageClassName: "local-path"
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: "10Gi"
      - name: log
        mountPath: /var/log/taos/
        spec:
          storageClassName: "local-path"
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: "10Gi"
```

You can see that the configuration is similar to the first one, with the addition of the arbitrator service. The arbitrator is configured with the same storage as the server service, and the server service is configured with 2 replicas (the arbitrator always runs as a single replica; this cannot be changed).

#### Case 4: 3-replica Deployment with Single taosX

```yaml
# This example shows how to deploy a 3-replica TDengine cluster with a separate taosx/explorer service.
# Users should know that the explorer/taosx service is not cluster-ready, so it is recommended to deploy it separately.
name: "tdengine"

image:
  repository: image.cloud.taosdata.com/ # Leave a trailing slash for the repository, or "" for no repository
  server: taosx/integrated:3.3.5.1-b0a54bdd

# Set timezone here, not in taoscfg
timezone: "Asia/Shanghai"

labels:
  # Add more labels as needed.

services:
  server:
    type: ClusterIP
    replica: 3
    ports:
      # TCP range required
      tcp: [6041, 6030]
      # UDP range, optional
      udp:
    volumes:
      - name: data
        mountPath: /var/lib/taos
        spec:
          storageClassName: "local-path"
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: "10Gi"
      - name: log
        mountPath: /var/log/taos/
        spec:
          storageClassName: "local-path"
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: "10Gi"
    environment:
      ENABLE_TAOSX: "0" # Disable taosx in server replicas.
  taosx:
    type: ClusterIP
    volumes:
      - name: taosx-data
        mountPath: /var/lib/taos
        spec:
          storageClassName: "local-path"
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: "10Gi"
      - name: taosx-log
        mountPath: /var/log/taos/
        spec:
          storageClassName: "local-path"
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: "10Gi"
    files:
      - name: taosx
        mountPath: /etc/taos/taosx.toml
        content: |-
          # TAOSX configuration in TOML format.
          [monitor]
          # FQDN of taosKeeper service, no default value
          fqdn = "localhost"
          # How often to send metrics to taosKeeper, default every 10 seconds. Only value from 1 to 10 is valid.
          interval = 10

          # log configuration
          [log]
          # All log files are stored in this directory
          #
          #path = "/var/log/taos" # on linux/macOS

          # log filter level
          #
          #level = "info"

          # Compress archived log files or not
          #
          #compress = false

          # The number of log files retained by the current explorer server instance in the `path` directory
          #
          #rotationCount = 30

          # Rotate when the log file reaches this size
          #
          #rotationSize = "1GB"

          # Log downgrade when the remaining disk space reaches this size, only logging `ERROR` level logs
          #
          #reservedDiskSize = "1GB"

          # The number of days log files are retained
          #
          #keepDays = 30

          # Watching the configuration file for log.loggers changes, default to true.
          #
          #watching = true

          # Customize the log output level of modules, and changes will be applied after modifying the file when log.watching is enabled
          #
          # ## Examples:
          #
          # crate = "error"
          # crate::mod1::mod2 = "info"
          # crate::span[field=value] = "warn"
          #
          [log.loggers]
          #"actix_server::accept" = "warn"
          #"taos::query" = "warn"
```

You can see that the configuration is similar to the first one, with the addition of the taosx service. The taosx service uses a similar storage configuration to the server service, and the server service is configured with 3 replicas. Since the taosx service is not cluster-ready, it is recommended to deploy it separately.

After configuring the values.yaml file, use the following command to install the TDengine Chart:

```shell
helm install replica3 tdengine-enterprise-3.5.0.tgz -f values.yaml
```

You can use the following command to expose the explorer service to the outside world with ingress:

```shell
tee replica3-ingress.yaml <<EOF
# This is a helm chart example for deploying 3 replicas of TDengine Explorer
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: replica3-ingress
  namespace: default
spec:
  rules:
  - host: replica3.local.tdengine.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: replica3-tdengine-taosx
            port:
              number: 6060
EOF

kubectl apply -f replica3-ingress.yaml
```

Use `kubectl get ingress` to view the ingress service.

```shell
root@server:/data1/projects/helm# kubectl get ingress
NAME               CLASS   HOSTS                         ADDRESS        PORTS   AGE
replica3-ingress   nginx   replica3.local.tdengine.com   192.168.1.58   80      48m
```

You can configure the domain name resolution to point to the ingress service's external IP address. For example, add the following line to the hosts file:

```conf
192.168.1.58 replica3.local.tdengine.com
```

Now you can access the explorer service through the domain name `replica3.local.tdengine.com`.

```shell
curl http://replica3.local.tdengine.com
```

@@ -0,0 +1,11 @@
---
title: Deploying Your Cluster
slug: /operations-and-maintenance/deploy-your-cluster
---

import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';

You can deploy a TDengine cluster manually, by using Docker, or by using Kubernetes. For Kubernetes deployments, you can use kubectl or Helm.

<DocCardList items={useCurrentSidebarCategory().items}/>
@@ -17,7 +17,9 @@ TDengine is designed for various writing scenarios, and many of these scenarios

```sql
COMPACT DATABASE db_name [start with 'XXXX'] [end with 'YYYY'];
COMPACT [db_name.]VGROUPS IN (vgroup_id1, vgroup_id2, ...) [start with 'XXXX'] [end with 'YYYY'];
SHOW COMPACTS;
SHOW COMPACT compact_id;
KILL COMPACT compact_id;
```
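
As a concrete (hypothetical) usage of the statements above: compact one month of data in a database named `db1`, monitor progress, and cancel the job if needed. The database name, time window, and the id `1` are illustrative placeholders.

```sql
COMPACT DATABASE db1 START WITH '2024-06-01 00:00:00' END WITH '2024-06-30 23:59:59';
SHOW COMPACTS;
KILL COMPACT 1;
```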
@@ -1,14 +1,19 @@
---
title: Data Backup and Restoration
slug: /operations-and-maintenance/data-backup-and-restoration
---

import Image from '@theme/IdealImage';
import imgBackup from '../assets/data-backup-01.png';

You can back up the data in your TDengine cluster and restore it in the event that data is lost or damaged.

## Data Backup and Restoration Using taosdump

taosdump is an open-source tool that supports backing up data from a running TDengine cluster and restoring the backed-up data to the same or another running TDengine cluster. taosdump can back up the database as a logical data unit or back up data records within a specified time period in the database. When using taosdump, you can specify the directory path for data backup. If no directory path is specified, taosdump will default to backing up the data in the current directory.

### Back Up Data with taosdump

Below is an example of using taosdump to perform data backup.

```shell
@@ -19,6 +24,8 @@ After executing the above command, taosdump will connect to the TDengine cluster

When using taosdump, if the specified storage path already contains data files, taosdump will prompt the user and exit immediately to avoid data overwriting. This means the same storage path can only be used for one backup. If you see related prompts, please operate carefully to avoid accidental data loss.

### Restore Data with taosdump

To restore data files from a specified local file path to a running TDengine cluster, you can execute the taosdump command by specifying command-line parameters and the data file path. Below is an example code for taosdump performing data restoration.

```shell
@@ -27,25 +34,62 @@ taosdump -i /file/path -h localhost -P 6030

After executing the above command, taosdump will connect to the TDengine cluster at localhost:6030 and restore the data files from /file/path to the TDengine cluster.
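
For reference, a matched backup/restore pair might be composed as below. This is a sketch only: the database name `power` and the directory are placeholders, `-D` selects databases, and `-o`/`-i` set the output/input path as in the restore example above.

```shell
# Compose a backup command and the matching restore command as strings.
# "power" and /root/data_backup are placeholders for your own database/path.
backup="taosdump -h localhost -P 6030 -D power -o /root/data_backup"
restore="taosdump -h localhost -P 6030 -i /root/data_backup"
echo "$backup"
echo "$restore"
```

Note that the restore target cluster (`-h`/`-P`) does not have to be the one the backup was taken from.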

## Data Backup and Restoration in TDengine Enterprise

TDengine Enterprise implements incremental backup and recovery of data by using data subscription. The backup and recovery functions of TDengine Enterprise include the following concepts:

1. Incremental data backup: Based on TDengine's data subscription function, all data changes of the **backup object** (including addition, modification, deletion, and metadata changes) are recorded to generate a backup file.
2. Data recovery: Use the backup files generated by incremental data backup to restore the **backup object** to a specified point in time.
3. Backup object: The object that the user backs up, which can be a **database** or a **supertable**.
4. Backup plan: A periodic backup task that the user creates for the backup object. The backup plan starts at a specified time point and executes a backup task at intervals of the **backup cycle**. Each backup task generates a **backup point**.
5. Backup point: Each time a backup task is executed, a set of backup files is generated. They correspond to a point in time, called a **backup point**. The first backup point is called the **initial backup point**.
6. Restore task: The user selects a backup point in the backup plan and creates a restore task. The restore task starts from the **initial backup point** and replays the data changes in the backup files one by one until the specified backup point is reached.

### Incremental Backup Example

<figure>
<Image img={imgBackup} alt="Incremental backup process"/>
<figcaption>Figure 1. Incremental backup process</figcaption>
</figure>

1. The user creates a backup plan to execute a backup task every 1 day starting from 2024-08-27 00:00:00.
2. The first backup task is executed at 2024-08-27 00:00:00, generating the initial backup point.
3. After that, a backup task is executed every 1 day, generating multiple backup points.
4. The user can select a backup point and create a restore task.
5. The restore task starts from the initial backup point, applies the backup points one by one, and restores the data to the specified backup point.

### Back Up Data in TDengine Enterprise

1. In a web browser, open the taosExplorer interface for TDengine. This interface is located on port 6060 on the hostname or IP address running TDengine.
2. In the main menu on the left, click **Management** and open the **Backup** tab.
3. Under **Backup Plan**, click **Create New Backup** to define your backup plan.
   1. **Database:** Select the database that you want to back up.
   2. **Super Table:** (Optional) Select the supertable that you want to back up. If you do not select a supertable, all data in the database is backed up.
   3. **Next execution time:** Enter the date and time when you want to perform the initial backup for this backup plan. If you specify a date and time in the past, the initial backup is performed immediately.
   4. **Backup Cycle:** Specify how often you want to perform incremental backups. The value of this field must be less than the value of `WAL_RETENTION_PERIOD` for the specified database.
   5. **Retry times:** Enter how many times you want to retry a backup task that has failed, provided that the specific failure might be resolved by retrying.
   6. **Retry interval:** Enter the delay in seconds between retry attempts.
   7. **Directory:** Enter the full path of the directory in which you want to store backup files.
   8. **Backup file max size:** Enter the maximum size of a single backup file. If the total size of your backup exceeds this number, the backup is split into multiple files.
   9. **Compression level:** Select **fastest** for the fastest performance but lowest compression ratio, **best** for the highest compression ratio but slowest performance, or **balanced** for a combination of performance and compression.
4. Click **Confirm** to create the backup plan.

You can view your backup plans and modify, clone, or delete them using the buttons in the **Operation** columns. Click **Refresh** to update the status of your plans. Note that you must stop a backup plan before you can delete it. You can also click **View** in the **Backup File** column to view the backup record points and files created by each plan.

### Restore Data in TDengine Enterprise

1. Locate the backup plan containing data that you want to restore and click **View** in the **Backup File** column.
2. Determine the backup record point to which you want to restore and click the Restore icon in the **Operation** column.
3. Select the backup file timestamp and target database and click **Confirm**.

## Troubleshooting

### Port Access Exception

A port access exception is indicated by the following error:

```text
Error: tmq to td task exec error
@@ -54,9 +98,11 @@ Caused by:
[0x000B] Unable to establish connection
```

If you encounter this error, check whether the data source FQDN is reachable and whether port 6030 is listening and accessible.

### Connection Issues

A connection issue is indicated by the task failing to start and reporting the following error:

```text
Error: tmq to td task exec error
@@ -67,15 +113,16 @@ Caused by:
2: failed to lookup address information: Temporary failure in name resolution
```

The following are some possible errors for WebSocket connections:

- "Temporary failure in name resolution": DNS resolution error. Check whether the specified IP address or FQDN can be accessed normally.
- "IO error: Connection refused (os error 111)": Port access failed. Check whether the port is configured correctly and is enabled and accessible.
- "IO error: received corrupt message": Message parsing failed. This may be because SSL was enabled using the WSS method, but the source port is not supported.
- "HTTP error: *": Confirm that you are connecting to the correct taosAdapter port and that your LSB/Nginx/Proxy has been configured correctly.
- "WebSocket protocol error: Handshake not finished": WebSocket connection error. This is typically caused by an incorrectly configured port.
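
Two quick local checks cover the most common causes above, name resolution and port reachability. This is a sketch: `localhost` stands in for your taosAdapter/TDengine FQDN, and the port check line is commented out because it needs a live endpoint.

```shell
# Check name resolution for the host (replace localhost with your FQDN).
host="localhost"
getent hosts "$host" && echo "name resolution OK"
# nc -z "$host" 6041 && echo "port reachable"   # uncomment with a real host/port
```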

### WAL Configuration

A WAL configuration issue is indicated by the task failing to start and reporting the following error:

```text
Error: tmq to td task exec error
@@ -84,11 +131,8 @@ Caused by:
[0x038C] WAL retention period is zero
```

To resolve this error, modify the WAL retention period for the affected database:

```sql
ALTER DATABASE test WAL_RETENTION_PERIOD 3600;
```
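
Assuming the database is named `test` as in the example, you can confirm the new value afterward; in TDengine 3.x the setting is reported in the `wal_retention_period` column of `information_schema.ins_databases`.

```sql
SELECT name, wal_retention_period
FROM information_schema.ins_databases
WHERE name = 'test';
```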
|
||||||
|
|
|
import Icinga2 from "../../assets/resources/_icinga2.mdx"
import TCollector from "../../assets/resources/_tcollector.mdx"

taosAdapter is a companion tool for TDengine, serving as a bridge and adapter between the TDengine cluster and applications. It provides an easy and efficient way to ingest data directly from data collection agents (such as Telegraf, StatsD, collectd, etc.). It also offers InfluxDB/OpenTSDB compatible data ingestion interfaces, allowing InfluxDB/OpenTSDB applications to be seamlessly ported to TDengine.

The TDengine connectors in various languages communicate with TDengine through the WebSocket interface, so taosAdapter must be installed.

The architecture diagram is as follows:

<figure>
<Image img={imgAdapter} alt="taosAdapter architecture"/>
<figcaption>Figure 1. taosAdapter architecture</figcaption>
</figure>

## Feature List

taosAdapter provides the following features:

- WebSocket interface:
  Supports executing SQL, schemaless writing, parameter binding, and data subscription through the WebSocket protocol.
- Compatible with the InfluxDB v1 write interface:
  [https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/](https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/)
- Compatible with OpenTSDB JSON and telnet format writing:
  - [http://opentsdb.net/docs/build/html/api_http/put.html](http://opentsdb.net/docs/build/html/api_http/put.html)
  - [http://opentsdb.net/docs/build/html/api_telnet/put.html](http://opentsdb.net/docs/build/html/api_telnet/put.html)
- collectd data writing:
  collectd is a system statistics collection daemon; visit [https://collectd.org/](https://collectd.org/) for more information.
- StatsD data writing:
  StatsD is a simple yet powerful daemon for gathering statistics; visit [https://github.com/statsd/statsd](https://github.com/statsd/statsd) for more information.
- icinga2 OpenTSDB writer data writing:
  icinga2 is a software for collecting check result metrics and performance data; visit [https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer](https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer) for more information.
- TCollector data writing:
  TCollector is a client process that collects data from local collectors and pushes it to OpenTSDB; visit [http://opentsdb.net/docs/build/html/user_guide/utilities/tcollector.html](http://opentsdb.net/docs/build/html/user_guide/utilities/tcollector.html) for more information.
- node_exporter data collection and writing:
  node_exporter is an exporter of machine metrics; visit [https://github.com/prometheus/node_exporter](https://github.com/prometheus/node_exporter) for more information.
- Supports Prometheus remote_read and remote_write:
  remote_read and remote_write are Prometheus's data read-write separation cluster solutions; visit [https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis](https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis) for more information.
- RESTful API:
  [RESTful API](../../client-libraries/rest-api/)
### WebSocket Interface

Through the WebSocket interface of taosAdapter, connectors in various languages can perform SQL execution, schemaless writing, parameter binding, and data subscription. Refer to the [Development Guide](../../../developer-guide/connecting-to-tdengine/#websocket-connection) for more details.
### Compatible with InfluxDB v1 write interface

You can use any client that supports the HTTP protocol to write data in InfluxDB compatible format to TDengine by accessing the RESTful interface URL `http://<fqdn>:6041/influxdb/v1/write`.

Supported InfluxDB parameters are as follows:

- `db` specifies the database name used by TDengine
- `precision` the time precision used by TDengine
- `u` TDengine username
- `p` TDengine password
- `ttl` the lifespan of automatically created subtables, determined by the TTL parameter of the first data entry in the subtable, which cannot be updated. For more information, please refer to the TTL parameter in the [table creation document](../../sql-manual/manage-tables/).

Note: Currently, InfluxDB's token authentication method is not supported; only Basic authentication and query parameter verification are supported.

Example: `curl --request POST http://127.0.0.1:6041/influxdb/v1/write?db=test --user "root:taosdata" --data-binary "measurement,host=host1 field1=2i,field2=2.0 1577836800000000000"`
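To make the line-protocol format concrete, the snippet below rebuilds the record used in the curl example. The `influx_line` helper is purely illustrative and not part of any TDengine library:

```python
def influx_line(measurement, tags, fields, ts_ns):
    """Build one InfluxDB line-protocol record:
    measurement,tag1=v1 field1=v1,field2=v2 timestamp"""
    tag_part = ",".join(f"{k}={v}" for k, v in tags.items())
    field_part = ",".join(f"{k}={v}" for k, v in fields.items())
    return f"{measurement},{tag_part} {field_part} {ts_ns}"

line = influx_line("measurement", {"host": "host1"},
                   {"field1": "2i", "field2": "2.0"}, 1577836800000000000)
print(line)  # measurement,host=host1 field1=2i,field2=2.0 1577836800000000000
```

The resulting string is exactly what the curl example POSTs to `/influxdb/v1/write?db=test`.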
### Compatible with OpenTSDB JSON and telnet format writing

You can use any client that supports the HTTP protocol to write data in OpenTSDB compatible format to TDengine by accessing the RESTful interface URL `http://<fqdn>:6041/<APIEndPoint>`. The endpoints are as follows:

```text
/opentsdb/v1/put/json/<db>
/opentsdb/v1/put/telnet/<db>
```
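As a sketch of what the JSON endpoint expects, here is a minimal OpenTSDB-style `put` payload built in Python. The field names follow the OpenTSDB HTTP API linked above; the metric and tag values are made up for illustration:

```python
import json

# One data point in OpenTSDB's JSON "put" format; the put API accepts
# either a single object or an array of such objects.
point = {
    "metric": "sys.cpu.nice",
    "timestamp": 1577836800,
    "value": 18,
    "tags": {"host": "web01"},
}
payload = json.dumps([point])
print(payload)
```

The payload would then be POSTed to `/opentsdb/v1/put/json/<db>` with the same Basic authentication as the other HTTP endpoints.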
### collectd data writing

<CollectD />

### StatsD data writing

<StatsD />

### icinga2 OpenTSDB writer data writing

<Icinga2 />

### TCollector data writing

<TCollector />

### node_exporter data collection and writing

An exporter used by Prometheus that exposes hardware and operating system metrics from \*NIX kernels

- Enable the taosAdapter configuration item `node_exporter.enable`
- Set the relevant configuration for node_exporter
- Restart taosAdapter

### Supports Prometheus remote_read and remote_write

<Prometheus />
### RESTful API

You can use any client that supports the HTTP protocol to write data to TDengine or query data from TDengine by accessing the RESTful interface URL `http://<fqdn>:6041/rest/sql`. For details, please refer to the [REST API documentation](../../client-libraries/rest-api/).
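As an illustration, a REST request is just an HTTP POST with SQL in the body and Basic authentication. The sketch below builds the header for the default `root:taosdata` credentials; the actual send is commented out because it requires a running taosAdapter:

```python
import base64

user, password = "root", "taosdata"
# Basic auth: base64 of "user:password"
token = base64.b64encode(f"{user}:{password}".encode()).decode()
headers = {"Authorization": f"Basic {token}"}
print(headers["Authorization"])  # Basic cm9vdDp0YW9zZGF0YQ==

# With taosAdapter running locally you could then execute SQL, e.g.:
# import urllib.request
# req = urllib.request.Request("http://127.0.0.1:6041/rest/sql",
#                              data=b"SHOW DATABASES", headers=headers)
# print(urllib.request.urlopen(req).read())
```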
## Installation

taosAdapter is part of the TDengine server software. If you are using the TDengine server, you do not need any additional steps to install taosAdapter. If you need to deploy taosAdapter separately from the TDengine server, you should install the complete TDengine on that server to install taosAdapter. If you need to compile taosAdapter from source code, you can refer to the [Build taosAdapter](https://github.com/taosdata/taosadapter/blob/3.0/BUILD.md) document.

After the installation is complete, you can start the taosAdapter service using the command `systemctl start taosadapter`.

## Configuration

taosAdapter supports configuration through command-line parameters, environment variables, and configuration files. The default configuration file is `/etc/taos/taosadapter.toml`.
Command-line parameters (excerpt from `taosadapter --help`):

```text
Usage of taosAdapter:
  ...
  --instanceId int                 instance ID. Env "TAOS_ADAPTER_INSTANCE_ID" (default 32)
  --log.compress                   whether to compress old log. Env "TAOS_ADAPTER_LOG_COMPRESS"
  --log.enableRecordHttpSql        whether to record http sql. Env "TAOS_ADAPTER_LOG_ENABLE_RECORD_HTTP_SQL"
  --log.keepDays uint              log retention days, must be a positive integer. Env "TAOS_ADAPTER_LOG_KEEP_DAYS" (default 30)
  --log.level string               log level (trace debug info warning error). Env "TAOS_ADAPTER_LOG_LEVEL" (default "info")
  --log.path string                log path. Env "TAOS_ADAPTER_LOG_PATH" (default "/var/log/taos")
  --log.reservedDiskSize string    reserved disk size for log dir (KB MB GB), must be a positive integer. Env "TAOS_ADAPTER_LOG_RESERVED_DISK_SIZE" (default "1GB")
  ...
  --log.sqlRotationSize string     record sql log rotation size(KB MB GB), must be a positive integer. Env "TAOS_ADAPTER_LOG_SQL_ROTATION_SIZE" (default "1GB")
  --log.sqlRotationTime duration   record sql log rotation time. Env "TAOS_ADAPTER_LOG_SQL_ROTATION_TIME" (default 24h0m0s)
  --logLevel string                log level (trace debug info warning error). Env "TAOS_ADAPTER_LOG_LEVEL" (default "info")
  --maxAsyncConcurrentLimit int    The maximum number of concurrent calls allowed for the C asynchronous method. 0 means use CPU core count. Env "TAOS_ADAPTER_MAX_ASYNC_CONCURRENT_LIMIT"
  --maxSyncConcurrentLimit int     The maximum number of concurrent calls allowed for the C synchronized method. 0 means use CPU core count. Env "TAOS_ADAPTER_MAX_SYNC_CONCURRENT_LIMIT"
  --monitor.collectDuration duration   Set monitor duration. Env "TAOS_ADAPTER_MONITOR_COLLECT_DURATION" (default 3s)
  --monitor.disable                Whether to disable monitoring. Env "TAOS_ADAPTER_MONITOR_DISABLE" (default true)
  --monitor.identity string        The identity of the current instance, or 'hostname:port' if it is empty. Env "TAOS_ADAPTER_MONITOR_IDENTITY"
  ...
  --opentsdb_telnet.flushInterval duration   opentsdb_telnet flush interval (0s means not valid). Env "TAOS_ADAPTER_OPENTSDB_TELNET_FLUSH_INTERVAL"
  --opentsdb_telnet.maxTCPConnections int    max tcp connections. Env "TAOS_ADAPTER_OPENTSDB_TELNET_MAX_TCP_CONNECTIONS" (default 250)
  --opentsdb_telnet.password string          opentsdb_telnet password. Env "TAOS_ADAPTER_OPENTSDB_TELNET_PASSWORD" (default "taosdata")
  --opentsdb_telnet.ports ints               opentsdb telnet tcp port. Env "TAOS_ADAPTER_OPENTSDB_TELNET_PORTS" (default [6046,6047,6048,6049])
  --opentsdb_telnet.tcpKeepAlive             enable tcp keep alive. Env "TAOS_ADAPTER_OPENTSDB_TELNET_TCP_KEEP_ALIVE"
  --opentsdb_telnet.ttl int                  opentsdb_telnet data ttl. Env "TAOS_ADAPTER_OPENTSDB_TELNET_TTL"
  --opentsdb_telnet.user string              opentsdb_telnet user. Env "TAOS_ADAPTER_OPENTSDB_TELNET_USER" (default "root")
  ...
  --prometheus.enable              enable prometheus. Env "TAOS_ADAPTER_PROMETHEUS_ENABLE" (default true)
  --restfulRowLimit int            restful returns the maximum number of rows (-1 means no limit). Env "TAOS_ADAPTER_RESTFUL_ROW_LIMIT" (default -1)
  --smlAutoCreateDB                Whether to automatically create db when writing with schemaless. Env "TAOS_ADAPTER_SML_AUTO_CREATE_DB"
  --ssl.certFile string            ssl cert file path. Env "TAOS_ADAPTER_SSL_CERT_FILE"
  --ssl.enable                     enable ssl. Env "TAOS_ADAPTER_SSL_ENABLE"
  --ssl.keyFile string             ssl key file path. Env "TAOS_ADAPTER_SSL_KEY_FILE"
  --statsd.allowPendingMessages int    statsd allow pending messages. Env "TAOS_ADAPTER_STATSD_ALLOW_PENDING_MESSAGES" (default 50000)
  --statsd.db string               statsd db name. Env "TAOS_ADAPTER_STATSD_DB" (default "statsd")
  --statsd.deleteCounters          statsd delete counter cache after gather. Env "TAOS_ADAPTER_STATSD_DELETE_COUNTERS" (default true)
  ...
  -V, --version                    Print the version and exit
```
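The flags above can also be set in the configuration file. The fragment below is a hypothetical illustration only; the key names are inferred from the flag names, so verify them against the example configuration file shipped in the taosadapter repository before use:

```toml
# Hypothetical fragment -- verify key names against the shipped
# example/config/taosadapter.toml before use.
instanceId = 32

[log]
level = "info"
path = "/var/log/taos"
```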
See the example configuration file at [example/config/taosadapter.toml](https://github.com/taosdata/taosadapter/blob/3.0/example/config/taosadapter.toml).

### Cross-Origin Configuration

When making API calls from the browser, please configure the following Cross-Origin Resource Sharing (CORS) parameters based on your actual situation:

- **`cors.allowAllOrigins`**: Whether to allow all origins to access, default is true.
- **`cors.allowOrigins`**: A comma-separated list of origins allowed to access. Multiple origins can be specified.
- **`cors.allowHeaders`**: A comma-separated list of request headers allowed for cross-origin access. Multiple headers can be specified.
- **`cors.exposeHeaders`**: A comma-separated list of response headers exposed for cross-origin access. Multiple headers can be specified.
- **`cors.allowCredentials`**: Whether to allow cross-origin requests to include user credentials, such as cookies, HTTP authentication information, or client SSL certificates.
- **`cors.allowWebSockets`**: Whether to allow WebSockets connections.
If you are not making API calls through a browser, you do not need to worry about these configurations.

The above configurations take effect for the following interfaces:

* RESTful API requests
* WebSocket API requests
* InfluxDB v1 write interface
* OpenTSDB HTTP write interface

For details about the CORS protocol, please refer to [https://www.w3.org/wiki/CORS_Enabled](https://www.w3.org/wiki/CORS_Enabled) or [https://developer.mozilla.org/docs/Web/HTTP/CORS](https://developer.mozilla.org/docs/Web/HTTP/CORS).
### Connection Pool Configuration

taosAdapter uses a connection pool to manage connections to TDengine, improving concurrency performance and resource utilization. The connection pool configuration applies to the following interfaces, which share a single connection pool:

* RESTful API requests
* InfluxDB v1 write interface
* OpenTSDB JSON and telnet format writing
* Telegraf data writing
* collectd data writing
* StatsD data writing
* node_exporter data collection writing
* Prometheus remote_read and remote_write

The configuration parameters for the connection pool are as follows:

- **`pool.maxConnect`**: The maximum number of connections allowed in the pool, default is twice the number of CPU cores. It is recommended to keep the default setting.
- **`pool.maxIdle`**: The maximum number of idle connections in the pool, default is the same as `pool.maxConnect`. It is recommended to keep the default setting.
- **`pool.waitTimeout`**: Timeout for obtaining a connection from the pool, default is 60 seconds. If a connection is not obtained within the timeout period, HTTP status code 503 is returned. This parameter is available starting from version 3.3.3.0.
- **`pool.maxWait`**: The maximum number of requests allowed to wait for a connection from the pool, default is 0, which means no limit. When the number of queued requests exceeds this value, new requests return HTTP status code 503. This parameter is available starting from version 3.3.3.0.
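The interaction of `pool.maxConnect`, `pool.waitTimeout`, and `pool.maxWait` can be sketched with a toy semaphore-based pool. This class is our illustration of the semantics described above, not taosAdapter's actual implementation:

```python
import threading

class ToyPool:
    """Illustrative model of the pool semantics described above."""
    def __init__(self, max_connect, wait_timeout, max_wait=0):
        self._sem = threading.Semaphore(max_connect)
        self._wait_timeout = wait_timeout
        self._max_wait = max_wait        # 0 means: no limit on queued requests
        self._queued = 0
        self._lock = threading.Lock()

    def acquire(self):
        """Return the HTTP status a request would see: 200 or 503."""
        with self._lock:
            if self._max_wait and self._queued >= self._max_wait:
                return 503               # too many requests already waiting
            self._queued += 1
        got = self._sem.acquire(timeout=self._wait_timeout)
        with self._lock:
            self._queued -= 1
        return 200 if got else 503       # 503 once wait_timeout expires

    def release(self):
        self._sem.release()

pool = ToyPool(max_connect=1, wait_timeout=0.05)
first = pool.acquire()    # gets the only connection -> 200
second = pool.acquire()   # times out waiting        -> 503
pool.release()
```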
### HTTP Response Code Configuration

taosAdapter uses the parameter `httpCodeServerError` to set whether to return a non-200 HTTP status code when the C interface returns an error. When set to true, it returns different HTTP status codes based on the error code returned by the C interface. See [HTTP Response Codes](../../client-libraries/rest-api/) for details.

This configuration only affects the **RESTful interface**.

**Parameter Description**

- **`httpCodeServerError`**:
  - **When set to `true`**: Map the error code returned by the C interface to the corresponding HTTP status code.
  - **When set to `false`**: Regardless of the error returned by the C interface, always return the HTTP status code `200` (default value).
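The effect of the switch can be summarized in a few lines of Python. The `500` here is a simplified stand-in; the real mapping from engine error codes to HTTP status codes is described in the REST API documentation:

```python
def http_status(c_error_code, http_code_server_error=False):
    """Illustrative: the HTTP status a failed request receives under each setting."""
    if not http_code_server_error:
        # Default: always 200; the engine error is still reported in the
        # "code" field of the JSON response body.
        return 200
    return 200 if c_error_code == 0 else 500  # simplified stand-in mapping

print(http_status(0x038C))        # 200 -- error only visible in the body
print(http_status(0x038C, True))  # 500 -- error surfaced as an HTTP status
```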
### Memory Limit Configuration

taosAdapter monitors its memory usage during operation and adjusts its behavior through two thresholds. The valid value range is an integer from 1 to 100, and the unit is the percentage of system physical memory.

This configuration only affects the following interfaces:

* RESTful interface requests
* InfluxDB v1 write interface
* OpenTSDB HTTP write interface
* Prometheus remote_read and remote_write interfaces

**Parameter Description**

- **`pauseQueryMemoryThreshold`**:
  - When memory usage exceeds this threshold, taosAdapter stops processing query requests.
  - Default value: `70` (i.e., 70% of system physical memory).
- **`pauseAllMemoryThreshold`**:
  - When memory usage exceeds this threshold, taosAdapter stops processing all requests (including writes and queries).
  - Default value: `80` (i.e., 80% of system physical memory).

When memory usage falls back below the threshold, taosAdapter automatically resumes the corresponding function.

**HTTP return content:**

- **When `pauseQueryMemoryThreshold` is exceeded**:
  - HTTP status code: `503`
  - Return content: `"query memory exceeds threshold"`
- **When `pauseAllMemoryThreshold` is exceeded**:
  - HTTP status code: `503`
  - Return content: `"memory exceeds threshold"`

**Status check interface:**

The memory status of taosAdapter can be checked through the following interface:

- **Normal status**: `http://<fqdn>:6041/-/ping` returns `code 200`.
- **Memory exceeds threshold**:
  - If the memory usage exceeds `pauseAllMemoryThreshold`, `code 503` is returned.
  - If the memory usage exceeds `pauseQueryMemoryThreshold` and the request parameter contains `action=query`, `code 503` is returned.

**Related configuration parameters:**

- **`monitor.collectDuration`**: memory monitoring interval, default value is `3s`, environment variable `TAOS_MONITOR_COLLECT_DURATION`.
- **`monitor.incgroup`**: whether taosAdapter is running in a container (set to `true` when running in a container), default value is `false`, environment variable `TAOS_MONITOR_INCGROUP`.
- **`monitor.pauseQueryMemoryThreshold`**: memory threshold (percentage) at which query requests are paused, default value is `70`, environment variable `TAOS_MONITOR_PAUSE_QUERY_MEMORY_THRESHOLD`.
- **`monitor.pauseAllMemoryThreshold`**: memory threshold (percentage) at which query and write requests are paused, default value is `80`, environment variable `TAOS_MONITOR_PAUSE_ALL_MEMORY_THRESHOLD`.

You can adjust these values to suit your application scenario and operations strategy, and it is recommended to use operations monitoring software to track system memory status in a timely manner. A load balancer can also use the status check interface to probe the health of taosAdapter.
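The pause logic described above boils down to a comparison against the two thresholds, which can be sketched as:

```python
def adapter_state(mem_percent, pause_query=70, pause_all=80):
    """Illustrative model of the pause thresholds (defaults 70/80)."""
    if mem_percent > pause_all:
        return "paused"          # all requests get 503 "memory exceeds threshold"
    if mem_percent > pause_query:
        return "queries-paused"  # queries get 503 "query memory exceeds threshold"
    return "ok"

print(adapter_state(60))  # ok
print(adapter_state(75))  # queries-paused
print(adapter_state(85))  # paused
```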
### Schemaless Write Create DB Configuration

Starting from version 3.0.4.0, taosAdapter provides the parameter `smlAutoCreateDB` to control whether to automatically create a database (DB) when writing via the schemaless protocols.

The `smlAutoCreateDB` parameter only affects the following interfaces:

- InfluxDB v1 write interface
- OpenTSDB JSON and telnet format writing
- Telegraf data writing
- collectd data writing
- StatsD data writing
- node_exporter data writing

**Parameter Description**

- **`smlAutoCreateDB`**:
  - **When set to `true`**: If the target database does not exist during a schemaless write, taosAdapter automatically creates it.
  - **When set to `false`**: The user needs to create the database manually; otherwise the write fails (default value).
### Getting the VGroup ID of a table
|
### Number of results returned configuration
|
||||||
|
|
||||||
You can send a POST request to the HTTP interface `http://<fqdn>:<port>/rest/sql/<db>/vgid` to get the VGroup ID of a table.
|
taosAdapter provides the parameter `restfulRowLimit` to control the number of results returned by the HTTP interface.
|
||||||
The body should be a JSON array of multiple table names.
|
|
||||||
|
|
||||||
Example: Get the VGroup ID for the database power and tables d_bind_1 and d_bind_2.
|
The `restfulRowLimit` parameter only affects the return results of the following interfaces:
|
||||||
|
|
||||||
|
- RESTful interface
|
||||||
|
- Prometheus remote_read interface
|
||||||
|
|
||||||
|
**Parameter Description**
|
||||||
|
|
||||||
|
- **`restfulRowLimit`**:
|
||||||
|
- **When set to a positive integer**: The number of results returned by the interface will not exceed this value.
|
||||||
|
- **When set to `-1`**: The number of results returned by the interface is unlimited (default value).
|
||||||
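The limiting behavior can be sketched in a few lines of shell; this is an illustration of the semantics only, not taosAdapter source code:

```shell
# Sketch of restfulRowLimit semantics: a positive limit caps the number of
# returned rows; -1 means unlimited (the default).
apply_row_limit() {               # usage: apply_row_limit <limit>  (rows on stdin)
  if [ "$1" -eq -1 ]; then
    cat                           # -1: pass all rows through
  else
    head -n "$1"                  # positive: return at most <limit> rows
  fi
}

printf 'row1\nrow2\nrow3\n' | apply_row_limit 2    # prints row1 and row2
printf 'row1\nrow2\nrow3\n' | apply_row_limit -1   # prints all three rows
```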
|
|
||||||
|
### Log configuration
|
||||||
|
|
||||||
|
1. You can set the detail level of taosAdapter log output via the `--log.level` parameter or the environment variable `TAOS_ADAPTER_LOG_LEVEL`. Valid values include: panic, fatal, error, warn, warning, info, debug, and trace.
|
||||||
|
2. Starting from **version 3.3.5.0**, taosAdapter supports dynamically modifying the log level through an HTTP interface. Users can adjust the log level by sending an HTTP PUT request to the `/config` endpoint. This endpoint uses the same authentication as the `/rest/sql` interface, and the configuration key-value pairs must be passed in the request body in JSON format.
|
||||||
|
|
||||||
|
The following is an example of setting the log level to debug through the curl command:
|
||||||
|
|
||||||
```shell
|
```shell
|
||||||
curl --location 'http://127.0.0.1:6041/rest/sql/power/vgid' \
|
curl --location --request PUT 'http://127.0.0.1:6041/config' \
|
||||||
--user 'root:taosdata' \
|
-u root:taosdata \
|
||||||
--data '["d_bind_1","d_bind_2"]'
|
--data '{"log.level": "debug"}'
|
||||||
```
|
```
|
||||||
|
|
||||||
response:
|
## Service Management
|
||||||
|
|
||||||
```json
|
### Starting/Stopping taosAdapter
|
||||||
{"code":0,"vgIDs":[153,152]}
|
|
||||||
```
|
|
||||||
|
|
||||||
## Memory Usage Optimization Methods
|
On Linux systems, the taosAdapter service is managed by default by systemd. Use the command `systemctl start taosadapter` to start the taosAdapter service. Use the command `systemctl stop taosadapter` to stop the taosAdapter service.
|
||||||
|
|
||||||
taosAdapter monitors its memory usage at runtime and adjusts its behavior through two thresholds. Valid values range from -1 to 100, expressed as a percentage of system physical memory.
|
### Upgrading taosAdapter
|
||||||
|
|
||||||
- pauseQueryMemoryThreshold
|
taosAdapter and TDengine server need to use the same version. Please upgrade taosAdapter by upgrading the TDengine server.
|
||||||
- pauseAllMemoryThreshold
|
If taosAdapter is deployed separately from taosd, it must be upgraded by upgrading the TDengine server on its host.
|
||||||
|
|
||||||
When the `pauseQueryMemoryThreshold` threshold is exceeded, taosAdapter stops processing query requests.
|
### Removing taosAdapter
|
||||||
|
|
||||||
HTTP return content:
|
Use the command `rmtaos` to remove the TDengine server software, including taosAdapter.
|
||||||
|
|
||||||
- code 503
|
## Monitoring Metrics
|
||||||
- body "query memory exceeds threshold"
|
|
||||||
|
|
||||||
When the `pauseAllMemoryThreshold` threshold is exceeded, taosAdapter stops processing all write and query requests.
|
Currently, taosAdapter only collects monitoring indicators for RESTful/WebSocket related requests. There are no monitoring indicators for other interfaces.
|
||||||
|
|
||||||
HTTP return content:
|
taosAdapter reports monitoring indicators to taosKeeper, which will be written to the monitoring database by taosKeeper. The default is the `log` database, which can be modified in the taoskeeper configuration file. The following is a detailed introduction to these monitoring indicators.
|
||||||
|
|
||||||
- code 503
|
The `adapter_requests` table records taosAdapter monitoring data, and the fields are as follows:
|
||||||
- body "memory exceeds threshold"
|
|
||||||
|
|
||||||
When memory usage falls back below the threshold, the corresponding functions resume.
|
|
||||||
|
|
||||||
Status check interface `http://<fqdn>:6041/-/ping`
|
|
||||||
|
|
||||||
- Normally returns `code 200`
|
|
||||||
- Without parameters: returns `code 503` if memory exceeds `pauseAllMemoryThreshold`
|
|
||||||
- With the request parameter `action=query`: returns `code 503` if memory exceeds either `pauseQueryMemoryThreshold` or `pauseAllMemoryThreshold`
|
|
||||||
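The status-code behavior described above can be sketched as a small shell function; this is an illustration of the documented rules, not taosAdapter source, with the default thresholds (80 and 70) assumed:

```shell
# Minimal sketch of the /-/ping status logic.
# Assumed defaults: pauseAllMemoryThreshold=80, pauseQueryMemoryThreshold=70.
ping_status() {                   # usage: ping_status <mem_percent> [query]
  mem=$1 action=${2:-}
  if [ "$mem" -gt 80 ]; then
    echo 503                      # all writes and queries are paused
  elif [ "$action" = "query" ] && [ "$mem" -gt 70 ]; then
    echo 503                      # queries are paused
  else
    echo 200
  fi
}

ping_status 75 query              # prints 503 (above pauseQueryMemoryThreshold)
ping_status 75                    # prints 200 (below pauseAllMemoryThreshold)
```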
|
|
||||||
The corresponding configuration parameters:
|
|
||||||
|
|
||||||
```text
|
|
||||||
monitor.collectDuration Monitoring interval Environment variable "TAOS_MONITOR_COLLECT_DURATION" (default value 3s)
|
|
||||||
monitor.incgroup Whether it is running in cgroup (set to true in containers) Environment variable "TAOS_MONITOR_INCGROUP"
|
|
||||||
monitor.pauseAllMemoryThreshold Memory threshold for stopping inserts and queries Environment variable "TAOS_MONITOR_PAUSE_ALL_MEMORY_THRESHOLD" (default value 80)
|
|
||||||
monitor.pauseQueryMemoryThreshold Memory threshold for stopping queries Environment variable "TAOS_MONITOR_PAUSE_QUERY_MEMORY_THRESHOLD" (default value 70)
|
|
||||||
```
|
|
||||||
|
|
||||||
You can adjust these settings to suit your application scenario and operations strategy, and it is recommended to use operational monitoring software to monitor system memory in real time. Load balancers can also check the running status of taosAdapter through this interface.
|
|
||||||
|
|
||||||
## taosAdapter Monitoring Metrics
|
|
||||||
|
|
||||||
taosAdapter collects monitoring metrics related to REST/WebSocket requests. These monitoring metrics are reported to taosKeeper, which writes them into the monitoring database, by default the `log` database, which can be modified in the taoskeeper configuration file. Below is a detailed introduction to these monitoring metrics.
|
|
||||||
|
|
||||||
### adapter_requests table
|
|
||||||
|
|
||||||
`adapter_requests` records taosAdapter monitoring data.
|
|
||||||
|
|
||||||
| field | type | is_tag | comment |
|
| field | type | is_tag | comment |
|
||||||
| :--------------- | :----------- | :----- | :---------------------------------------- |
|
| :--------------- | :----------- | :----- | :---------------------------------------- |
|
||||||
|
@ -354,32 +423,10 @@ taosAdapter collects monitoring metrics related to REST/WebSocket requests. Thes
|
||||||
| endpoint | VARCHAR | | request endpoint |
|
| endpoint | VARCHAR | | request endpoint |
|
||||||
| req_type | NCHAR | tag | request type: 0 for REST, 1 for WebSocket |
|
| req_type | NCHAR | tag | request type: 0 for REST, 1 for WebSocket |
|
||||||
|
|
||||||
## Result Return Limit
|
|
||||||
|
|
||||||
taosAdapter controls the number of results returned through the parameter `restfulRowLimit`; `-1` means no limit, and the default is no limit.
|
## Changes after upgrading httpd to taosAdapter
|
||||||
|
|
||||||
This parameter affects the results returned by the following interfaces:
|
In TDengine server version 2.2.x.x or earlier, the taosd process included an embedded HTTP service (httpd). As mentioned earlier, taosAdapter is a standalone software managed by systemd, having its own process. Moreover, there are some differences in configuration parameters and behaviors between the two, as shown in the table below:
|
||||||
|
|
||||||
- `http://<fqdn>:6041/rest/sql`
|
|
||||||
- `http://<fqdn>:6041/prometheus/v1/remote_read/:db`
|
|
||||||
|
|
||||||
## Configure HTTP Return Codes
|
|
||||||
|
|
||||||
taosAdapter uses the parameter `httpCodeServerError` to control whether a non-200 HTTP status code is returned when the C interface reports an error. When set to true, it returns different HTTP status codes depending on the error code returned by the C interface. See [HTTP Response Codes](../../client-libraries/rest-api/) for details.
|
|
||||||
|
|
||||||
## Configure Automatic DB Creation for Schemaless Writes
|
|
||||||
|
|
||||||
Starting from version 3.0.4.0, taosAdapter provides the parameter `smlAutoCreateDB` to control whether to automatically create a DB when writing via the schemaless protocol. The default value is false, which does not automatically create a DB, and requires the user to manually create a DB before performing schemaless writes.
|
|
||||||
|
|
||||||
## Troubleshooting
|
|
||||||
|
|
||||||
You can check the running status of taosAdapter with the command `systemctl status taosadapter`.
|
|
||||||
|
|
||||||
You can also adjust the detail level of taosAdapter log output by setting the --logLevel parameter or the environment variable TAOS_ADAPTER_LOG_LEVEL. Valid values include: panic, fatal, error, warn, warning, info, debug, and trace.
|
|
||||||
|
|
||||||
## How to Migrate from Older Versions of TDengine to taosAdapter
|
|
||||||
|
|
||||||
In TDengine server version 2.2.x.x or earlier, the taosd process included an embedded HTTP service. As mentioned earlier, taosAdapter is a standalone software managed by systemd, having its own process. Moreover, there are some differences in configuration parameters and behaviors between the two, as shown in the table below:
|
|
||||||
|
|
||||||
| **#** | **embedded httpd** | **taosAdapter** | **comment** |
|
| **#** | **embedded httpd** | **taosAdapter** | **comment** |
|
||||||
| ----- | ------------------- | ---------------------------------------------------------- | ------------------------------------------------------------ |
|
| ----- | ------------------- | ---------------------------------------------------------- | ------------------------------------------------------------ |
|
||||||
|
|
|
@ -70,6 +70,7 @@ Metric details (from top to bottom, left to right):
|
||||||
- **Databases** - Number of databases.
|
- **Databases** - Number of databases.
|
||||||
- **Connections** - Current number of connections.
|
- **Connections** - Current number of connections.
|
||||||
- **DNodes/MNodes/VGroups/VNodes**: Total and alive count of each resource.
|
- **DNodes/MNodes/VGroups/VNodes**: Total and alive count of each resource.
|
||||||
|
- **Classified Connection Counts**: The current number of active connections, classified by user, application, and IP.
|
||||||
- **DNodes/MNodes/VGroups/VNodes Alive Percent**: The alive/total ratio for each resource, with alert rules enabled; an alert triggers when the resource survival rate (average healthy-resource ratio within 1 minute) falls below 100%.
|
- **DNodes/MNodes/VGroups/VNodes Alive Percent**: The alive/total ratio for each resource, with alert rules enabled; an alert triggers when the resource survival rate (average healthy-resource ratio within 1 minute) falls below 100%.
|
||||||
- **Measuring Points Used**: Number of measuring points used with alert rules enabled (no data for community edition, healthy by default).
|
- **Measuring Points Used**: Number of measuring points used with alert rules enabled (no data for community edition, healthy by default).
|
||||||
|
|
||||||
|
@ -184,7 +185,7 @@ After importing, click on "Alert rules" on the left side of the Grafana interfac
|
||||||
The specific configuration of the 14 alert rules is as follows:
|
The specific configuration of the 14 alert rules is as follows:
|
||||||
|
|
||||||
| alert rule | Rule threshold | Behavior when no data | Data scanning interval | Duration | SQL |
|
| alert rule | Rule threshold | Behavior when no data | Data scanning interval | Duration | SQL |
|
||||||
| ------ | --------- | ---------------- | ----------- |------- |----------------------|
|
| ------------------------------------------------------------- | ------------------------------------ | --------------------- | ---------------------- | ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
||||||
| CPU load of dnode node | average > 80% | Trigger alert | 5 minutes | 5 minutes | `select now(), dnode_id, last(cpu_system) as cpu_use from log.taosd_dnodes_info where _ts >= (now - 5m) and _ts < now partition by dnode_id having first(_ts) > 0 ` |
|
| CPU load of dnode node | average > 80% | Trigger alert | 5 minutes | 5 minutes | `select now(), dnode_id, last(cpu_system) as cpu_use from log.taosd_dnodes_info where _ts >= (now - 5m) and _ts < now partition by dnode_id having first(_ts) > 0 ` |
|
||||||
| Memory of dnode node | average > 60% | Trigger alert | 5 minutes | 5 minutes | `select now(), dnode_id, last(mem_engine) / last(mem_total) * 100 as taosd from log.taosd_dnodes_info where _ts >= (now - 5m) and _ts < now partition by dnode_id` |
|
| Memory of dnode node | average > 60% | Trigger alert | 5 minutes | 5 minutes | `select now(), dnode_id, last(mem_engine) / last(mem_total) * 100 as taosd from log.taosd_dnodes_info where _ts >= (now - 5m) and _ts < now partition by dnode_id` |
|
||||||
| Disk capacity occupancy of dnode nodes | > 80% | Trigger alert | 5 minutes | 5 minutes | `select now(), dnode_id, data_dir_level, data_dir_name, last(used) / last(total) * 100 as used from log.taosd_dnodes_data_dirs where _ts >= (now - 5m) and _ts < now partition by dnode_id, data_dir_level, data_dir_name` |
|
| Disk capacity occupancy of dnode nodes | > 80% | Trigger alert | 5 minutes | 5 minutes | `select now(), dnode_id, data_dir_level, data_dir_name, last(used) / last(total) * 100 as used from log.taosd_dnodes_data_dirs where _ts >= (now - 5m) and _ts < now partition by dnode_id, data_dir_level, data_dir_name` |
|
||||||
|
@ -259,7 +260,7 @@ Install and configure TDinsight dashboard in Grafana on Ubuntu 18.04/20.04 syste
|
||||||
Most command line options can also be set through environment variables.
|
Most command line options can also be set through environment variables.
|
||||||
|
|
||||||
| Short Option | Long Option | Environment Variable | Description |
|
| Short Option | Long Option | Environment Variable | Description |
|
||||||
| ------------ | ------------------------------- | ------------------------------ | -------------------------------------------------------- |
|
| ------------ | -------------------------- | ---------------------------- | ----------------------------------------------------------------------- |
|
||||||
| -v | --plugin-version | TDENGINE_PLUGIN_VERSION | TDengine datasource plugin version, default is latest. |
|
| -v | --plugin-version | TDENGINE_PLUGIN_VERSION | TDengine datasource plugin version, default is latest. |
|
||||||
| -P | --grafana-provisioning-dir | GF_PROVISIONING_DIR | Grafana provisioning directory, default is `/etc/grafana/provisioning/` |
|
| -P | --grafana-provisioning-dir | GF_PROVISIONING_DIR | Grafana provisioning directory, default is `/etc/grafana/provisioning/` |
|
||||||
| -G | --grafana-plugins-dir | GF_PLUGINS_DIR | Grafana plugins directory, default is `/var/lib/grafana/plugins`. |
|
| -G | --grafana-plugins-dir | GF_PLUGINS_DIR | Grafana plugins directory, default is `/var/lib/grafana/plugins`. |
|
||||||
|
|
|
@ -10,11 +10,7 @@ slug: /tdengine-reference/tools/taosdump
|
||||||
|
|
||||||
## Installation
|
## Installation
|
||||||
|
|
||||||
taosdump provides two installation methods:
|
taosdump is the default installation component in the TDengine installation package, which can be used after installing TDengine. For how to install TDengine, please refer to [TDengine Installation](../../../get-started/)
|
||||||
|
|
||||||
- taosdump is the default installation component in the TDengine installation package, which can be used after installing TDengine. For how to install TDengine, please refer to [TDengine Installation](../../../get-started/)
|
|
||||||
|
|
||||||
- Compile and install taos tools separately, refer to [taos tools](https://github.com/taosdata/taos-tools) .
|
|
||||||
|
|
||||||
## Common Use Cases
|
## Common Use Cases
|
||||||
|
|
||||||
|
|
|
@ -8,11 +8,7 @@ TaosBenchmark is a performance benchmarking tool for TDengine products, providin
|
||||||
|
|
||||||
## Installation
|
## Installation
|
||||||
|
|
||||||
taosBenchmark provides two installation methods:
|
taosBenchmark is the default installation component in the TDengine installation package, which can be used after installing TDengine. For how to install TDengine, please refer to [TDengine Installation](../../../get-started/)
|
||||||
|
|
||||||
- taosBenchmark is the default installation component in the TDengine installation package, which can be used after installing TDengine. For how to install TDengine, please refer to [TDengine Installation](../../../get-started/)
|
|
||||||
|
|
||||||
- Compile and install taos tools separately, refer to [taos tools](https://github.com/taosdata/taos-tools) .
|
|
||||||
|
|
||||||
## Operation
|
## Operation
|
||||||
|
|
||||||
|
@ -63,7 +59,7 @@ taosBenchmark -f <json file>
|
||||||
<summary>insert.json</summary>
|
<summary>insert.json</summary>
|
||||||
|
|
||||||
```json
|
```json
|
||||||
{{#include /taos-tools/example/insert.json}}
|
{{#include /TDengine/tools/taos-tools/example/insert.json}}
|
||||||
```
|
```
|
||||||
|
|
||||||
</details>
|
</details>
|
||||||
|
@ -74,7 +70,7 @@ taosBenchmark -f <json file>
|
||||||
<summary>query.json</summary>
|
<summary>query.json</summary>
|
||||||
|
|
||||||
```json
|
```json
|
||||||
{{#include /taos-tools/example/query.json}}
|
{{#include /TDengine/tools/taos-tools/example/query.json}}
|
||||||
```
|
```
|
||||||
|
|
||||||
</details>
|
</details>
|
||||||
|
@ -85,7 +81,7 @@ taosBenchmark -f <json file>
|
||||||
<summary>tmq.json</summary>
|
<summary>tmq.json</summary>
|
||||||
|
|
||||||
```json
|
```json
|
||||||
{{#include /taos-tools/example/tmq.json}}
|
{{#include /TDengine/tools/taos-tools/example/tmq.json}}
|
||||||
```
|
```
|
||||||
|
|
||||||
</details>
|
</details>
|
||||||
|
@ -246,13 +242,14 @@ The query performance test mainly outputs the QPS indicator of query request spe
|
||||||
|
|
||||||
``` bash
|
``` bash
|
||||||
complete query with 3 threads and 10000 query delay avg: 0.002686s min: 0.001182s max: 0.012189s p90: 0.002977s p95: 0.003493s p99: 0.004645s SQL command: select ...
|
complete query with 3 threads and 10000 query delay avg: 0.002686s min: 0.001182s max: 0.012189s p90: 0.002977s p95: 0.003493s p99: 0.004645s SQL command: select ...
|
||||||
INFO: Total specified queries: 30000
|
|
||||||
INFO: Spend 26.9530 second completed total queries: 30000, the QPS of all threads: 1113.049
|
INFO: Spend 26.9530 second completed total queries: 30000, the QPS of all threads: 1113.049
|
||||||
```
|
```
|
||||||
|
|
||||||
- The first line represents the percentile distribution of query execution and query request delay for each of the three threads executing 10000 queries. The SQL command is the test query statement
|
- The first line represents the percentile distribution of query execution and query request delay for each of the three threads executing 10000 queries. The SQL command is the test query statement
|
||||||
- The second line indicates that a total of 10000 * 3 = 30000 queries have been completed
|
- The second line indicates that the total query time is 26.9530 seconds, the total number of queries is 10000 * 3 = 30000, and the query rate per second (QPS) is 1113.049 times/second
|
||||||
- The third line indicates that the total query time is 26.9530 seconds, and the query rate per second (QPS) is 1113.049 times/second
|
- If the `continue_if_fail` option is set to `yes` in the query configuration, the last line will also output the number of failed requests and the error rate, in the format "error + number of failed requests (error rate)"
|
||||||
|
- QPS = number of successful requests / time spent (in seconds)
|
||||||
|
- Error rate = number of failed requests / (number of successful requests + number of failed requests)
|
||||||
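The QPS figure in the sample output can be reproduced from the stated formulas; a quick check (the numbers are taken from the sample output above, and the 60 failures in the error-rate line are a hypothetical illustration):

```shell
# QPS = number of successful requests / time spent (seconds)
total_queries=$((10000 * 3))      # 3 threads x 10000 queries each = 30000
elapsed=26.9530                   # seconds, from the "Spend" line
qps=$(awk -v q="$total_queries" -v t="$elapsed" 'BEGIN { printf "%.3f", q / t }')
echo "QPS: $qps"                  # QPS: 1113.049, matching the sample output

# Error rate = failed / (successful + failed); e.g. 60 failures alongside 30000 successes
err=$(awk 'BEGIN { printf "%.2f%%", 100 * 60 / (30000 + 60) }')
echo "error rate: $err"
```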
|
|
||||||
#### Subscription metrics
|
#### Subscription metrics
|
||||||
|
|
||||||
|
@ -334,9 +331,9 @@ Parameters related to supertable creation are configured in the `super_tables` s
|
||||||
|
|
||||||
- **child_table_exists**: Whether the child table already exists, default is "no", options are "yes" or "no".
|
- **child_table_exists**: Whether the child table already exists, default is "no", options are "yes" or "no".
|
||||||
|
|
||||||
- **child_table_count**: Number of child tables, default is 10.
|
- **childtable_count**: Number of child tables, default is 10.
|
||||||
|
|
||||||
- **child_table_prefix**: Prefix for child table names, mandatory, no default value.
|
- **childtable_prefix**: Prefix for child table names, mandatory, no default value.
|
||||||
|
|
||||||
- **escape_character**: Whether the supertable and child table names contain escape characters, default is "no", options are "yes" or "no".
|
- **escape_character**: Whether the supertable and child table names contain escape characters, default is "no", options are "yes" or "no".
|
||||||
|
|
||||||
|
@ -431,11 +428,9 @@ Specify the configuration parameters for tag and data columns in `super_tables`
|
||||||
|
|
||||||
- **create_table_thread_count** : The number of threads for creating tables, default is 8.
|
- **create_table_thread_count** : The number of threads for creating tables, default is 8.
|
||||||
|
|
||||||
- **connection_pool_size** : The number of pre-established connections with the TDengine server. If not configured, it defaults to the specified number of threads.
|
|
||||||
|
|
||||||
- **result_file** : The path to the result output file, default is ./output.txt.
|
- **result_file** : The path to the result output file, default is ./output.txt.
|
||||||
|
|
||||||
- **confirm_parameter_prompt** : A toggle parameter that requires user confirmation after a prompt to continue. The default value is false.
|
- **confirm_parameter_prompt** : A toggle parameter that requires user confirmation after a prompt before continuing. The value can be "yes" or "no"; the default is "no".
|
||||||
|
|
||||||
- **interlace_rows** : Enables interleaved insertion mode and specifies the number of rows to insert into each subtable at a time. Interleaved insertion mode refers to inserting the specified number of rows into each subtable in sequence and repeating this process until all subtable data has been inserted. The default value is 0, meaning data is inserted into one subtable completely before moving to the next.
|
- **interlace_rows** : Enables interleaved insertion mode and specifies the number of rows to insert into each subtable at a time. Interleaved insertion mode refers to inserting the specified number of rows into each subtable in sequence and repeating this process until all subtable data has been inserted. The default value is 0, meaning data is inserted into one subtable completely before moving to the next.
|
||||||
This parameter can also be configured in `super_tables`; if configured, the settings in `super_tables` take higher priority and override the global settings.
|
This parameter can also be configured in `super_tables`; if configured, the settings in `super_tables` take higher priority and override the global settings.
|
||||||
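An abridged sketch of how this might look in an insert JSON configuration (the structure is trimmed and the field nesting is assumed from the description above; the per-supertable value overrides the global one):

```json
{
  "interlace_rows": 0,
  "super_tables": [
    {
      "name": "meters",
      "interlace_rows": 50
    }
  ]
}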
|
@ -464,12 +459,12 @@ For other common parameters, see Common Configuration Parameters.
|
||||||
|
|
||||||
Configuration parameters for querying specified tables (can specify supertables, subtables, or regular tables) are set in `specified_table_query`.
|
Configuration parameters for querying specified tables (can specify supertables, subtables, or regular tables) are set in `specified_table_query`.
|
||||||
|
|
||||||
- **mixed_query** "yes": `Mixed Query` "no": `Normal Query`, default is "no"
|
- `General Query`: Each SQL in `sqls` starts `threads` threads to query this SQL; each thread exits after executing `query_times` queries, and only after all threads executing this SQL have completed can the next SQL be executed.
|
||||||
`Mixed Query`: All SQL statements in `sqls` are grouped by the number of threads, with each thread executing one group. Each SQL statement in a thread needs to perform `query_times` queries.
|
The total number of queries (`General Query`) = the number of `sqls` * `query_times` * `threads`
|
||||||
`Normal Query`: Each SQL in `sqls` starts `threads` threads, each exiting after executing `query_times` queries. The next SQL can only be executed after all threads for the previous SQL have finished executing and exited.
|
- `Mixed Query`: All SQL statements in `sqls` are divided into `threads` groups, with each thread executing one group. Each SQL statement needs to execute `query_times` queries.
|
||||||
Regardless of whether it is a `Normal Query` or `Mixed Query`, the total number of query executions is the same. The total number of queries = `sqls` * `threads` * `query_times`. The difference is that `Normal Query` starts `threads` for each SQL query, while ` Mixed Query` only starts `threads` once to complete all SQL queries. The number of thread startups for the two is different.
|
The total number of queries (`Mixed Query`) = the number of `sqls` * `query_times`
|
||||||
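The query-count formulas can be checked with quick arithmetic; the workload numbers below (4 SQLs, 3 threads, `query_times` of 10) are hypothetical:

```shell
sqls=4 threads=3 query_times=10
general_total=$((sqls * query_times * threads))   # General Query: every SQL runs on every thread
mixed_total=$((sqls * query_times))               # Mixed Query: the SQLs are split among the threads
echo "General Query total: $general_total"        # 120
echo "Mixed Query total: $mixed_total"            # 40
```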
|
|
||||||
- **query_interval** : Query interval, in seconds, default is 0.
|
- **query_interval** : Query interval, in milliseconds, default is 0.
|
||||||
|
|
||||||
- **threads** : Number of threads executing the SQL query, default is 1.
|
- **threads** : Number of threads executing the SQL query, default is 1.
|
||||||
|
|
||||||
|
@ -491,6 +486,7 @@ The thread mode of the super table query is the same as the `Normal Query` mode
|
||||||
- **sqls** :
|
- **sqls** :
|
||||||
- **sql** : The SQL command to execute, required; for supertable queries, keep "xxxx" in the SQL command, the program will automatically replace it with all subtable names of the supertable.
|
- **sql** : The SQL command to execute, required; for supertable queries, keep "xxxx" in the SQL command, the program will automatically replace it with all subtable names of the supertable.
|
||||||
- **result** : File to save the query results, if not specified, results are not saved.
|
- **result** : File to save the query results, if not specified, results are not saved.
|
||||||
|
- **Note**: The `sqls` array can contain at most 100 SQL statements.
|
||||||
|
|
||||||
### Configuration Parameters for Subscription Scenarios
|
### Configuration Parameters for Subscription Scenarios
|
||||||
|
|
||||||
|
|
|
@ -11,7 +11,7 @@ import imgStream from './assets/stream-processing-01.png';
|
||||||
```sql
|
```sql
|
||||||
CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name[(field1_name, field2_name [PRIMARY KEY], ...)] [TAGS (create_definition [, create_definition] ...)] SUBTABLE(expression) AS subquery
|
CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name[(field1_name, field2_name [PRIMARY KEY], ...)] [TAGS (create_definition [, create_definition] ...)] SUBTABLE(expression) AS subquery
|
||||||
stream_options: {
|
stream_options: {
|
||||||
TRIGGER [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time]
|
TRIGGER [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time | FORCE_WINDOW_CLOSE]
|
||||||
WATERMARK time
|
WATERMARK time
|
||||||
IGNORE EXPIRED [0|1]
|
IGNORE EXPIRED [0|1]
|
||||||
DELETE_MARK time
|
DELETE_MARK time
|
||||||
|
@ -58,6 +58,10 @@ window_clause: {
|
||||||
|
|
||||||
Where SESSION is a session window, tol_val is the maximum range of the time interval. All data within the tol_val time interval belong to the same window. If the time between two consecutive data points exceeds tol_val, the next window automatically starts. The window's `_wend` equals the last data point's time plus tol_val.
|
Where SESSION is a session window, tol_val is the maximum range of the time interval. All data within the tol_val time interval belong to the same window. If the time between two consecutive data points exceeds tol_val, the next window automatically starts. The window's `_wend` equals the last data point's time plus tol_val.
|
||||||
|
|
||||||
|
STATE_WINDOW is a state window. The col is used to identify the state value. Values with the same state value belong to the same state window. When the value of col changes, the current window ends and the next window is automatically opened.
|
||||||
|
|
||||||
|
INTERVAL is a time window, which can be further divided into sliding time windows and tumbling time windows. The INTERVAL clause is used to specify the equal time period of the window, and the SLIDING clause is used to specify the time by which the window slides forward. When the value of interval_val is equal to the value of sliding_val, the time window is a tumbling time window; otherwise, it is a sliding time window. Note: The value of sliding_val must be less than or equal to the value of interval_val.
|
||||||
|
|
||||||
EVENT_WINDOW is an event window, defined by start and end conditions. The window starts when the start_trigger_condition is met and closes when the end_trigger_condition is met. start_trigger_condition and end_trigger_condition can be any condition expression supported by TDengine and can include different columns.
|
EVENT_WINDOW is an event window, defined by start and end conditions. The window starts when the start_trigger_condition is met and closes when the end_trigger_condition is met. start_trigger_condition and end_trigger_condition can be any condition expression supported by TDengine and can include different columns.
|
||||||
|
|
||||||
COUNT_WINDOW is a count window, dividing the window by a fixed number of data rows. count_val is a constant, a positive integer, must be at least 2 and less than 2147483648. count_val represents the maximum number of data rows each COUNT_WINDOW contains. If the total number of data rows is not divisible by count_val, the last window will have fewer rows than count_val. sliding_val is a constant, representing the number of rows the window slides, similar to the SLIDING in INTERVAL.
|
COUNT_WINDOW is a count window, dividing the window by a fixed number of data rows. count_val is a constant, a positive integer, must be at least 2 and less than 2147483648. count_val represents the maximum number of data rows each COUNT_WINDOW contains. If the total number of data rows is not divisible by count_val, the last window will have fewer rows than count_val. sliding_val is a constant, representing the number of rows the window slides, similar to the SLIDING in INTERVAL.
|
||||||
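The COUNT_WINDOW partitioning can be checked with quick arithmetic; the row counts below are hypothetical, and sliding_val is assumed equal to count_val (tumbling behavior):

```shell
# With 10 rows and count_val = 3, the windows hold 3, 3, 3 and then 1 row.
rows=10 count_val=3
full_windows=$((rows / count_val))     # windows containing exactly count_val rows
remainder=$((rows % count_val))        # rows left over for the final, smaller window
echo "full windows: $full_windows, last window rows: $remainder"   # full windows: 3, last window rows: 1
```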
|
|
|
@ -43,7 +43,8 @@ TDengine supports `UNION ALL` and `UNION` operators. UNION ALL combines the resu
|
||||||
| 9 | LIKE | BINARY, NCHAR, and VARCHAR | Matches the specified pattern string with wildcard |
|
| 9 | LIKE | BINARY, NCHAR, and VARCHAR | Matches the specified pattern string with wildcard |
|
||||||
| 10 | NOT LIKE | BINARY, NCHAR, and VARCHAR | Does not match the specified pattern string with wildcard |
|
| 10 | NOT LIKE | BINARY, NCHAR, and VARCHAR | Does not match the specified pattern string with wildcard |
|
||||||
| 11 | MATCH, NMATCH | BINARY, NCHAR, and VARCHAR | Regular expression match |
|
| 11 | MATCH, NMATCH | BINARY, NCHAR, and VARCHAR | Regular expression match |
|
||||||
| 12 | CONTAINS | JSON | Whether a key exists in JSON |
|
| 12 | REGEXP, NOT REGEXP | BINARY, NCHAR, and VARCHAR | Regular expression match |
|
||||||
|
| 13 | CONTAINS | JSON | Whether a key exists in JSON |
|
||||||
|
|
||||||
LIKE conditions use wildcard strings for matching checks, with the following rules:
|
LIKE conditions use wildcard strings for matching checks, with the following rules:
|
||||||
|
|
||||||
|
@ -51,7 +52,7 @@ LIKE conditions use wildcard strings for matching checks, with the following rul
|
||||||
- If you want to match an underscore character that is originally in the string, you can write it as \_ in the wildcard string, i.e., add a backslash to escape it.
|
- If you want to match an underscore character that is originally in the string, you can write it as \_ in the wildcard string, i.e., add a backslash to escape it.
|
||||||
- The wildcard string cannot exceed 100 bytes in length. It is not recommended to use too long wildcard strings, as it may severely affect the performance of the LIKE operation.
|
- The wildcard string cannot exceed 100 bytes in length. It is not recommended to use too long wildcard strings, as it may severely affect the performance of the LIKE operation.
|
||||||
|
|
||||||
MATCH and NMATCH conditions use regular expressions for matching, with the following rules:
|
MATCH/REGEXP and NMATCH/NOT REGEXP conditions use regular expressions for matching, with the following rules:
|
||||||
|
|
||||||
- Supports regular expressions that comply with the POSIX standard, see Regular Expressions for specific standards.
|
- Supports regular expressions that comply with the POSIX standard, see Regular Expressions for specific standards.
|
||||||
- When MATCH matches a regular expression, it returns TRUE. When NMATCH does not match a regular expression, it returns TRUE.
|
- When MATCH matches a regular expression, it returns TRUE. When NMATCH does not match a regular expression, it returns TRUE.
|
||||||
|
|
|
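As a rough illustration of the wildcard rules above, the following Java sketch translates a LIKE pattern (`%`, `_`, and backslash-escaped wildcards) into a `java.util.regex` expression. The class and method names are hypothetical, and TDengine evaluates LIKE server-side, not with Java regexes:

```java
import java.util.regex.Pattern;

public class LikePatternSketch {

    // Translate a SQL LIKE wildcard string into a Java regex:
    // '%' matches any sequence of characters, '_' matches exactly one,
    // and a backslash escapes the wildcard that follows it.
    public static boolean like(String value, String pattern) {
        StringBuilder regex = new StringBuilder();
        for (int i = 0; i < pattern.length(); i++) {
            char c = pattern.charAt(i);
            if (c == '\\' && i + 1 < pattern.length()) {
                regex.append(Pattern.quote(String.valueOf(pattern.charAt(++i))));
            } else if (c == '%') {
                regex.append(".*");
            } else if (c == '_') {
                regex.append(".");
            } else {
                regex.append(Pattern.quote(String.valueOf(c)));
            }
        }
        return Pattern.compile(regex.toString(), Pattern.DOTALL).matcher(value).matches();
    }

    public static void main(String[] args) {
        System.out.println(like("tb_001", "tb\\_%")); // true: escaped underscore is literal
        System.out.println(like("tbx001", "tb\\_%")); // false: 'x' is not an underscore
    }
}
```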
@@ -33,6 +33,7 @@ The JDBC driver implementation for TDengine strives to be consistent with relati

 | taos-jdbcdriver Version | Major Changes | TDengine Version |
 | ----------------------- | ------------- | ---------------- |
+| 3.5.3 | Support unsigned data types in WebSocket connections. | - |
 | 3.5.2 | Fixed WebSocket result set free bug. | - |
 | 3.5.1 | Fixed the getObject issue in data subscription. | - |
 | 3.5.0 | 1. Optimized the performance of WebSocket connection parameter binding, supporting parameter binding queries using binary data. <br/> 2. Optimized the performance of small queries in WebSocket connection. <br/> 3. Added support for setting time zone and app info on WebSocket connection. | 3.3.5.0 and higher |

@@ -128,24 +129,27 @@ Please refer to the specific error codes:

 TDengine currently supports timestamp, numeric, character, boolean types, and the corresponding Java type conversions are as follows:

-| TDengine DataType | JDBCType |
-| ----------------- | ------------------ |
-| TIMESTAMP | java.sql.Timestamp |
-| INT | java.lang.Integer |
-| BIGINT | java.lang.Long |
-| FLOAT | java.lang.Float |
-| DOUBLE | java.lang.Double |
-| SMALLINT | java.lang.Short |
-| TINYINT | java.lang.Byte |
-| BOOL | java.lang.Boolean |
-| BINARY | byte array |
-| NCHAR | java.lang.String |
-| JSON | java.lang.String |
-| VARBINARY | byte[] |
-| GEOMETRY | byte[] |
+| TDengine DataType | JDBCType | Remark |
+| ----------------- | -------------------- | --------------------------------------- |
+| TIMESTAMP | java.sql.Timestamp | |
+| BOOL | java.lang.Boolean | |
+| TINYINT | java.lang.Byte | |
+| TINYINT UNSIGNED | java.lang.Short | only supported in WebSocket connections |
+| SMALLINT | java.lang.Short | |
+| SMALLINT UNSIGNED | java.lang.Integer | only supported in WebSocket connections |
+| INT | java.lang.Integer | |
+| INT UNSIGNED | java.lang.Long | only supported in WebSocket connections |
+| BIGINT | java.lang.Long | |
+| BIGINT UNSIGNED | java.math.BigInteger | only supported in WebSocket connections |
+| FLOAT | java.lang.Float | |
+| DOUBLE | java.lang.Double | |
+| BINARY | byte array | |
+| NCHAR | java.lang.String | |
+| JSON | java.lang.String | only supported in tags |
+| VARBINARY | byte[] | |
+| GEOMETRY | byte[] | |

-**Note**: JSON type is only supported in tags.
-Due to historical reasons, the BINARY type in TDengine is not truly binary data and is no longer recommended. Please use VARBINARY type instead.
+**Note**: Due to historical reasons, the BINARY type in TDengine is not truly binary data and is no longer recommended. Please use VARBINARY type instead.

 GEOMETRY type is binary data in little endian byte order, complying with the WKB standard. For more details, please refer to [Data Types](../../sql-manual/data-types/)
 For the WKB standard, please refer to [Well-Known Binary (WKB)](https://libgeos.org/specifications/wkb/)
 For the Java connector, you can use the jts library to conveniently create GEOMETRY type objects, serialize them, and write to TDengine. Here is an example [Geometry Example](https://github.com/taosdata/TDengine/blob/main/docs/examples/java/src/main/java/com/taos/example/GeometryDemo.java)
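The unsigned rows in the type-mapping table above pair each unsigned TDengine type with the next wider Java type, because Java's primitive integer types are all signed. A hedged sketch of the widening conversions (the class name is hypothetical; the connector performs these conversions internally):

```java
import java.math.BigInteger;

public class UnsignedMappingSketch {

    // An unsigned 64-bit value arrives as a raw signed long on the wire;
    // reinterpret it as a non-negative BigInteger, mirroring the
    // BIGINT UNSIGNED -> java.math.BigInteger row of the table.
    public static BigInteger toUnsigned(long raw) {
        return new BigInteger(Long.toUnsignedString(raw));
    }

    // Smaller unsigned types fit in the next wider signed type,
    // e.g. TINYINT UNSIGNED (0..255) -> short, as in the table.
    public static short tinyintUnsigned(byte raw) {
        return (short) (raw & 0xFF);
    }

    public static void main(String[] args) {
        System.out.println(toUnsigned(-1L));            // 18446744073709551615
        System.out.println(tinyintUnsigned((byte) -1)); // 255
    }
}
```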
@@ -4,7 +4,7 @@ title: taosTools Release History and Download Links
 slug: /release-history/taostools
 ---

-Download links for various versions of taosTools are as follows:
+Starting from version 3.0.6.0, taosTools has been integrated into the TDengine installation package and is no longer provided separately. Download links for various versions of taosTools (corresponding to TDengine 3.0.5.2 and earlier) are as follows:

 For other historical versions, please visit [here](https://tdengine.com/downloads/historical)
@@ -3,13 +3,9 @@ title: Release Notes
 slug: /release-history/release-notes
 ---

-[3.3.5.0](./3-3-5-0/)
-[3.3.5.2](./3.3.5.2)
-[3.3.4.8](./3-3-4-8/)
-[3.3.4.3](./3-3-4-3/)
-[3.3.3.0](./3-3-3-0/)
-[3.3.2.0](./3-3-2-0/)
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
+
+<DocCardList items={useCurrentSidebarCategory().items}/>
+```
Binary image file added — after: 56 KiB
@@ -1,10 +1,10 @@
-### Configuring taosAdapter
+#### Configuring taosAdapter

 Method to configure taosAdapter to receive collectd data:

 - Enable the configuration item in the taosAdapter configuration file (default location is /etc/taos/taosadapter.toml)

-```
+```toml
 ...
 [opentsdb_telnet]
 enable = true
@@ -21,15 +21,15 @@ The default database name written by taosAdapter is `collectd`, but you can also
 - You can also use taosAdapter command line parameters or set environment variables to start, to enable taosAdapter to receive collectd data, for more details please refer to the taosAdapter reference manual.

-### Configuring collectd
+#### Configuring collectd

 collectd uses a plugin mechanism that can write the collected monitoring data to different data storage software in various forms. TDengine supports direct collection plugins and write_tsdb plugins.

-#### Configuring to receive direct collection plugin data
+1. **Configuring to receive direct collection plugin data**

 Modify the related configuration items in the collectd configuration file (default location /etc/collectd/collectd.conf).

-```text
+```xml
 LoadPlugin network
 <Plugin network>
   Server "<taosAdapter's host>" "<port for collectd direct>"
@@ -40,18 +40,18 @@ Where \<taosAdapter's host> should be filled with the domain name or IP address
 Example as follows:

-```text
+```xml
 LoadPlugin network
 <Plugin network>
   Server "127.0.0.1" "6045"
 </Plugin>
 ```

-#### Configuring write_tsdb plugin data
+2. **Configuring write_tsdb plugin data**

 Modify the related configuration items in the collectd configuration file (default location /etc/collectd/collectd.conf).

-```text
+```xml
 LoadPlugin write_tsdb
 <Plugin write_tsdb>
   <Node>
@@ -64,7 +64,7 @@ LoadPlugin write_tsdb
 Where \<taosAdapter's host> should be filled with the domain name or IP address of the server running taosAdapter. \<port for collectd write_tsdb plugin> should be filled with the port used by taosAdapter to receive collectd write_tsdb plugin data (default is 6047).

-```text
+```xml
 LoadPlugin write_tsdb
 <Plugin write_tsdb>
   <Node>
@@ -79,6 +79,6 @@ LoadPlugin write_tsdb
 Then restart collectd:

-```
+```shell
 systemctl restart collectd
 ```
@@ -1,10 +1,10 @@
-### Configuring taosAdapter
+#### Configuring taosAdapter

 Method to configure taosAdapter to receive icinga2 data:

 - Enable the configuration item in the taosAdapter configuration file (default location /etc/taos/taosadapter.toml)

-```
+```toml
 ...
 [opentsdb_telnet]
 enable = true
@@ -21,12 +21,12 @@ The default database name written by taosAdapter is `icinga2`, but you can also
 - You can also use taosAdapter command line parameters or set environment variables to enable taosAdapter to receive icinga2 data, for more details please refer to the taosAdapter reference manual

-### Configuring icinga2
+#### Configuring icinga2

 - Enable icinga2's opentsdb-writer (reference link https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer)
 - Modify the configuration file `/etc/icinga2/features-enabled/opentsdb.conf` filling in \<taosAdapter's host> with the domain name or IP address of the server running taosAdapter, \<port for icinga2> with the corresponding port supported by taosAdapter for receiving icinga2 data (default is 6048)

-```
+```c
 object OpenTsdbWriter "opentsdb" {
   host = "<taosAdapter's host>"
   port = <port for icinga2>
@@ -35,7 +35,7 @@ object OpenTsdbWriter "opentsdb" {
 Example file:

-```
+```c
 object OpenTsdbWriter "opentsdb" {
   host = "127.0.0.1"
   port = 6048
@@ -1,18 +1,18 @@
 Configuring Prometheus is done by editing the Prometheus configuration file `prometheus.yml` (default location `/etc/prometheus/prometheus.yml`).

-### Configure Third-Party Database Address
+#### Configure Third-Party Database Address

 Set the `remote_read url` and `remote_write url` to point to the domain name or IP address of the server running the taosAdapter service, the REST service port (taosAdapter defaults to 6041), and the name of the database you want to write to in TDengine, ensuring the URLs are formatted as follows:

 - remote_read url: `http://<taosAdapter's host>:<REST service port>/prometheus/v1/remote_read/<database name>`
 - remote_write url: `http://<taosAdapter's host>:<REST service port>/prometheus/v1/remote_write/<database name>`

-### Configure Basic Authentication
+#### Configure Basic Authentication

 - username: \<TDengine's username>
 - password: \<TDengine's password>

-### Example configuration of remote_write and remote_read in the prometheus.yml file
+#### Example configuration of remote_write and remote_read in the prometheus.yml file

 ```yaml
 remote_write:
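The URL templates above can be assembled mechanically. A minimal Java sketch (the class name is hypothetical, and the host, port, and database name are placeholders, not real endpoints):

```java
public class PrometheusUrlSketch {

    // Build the remote_read / remote_write endpoint described above:
    // http://<host>:<REST port>/prometheus/v1/<remote_read|remote_write>/<db>
    public static String remoteUrl(String host, int restPort, String kind, String db) {
        return "http://" + host + ":" + restPort + "/prometheus/v1/" + kind + "/" + db;
    }

    public static void main(String[] args) {
        // taosAdapter's REST port defaults to 6041; "prometheus_data" is a sample db name
        System.out.println(remoteUrl("localhost", 6041, "remote_write", "prometheus_data"));
    }
}
```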
@@ -1,10 +1,10 @@
-### Configure taosAdapter
+#### Configure taosAdapter

 Method to configure taosAdapter to receive StatsD data:

 - Enable the configuration item in the taosAdapter configuration file (default location /etc/taos/taosadapter.toml)

-```
+```toml
 ...
 [statsd]
 enable = true
@@ -29,18 +29,18 @@ The default database name written by taosAdapter is `statsd`, but you can also m
 - You can also use taosAdapter command line arguments or set environment variables to enable the taosAdapter to receive StatsD data. For more details, please refer to the taosAdapter reference manual.

-### Configure StatsD
+#### Configure StatsD

 To use StatsD, download its [source code](https://github.com/statsd/statsd). Modify its configuration file according to the example file `exampleConfig.js` found in the root directory of the local source code download. Replace \<taosAdapter's host> with the domain name or IP address of the server running taosAdapter, and \<port for StatsD> with the port that taosAdapter uses to receive StatsD data (default is 6044).

-```
+```text
 Add to the backends section "./backends/repeater"
 Add to the repeater section { host:'<taosAdapter's host>', port: <port for StatsD>}
 ```

 Example configuration file:

-```
+```js
 {
 port: 8125
 , backends: ["./backends/repeater"]
@@ -50,7 +50,7 @@ port: 8125
 After adding the following content, start StatsD (assuming the configuration file is modified to config.js).

-```
+```shell
 npm install
 node stats.js config.js &
 ```
@@ -1,10 +1,10 @@
-### Configuring taosAdapter
+#### Configuring taosAdapter

 To configure taosAdapter to receive data from TCollector:

 - Enable the configuration in the taosAdapter configuration file (default location /etc/taos/taosadapter.toml)

-```
+```toml
 ...
 [opentsdb_telnet]
 enable = true
@@ -21,7 +21,7 @@ The default database name that taosAdapter writes to is `tcollector`, but you ca
 - You can also use taosAdapter command line arguments or set environment variables to enable the taosAdapter to receive tcollector data. For more details, please refer to the taosAdapter reference manual.

-### Configuring TCollector
+#### Configuring TCollector

 To use TCollector, download its [source code](https://github.com/OpenTSDB/tcollector). Its configuration options are in its source code. Note: There are significant differences between different versions of TCollector; this only refers to the latest code in the current master branch (git commit: 37ae920).

@@ -29,7 +29,7 @@ Modify the contents of `collectors/etc/config.py` and `tcollector.py`. Change th
 Example of git diff output for source code modifications:

-```
+```diff
 index e7e7a1c..ec3e23c 100644
 --- a/collectors/etc/config.py
 +++ b/collectors/etc/config.py
@@ -19,7 +19,7 @@
         <dependency>
             <groupId>com.taosdata.jdbc</groupId>
             <artifactId>taos-jdbcdriver</artifactId>
-            <version>3.5.2</version>
+            <version>3.5.3</version>
         </dependency>
         <dependency>
             <groupId>org.locationtech.jts</groupId>
@@ -47,7 +47,7 @@
         <dependency>
             <groupId>com.taosdata.jdbc</groupId>
             <artifactId>taos-jdbcdriver</artifactId>
-            <version>3.5.2</version>
+            <version>3.5.3</version>
         </dependency>

     </dependencies>
@@ -18,7 +18,7 @@
         <dependency>
             <groupId>com.taosdata.jdbc</groupId>
             <artifactId>taos-jdbcdriver</artifactId>
-            <version>3.5.2</version>
+            <version>3.5.3</version>
         </dependency>
         <!-- druid -->
         <dependency>
@@ -17,7 +17,7 @@
         <dependency>
             <groupId>com.taosdata.jdbc</groupId>
             <artifactId>taos-jdbcdriver</artifactId>
-            <version>3.5.2</version>
+            <version>3.5.3</version>
         </dependency>
         <dependency>
             <groupId>com.google.guava</groupId>
@@ -5,7 +5,7 @@
     <parent>
         <groupId>org.springframework.boot</groupId>
         <artifactId>spring-boot-starter-parent</artifactId>
-        <version>2.4.0</version>
+        <version>2.7.18</version>
         <relativePath/> <!-- lookup parent from repository -->
     </parent>
     <groupId>com.taosdata.example</groupId>
@@ -18,6 +18,18 @@
         <java.version>1.8</java.version>
     </properties>

+    <dependencyManagement>
+        <dependencies>
+            <dependency>
+                <groupId>com.baomidou</groupId>
+                <artifactId>mybatis-plus-bom</artifactId>
+                <version>3.5.10.1</version>
+                <type>pom</type>
+                <scope>import</scope>
+            </dependency>
+        </dependencies>
+    </dependencyManagement>
+
     <dependencies>
         <dependency>
             <groupId>org.springframework.boot</groupId>
@@ -28,14 +40,21 @@
             <artifactId>lombok</artifactId>
             <optional>true</optional>
         </dependency>
+        <!-- optional module for Spring Boot 2 -->
         <dependency>
             <groupId>com.baomidou</groupId>
             <artifactId>mybatis-plus-boot-starter</artifactId>
-            <version>3.1.2</version>
         </dependency>

+        <!-- optional module for JDK 8+ -->
+        <dependency>
+            <groupId>com.baomidou</groupId>
+            <artifactId>mybatis-plus-jsqlparser-4.9</artifactId>
+        </dependency>
         <dependency>
             <groupId>com.h2database</groupId>
             <artifactId>h2</artifactId>
+            <version>2.3.232</version>
             <scope>runtime</scope>
         </dependency>
         <dependency>
@@ -47,7 +66,7 @@
         <dependency>
             <groupId>com.taosdata.jdbc</groupId>
             <artifactId>taos-jdbcdriver</artifactId>
-            <version>3.5.2</version>
+            <version>3.5.3</version>
         </dependency>

         <dependency>
@@ -1,34 +1,26 @@
 package com.taosdata.example.mybatisplusdemo.config;

-import com.baomidou.mybatisplus.extension.plugins.PaginationInterceptor;
+import com.baomidou.mybatisplus.annotation.DbType;
+import com.baomidou.mybatisplus.extension.plugins.MybatisPlusInterceptor;
+import com.baomidou.mybatisplus.extension.plugins.inner.PaginationInnerInterceptor;
+import org.mybatis.spring.annotation.MapperScan;
 import org.springframework.context.annotation.Bean;
 import org.springframework.context.annotation.Configuration;
+import org.springframework.transaction.annotation.EnableTransactionManagement;

+@EnableTransactionManagement
 @Configuration
+@MapperScan("com.taosdata.example.mybatisplusdemo.mapper")
 public class MybatisPlusConfig {

-    /** mybatis 3.4.1 pagination config start ***/
-    // @Bean
-    // public MybatisPlusInterceptor mybatisPlusInterceptor() {
-    //     MybatisPlusInterceptor interceptor = new MybatisPlusInterceptor();
-    //     interceptor.addInnerInterceptor(new PaginationInnerInterceptor());
-    //     return interceptor;
-    // }
-
-    // @Bean
-    // public ConfigurationCustomizer configurationCustomizer() {
-    //     return configuration -> configuration.setUseDeprecatedExecutor(false);
-    // }
-
+    /**
+     * Add the pagination plugin
+     */
     @Bean
-    public PaginationInterceptor paginationInterceptor() {
-        // return new PaginationInterceptor();
-        PaginationInterceptor paginationInterceptor = new PaginationInterceptor();
-        //TODO: mybatis-plus do not support TDengine, use postgresql Dialect
-        paginationInterceptor.setDialectType("postgresql");
-
-        return paginationInterceptor;
+    public MybatisPlusInterceptor mybatisPlusInterceptor() {
+        MybatisPlusInterceptor interceptor = new MybatisPlusInterceptor();
+        interceptor.addInnerInterceptor(new PaginationInnerInterceptor(DbType.MYSQL));
+        return interceptor;
     }

 }
@@ -5,6 +5,7 @@ import com.taosdata.example.mybatisplusdemo.domain.Meters;
 import org.apache.ibatis.annotations.Insert;
 import org.apache.ibatis.annotations.Param;
 import org.apache.ibatis.annotations.Update;
+import org.apache.ibatis.executor.BatchResult;

 import java.util.List;

@@ -15,17 +16,6 @@ public interface MetersMapper extends BaseMapper<Meters> {

     @Insert("insert into meters (tbname, ts, groupid, location, current, voltage, phase) values(#{tbname}, #{ts}, #{groupid}, #{location}, #{current}, #{voltage}, #{phase})")
     int insertOne(Meters one);

-    @Insert({
-            "<script>",
-            "insert into meters (tbname, ts, groupid, location, current, voltage, phase) values ",
-            "<foreach collection='list' item='item' index='index' separator=','>",
-            "(#{item.tbname}, #{item.ts}, #{item.groupid}, #{item.location}, #{item.current}, #{item.voltage}, #{item.phase})",
-            "</foreach>",
-            "</script>"
-    })
-    int insertBatch(@Param("list") List<Meters> metersList);
-
     @Update("drop stable if exists meters")
     void dropTable();
 }
@@ -11,9 +11,6 @@ public interface TemperatureMapper extends BaseMapper<Temperature> {
     @Update("CREATE TABLE if not exists temperature(ts timestamp, temperature float) tags(location nchar(64), tbIndex int)")
     int createSuperTable();

-    @Update("create table #{tbName} using temperature tags( #{location}, #{tbindex})")
-    int createTable(@Param("tbName") String tbName, @Param("location") String location, @Param("tbindex") int tbindex);
-
     @Update("drop table if exists temperature")
     void dropSuperTable();

@@ -10,7 +10,7 @@ public interface WeatherMapper extends BaseMapper<Weather> {
     @Update("CREATE TABLE if not exists weather(ts timestamp, temperature float, humidity int, location nchar(100))")
     int createTable();

-    @Insert("insert into weather (ts, temperature, humidity, location) values(#{ts}, #{temperature}, #{humidity}, #{location})")
+    @Insert("insert into weather (ts, temperature, humidity, location) values(#{ts}, #{temperature}, #{humidity}, #{location, jdbcType=NCHAR})")
     int insertOne(Weather one);

     @Update("drop table if exists weather")
@@ -0,0 +1,19 @@
+package com.taosdata.example.mybatisplusdemo.service;
+
+import org.springframework.beans.factory.annotation.Autowired;
+import org.springframework.stereotype.Service;
+
+import javax.sql.DataSource;
+import java.sql.Connection;
+import java.sql.SQLException;
+
+@Service
+public class DatabaseConnectionService {
+
+    @Autowired
+    private DataSource dataSource;
+
+    public Connection getConnection() throws SQLException {
+        return dataSource.getConnection();
+    }
+}

@@ -0,0 +1,23 @@
+package com.taosdata.example.mybatisplusdemo.service;
+
+import org.springframework.beans.factory.annotation.Autowired;
+import org.springframework.stereotype.Service;
+
+import java.sql.Connection;
+import java.sql.SQLException;
+import java.sql.Statement;
+
+@Service
+public class TemperatureService {
+    @Autowired
+    private DatabaseConnectionService databaseConnectionService;
+
+    public void createTable(String tableName, String location, int tbIndex) throws SQLException {
+
+        try (Connection connection = databaseConnectionService.getConnection();
+             Statement statement = connection.createStatement()) {
+            statement.executeUpdate("create table " + tableName + " using temperature tags( '" + location + "', " + tbIndex + ")");
+        }
+    }
+}

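The new `TemperatureService` builds the `CREATE TABLE ... USING ... TAGS(...)` statement as a plain string through a JDBC `Statement` rather than through the removed `#{tbName}` mapper method, most likely because MyBatis `#{...}` placeholders become JDBC `?` parameters, which cannot stand in for table names or tag lists in DDL. A minimal sketch of the string assembly it performs (the class and method names here are illustrative, not part of the PR):

```java
public class CreateTableSqlSketch {
    // Assembles the sub-table DDL exactly as the new TemperatureService does,
    // quoting the nchar tag value and inlining the integer tag.
    static String createTableSql(String tableName, String location, int tbIndex) {
        return "create table " + tableName
                + " using temperature tags( '" + location + "', " + tbIndex + ")";
    }

    public static void main(String[] args) {
        System.out.println(createTableSql("t0", "北京", 0));
    }
}
```

Note that plain concatenation assumes trusted inputs; the example mirrors the demo code, not a hardened pattern.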
@@ -5,6 +5,7 @@ import com.baomidou.mybatisplus.core.metadata.IPage;
 import com.baomidou.mybatisplus.extension.plugins.pagination.Page;
 import com.taosdata.example.mybatisplusdemo.domain.Meters;
 import com.taosdata.example.mybatisplusdemo.domain.Weather;
+import org.apache.ibatis.executor.BatchResult;
 import org.junit.Assert;
 import org.junit.Before;
 import org.junit.Test;
@@ -18,6 +19,8 @@ import java.util.LinkedList;
 import java.util.List;
 import java.util.Random;

+import static java.sql.Statement.SUCCESS_NO_INFO;
+
 @RunWith(SpringJUnit4ClassRunner.class)
 @SpringBootTest
 public class MetersMapperTest {
@@ -63,8 +66,19 @@ public class MetersMapperTest {
             metersList.add(one);

         }
-        int affectRows = mapper.insertBatch(metersList);
-        Assert.assertEquals(100, affectRows);
+        List<BatchResult> affectRowsList = mapper.insert(metersList, 10000);
+
+        long totalAffectedRows = 0;
+        for (BatchResult batchResult : affectRowsList) {
+            int[] updateCounts = batchResult.getUpdateCounts();
+            for (int status : updateCounts) {
+                if (status == SUCCESS_NO_INFO) {
+                    totalAffectedRows++;
+                }
+            }
+        }
+
+        Assert.assertEquals(100, totalAffectedRows);
     }

     @Test
@@ -93,7 +107,7 @@ public class MetersMapperTest {

     @Test
     public void testSelectCount() {
-        int count = mapper.selectCount(null);
+        long count = mapper.selectCount(null);
         // Assert.assertEquals(5, count);
         System.out.println(count);
     }

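The test now uses MyBatis-Plus's built-in `insert(list, batchSize)`, which returns a `List<BatchResult>` instead of a single row count. Batched JDBC drivers commonly report `SUCCESS_NO_INFO` (-2) per statement rather than an exact row count, so the test counts those entries. The counting logic can be sketched standalone (class and method names here are illustrative):

```java
import java.util.List;

public class BatchCountSketch {
    // java.sql.Statement.SUCCESS_NO_INFO == -2: the batched statement executed
    // successfully, but the driver did not report a per-statement row count.
    static final int SUCCESS_NO_INFO = -2;

    // Mirrors the loop added to MetersMapperTest: every SUCCESS_NO_INFO entry
    // across all batches counts as one successfully executed insert.
    static long countAffected(List<int[]> updateCountsPerBatch) {
        long total = 0;
        for (int[] counts : updateCountsPerBatch) {
            for (int status : counts) {
                if (status == SUCCESS_NO_INFO) {
                    total++;
                }
            }
        }
        return total;
    }

    public static void main(String[] args) {
        // Two batches of 3 and 2 statements, all reported as SUCCESS_NO_INFO.
        System.out.println(countAffected(List.of(new int[]{-2, -2, -2}, new int[]{-2, -2}))); // 5
    }
}
```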
@@ -4,6 +4,7 @@ import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
 import com.baomidou.mybatisplus.core.metadata.IPage;
 import com.baomidou.mybatisplus.extension.plugins.pagination.Page;
 import com.taosdata.example.mybatisplusdemo.domain.Temperature;
+import com.taosdata.example.mybatisplusdemo.service.TemperatureService;
 import org.junit.After;
 import org.junit.Assert;
 import org.junit.Before;
@@ -13,6 +14,8 @@ import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.boot.test.context.SpringBootTest;
 import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

+import java.sql.ResultSet;
+import java.sql.SQLException;
 import java.sql.Timestamp;
 import java.util.HashMap;
 import java.util.List;
@@ -22,18 +25,20 @@ import java.util.Random;
 @RunWith(SpringJUnit4ClassRunner.class)
 @SpringBootTest
 public class TemperatureMapperTest {
+    @Autowired
+    private TemperatureService temperatureService;
+
     private static Random random = new Random(System.currentTimeMillis());
     private static String[] locations = {"北京", "上海", "深圳", "广州", "杭州"};

     @Before
-    public void before() {
+    public void before() throws SQLException {
         mapper.dropSuperTable();
         // create table temperature
         mapper.createSuperTable();
         // create table t_X using temperature
         for (int i = 0; i < 10; i++) {
-            mapper.createTable("t" + i, locations[random.nextInt(locations.length)], i);
+            temperatureService.createTable("t" + i, locations[i % locations.length], i);
         }
         // insert into table
         int affectRows = 0;
@@ -107,7 +112,7 @@ public class TemperatureMapperTest {
 * **/
     @Test
     public void testSelectCount() {
-        int count = mapper.selectCount(null);
+        long count = mapper.selectCount(null);
         Assert.assertEquals(10, count);
     }

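Besides routing table creation through the new service, the setup switches from a random location per sub-table to `locations[i % locations.length]`, making each sub-table's tag deterministic and the test reproducible across runs. A small sketch of the round-robin assignment (helper names are illustrative):

```java
public class RoundRobinTagSketch {
    static final String[] LOCATIONS = {"北京", "上海", "深圳", "广州", "杭州"};

    // Deterministic tag assignment used by the revised test setup:
    // sub-table t{i} always receives the same location tag.
    static String tagFor(int i) {
        return LOCATIONS[i % LOCATIONS.length];
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            System.out.println("t" + i + " -> " + tagFor(i));
        }
    }
}
```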
@@ -52,7 +52,7 @@ public class WeatherMapperTest {
         one.setTemperature(random.nextFloat() * 50);
         one.setHumidity(random.nextInt(100));
         one.setLocation("望京");
-        int affectRows = mapper.insert(one);
+        int affectRows = mapper.insertOne(one);
         Assert.assertEquals(1, affectRows);
     }

@@ -82,7 +82,7 @@ public class WeatherMapperTest {

     @Test
     public void testSelectCount() {
-        int count = mapper.selectCount(null);
+        long count = mapper.selectCount(null);
         // Assert.assertEquals(5, count);
         System.out.println(count);
     }

@@ -5,7 +5,7 @@
     <parent>
         <groupId>org.springframework.boot</groupId>
         <artifactId>spring-boot-starter-parent</artifactId>
-        <version>2.6.15</version>
+        <version>2.7.18</version>
         <relativePath/> <!-- lookup parent from repository -->
     </parent>
     <groupId>com.taosdata.example</groupId>
@@ -34,7 +34,7 @@
         <dependency>
             <groupId>org.mybatis.spring.boot</groupId>
             <artifactId>mybatis-spring-boot-starter</artifactId>
-            <version>2.1.1</version>
+            <version>2.3.2</version>
         </dependency>

         <dependency>
@@ -70,7 +70,7 @@
         <dependency>
             <groupId>com.taosdata.jdbc</groupId>
             <artifactId>taos-jdbcdriver</artifactId>
-            <version>3.5.2</version>
+            <version>3.5.3</version>
         </dependency>

         <dependency>

@@ -50,13 +50,6 @@
         ), groupId int)
     </update>

-    <update id="createTable" parameterType="com.taosdata.example.springbootdemo.domain.Weather">
-        create table if not exists test.t#{groupId} using test.weather tags
-        (
-            #{location},
-            #{groupId}
-        )
-    </update>
-
     <select id="select" resultMap="BaseResultMap">
         select * from test.weather order by ts desc
@@ -69,8 +62,8 @@
     </select>

     <insert id="insert" parameterType="com.taosdata.example.springbootdemo.domain.Weather">
-        insert into test.t#{groupId} (ts, temperature, humidity, note, bytes)
-        values (#{ts}, ${temperature}, ${humidity}, #{note}, #{bytes})
+        insert into test.t${groupId} (ts, temperature, humidity, note, bytes)
+        values (#{ts}, #{temperature}, #{humidity}, #{note}, #{bytes})
    </insert>

     <select id="getSubTables" resultType="String">

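The mapper change above swaps `#{groupId}` for `${groupId}` in the table name while keeping `#{...}` for column values. In MyBatis, `#{...}` becomes a JDBC `?` placeholder bound via `PreparedStatement`, whereas `${...}` is plain text substitution performed before the SQL reaches the driver; table names cannot be bound as parameters, which is why only the table name uses `${...}`. A toy simulation of the two substitution modes (the `render` helper is purely illustrative, not a MyBatis API):

```java
public class MyBatisPlaceholderSketch {
    // ${...} -> text substitution into the SQL string;
    // #{...} -> replaced by a JDBC '?' placeholder and bound later.
    static String render(String template, String groupId) {
        return template
                .replace("${groupId}", groupId)    // text substitution
                .replaceAll("#\\{[^}]*\\}", "?"); // parameter placeholders
    }

    public static void main(String[] args) {
        String sql = render(
                "insert into test.t${groupId} (ts, temperature) values (#{ts}, #{temperature})",
                "3");
        System.out.println(sql);
    }
}
```

Because `${...}` is raw substitution, it is only safe for values the application controls, such as the integer `groupId` here.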
@@ -0,0 +1,19 @@
+package com.taosdata.example.springbootdemo.service;
+
+import org.springframework.beans.factory.annotation.Autowired;
+import org.springframework.stereotype.Service;
+
+import javax.sql.DataSource;
+import java.sql.Connection;
+import java.sql.SQLException;
+
+@Service
+public class DatabaseConnectionService {
+
+    @Autowired
+    private DataSource dataSource;
+
+    public Connection getConnection() throws SQLException {
+        return dataSource.getConnection();
+    }
+}

@@ -6,6 +6,9 @@ import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Service;

 import java.nio.charset.StandardCharsets;
+import java.sql.Connection;
+import java.sql.SQLException;
+import java.sql.Statement;
 import java.sql.Timestamp;
 import java.util.List;
 import java.util.Map;
@@ -16,6 +19,9 @@ public class WeatherService {

     @Autowired
     private WeatherMapper weatherMapper;
+
+    @Autowired
+    private DatabaseConnectionService databaseConnectionService;
     private Random random = new Random(System.currentTimeMillis());
     private String[] locations = {"北京", "上海", "广州", "深圳", "天津"};

@@ -32,7 +38,7 @@ public class WeatherService {
             weather.setGroupId(i % locations.length);
             weather.setNote("note-" + i);
             weather.setBytes(locations[random.nextInt(locations.length)].getBytes(StandardCharsets.UTF_8));
-            weatherMapper.createTable(weather);
+            createTable(weather);
             count += weatherMapper.insert(weather);
         }
         return count;
@@ -78,4 +84,14 @@ public class WeatherService {
         weather.setLocation(location);
         return weather;
     }
+
+    public void createTable(Weather weather) {
+        try (Connection connection = databaseConnectionService.getConnection();
+             Statement statement = connection.createStatement()) {
+            String tableName = "t" + weather.getGroupId();
+            statement.executeUpdate("create table if not exists " + tableName + " using test.weather tags( '" + weather.getLocation() + "', " + weather.getGroupId() + ")");
+        } catch (SQLException e) {
+            throw new RuntimeException(e);
+        }
+    }
 }

@@ -4,8 +4,8 @@
 #spring.datasource.username=root
 #spring.datasource.password=taosdata
 # datasource config - JDBC-RESTful
-spring.datasource.driver-class-name=com.taosdata.jdbc.rs.RestfulDriver
-spring.datasource.url=jdbc:TAOS-RS://localhost:6041/test
+spring.datasource.driver-class-name=com.taosdata.jdbc.ws.WebSocketDriver
+spring.datasource.url=jdbc:TAOS-WS://localhost:6041/test
 spring.datasource.username=root
 spring.datasource.password=taosdata
 spring.datasource.druid.initial-size=5

@@ -67,7 +67,7 @@
         <dependency>
             <groupId>com.taosdata.jdbc</groupId>
             <artifactId>taos-jdbcdriver</artifactId>
-            <version>3.5.2</version>
+            <version>3.5.3</version>
             <!-- <scope>system</scope>-->
             <!-- <systemPath>${project.basedir}/src/main/resources/lib/taos-jdbcdriver-2.0.15-dist.jar</systemPath>-->
         </dependency>

@@ -22,7 +22,7 @@
         <dependency>
             <groupId>com.taosdata.jdbc</groupId>
             <artifactId>taos-jdbcdriver</artifactId>
-            <version>3.5.2</version>
+            <version>3.5.3</version>
         </dependency>
 <!-- ANCHOR_END: dep-->

@@ -2,6 +2,7 @@ package com.taos.example;

 import com.taosdata.jdbc.ws.TSWSPreparedStatement;

+import java.math.BigInteger;
 import java.sql.*;
 import java.util.Random;

@@ -26,7 +27,12 @@ public class WSParameterBindingFullDemo {
                     "binary_col BINARY(100), " +
                     "nchar_col NCHAR(100), " +
                     "varbinary_col VARBINARY(100), " +
-                    "geometry_col GEOMETRY(100)) " +
+                    "geometry_col GEOMETRY(100)," +
+                    "utinyint_col tinyint unsigned," +
+                    "usmallint_col smallint unsigned," +
+                    "uint_col int unsigned," +
+                    "ubigint_col bigint unsigned" +
+                    ") " +
                     "tags (" +
                     "int_tag INT, " +
                     "double_tag DOUBLE, " +
@@ -34,7 +40,12 @@ public class WSParameterBindingFullDemo {
                     "binary_tag BINARY(100), " +
                     "nchar_tag NCHAR(100), " +
                     "varbinary_tag VARBINARY(100), " +
-                    "geometry_tag GEOMETRY(100))"
+                    "geometry_tag GEOMETRY(100)," +
+                    "utinyint_tag tinyint unsigned," +
+                    "usmallint_tag smallint unsigned," +
+                    "uint_tag int unsigned," +
+                    "ubigint_tag bigint unsigned" +
+                    ")"
     };
     private static final int numOfSubTable = 10, numOfRow = 10;

@@ -79,7 +90,7 @@ public class WSParameterBindingFullDemo {
                 // set table name
                 pstmt.setTableName("ntb_json_" + i);
                 // set tags
-                pstmt.setTagJson(1, "{\"device\":\"device_" + i + "\"}");
+                pstmt.setTagJson(0, "{\"device\":\"device_" + i + "\"}");
                 // set columns
                 long current = System.currentTimeMillis();
                 for (int j = 0; j < numOfRow; j++) {
@@ -94,25 +105,29 @@ public class WSParameterBindingFullDemo {
     }

     private static void stmtAll(Connection conn) throws SQLException {
-        String sql = "INSERT INTO ? using stb tags(?,?,?,?,?,?,?) VALUES (?,?,?,?,?,?,?,?)";
+        String sql = "INSERT INTO ? using stb tags(?,?,?,?,?,?,?,?,?,?,?) VALUES (?,?,?,?,?,?,?,?,?,?,?,?)";

         try (TSWSPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSWSPreparedStatement.class)) {

             // set table name
             pstmt.setTableName("ntb");
             // set tags
-            pstmt.setTagInt(1, 1);
-            pstmt.setTagDouble(2, 1.1);
-            pstmt.setTagBoolean(3, true);
-            pstmt.setTagString(4, "binary_value");
-            pstmt.setTagNString(5, "nchar_value");
-            pstmt.setTagVarbinary(6, new byte[] { (byte) 0x98, (byte) 0xf4, 0x6e });
-            pstmt.setTagGeometry(7, new byte[] {
+            pstmt.setTagInt(0, 1);
+            pstmt.setTagDouble(1, 1.1);
+            pstmt.setTagBoolean(2, true);
+            pstmt.setTagString(3, "binary_value");
+            pstmt.setTagNString(4, "nchar_value");
+            pstmt.setTagVarbinary(5, new byte[] { (byte) 0x98, (byte) 0xf4, 0x6e });
+            pstmt.setTagGeometry(6, new byte[] {
                     0x01, 0x01, 0x00, 0x00,
                     0x00, 0x00, 0x00, 0x00,
                     0x00, 0x00, 0x00, 0x59,
                     0x40, 0x00, 0x00, 0x00,
                     0x00, 0x00, 0x00, 0x59, 0x40 });
+            pstmt.setTagShort(7, (short) 255);
+            pstmt.setTagInt(8, 65535);
+            pstmt.setTagLong(9, 4294967295L);
+            pstmt.setTagBigInteger(10, new BigInteger("18446744073709551615"));

             long current = System.currentTimeMillis();

@@ -129,6 +144,10 @@ public class WSParameterBindingFullDemo {
                     0x00, 0x00, 0x00, 0x59,
                     0x40, 0x00, 0x00, 0x00,
                     0x00, 0x00, 0x00, 0x59, 0x40 });
+            pstmt.setShort(9, (short) 255);
+            pstmt.setInt(10, 65535);
+            pstmt.setLong(11, 4294967295L);
+            pstmt.setObject(12, new BigInteger("18446744073709551615"));
             pstmt.addBatch();
             pstmt.executeBatch();
             System.out.println("Successfully inserted rows to example_all_type_stmt.ntb");

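The demo's new bind calls carry each TDengine unsigned type in the next-wider signed Java type, since Java has no unsigned primitives: `tinyint unsigned` (0..255) in a `short`, `smallint unsigned` (0..65535) in an `int`, `int unsigned` (0..4294967295) in a `long`, and `bigint unsigned` (0..2^64-1) in a `BigInteger`. A small sketch of the ranges involved (values match those bound in the demo):

```java
import java.math.BigInteger;

public class UnsignedBindingSketch {
    public static void main(String[] args) {
        short utinyintMax = (short) 255;         // tinyint unsigned fits in short
        int usmallintMax = 65535;                // smallint unsigned fits in int
        long uintMax = 4294967295L;              // int unsigned fits in long
        // bigint unsigned needs BigInteger: 2^64 - 1
        BigInteger ubigintMax = new BigInteger("18446744073709551615");
        System.out.println(ubigintMax.equals(
                BigInteger.TWO.pow(64).subtract(BigInteger.ONE)));
    }
}
```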
@@ -23,7 +23,7 @@ CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name
 SUBTABLE(expression) AS subquery

 stream_options: {
- TRIGGER        [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time]
+ TRIGGER        [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time | FORCE_WINDOW_CLOSE]
  WATERMARK      time
  IGNORE EXPIRED [0|1]
  DELETE_MARK    time
@@ -52,13 +52,17 @@ window_cluse: {
 }
 ```

-subquery supports session windows, state windows, and sliding windows. When applied to a supertable, session windows and state windows must be used together with partition by tbname.
+subquery supports session windows, state windows, time windows, event windows, and count windows. When applied to a supertable, state windows, event windows, and count windows must be used together with partition by tbname.

 1. SESSION is the session window, and tol_val is the maximum gap between timestamps. All data within a gap of tol_val belongs to the same window; if the interval between two consecutive rows exceeds tol_val, the next window opens automatically.

-2. EVENT_WINDOW is the event window, delimited by a start condition and an end condition. The window opens when start_trigger_condition is met and closes when end_trigger_condition is met. start_trigger_condition and end_trigger_condition can be any condition expressions supported by TDengine and may involve different columns.
+2. STATE_WINDOW is the state window, and col identifies the state value. Rows with the same state value belong to the same state window; when the value of col changes, the current window closes and the next one opens automatically.

-3. COUNT_WINDOW is the count window, which partitions data by a fixed number of rows. count_val is a constant positive integer, at least 2 and less than 2147483648. count_val is the maximum number of rows in each COUNT_WINDOW; when the total row count is not divisible by count_val, the last window contains fewer than count_val rows. sliding_val is a constant giving the number of rows the window slides, similar to SLIDING in INTERVAL.
+3. INTERVAL is the time window, which is either a sliding or a tumbling time window. The INTERVAL clause specifies the window's fixed time span, and the SLIDING clause specifies the time by which the window slides forward. When interval_val equals sliding_val, the time window is a tumbling window; otherwise it is a sliding window. Note that sliding_val must be less than or equal to interval_val.
+
+4. EVENT_WINDOW is the event window, delimited by a start condition and an end condition. The window opens when start_trigger_condition is met and closes when end_trigger_condition is met. start_trigger_condition and end_trigger_condition can be any condition expressions supported by TDengine and may involve different columns.
+
+5. COUNT_WINDOW is the count window, which partitions data by a fixed number of rows. count_val is a constant positive integer, at least 2 and less than 2147483648. count_val is the maximum number of rows in each COUNT_WINDOW; when the total row count is not divisible by count_val, the last window contains fewer than count_val rows. sliding_val is a constant giving the number of rows the window slides, similar to SLIDING in INTERVAL.

 Window definitions are exactly the same as in time-series window queries; see the TDengine window functions section for details.

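The COUNT_WINDOW arithmetic described for the stream subquery can be illustrated with a short sketch (helper names are illustrative, not TDengine API):

```java
import java.util.Arrays;

public class CountWindowSketch {
    // COUNT_WINDOW(count_val) splits totalRows rows into windows of at most
    // count_val rows; when totalRows is not a multiple of count_val, the last
    // window holds the remainder and is smaller.
    static int[] windowSizes(int totalRows, int countVal) {
        int full = totalRows / countVal;
        int rest = totalRows % countVal;
        int n = full + (rest > 0 ? 1 : 0);
        int[] sizes = new int[n];
        Arrays.fill(sizes, 0, full, countVal);
        if (rest > 0) sizes[n - 1] = rest;
        return sizes;
    }

    public static void main(String[] args) {
        // 10 rows with count_val = 4 -> windows of 4, 4, and a final 2.
        System.out.println(Arrays.toString(windowSizes(10, 4))); // [4, 4, 2]
    }
}
```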
@@ -91,3 +91,18 @@ taos> select _flow, _fhigh, _frowts, forecast(i32) from foo;
 ## Built-in forecasting algorithms
 - [arima](./02-arima.md)
 - [holtwinters](./03-holtwinters.md)
+- CES (Complex Exponential Smoothing)
+- Theta
+- Prophet
+- XGBoost
+- LightGBM
+- Multiple Seasonal-Trend decomposition using LOESS (MSTL)
+- ETS (Error, Trend, Seasonal)
+- Long Short-Term Memory (LSTM)
+- Multilayer Perceptron (MLP)
+- DeepAR
+- N-BEATS
+- N-HiTS
+- PatchTST (Patch Time Series Transformer)
+- Temporal Fusion Transformer
+- TimesNet

@@ -50,6 +50,13 @@ FROM foo
 ANOMALY_WINDOW(foo.i32, "algo=shesd,direction=both,anoms=0.05")
 ```

+Anomaly-detection algorithms to be added later:
+- Gaussian Process Regression
+
+Change-point-based anomaly-detection algorithms:
+- CUSUM (Cumulative Sum Control Chart)
+- PELT (Pruned Exact Linear Time)
+
 ### References
 1. [https://en.wikipedia.org/wiki/68–95–99.7 rule](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule)
 2. https://en.wikipedia.org/wiki/Interquartile_range

@@ -3,7 +3,7 @@ title: "数据密度算法"
 sidebar_label: "数据密度算法"
 ---

-### Density-based detection methods
+### Density-based / data-mining detection algorithms
 LOF<sup>[1]</sup>: Local Outlier Factor (LOF) is a density-based local outlier detection algorithm proposed by Breunig in 2000, suited to data whose clusters vary widely in density. Based on how densely the data around each point is distributed, it first computes a local reachability density for every data point and then derives from it an outlier factor for each point.
 This factor indicates the degree to which a point is an outlier: the larger the value, the more anomalous the point; the smaller the value, the less anomalous. Finally, the $topK$ most anomalous points are output.
@@ -15,6 +15,14 @@ FROM foo
 ANOMALY_WINDOW(foo.i32, "algo=lof")
 ```

+Data-mining-based detection algorithms to be added later:
+- DBSCAN (Density-Based Spatial Clustering of Applications with Noise)
+- K-Nearest Neighbors (KNN)
+- Principal Component Analysis (PCA)
+
+Third-party anomaly-detection libraries:
+- PyOD
+
 ### References

 1. Breunig, M. M.; Kriegel, H.-P.; Ng, R. T.; Sander, J. (2000). LOF: Identifying Density-based Local Outliers (PDF). Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data. SIGMOD. pp. 93–104. doi:10.1145/335191.335388. ISBN 1-58113-217-4.

@@ -12,6 +12,11 @@ FROM foo
 ANOMALY_WINDOW(col1, 'algo=encoder, model=ad_autoencoder_foo');
 ```

+Machine-learning and deep-learning anomaly-detection algorithms to be added later:
+- Isolation Forest
+- One-Class Support Vector Machines (SVM)
+- Prophet
+
 ### References

 1. https://en.wikipedia.org/wiki/Autoencoder

@@ -0,0 +1,40 @@
+---
+title: "FAQ"
+sidebar_label: "FAQ"
+---
+
+<b>1. Creating an anode fails with "service can't be accessed"</b>
+
+```bash
+taos> create anode '127.0.0.1:6090';
+
+DB error: Analysis service can't access[0x80000441] (0.117446s)
+```
+
+Always use `curl` to check whether the anode service is healthy: `curl '127.0.0.1:6090'`. A healthy anode service returns the following result.
+
+```bash
+TDengine© Time Series Data Analytics Platform (ver 1.0.x)
+```
+
+The following result means the anode service is not running properly.
+```bash
+curl: (7) Failed to connect to 127.0.0.1 port 6090: Connection refused
+```
+
+If the anode service fails to start or run, check the uWSGI log `/var/log/taos/taosanode/taosanode.log` for error messages and resolve the corresponding problem.
+
+>Do not use systemctl status taosanode to check whether taosanode is healthy.
+
+<b>2. The service is healthy, but queries return "service unavailable"</b>
+```bash
+taos> select _frowts,forecast(current, 'algo=arima, alpha=95, wncheck=0, rows=20') from d1 where ts<='2017-07-14 10:40:09.999';
+
+DB error: Analysis service can't access[0x80000441] (60.195613s)
+```
+The default analysis timeout is 60 s. This error means the analysis of the input data exceeded the maximum wait time. Try reducing the input size by restricting the input data range, or switch to another analysis algorithm and retry.
+
+<b>3. The result reports an invalid JSON format error (Invalid json format)</b>
+
+The analysis result returned from the anode to TDengine is malformed. Check the anode log `/var/log/taos/taosanode/taosanode.app.log` for the specific error message.

@@ -89,7 +89,7 @@ TDengine 提供了丰富的应用程序开发接口,为了便于用户快速
 <dependency>
     <groupId>com.taosdata.jdbc</groupId>
     <artifactId>taos-jdbcdriver</artifactId>
-    <version>3.5.2</version>
+    <version>3.5.3</version>
 </dependency>
 ```

@ -30,12 +30,12 @@ TDengine 消费者的概念跟 Kafka 类似,消费者通过订阅主题来接
|
||||||
|
|
||||||
| 参数名称 | 类型 | 参数说明 | 备注 |
|
| 参数名称 | 类型 | 参数说明 | 备注 |
|
||||||
| :-----------------------: | :-----: | ------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
| :-----------------------: | :-----: | ------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
||||||
| `td.connect.ip` | string | 服务端的 IP 地址 | |
|
| `td.connect.ip` | string | 服务端的 FQDN | 可以是ip或者host name |
|
||||||
| `td.connect.user` | string | 用户名 | |
|
| `td.connect.user` | string | 用户名 | |
|
||||||
| `td.connect.pass` | string | 密码 | |
|
| `td.connect.pass` | string | 密码 | |
|
||||||
| `td.connect.port` | integer | 服务端的端口号 | |
|
| `td.connect.port` | integer | 服务端的端口号 | |
|
||||||
| `group.id` | string | 消费组 ID,同一消费组共享消费进度 | <br />**必填项**。最大长度:192。<br />每个topic最多可建立 100 个 consumer group |
|
| `group.id` | string | 消费组 ID,同一消费组共享消费进度 | <br />**必填项**。最大长度:192,超长将截断。<br />每个topic最多可建立 100 个 consumer group |
|
||||||
| `client.id` | string | 客户端 ID | 最大长度:192 |
|
| `client.id` | string | 客户端 ID | 最大长度:255,超长将截断。 |
|
||||||
| `auto.offset.reset` | enum | 消费组订阅的初始位置 | <br />`earliest`: default(version < 3.2.0.0);从头开始订阅; <br/>`latest`: default(version >= 3.2.0.0);仅从最新数据开始订阅; <br/>`none`: 没有提交的 offset 无法订阅 |
|
| `auto.offset.reset` | enum | 消费组订阅的初始位置 | <br />`earliest`: default(version < 3.2.0.0);从头开始订阅; <br/>`latest`: default(version >= 3.2.0.0);仅从最新数据开始订阅; <br/>`none`: 没有提交的 offset 无法订阅 |
|
||||||
| `enable.auto.commit` | boolean | 是否启用消费位点自动提交,true: 自动提交,客户端应用无需commit;false:客户端应用需要自行commit | 默认值为 true |
|
| `enable.auto.commit` | boolean | 是否启用消费位点自动提交,true: 自动提交,客户端应用无需commit;false:客户端应用需要自行commit | 默认值为 true |
|
||||||
| `auto.commit.interval.ms` | integer | 消费记录自动提交消费位点时间间隔,单位为毫秒 | 默认值为 5000 |
|
| `auto.commit.interval.ms` | integer | 消费记录自动提交消费位点时间间隔,单位为毫秒 | 默认值为 5000 |
|
||||||
|
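
上表参数的一个最小配置示意(properties 形式,各取值均为示例假设,需按实际环境替换):

```text
td.connect.ip=localhost
td.connect.port=6030
td.connect.user=root
td.connect.pass=taosdata
group.id=group1
client.id=client1
auto.offset.reset=latest
enable.auto.commit=true
auto.commit.interval.ms=5000
```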
|
|
@ -17,10 +17,11 @@ TDengine 面向多种写入场景,而很多写入场景下,TDengine 的存
|
||||||
### 语法
|
### 语法
|
||||||
|
|
||||||
```SQL
|
```SQL
|
||||||
COMPACT DATABASE db_name [start with 'XXXX'] [end with 'YYYY'];
|
COMPACT DATABASE db_name [start with 'XXXX'] [end with 'YYYY'];
|
||||||
COMPACT [db_name.]VGROUPS IN (vgroup_id1, vgroup_id2, ...) [start with 'XXXX'] [end with 'YYYY'];
|
COMPACT [db_name.]VGROUPS IN (vgroup_id1, vgroup_id2, ...) [start with 'XXXX'] [end with 'YYYY'];
|
||||||
SHOW COMPACT [compact_id];
|
SHOW COMPACTS;
|
||||||
KILL COMPACT compact_id;
|
SHOW COMPACT compact_id;
|
||||||
|
KILL COMPACT compact_id;
|
||||||
```
|
```
|
||||||
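
基于上述语法的一个使用示意(其中 db_name、时间范围与 compact_id 均为示例值,compact_id 以 SHOW COMPACTS 的返回结果为准):

```SQL
-- 对数据库在指定时间范围内执行数据重整
COMPACT DATABASE db_name START WITH '2024-01-01 00:00:00' END WITH '2024-06-30 23:59:59';
-- 查看当前所有重整任务
SHOW COMPACTS;
-- 查看指定任务的进度
SHOW COMPACT 1;
-- 终止指定任务
KILL COMPACT 1;
```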
|
|
||||||
### 效果
|
### 效果
|
||||||
|
|
|
@ -69,7 +69,7 @@ taosExplorer 服务页面中,进入“系统管理 - 备份”页面,在“
|
||||||
1. 数据库:需要备份的数据库名称。一个备份计划只能备份一个数据库/超级表。
|
1. 数据库:需要备份的数据库名称。一个备份计划只能备份一个数据库/超级表。
|
||||||
2. 超级表:需要备份的超级表名称。如果不填写,则备份整个数据库。
|
2. 超级表:需要备份的超级表名称。如果不填写,则备份整个数据库。
|
||||||
3. 下次执行时间:首次执行备份任务的日期时间。
|
3. 下次执行时间:首次执行备份任务的日期时间。
|
||||||
4. 备份周期:备份点之间的时间间隔。注意:备份周期必须大于数据库的 WAL_RETENTION_PERIOD 参数值。
|
4. 备份周期:两次备份之间的时间间隔。注意:备份周期必须小于数据库的 WAL_RETENTION_PERIOD 参数值,否则两次备份之间的 WAL 数据可能已被清理。
|
||||||
5. 错误重试次数:对于可通过重试解决的错误,系统会按照此次数进行重试。
|
5. 错误重试次数:对于可通过重试解决的错误,系统会按照此次数进行重试。
|
||||||
6. 错误重试间隔:每次重试之间的时间间隔。
|
6. 错误重试间隔:每次重试之间的时间间隔。
|
||||||
7. 目录:存储备份文件的目录。
|
7. 目录:存储备份文件的目录。
|
||||||
|
|
|
@ -0,0 +1,35 @@
|
||||||
|
---
|
||||||
|
sidebar_label: Tableau
|
||||||
|
title: 与 Tableau 集成
|
||||||
|
---
|
||||||
|
|
||||||
|
Tableau 是一款知名的商业智能工具,它支持多种数据源,可方便地连接、导入和整合数据。并且可以通过直观的操作界面,让用户创建丰富多样的可视化图表,并具备强大的分析和筛选功能,为数据决策提供有力支持。
|
||||||
|
|
||||||
|
## 前置条件
|
||||||
|
|
||||||
|
准备以下环境:
|
||||||
|
- TDengine 3.3.5.4以上版本集群已部署并正常运行(企业及社区版均可)
|
||||||
|
- taosAdapter 能够正常运行。详细参考 [taosAdapter 使用手册](../../../reference/components/taosadapter)
|
||||||
|
- Tableau 桌面版安装并运行(如未安装,请下载并安装 Windows 操作系统 32/64 位 [Tableau 桌面版](https://www.tableau.com/products/desktop/download) )。安装 Tableau 桌面版请参考 [官方文档](https://www.tableau.com)。
|
||||||
|
- ODBC 驱动安装成功。详细参考[安装 ODBC 驱动](../../../reference/connector/odbc/#安装)
|
||||||
|
- ODBC 数据源配置成功。详细参考[配置 ODBC 数据源](../../../reference/connector/odbc/#配置数据源)
|
||||||
|
|
||||||
|
## 加载和分析 TDengine 数据
|
||||||
|
|
||||||
|
**第 1 步**,在 Windows 系统环境下启动 Tableau,之后在其连接页面中搜索 “ODBC”,并选择 “其他数据库 (ODBC)”。
|
||||||
|
|
||||||
|
**第 2 步**,点击 DSN 单选框,接着选择已配置好的数据源(MyTDengine),然后点击连接按钮。待连接成功后,删除字符串附加部分的内容,最后点击登录按钮即可。
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
**第 3 步**,在弹出的工作簿页面中,会显示已连接的数据源。点击数据库的下拉列表,会显示需要进行数据分析的数据库。在此基础上,点击表选项中的查找按钮,即可将该数据库下的所有表显示出来。然后,拖动需要分析的表到右侧区域,即可显示出表结构。
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
**第 4 步**,点击下方的"立即更新"按钮,即可将表中的数据展示出来。
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
**第 5 步**,点击窗口下方的"工作表",弹出数据分析窗口, 并展示分析表的所有字段,将字段拖动到行列即可展示出图表。
|
||||||
|
|
||||||
|

|
After Width: | Height: | Size: 235 KiB |
After Width: | Height: | Size: 238 KiB |
After Width: | Height: | Size: 237 KiB |
After Width: | Height: | Size: 102 KiB |
|
@ -9,102 +9,504 @@ TDengine 客户端驱动提供了应用编程所需要的全部 API,并且在
|
||||||
## 配置参数
|
## 配置参数
|
||||||
|
|
||||||
### 连接相关
|
### 连接相关
|
||||||
|参数名称|支持版本|动态修改|参数含义|
|
|
||||||
|----------------------|----------|-------------------------|-------------|
|
#### firstEp
|
||||||
|firstEp | |支持动态修改 立即生效 |启动时,主动连接的集群中首个 dnode 的 endpoint,缺省值:hostname:6030,若无法获取该服务器的 hostname,则赋值为 localhost|
|
- 说明:启动时,主动连接的集群中首个 dnode 的 endpoint
|
||||||
|secondEp | |支持动态修改 立即生效 |启动时,如果 firstEp 连接不上,尝试连接集群中第二个 dnode 的 endpoint,没有缺省值|
|
- 默认值:hostname:6030,若无法获取该服务器的 hostname,则赋值为 localhost
|
||||||
|serverPort | |支持动态修改 立即生效 |taosd 监听的端口,默认值 6030|
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|compressMsgSize | |支持动态修改 立即生效 |是否对 RPC 消息进行压缩;-1:所有消息都不压缩;0:所有消息都压缩;N (N>0):只有大于 N 个字节的消息才压缩;缺省值 -1|
|
- 支持版本:从 v3.0.0.0 版本开始引入
|
||||||
|shellActivityTimer | |不支持动态修改 |客户端向 mnode 发送心跳的时长,单位为秒,取值范围 1-120,默认值 3|
|
|
||||||
|numOfRpcSessions | |支持动态修改 立即生效 |RPC 支持的最大连接数,取值范围 100-100000,缺省值 30000|
|
#### secondEp
|
||||||
|numOfRpcThreads | |不支持动态修改 |RPC 收发数据线程数目,取值范围1-50,默认值为 CPU 核数的一半|
|
- 说明:启动时,如果 firstEp 连接不上,尝试连接集群中第二个 dnode 的 endpoint
|
||||||
|numOfTaskQueueThreads | |不支持动态修改 |客户端处理 RPC消息的线程数, 范围4-16,默认值为 CPU 核数的一半|
|
- 默认值:无
|
||||||
|timeToGetAvailableConn| 3.3.4.*之后取消 |不支持动态修改 |获得可用连接的最长等待时间,取值范围 10-50000000,单位为毫秒,缺省值 500000|
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|useAdapter | |支持动态修改 立即生效 |内部参数,是否使用 taosadapter,影响 CSV 文件导入|
|
- 支持版本:从 v3.0.0.0 版本开始引入
|
||||||
|shareConnLimit |3.3.4.0 新增|不支持动态修改 |内部参数,一个链接可以共享的查询数目,取值范围 1-256,默认值 10|
|
|
||||||
|readTimeout |3.3.4.0 新增|不支持动态修改 |内部参数,最小超时时间,取值范围 64-604800,单位为秒,默认值 900|
|
#### serverPort
|
||||||
|
- 说明:taosd 监听的端口
|
||||||
|
- 默认值:6030
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.0.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### compressMsgSize
|
||||||
|
- 说明:是否对 RPC 消息进行压缩
|
||||||
|
- 类型:整数;-1:所有消息都不压缩;0:所有消息都压缩;N (N>0):只有大于 N 个字节的消息才压缩
|
||||||
|
- 单位:bytes
|
||||||
|
- 默认值:-1
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.0.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### shellActivityTimer
|
||||||
|
- 说明:客户端向 mnode 发送心跳的时长
|
||||||
|
- 类型:整数
|
||||||
|
- 单位:秒
|
||||||
|
- 默认值:3
|
||||||
|
- 最小值:1
|
||||||
|
- 最大值:120
|
||||||
|
- 动态修改:不支持
|
||||||
|
- 支持版本:从 v3.0.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### numOfRpcSessions
|
||||||
|
- 说明:RPC 支持的最大连接数
|
||||||
|
- 类型:整数
|
||||||
|
- 默认值:30000
|
||||||
|
- 最小值:100
|
||||||
|
- 最大值:100000
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.0.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### numOfRpcThreads
|
||||||
|
- 说明:RPC 收发数据线程数目
|
||||||
|
- 类型:整数
|
||||||
|
- 默认值:CPU 核数的一半
|
||||||
|
- 最小值:1
|
||||||
|
- 最大值:50
|
||||||
|
- 动态修改:不支持
|
||||||
|
- 支持版本:从 v3.0.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### numOfTaskQueueThreads
|
||||||
|
- 说明:客户端处理 RPC 消息的线程数
|
||||||
|
- 类型:整数
|
||||||
|
- 默认值:CPU 核数的一半
|
||||||
|
- 最小值:4
|
||||||
|
- 最大值:16
|
||||||
|
- 动态修改:不支持
|
||||||
|
- 支持版本:从 v3.0.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### timeToGetAvailableConn
|
||||||
|
- 说明:获得可用连接的最长等待时间
|
||||||
|
- 类型:整数
|
||||||
|
- 单位:毫秒
|
||||||
|
- 默认值:500000
|
||||||
|
- 最小值:10
|
||||||
|
- 最大值:50000000
|
||||||
|
- 动态修改:不支持
|
||||||
|
- 支持版本:3.3.4.*之后取消
|
||||||
|
|
||||||
|
#### useAdapter
|
||||||
|
- 说明:是否使用 taosAdapter,影响 CSV 文件导入 `内部参数`
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.0.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### shareConnLimit
|
||||||
|
- 说明:一个连接可以共享的查询数目 `内部参数`
|
||||||
|
- 最小值:1
|
||||||
|
- 最大值:256
|
||||||
|
- 默认值:10
|
||||||
|
- 动态修改:不支持
|
||||||
|
- 支持版本:从 v3.3.4.0 版本开始引入
|
||||||
|
|
||||||
|
#### readTimeout
|
||||||
|
- 说明:最小超时时间 `内部参数`
|
||||||
|
- 单位:秒
|
||||||
|
- 最小值:64
|
||||||
|
- 最大值:604800
|
||||||
|
- 默认值:900
|
||||||
|
- 动态修改:不支持
|
||||||
|
- 支持版本:从 v3.3.4.0 版本开始引入
|
||||||
|
|
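
以上连接参数也可在客户端配置文件 taos.cfg 中设置,以下为一个示意片段(各取值均为示例,需按实际集群环境替换):

```text
firstEp           h1.taosdata.com:6030
secondEp          h2.taosdata.com:6030
serverPort        6030
compressMsgSize   -1
numOfRpcSessions  30000
```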
||||||
### 查询相关
|
### 查询相关
|
||||||
|参数名称|支持版本|动态修改|参数含义|
|
|
||||||
|----------------------|----------|-------------------------|-------------|
|
#### countAlwaysReturnValue
|
||||||
|countAlwaysReturnValue | |支持动态修改 立即生效 |count/hyperloglog 函数在输入数据为空或者 NULL 的情况下是否返回值;0:返回空行,1:返回;默认值 1;该参数设置为 1 时,如果查询中含有 INTERVAL 子句或者该查询使用了 TSMA 时,且相应的组或窗口内数据为空或者 NULL,对应的组或窗口将不返回查询结果;注意此参数客户端和服务端值应保持一致|
|
- 说明:count/hyperloglog 函数在输入数据为空或者 NULL 的情况下是否返回值;该参数设置为 1 时,如果查询中含有 INTERVAL 子句或者该查询使用了 TSMA 时,且相应的组或窗口内数据为空或者 NULL,对应的组或窗口将不返回查询结果;注意此参数客户端和服务端值应保持一致
|
||||||
|keepColumnName | |支持动态修改 立即生效 |Last、First、LastRow 函数查询且未指定别名时,自动设置别名为列名(不含函数名),因此 order by 子句如果引用了该列名将自动引用该列对应的函数;1:表示自动设置别名为列名(不包含函数名),0:表示不自动设置别名;缺省值:0|
|
- 类型:整数;0:返回空行,1:返回
|
||||||
|multiResultFunctionStarReturnTags|3.3.3.0 后|支持动态修改 立即生效 |查询超级表时,last(\*)/last_row(\*)/first(\*) 是否返回标签列;查询普通表、子表时,不受该参数影响;0:不返回标签列,1:返回标签列;缺省值:0;该参数设置为 0 时,last(\*)/last_row(\*)/first(\*) 只返回超级表的普通列;为 1 时,返回超级表的普通列和标签列|
|
- 最小值:0
|
||||||
|metaCacheMaxSize | |支持动态修改 立即生效 |指定单个客户端元数据缓存大小的最大值,单位 MB;缺省值 -1,表示无限制|
|
- 最大值:1
|
||||||
|maxTsmaCalcDelay | |支持动态修改 立即生效 |查询时客户端可允许的 tsma 计算延迟,若 tsma 的计算延迟大于配置值,则该 TSMA 将不会被使用;取值范围 600s - 86400s,即 10 分钟 - 1 小时;缺省值:600 秒|
|
- 默认值:1
|
||||||
|tsmaDataDeleteMark | |支持动态修改 立即生效 |TSMA 计算的历史数据中间结果保存时间,单位为毫秒;取值范围 >= 3600000,即大于等于1h;缺省值:86400000,即 1d |
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|queryPolicy | |支持动态修改 立即生效 |查询语句的执行策略,1:只使用 vnode,不使用 qnode;2:没有扫描算子的子任务在 qnode 执行,带扫描算子的子任务在 vnode 执行;3:vnode 只运行扫描算子,其余算子均在 qnode 执行;缺省值:1|
|
- 支持版本:从 v3.0.0.0 版本开始引入
|
||||||
|queryTableNotExistAsEmpty | |支持动态修改 立即生效 |查询表不存在时是否返回空结果集;false:返回错误;true:返回空结果集;缺省值 false|
|
|
||||||
|querySmaOptimize | |支持动态修改 立即生效 |sma index 的优化策略,0:表示不使用 sma index,永远从原始数据进行查询;1:表示使用 sma index,对符合的语句,直接从预计算的结果进行查询;缺省值:0|
|
#### keepColumnName
|
||||||
|queryPlannerTrace | |支持动态修改 立即生效 |内部参数,查询计划是否输出详细日志|
|
- 说明:Last、First、LastRow 函数查询且未指定别名时,自动设置别名为列名(不含函数名),因此 order by 子句如果引用了该列名将自动引用该列对应的函数
|
||||||
|queryNodeChunkSize | |支持动态修改 立即生效 |内部参数,查询计划的块大小|
|
- 类型:整数;1:表示自动设置别名为列名(不包含函数名),0:表示不自动设置别名
|
||||||
|queryUseNodeAllocator | |支持动态修改 立即生效 |内部参数,查询计划的分配方法|
|
- 最小值:0
|
||||||
|queryMaxConcurrentTables | |不支持动态修改 |内部参数,查询计划的并发数目|
|
- 最大值:1
|
||||||
|enableQueryHb | |支持动态修改 立即生效 |内部参数,是否发送查询心跳消息|
|
- 默认值:0
|
||||||
|minSlidingTime | |支持动态修改 立即生效 |内部参数,sliding 的最小允许值|
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|minIntervalTime | |支持动态修改 立即生效 |内部参数,interval 的最小允许值|
|
- 支持版本:从 v3.0.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### multiResultFunctionStarReturnTags
|
||||||
|
- 说明:查询超级表时,last(\*)/last_row(\*)/first(\*) 是否返回标签列;查询普通表、子表时,不受该参数影响;
|
||||||
|
- 类型:整数;0:不返回标签列,1:返回标签列;该参数设置为 0 时,last(\*)/last_row(\*)/first(\*) 只返回超级表的普通列;为 1 时,返回超级表的普通列和标签列
|
||||||
|
- 最小值:0
|
||||||
|
- 最大值:1
|
||||||
|
- 默认值:0
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.3.3.0 版本开始引入
|
||||||
|
|
||||||
|
#### metaCacheMaxSize
|
||||||
|
- 说明:指定单个客户端元数据缓存大小的最大值
|
||||||
|
- 类型:整数
|
||||||
|
- 单位:MB
|
||||||
|
- 最小值:-1
|
||||||
|
- 最大值:2147483647
|
||||||
|
- 默认值:-1 表示无限制
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.0.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### maxTsmaCalcDelay
|
||||||
|
- 说明:查询时客户端可允许的 tsma 计算延迟,若 tsma 的计算延迟大于配置值,则该 TSMA 将不会被使用
|
||||||
|
- 类型:整数
|
||||||
|
- 单位:秒
|
||||||
|
- 最小值:600
|
||||||
|
- 最大值:86400
|
||||||
|
- 默认值:600
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.0.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### tsmaDataDeleteMark
|
||||||
|
- 说明:TSMA 计算的历史数据中间结果保存时间
|
||||||
|
- 类型:整数
|
||||||
|
- 单位:毫秒
|
||||||
|
- 最小值:3600000
|
||||||
|
- 最大值:9223372036854775807
|
||||||
|
- 默认值:86400000
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.0.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### queryPolicy
|
||||||
|
- 说明:查询语句的执行策略
|
||||||
|
- 类型:整数;1:只使用 vnode,不使用 qnode;2:没有扫描算子的子任务在 qnode 执行,带扫描算子的子任务在 vnode 执行;3:vnode 只运行扫描算子,其余算子均在 qnode 执行;
|
||||||
|
|
||||||
|
- 默认值:1
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.0.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### queryTableNotExistAsEmpty
|
||||||
|
- 说明:查询表不存在时是否返回空结果集
|
||||||
|
- 类型:布尔;false:返回错误;true:返回空结果集
|
||||||
|
- 默认值:false
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.0.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### querySmaOptimize
|
||||||
|
- 说明:sma index 的优化策略
|
||||||
|
- 类型:整数;0:表示不使用 sma index,永远从原始数据进行查询;1:表示使用 sma index,对符合的语句,直接从预计算的结果进行查询
|
||||||
|
- 默认值:0
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.0.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### queryPlannerTrace
|
||||||
|
- 说明:查询计划是否输出详细日志 `内部参数`
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.0.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### queryNodeChunkSize
|
||||||
|
- 说明:查询计划的块大小 `内部参数`
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.0.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### queryUseNodeAllocator
|
||||||
|
- 说明:查询计划的分配方法 `内部参数`
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.0.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### queryMaxConcurrentTables
|
||||||
|
- 说明:查询计划的并发数目 `内部参数`
|
||||||
|
- 动态修改:不支持
|
||||||
|
- 支持版本:从 v3.0.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### enableQueryHb
|
||||||
|
- 说明:是否发送查询心跳消息 `内部参数`
|
||||||
|
- 动态修改:不支持
|
||||||
|
- 支持版本:从 v3.0.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### minSlidingTime
|
||||||
|
- 说明:sliding 的最小允许值 `内部参数`
|
||||||
|
- 动态修改:不支持
|
||||||
|
- 支持版本:从 v3.0.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### minIntervalTime
|
||||||
|
- 说明:interval 的最小允许值 `内部参数`
|
||||||
|
- 动态修改:不支持
|
||||||
|
- 支持版本:从 v3.0.0.0 版本开始引入
|
||||||
|
|
||||||
### 写入相关
|
### 写入相关
|
||||||
|参数名称|支持版本|动态修改|参数含义|
|
|
||||||
|----------------------|----------|-------------------------|-------------|
|
#### smlChildTableName
|
||||||
|smlChildTableName | |支持动态修改 立即生效 |schemaless 自定义的子表名的 key,无缺省值|
|
- 说明:schemaless 自定义的子表名的 key
|
||||||
|smlAutoChildTableNameDelimiter| |支持动态修改 立即生效 |schemaless tag 之间的连接符,连起来作为子表名,无缺省值|
|
- 默认值:无
|
||||||
|smlTagName | |支持动态修改 立即生效 |schemaless tag 为空时默认的 tag 名字,缺省值 "_tag_null"|
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|smlTsDefaultName | |支持动态修改 立即生效 |schemaless 自动建表的时间列名字通过该配置设置,缺省值 "_ts"|
|
- 支持版本:从 v3.0.0.0 版本开始引入
|
||||||
|smlDot2Underline | |支持动态修改 立即生效 |schemaless 把超级表名中的 dot 转成下划线|
|
|
||||||
|maxInsertBatchRows | |支持动态修改 立即生效 |内部参数,一批写入的最大条数|
|
#### smlAutoChildTableNameDelimiter
|
||||||
|
- 说明:schemaless tag 之间的连接符,连起来作为子表名
|
||||||
|
- 默认值:无
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.0.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### smlTagName
|
||||||
|
- 说明:schemaless tag 为空时默认的 tag 名字
|
||||||
|
- 默认值:"_tag_null"
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.0.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### smlTsDefaultName
|
||||||
|
- 说明:schemaless 自动建表的时间列名字通过该配置设置
|
||||||
|
- 默认值:"_ts"
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.0.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### smlDot2Underline
|
||||||
|
- 说明:schemaless 把超级表名中的 dot 转成下划线
|
||||||
|
- 默认值:true
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.0.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### maxInsertBatchRows
|
||||||
|
- 说明:一批写入的最大条数
|
||||||
|
- 默认值:1000000
|
||||||
|
- 最小值:1
|
||||||
|
- 最大值:2147483647
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.0.0.0 版本开始引入
|
||||||
|
|
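
以 smlChildTableName 为例:若将其配置为 tname,则无模式写入时行协议中 tname 标签的值会被用作子表名(以下表名、标签与数值均为示意):

```text
# taos.cfg 中配置:smlChildTableName tname
# 写入如下 InfluxDB 行协议数据:
st,tname=cpu1,t1=4 c1=3 1626006833639000000
# 将自动创建名为 cpu1 的子表
```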
||||||
### 区域相关
|
### 区域相关
|
||||||
|参数名称|支持版本|动态修改|参数含义|
|
|
||||||
|----------------------|----------|-------------------------|-------------|
|
#### timezone
|
||||||
|timezone | |支持动态修改 立即生效 |时区;缺省从系统中动态获取当前的时区设置|
|
- 说明:时区
|
||||||
|locale | |支持动态修改 立即生效 |系统区位信息及编码格式,缺省从系统中获取|
|
- 默认值:从系统中动态获取当前的时区设置
|
||||||
|charset | |支持动态修改 立即生效 |字符集编码,缺省从系统中获取|
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.1.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### locale
|
||||||
|
- 说明:系统区位信息及编码格式
|
||||||
|
- 默认值:从系统中获取
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.1.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### charset
|
||||||
|
- 说明:字符集编码
|
||||||
|
- 默认值:从系统中获取
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.1.0.0 版本开始引入
|
||||||
|
|
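
上述区域参数均支持动态修改,例如可在客户端通过 SQL 设置时区(取值为示例):

```SQL
ALTER LOCAL 'timezone' 'Asia/Shanghai';
```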
||||||
### 存储相关
|
### 存储相关
|
||||||
|参数名称|支持版本|动态修改|参数含义|
|
|
||||||
|----------------------|----------|-------------------------|-------------|
|
#### tempDir
|
||||||
|tempDir | |支持动态修改 立即生效 |指定所有运行过程中的临时文件生成的目录,Linux 平台默认值为 /tmp|
|
- 说明:指定所有运行过程中的临时文件生成的目录
|
||||||
|minimalTmpDirGB | |支持动态修改 立即生效 |tempDir 所指定的临时文件目录所需要保留的最小空间,单位 GB,缺省值:1|
|
- 默认值:Linux 平台默认值为 /tmp
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.1.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### minimalTmpDirGB
|
||||||
|
- 说明:tempDir 所指定的临时文件目录所需要保留的最小空间
|
||||||
|
- 类型:浮点数
|
||||||
|
- 单位:GB
|
||||||
|
- 默认值:1
|
||||||
|
- 最小值:0.001
|
||||||
|
- 最大值:10000000
|
||||||
|
- 动态修改:不支持
|
||||||
|
- 支持版本:从 v3.1.0.0 版本开始引入
|
||||||
|
|
||||||
### 日志相关
|
### 日志相关
|
||||||
|参数名称|支持版本|动态修改|参数含义|
|
|
||||||
|----------------------|----------|-------------------------|-------------|
|
#### logDir
|
||||||
|logDir | |不支持动态修改 |日志文件目录,运行日志将写入该目录,缺省值:/var/log/taos|
|
- 说明:日志文件目录,运行日志将写入该目录
|
||||||
|minimalLogDirGB | |支持动态修改 立即生效 |日志文件夹所在磁盘可用空间大小小于该值时,停止写日志,单位 GB,缺省值:1|
|
- 类型:字符串
|
||||||
|numOfLogLines | |支持动态修改 立即生效 |单个日志文件允许的最大行数,缺省值:10,000,000|
|
- 默认值:/var/log/taos
|
||||||
|asyncLog | |支持动态修改 立即生效 |日志写入模式,0:同步,1:异步,缺省值:1|
|
- 动态修改:不支持
|
||||||
|logKeepDays | |支持动态修改 立即生效 |日志文件的最长保存时间,单位:天,缺省值:0,意味着无限保存,日志文件不会被重命名,也不会有新的日志文件滚动产生,但日志文件的内容有可能会不断滚动,取决于日志文件大小的设置;当设置为大于 0 的值时,当日志文件大小达到设置的上限时会被重命名为 taoslogx.yyy,其中 yyy 为日志文件最后修改的时间戳,并滚动产生新的日志文件|
|
- 支持版本:从 v3.1.0.0 版本开始引入
|
||||||
|debugFlag | |支持动态修改 立即生效 |运行日志开关,131(输出错误和警告日志),135(输出错误、警告和调试日志),143(输出错误、警告、调试和跟踪日志);默认值 131 或 135 (取决于不同模块)|
|
|
||||||
|tmrDebugFlag | |支持动态修改 立即生效 |定时器模块的日志开关,取值范围同上|
|
#### minimalLogDirGB
|
||||||
|uDebugFlag | |支持动态修改 立即生效 |共用功能模块的日志开关,取值范围同上|
|
- 说明:日志文件夹所在磁盘可用空间大小小于该值时,停止写日志
|
||||||
|rpcDebugFlag | |支持动态修改 立即生效 |rpc 模块的日志开关,取值范围同上|
|
- 类型:浮点数
|
||||||
|jniDebugFlag | |支持动态修改 立即生效 |jni 模块的日志开关,取值范围同上|
|
- 单位:GB
|
||||||
|qDebugFlag | |支持动态修改 立即生效 |query 模块的日志开关,取值范围同上|
|
- 默认值:1
|
||||||
|cDebugFlag | |支持动态修改 立即生效 |客户端模块的日志开关,取值范围同上|
|
- 最小值:0.001
|
||||||
|simDebugFlag | |支持动态修改 立即生效 |内部参数,测试工具的日志开关,取值范围同上|
|
- 最大值:10000000
|
||||||
|tqClientDebugFlag|3.3.4.3 后|支持动态修改 立即生效 |客户端模块的日志开关,取值范围同上|
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.1.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### numOfLogLines
|
||||||
|
- 说明:单个日志文件允许的最大行数
|
||||||
|
- 类型:整数
|
||||||
|
- 默认值:10,000,000
|
||||||
|
- 最小值:1000
|
||||||
|
- 最大值:2000000000
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.1.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### asyncLog
|
||||||
|
- 说明:日志写入模式
|
||||||
|
- 类型:整数;0:同步,1:异步
|
||||||
|
- 默认值:1
|
||||||
|
- 最小值:0
|
||||||
|
- 最大值:1
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.1.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### logKeepDays
|
||||||
|
- 说明:日志文件的最长保存时间,小于等于 0 意味着只有两个日志文件相互切换保存日志,超过两个文件保存数量的日志会被删除;当设置为大于 0 的值时,当日志文件大小达到设置的上限时会被重命名为 taoslogx.yyy,其中 yyy 为日志文件最后修改的时间戳,并滚动产生新的日志文件
|
||||||
|
- 类型:整数
|
||||||
|
- 单位:天
|
||||||
|
- 默认值:0
|
||||||
|
- 最小值:-365000
|
||||||
|
- 最大值:365000
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.1.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### debugFlag
|
||||||
|
- 说明:运行日志开关,该参数的设置会影响所有模块的开关,后设置的参数起效
|
||||||
|
- 类型:整数
|
||||||
|
- 取值范围:131(输出错误和警告日志),135(输出错误、警告和调试日志),143(输出错误、警告、调试和跟踪日志)
|
||||||
|
- 默认值:131 或 135 (取决于不同模块)
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.1.0.0 版本开始引入
|
||||||
|
|
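
以 debugFlag 为例,客户端的日志开关可通过 SQL 动态调整(135 表示输出错误、警告和调试日志):

```SQL
ALTER LOCAL 'debugFlag' '135';
```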
||||||
|
#### tmrDebugFlag
|
||||||
|
- 说明:定时器模块的日志开关
|
||||||
|
- 类型:整数
|
||||||
|
- 取值范围:同上
|
||||||
|
- 默认值:131
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.1.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### uDebugFlag
|
||||||
|
- 说明:共用功能模块的日志开关
|
||||||
|
- 类型:整数
|
||||||
|
- 取值范围:同上
|
||||||
|
- 默认值:131
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.1.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### rpcDebugFlag
|
||||||
|
- 说明:rpc 模块的日志开关
|
||||||
|
- 类型:整数
|
||||||
|
- 取值范围:同上
|
||||||
|
- 默认值:131
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.1.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### jniDebugFlag
|
||||||
|
- 说明:jni 模块的日志开关
|
||||||
|
- 类型:整数
|
||||||
|
- 取值范围:同上
|
||||||
|
- 默认值:131
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.1.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### qDebugFlag
|
||||||
|
- 说明:query 模块的日志开关
|
||||||
|
- 类型:整数
|
||||||
|
- 取值范围:同上
|
||||||
|
- 默认值:131
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.1.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### cDebugFlag
|
||||||
|
- 说明:客户端模块的日志开关
|
||||||
|
- 类型:整数
|
||||||
|
- 取值范围:同上
|
||||||
|
- 默认值:131
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.1.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### simDebugFlag
|
||||||
|
- 说明:测试工具的日志开关 `内部参数`
|
||||||
|
- 类型:整数
|
||||||
|
- 取值范围:同上
|
||||||
|
- 默认值:131
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.1.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### tqClientDebugFlag
|
||||||
|
- 说明:tq 客户端模块的日志开关 `内部参数`
|
||||||
|
- 类型:整数
|
||||||
|
- 取值范围:同上
|
||||||
|
- 默认值:131
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.3.4.3 版本开始引入
|
||||||
|
|
||||||
### 调试相关
|
### 调试相关
|
||||||
|参数名称|支持版本|动态修改|参数含义|
|
|
||||||
|----------------------|----------|-------------------------|-------------|
|
#### crashReporting
|
||||||
|crashReporting | |支持动态修改 立即生效 |是否上传 crash 到 telemetry,0:不上传,1:上传;缺省值:1|
|
- 说明:是否上传 crash 到 telemetry
|
||||||
|enableCoreFile | |支持动态修改 立即生效 |crash 时是否生成 core 文件,0:不生成,1:生成;缺省值:1|
|
- 类型:整数;0:不上传,1:上传;
|
||||||
|assert | |不支持动态修改 |断言控制开关,缺省值:0|
|
- 默认值:1
|
||||||
|configDir | |不支持动态修改 |配置文件所在目录|
|
- 最小值:0
|
||||||
|scriptDir | |不支持动态修改 |内部参数,测试用例的目录|
|
- 最大值:1
|
||||||
|randErrorChance |3.3.3.0 后|不支持动态修改 |内部参数,用于随机失败测试|
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|randErrorDivisor |3.3.3.0 后|不支持动态修改 |内部参数,用于随机失败测试|
|
- 支持版本:从 v3.1.0.0 版本开始引入
|
||||||
|randErrorScope |3.3.3.0 后|不支持动态修改 |内部参数,用于随机失败测试|
|
|
||||||
|safetyCheckLevel |3.3.3.0 后|不支持动态修改 |内部参数,用于随机失败测试|
|
#### enableCoreFile
|
||||||
|simdEnable |3.3.4.3 后|不支持动态修改 |内部参数,用于测试 SIMD 加速|
|
- 说明:crash 时是否生成 core 文件
|
||||||
|AVX512Enable |3.3.4.3 后|不支持动态修改 |内部参数,用于测试 AVX512 加速|
|
- 类型:整数;0:不生成,1:生成;
|
||||||
|bypassFlag |3.3.4.5 后|支持动态修改 立即生效 |内部参数,用于短路测试,0:正常写入,1:写入消息在 taos 客户端发送 RPC 消息前返回,2:写入消息在 taosd 服务端收到 RPC 消息后返回,4:写入消息在 taosd 服务端写入内存缓存前返回,8:写入消息在 taosd 服务端数据落盘前返回;缺省值:0|
|
- 默认值:1
|
||||||
|
- 最小值:0
|
||||||
|
- 最大值:1
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.1.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### assert
|
||||||
|
- 说明:断言控制开关
|
||||||
|
- 类型:整数;0:关闭,1:开启
|
||||||
|
- 默认值:0
|
||||||
|
- 最小值:0
|
||||||
|
- 最大值:1
|
||||||
|
- 动态修改:不支持
|
||||||
|
- 支持版本:从 v3.1.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### configDir
|
||||||
|
- 说明:配置文件所在目录
|
||||||
|
- 类型:字符串
|
||||||
|
- 动态修改:不支持
|
||||||
|
- 支持版本:从 v3.1.0.0 版本开始引入
|
||||||
|
|
||||||
|
#### scriptDir
|
||||||
|
- 说明:测试工具的脚本目录 `内部参数`
|
||||||
|
- 类型:字符串
|
||||||
|
- 动态修改:不支持
|
||||||
|
- 支持版本:从 v3.3.3.0 版本开始引入
|
||||||
|
|
||||||
|
#### randErrorChance
|
||||||
|
- 说明:用于随机失败测试 `内部参数`
|
||||||
|
- 动态修改:不支持
|
||||||
|
- 支持版本:从 v3.3.3.0 版本开始引入
|
||||||
|
|
||||||
|
#### randErrorDivisor
|
||||||
|
- 说明:用于随机失败测试 `内部参数`
|
||||||
|
- 动态修改:不支持
|
||||||
|
- 支持版本:从 v3.3.3.0 版本开始引入
|
||||||
|
|
||||||
|
#### randErrorScope
|
||||||
|
- 说明:用于随机失败测试 `内部参数`
|
||||||
|
- 动态修改:不支持
|
||||||
|
- 支持版本:从 v3.3.3.0 版本开始引入
|
||||||
|
|
||||||
|
#### safetyCheckLevel
|
||||||
|
- 说明:用于随机失败测试 `内部参数`
|
||||||
|
- 动态修改:不支持
|
||||||
|
- 支持版本:从 v3.3.3.0 版本开始引入
|
||||||
|
|
||||||
|
#### simdEnable
|
||||||
|
- 说明:用于测试 SIMD 加速 `内部参数`
|
||||||
|
- 动态修改:不支持
|
||||||
|
- 支持版本:从 v3.3.4.3 版本开始引入
|
||||||
|
|
||||||
|
#### AVX512Enable
|
||||||
|
- 说明:用于测试 AVX512 加速 `内部参数`
|
||||||
|
- 动态修改:不支持
|
||||||
|
- 支持版本:从 v3.3.4.3 版本开始引入
|
||||||
|
|
||||||
|
#### bypassFlag
|
||||||
|
- 说明:用于短路测试 `内部参数`
|
||||||
|
- 类型:整数
|
||||||
|
- 取值范围:0:正常写入,1:写入消息在 taos 客户端发送 RPC 消息前返回,2:写入消息在 taosd 服务端收到 RPC 消息后返回,4:写入消息在 taosd 服务端写入内存缓存前返回,8:写入消息在 taosd 服务端数据落盘前返回
|
||||||
|
- 默认值:0
|
||||||
|
- 动态修改:支持通过 SQL 修改,立即生效
|
||||||
|
- 支持版本:从 v3.3.4.5 版本开始引入
|
||||||
|
|
||||||
### SHELL 相关
|
### SHELL 相关
|
||||||
|参数名称|支持版本|动态修改|参数含义|
|
|
||||||
|----------------------|----------|-------------------------|-------------|
|
#### enableScience
|
||||||
|enableScience | |不支持动态修改 |是否开启科学计数法显示浮点数;0:不开始,1:开启;缺省值:1|
|
- 说明:是否开启科学计数法显示浮点数
|
||||||
|
- 类型:整数;0:不开启,1:开启
|
||||||
|
- 默认值:1
|
||||||
|
- 最小值:0
|
||||||
|
- 最大值:1
|
||||||
|
- 动态修改:不支持
|
||||||
|
- 支持版本:从 v3.1.0.0 版本开始引入
|
||||||
|
|
||||||
## API
|
## API
|
||||||
|
|
||||||
|
|
|
@ -11,42 +11,109 @@ import Icinga2 from "./_icinga2.mdx"
|
||||||
import TCollector from "./_tcollector.mdx"
|
import TCollector from "./_tcollector.mdx"
|
||||||
|
|
||||||
taosAdapter 是一个 TDengine 的配套工具,是 TDengine 集群和应用程序之间的桥梁和适配器。它提供了一种易于使用和高效的方式来直接从数据收集代理软件(如 Telegraf、StatsD、collectd 等)摄取数据。它还提供了 InfluxDB/OpenTSDB 兼容的数据摄取接口,允许 InfluxDB/OpenTSDB 应用程序无缝移植到 TDengine。
|
taosAdapter 是一个 TDengine 的配套工具,是 TDengine 集群和应用程序之间的桥梁和适配器。它提供了一种易于使用和高效的方式来直接从数据收集代理软件(如 Telegraf、StatsD、collectd 等)摄取数据。它还提供了 InfluxDB/OpenTSDB 兼容的数据摄取接口,允许 InfluxDB/OpenTSDB 应用程序无缝移植到 TDengine。
|
||||||
|
TDengine 的各语言连接器通过 WebSocket 接口与 TDengine 进行通信,因此必须安装 taosAdapter。
|
||||||
|
|
||||||
taosAdapter 提供以下功能:
|
架构图如下:
|
||||||
|
|
||||||
- Websocket/RESTful 接口
|
|
||||||
- 兼容 InfluxDB v1 写接口
|
|
||||||
- 兼容 OpenTSDB JSON 和 telnet 格式写入
|
|
||||||
- 无缝连接到 Telegraf
|
|
||||||
- 无缝连接到 collectd
|
|
||||||
- 无缝连接到 StatsD
|
|
||||||
- 支持 Prometheus remote_read 和 remote_write
|
|
||||||
- 获取 table 所在的虚拟节点组(VGroup)的 VGroup ID
|
|
||||||
|
|
||||||
## taosAdapter 架构图
|
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
## taosAdapter 部署方法
|
## 功能列表
|
||||||
|
|
||||||
### 安装 taosAdapter
|
taosAdapter 提供了以下功能:
|
||||||
|
|
||||||
|
- WebSocket 接口:
|
||||||
|
支持通过 WebSocket 协议执行 SQL、无模式数据写入、参数绑定和数据订阅功能。
|
||||||
|
- 兼容 InfluxDB v1 写接口:
|
||||||
|
[https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/](https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/)
|
||||||
|
- 兼容 OpenTSDB JSON 和 telnet 格式写入:
|
||||||
|
- [http://opentsdb.net/docs/build/html/api_http/put.html](http://opentsdb.net/docs/build/html/api_http/put.html)
|
||||||
|
- [http://opentsdb.net/docs/build/html/api_telnet/put.html](http://opentsdb.net/docs/build/html/api_telnet/put.html)
|
||||||
|
- collectd 数据写入:
|
||||||
|
collectd 是一个系统统计收集守护程序,请访问 [https://collectd.org/](https://collectd.org/) 了解更多信息。
|
||||||
|
- StatsD 数据写入:
|
||||||
|
StatsD 是一个简单而强大的统计信息汇总的守护程序。请访问 [https://github.com/statsd/statsd](https://github.com/statsd/statsd) 了解更多信息。
|
||||||
|
- icinga2 OpenTSDB writer 数据写入:
|
||||||
|
icinga2 是一个收集检查结果指标和性能数据的软件。请访问 [https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer](https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer) 了解更多信息。
|
||||||
|
- TCollector 数据写入:
|
||||||
|
TCollector 是一个客户端进程,从本地收集器收集数据,并将数据推送到 OpenTSDB。请访问 [http://opentsdb.net/docs/build/html/user_guide/utilities/tcollector.html](http://opentsdb.net/docs/build/html/user_guide/utilities/tcollector.html) 了解更多信息。
|
||||||
|
- node_exporter 采集写入:
|
||||||
|
node_export 是一个机器指标的导出器。请访问 [https://github.com/prometheus/node_exporter](https://github.com/prometheus/node_exporter) 了解更多信息。
|
||||||
|
- Prometheus remote_read 和 remote_write:
|
||||||
|
remote_read 和 remote_write 是 Prometheus 数据读写分离的集群方案。请访问 [https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis](https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis) 了解更多信息。
|
||||||
|
- RESTful 接口:
|
||||||
|
[RESTful API](../../connector/rest-api)
|
||||||
|
|
||||||
|
### WebSocket 接口
|
||||||
|
|
||||||
|
各语言连接器通过 taosAdapter 的 WebSocket 接口,能够实现 SQL 执行、无模式写入、参数绑定和数据订阅功能。参考[开发指南](../../../develop/connect/#websocket-连接)。
|
||||||
|
|
||||||
|
### 兼容 InfluxDB v1 写接口
|
||||||
|
|
||||||
|
您可以使用任何支持 HTTP 协议的客户端访问 Restful 接口地址 `http://<fqdn>:6041/influxdb/v1/write` 来写入 InfluxDB 兼容格式的数据到 TDengine。
|
||||||
|
|
||||||
|
支持 InfluxDB 参数如下:
|
||||||
|
|
||||||
|
- `db` 指定 TDengine 使用的数据库名
|
||||||
|
- `precision` TDengine 使用的时间精度
|
||||||
|
- `u` TDengine 用户名
|
||||||
|
- `p` TDengine 密码
|
||||||
|
- `ttl` 自动创建的子表生命周期,以子表的第一条数据的 TTL 参数为准,不可更新。更多信息请参考[创建表文档](../../taos-sql/table/#创建表)的 TTL 参数。
|
||||||
|
|
||||||
|
注意: 目前不支持 InfluxDB 的 token 验证方式,仅支持 Basic 验证和查询参数验证。
|
||||||
|
示例:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
curl --request POST http://127.0.0.1:6041/influxdb/v1/write?db=test --user "root:taosdata" --data-binary "measurement,host=host1 field1=2i,field2=2.0 1577836800000000000"
|
||||||
|
```
|
||||||
|
|
||||||
|
### 兼容 OpenTSDB JSON 和 telnet 格式写入
|
||||||
|
|
||||||
|
您可以使用任何支持 HTTP 协议的客户端访问 Restful 接口地址 `http://<fqdn>:6041/<APIEndPoint>` 来写入 OpenTSDB 兼容格式的数据到 TDengine。EndPoint 如下:
|
||||||
|
|
||||||
|
```text
|
||||||
|
/opentsdb/v1/put/json/<db>
|
||||||
|
/opentsdb/v1/put/telnet/<db>
|
||||||
|
```
|
||||||
|
|
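
一个向上述 telnet 接口写入数据的示意(数据库名 test 与指标值均为示例,需本机已启动 taosAdapter):

```shell
curl --request POST http://127.0.0.1:6041/opentsdb/v1/put/telnet/test --user "root:taosdata" --data-binary "sys.cpu.usage 1626006833 25.3 host=web01"
```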
||||||
|
### collectd 数据写入
|
||||||
|
|
||||||
|
<CollectD />
|
||||||
|
|
||||||
|
### StatsD 数据写入
|
||||||
|
|
||||||
|
<StatsD />
|
||||||
|
|
||||||
|
### icinga2 OpenTSDB writer 数据写入
|
||||||
|
|
||||||
|
<Icinga2 />
|
||||||
|
|
||||||
|
### TCollector 数据写入
|
||||||
|
|
||||||
|
<TCollector />
|
||||||
|
|
||||||
|
### node_exporter 采集写入
|
||||||
|
|
||||||
|
node_exporter 是 Prometheus 提供的导出器,用于采集由 \*NIX 内核暴露的硬件和操作系统指标
|
||||||
|
|
||||||
|
- 启用 taosAdapter 的配置 node_exporter.enable
|
||||||
|
- 设置 node_exporter 的相关配置
|
||||||
|
- 重新启动 taosAdapter
|
||||||
|
|
||||||
|
### Prometheus remote_read 和 remote_write
|
||||||
|
|
||||||
|
<Prometheus />
|
||||||
|
|
||||||
|
### RESTful 接口
|
||||||
|
|
||||||
|
您可以使用任何支持 HTTP 协议的客户端通过访问 RESTful 接口地址 `http://<fqdn>:6041/rest/sql` 来写入数据到 TDengine 或从 TDengine 中查询数据。细节请参考[REST API 文档](../../connector/rest-api/)。
|
||||||
|
|
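
一个通过 RESTful 接口执行 SQL 的示意(假设 taosAdapter 运行在本机默认端口,用户名密码为默认值):

```shell
curl -u root:taosdata -d "show databases;" http://127.0.0.1:6041/rest/sql
```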
||||||
|
## 安装
|
||||||
|
|
||||||
taosAdapter 是 TDengine 服务端软件 的一部分,如果您使用 TDengine server 您不需要任何额外的步骤来安装 taosAdapter。您可以从[涛思数据官方网站](https://docs.taosdata.com/releases/tdengine/)下载 TDengine server 安装包。如果需要将 taosAdapter 分离部署在 TDengine server 之外的服务器上,则应该在该服务器上安装完整的 TDengine 来安装 taosAdapter。如果您需要使用源代码编译生成 taosAdapter,您可以参考[构建 taosAdapter](https://github.com/taosdata/taosadapter/blob/3.0/BUILD-CN.md)文档。
|
taosAdapter 是 TDengine 服务端软件 的一部分,如果您使用 TDengine server 您不需要任何额外的步骤来安装 taosAdapter。您可以从[涛思数据官方网站](https://docs.taosdata.com/releases/tdengine/)下载 TDengine server 安装包。如果需要将 taosAdapter 分离部署在 TDengine server 之外的服务器上,则应该在该服务器上安装完整的 TDengine 来安装 taosAdapter。如果您需要使用源代码编译生成 taosAdapter,您可以参考[构建 taosAdapter](https://github.com/taosdata/taosadapter/blob/3.0/BUILD-CN.md)文档。
|
||||||
|
|
||||||
### 启动/停止 taosAdapter
|
安装完成后使用命令 `systemctl start taosadapter` 可以启动 taosAdapter 服务。
|
||||||
|
|
||||||
在 Linux 系统上 taosAdapter 服务默认由 systemd 管理。使用命令 `systemctl start taosadapter` 可以启动 taosAdapter 服务。使用命令 `systemctl stop taosadapter` 可以停止 taosAdapter 服务。
|
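
systemd 管理下的常用操作示意:

```shell
systemctl start taosadapter     # 启动服务
systemctl stop taosadapter      # 停止服务
systemctl status taosadapter    # 查看运行状态
```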
## 配置
|
||||||
|
|
||||||
### 移除 taosAdapter
|
|
||||||
|
|
||||||
使用命令 rmtaos 可以移除包括 taosAdapter 在内的 TDengine server 软件。
|
|
||||||
|
|
||||||
### 升级 taosAdapter
|
|
||||||
|
|
||||||
taosAdapter 和 TDengine server 需要使用相同版本。请通过升级 TDengine server 来升级 taosAdapter。
|
|
||||||
与 taosd 分离部署的 taosAdapter 必须通过升级其所在服务器的 TDengine server 才能得到升级。
|
|
||||||
|
|
||||||
## taosAdapter 参数列表
|
|
||||||
|
|
||||||
taosAdapter 支持通过命令行参数、环境变量和配置文件来进行配置。默认配置文件是 /etc/taos/taosadapter.toml。
|
taosAdapter 支持通过命令行参数、环境变量和配置文件来进行配置。默认配置文件是 /etc/taos/taosadapter.toml。
|
||||||
|
|
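
三种配置方式的优先级通常为:命令行参数 > 环境变量 > 配置文件。以下为通过环境变量与命令行参数启动的示意(取值均为示例):

```shell
# 通过环境变量设置日志级别后启动
TAOS_ADAPTER_LOG_LEVEL=debug taosadapter
# 或通过命令行参数指定配置文件
taosadapter -c /etc/taos/taosadapter.toml
```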
||||||
|
@ -75,6 +142,7 @@ Usage of taosAdapter:
|
||||||
--instanceId int instance ID. Env "TAOS_ADAPTER_INSTANCE_ID" (default 32)
|
--instanceId int instance ID. Env "TAOS_ADAPTER_INSTANCE_ID" (default 32)
|
||||||
    --log.compress                        whether to compress old log. Env "TAOS_ADAPTER_LOG_COMPRESS"
    --log.enableRecordHttpSql             whether to record http sql. Env "TAOS_ADAPTER_LOG_ENABLE_RECORD_HTTP_SQL"
+   --log.keepDays uint                   log retention days, must be a positive integer. Env "TAOS_ADAPTER_LOG_KEEP_DAYS" (default 30)
    --log.level string                    log level (trace debug info warning error). Env "TAOS_ADAPTER_LOG_LEVEL" (default "info")
    --log.path string                     log path. Env "TAOS_ADAPTER_LOG_PATH" (default "/var/log/taos")
    --log.reservedDiskSize string         reserved disk size for log dir (KB MB GB), must be a positive integer. Env "TAOS_ADAPTER_LOG_RESERVED_DISK_SIZE" (default "1GB")
@@ -85,6 +153,8 @@ Usage of taosAdapter:
    --log.sqlRotationSize string          record sql log rotation size(KB MB GB), must be a positive integer. Env "TAOS_ADAPTER_LOG_SQL_ROTATION_SIZE" (default "1GB")
    --log.sqlRotationTime duration        record sql log rotation time. Env "TAOS_ADAPTER_LOG_SQL_ROTATION_TIME" (default 24h0m0s)
    --logLevel string                     log level (trace debug info warning error). Env "TAOS_ADAPTER_LOG_LEVEL" (default "info")
+   --maxAsyncConcurrentLimit int         The maximum number of concurrent calls allowed for the C asynchronous method. 0 means use CPU core count. Env "TAOS_ADAPTER_MAX_ASYNC_CONCURRENT_LIMIT"
+   --maxSyncConcurrentLimit int          The maximum number of concurrent calls allowed for the C synchronized method. 0 means use CPU core count. Env "TAOS_ADAPTER_MAX_SYNC_CONCURRENT_LIMIT"
    --monitor.collectDuration duration    Set monitor duration. Env "TAOS_ADAPTER_MONITOR_COLLECT_DURATION" (default 3s)
    --monitor.disable                     Whether to disable monitoring. Env "TAOS_ADAPTER_MONITOR_DISABLE" (default true)
    --monitor.identity string             The identity of the current instance, or 'hostname:port' if it is empty. Env "TAOS_ADAPTER_MONITOR_IDENTITY"
@@ -126,6 +196,9 @@ Usage of taosAdapter:
    --prometheus.enable                   enable prometheus. Env "TAOS_ADAPTER_PROMETHEUS_ENABLE" (default true)
    --restfulRowLimit int                 restful returns the maximum number of rows (-1 means no limit). Env "TAOS_ADAPTER_RESTFUL_ROW_LIMIT" (default -1)
    --smlAutoCreateDB                     Whether to automatically create db when writing with schemaless. Env "TAOS_ADAPTER_SML_AUTO_CREATE_DB"
+   --ssl.certFile string                 ssl cert file path. Env "TAOS_ADAPTER_SSL_CERT_FILE"
+   --ssl.enable                          enable ssl. Env "TAOS_ADAPTER_SSL_ENABLE"
+   --ssl.keyFile string                  ssl key file path. Env "TAOS_ADAPTER_SSL_KEY_FILE"
    --statsd.allowPendingMessages int     statsd allow pending messages. Env "TAOS_ADAPTER_STATSD_ALLOW_PENDING_MESSAGES" (default 50000)
    --statsd.db string                    statsd db name. Env "TAOS_ADAPTER_STATSD_DB" (default "statsd")
    --statsd.deleteCounters               statsd delete counter cache after gather. Env "TAOS_ADAPTER_STATSD_DELETE_COUNTERS" (default true)
@@ -152,27 +225,44 @@ Usage of taosAdapter:
-V, --version                            Print the version and exit
```

-备注:
+示例配置文件参见 [example/config/taosadapter.toml](https://github.com/taosdata/taosadapter/blob/3.0/example/config/taosadapter.toml)。
-使用浏览器进行接口调用请根据实际情况设置如下跨源资源共享(CORS)参数:

-```text
+### 跨域配置
-AllowAllOrigins
-AllowOrigins
+使用浏览器进行接口调用时,请根据实际情况设置如下跨域(CORS)参数:
-AllowHeaders
-ExposeHeaders
+- **`cors.allowAllOrigins`**:是否允许所有来源访问,默认为 `true`。
-AllowCredentials
+- **`cors.allowOrigins`**:允许跨域访问的来源列表,支持多个来源,以逗号分隔。
-AllowWebSockets
+- **`cors.allowHeaders`**:允许跨域访问的请求头列表,支持多个请求头,以逗号分隔。
-```
+- **`cors.exposeHeaders`**:允许跨域访问的响应头列表,支持多个响应头,以逗号分隔。
+- **`cors.allowCredentials`**:是否允许跨域请求包含用户凭证,如 cookies、HTTP 认证信息或客户端 SSL 证书。
+- **`cors.allowWebSockets`**:是否允许 WebSockets 连接。

如果不通过浏览器进行接口调用无需关心这几项配置。

+以上配置对以下接口生效:
+
+* RESTful 接口请求
+* WebSocket 接口请求
+* InfluxDB v1 写接口
+* OpenTSDB HTTP 写入接口

关于 CORS 协议细节请参考:[https://www.w3.org/wiki/CORS_Enabled](https://www.w3.org/wiki/CORS_Enabled) 或 [https://developer.mozilla.org/zh-CN/docs/Web/HTTP/CORS](https://developer.mozilla.org/zh-CN/docs/Web/HTTP/CORS)。

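作为参考,下面给出一个跨域相关的配置片段(示意性写法:分组名与键名按上文的 `cors.*` 参数推断,实际请以官方示例配置文件 taosadapter.toml 为准):

```toml
# 示意:taosadapter.toml 中的跨域(CORS)配置片段
# 注意:此片段为按上文参数名推断的示意写法,具体以示例配置文件为准
[cors]
allowAllOrigins = true        # 允许所有来源访问(默认)
allowOrigins = []             # 允许跨域访问的来源列表
allowHeaders = []             # 允许跨域访问的请求头列表
exposeHeaders = []            # 允许跨域访问的响应头列表
allowCredentials = false      # 是否允许跨域请求携带用户凭证
allowWebSockets = false       # 是否允许 WebSockets 连接
```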
-示例配置文件参见 [example/config/taosadapter.toml](https://github.com/taosdata/taosadapter/blob/3.0/example/config/taosadapter.toml)。
+### 连接池配置

-### 连接池参数说明
+taosAdapter 使用连接池管理与 TDengine 的连接,以提高并发性能和资源利用率。连接池配置对以下接口生效,且以下接口共享一个连接池:

-在使用 RESTful 接口请求时,系统将通过连接池管理 TDengine 连接。连接池可通过以下参数进行配置:
+* RESTful 接口请求
+* InfluxDB v1 写接口
+* OpenTSDB JSON 和 telnet 格式写入
+* Telegraf 数据写入
+* collectd 数据写入
+* StatsD 数据写入
+* 采集 node_exporter 数据写入
+* Prometheus remote_read 和 remote_write
+
+连接池的配置参数如下:

- **`pool.maxConnect`**:连接池允许的最大连接数,默认值为 2 倍 CPU 核心数。建议保持默认设置。
- **`pool.maxIdle`**:连接池中允许的最大空闲连接数,默认与 `pool.maxConnect` 相同。建议保持默认设置。
@@ -180,154 +270,139 @@ AllowWebSockets
- **`pool.waitTimeout`**:从连接池获取连接的超时时间,默认设置为 60 秒。如果在超时时间内未能获取连接,将返回 HTTP 状态码 503。该参数从版本 3.3.3.0 开始提供。
- **`pool.maxWait`**:连接池中等待获取连接的请求数上限,默认值为 0,表示不限制。当排队请求数超过此值时,新的请求将返回 HTTP 状态码 503。该参数从版本 3.3.3.0 开始提供。

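`pool.waitTimeout` 与 `pool.maxWait` 的 503 行为可以用下面的 Python 片段来理解(纯示意性伪实现,并非 taosAdapter 源码;类名与返回值均为假设):

```python
import threading


class PoolSketch:
    """示意:带 waitTimeout 与 maxWait 语义的连接池(非 taosAdapter 实现)。"""

    def __init__(self, max_connect, wait_timeout, max_wait=0):
        self._sem = threading.Semaphore(max_connect)  # 最多 max_connect 个连接
        self._wait_timeout = wait_timeout             # 对应 pool.waitTimeout(秒)
        self._max_wait = max_wait                     # 对应 pool.maxWait,0 表示不限制
        self._waiting = 0
        self._lock = threading.Lock()

    def acquire(self):
        """返回 200 表示拿到连接;排队超限或等待超时返回 503。"""
        with self._lock:
            if self._max_wait > 0 and self._waiting >= self._max_wait:
                return 503                            # 排队请求数超过 pool.maxWait
            self._waiting += 1
        ok = self._sem.acquire(timeout=self._wait_timeout)
        with self._lock:
            self._waiting -= 1
        return 200 if ok else 503                     # 超时未拿到连接同样返回 503

    def release(self):
        self._sem.release()
```

例如 `max_connect=1` 时,第二个并发请求会在 `wait_timeout` 内等待空闲连接,等不到即得到 503。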
-## 功能列表
+### HTTP 返回码配置

-- RESTful 接口
+taosAdapter 通过参数 `httpCodeServerError` 来控制当底层 C 接口返回错误时,是否在 RESTful 接口请求中返回非 200 的 HTTP 状态码。当设置为 `true` 时,taosAdapter 会根据 C 接口返回的错误码映射为相应的 HTTP 状态码。具体映射规则请参考 [HTTP 响应码](../../connector/rest-api/#http-响应码)。
-  [RESTful API](../../connector/rest-api)
-- 兼容 InfluxDB v1 写接口
-  [https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/](https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/)
-- 兼容 OpenTSDB JSON 和 telnet 格式写入
-  - [http://opentsdb.net/docs/build/html/api_http/put.html](http://opentsdb.net/docs/build/html/api_http/put.html)
-  - [http://opentsdb.net/docs/build/html/api_telnet/put.html](http://opentsdb.net/docs/build/html/api_telnet/put.html)
-- 与 collectd 无缝连接。
-  collectd 是一个系统统计收集守护程序,请访问 [https://collectd.org/](https://collectd.org/) 了解更多信息。
-- Seamless connection with StatsD。
-  StatsD 是一个简单而强大的统计信息汇总的守护程序。请访问 [https://github.com/statsd/statsd](https://github.com/statsd/statsd) 了解更多信息。
-- 与 icinga2 的无缝连接。
-  icinga2 是一个收集检查结果指标和性能数据的软件。请访问 [https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer](https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer) 了解更多信息。
-- 与 tcollector 无缝连接。
-  TCollector是一个客户端进程,从本地收集器收集数据,并将数据推送到 OpenTSDB。请访问 [http://opentsdb.net/docs/build/html/user_guide/utilities/tcollector.html](http://opentsdb.net/docs/build/html/user_guide/utilities/tcollector.html) 了解更多信息。
-- 无缝连接 node_exporter。
-  node_export 是一个机器指标的导出器。请访问 [https://github.com/prometheus/node_exporter](https://github.com/prometheus/node_exporter) 了解更多信息。
-- 支持 Prometheus remote_read 和 remote_write。
-  remote_read 和 remote_write 是 Prometheus 数据读写分离的集群方案。请访问[https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis](https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis) 了解更多信息。
-- 获取 table 所在的虚拟节点组(VGroup)的 VGroup ID。

-## 接口
+该配置只会影响 **RESTful 接口**。

-### TDengine RESTful 接口
+**参数说明**

-您可以使用任何支持 http 协议的客户端通过访问 RESTful 接口地址 `http://<fqdn>:6041/rest/sql` 来写入数据到 TDengine 或从 TDengine 中查询数据。细节请参考[REST API 文档](../../connector/rest-api/)。
+- **`httpCodeServerError`**:
+  - **设置为 `true` 时**:根据 C 接口返回的错误码映射为相应的 HTTP 状态码。
+  - **设置为 `false` 时**:无论 C 接口返回什么错误,始终返回 HTTP 状态码 `200`(默认值)。

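这一开关的效果可以用下面的 Python 片段示意(仅为帮助理解的伪实现;函数名与具体映射均为假设,真实映射规则以 REST API 文档的"HTTP 响应码"为准):

```python
def response_status(c_error_code, http_code_server_error):
    """示意 httpCodeServerError 的行为:False 时恒为 200,True 时按错误码映射。"""
    if c_error_code == 0:           # C 接口无错误
        return 200
    if not http_code_server_error:  # 默认行为:错误信息放在响应体里,状态码仍为 200
        return 200
    # 假设的简化映射:把所有 C 接口错误都归为 500;真实规则按错误码细分
    return 500
```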
-### InfluxDB

-您可以使用任何支持 http 协议的客户端访问 Restful 接口地址 `http://<fqdn>:6041/influxdb/v1/write` 来写入 InfluxDB 兼容格式的数据到 TDengine。
+### 内存限制配置

-支持 InfluxDB 参数如下:
+taosAdapter 将监测自身运行过程中内存使用率并通过两个阈值进行调节。有效值范围为 1 到 100 的整数,单位为系统物理内存的百分比。

-- `db` 指定 TDengine 使用的数据库名
+该配置只会影响以下接口:
-- `precision` TDengine 使用的时间精度
-- `u` TDengine 用户名
-- `p` TDengine 密码
-- `ttl` 自动创建的子表生命周期,以子表的第一条数据的 TTL 参数为准,不可更新。更多信息请参考[创建表文档](../../taos-sql/table/#创建表)的 TTL 参数。

-注意: 目前不支持 InfluxDB 的 token 验证方式,仅支持 Basic 验证和查询参数验证。
+* RESTful 接口请求
-示例: curl --request POST http://127.0.0.1:6041/influxdb/v1/write?db=test --user "root:taosdata" --data-binary "measurement,host=host1 field1=2i,field2=2.0 1577836800000000000"
+* InfluxDB v1 写接口
-### OpenTSDB
+* OpenTSDB HTTP 写入接口
+* Prometheus remote_read 和 remote_write 接口

-您可以使用任何支持 http 协议的客户端访问 Restful 接口地址 `http://<fqdn>:6041/<APIEndPoint>` 来写入 OpenTSDB 兼容格式的数据到 TDengine。EndPoint 如下:
+**参数说明**

-```text
+- **`pauseQueryMemoryThreshold`**:
-/opentsdb/v1/put/json/<db>
+  - 当内存使用超过此阈值时,taosAdapter 将停止处理查询请求。
-/opentsdb/v1/put/telnet/<db>
+  - 默认值:`70`(即 70% 的系统物理内存)。
-```
+- **`pauseAllMemoryThreshold`**:
+  - 当内存使用超过此阈值时,taosAdapter 将停止处理所有请求(包括写入和查询)。
+  - 默认值:`80`(即 80% 的系统物理内存)。

-### collectd
+当内存使用回落到阈值以下时,taosAdapter 会自动恢复相应功能。

-<CollectD />
+**HTTP 返回内容:**

-### StatsD
+- **超过 `pauseQueryMemoryThreshold` 时**:
+  - HTTP 状态码:`503`
+  - 返回内容:`"query memory exceeds threshold"`
+- **超过 `pauseAllMemoryThreshold` 时**:
+  - HTTP 状态码:`503`
+  - 返回内容:`"memory exceeds threshold"`

-<StatsD />
+**状态检查接口:**

-### icinga2 OpenTSDB writer
+可以通过以下接口检查 taosAdapter 的内存状态:
+
+- **正常状态**:`http://<fqdn>:6041/-/ping` 返回 `code 200`。
+- **内存超过阈值**:
+  - 如果内存超过 `pauseAllMemoryThreshold`,返回 `code 503`。
+  - 如果内存超过 `pauseQueryMemoryThreshold`,且请求参数包含 `action=query`,返回 `code 503`。

-<Icinga2 />
+**相关配置参数:**

-### TCollector
+- **`monitor.collectDuration`**:内存监控间隔,默认值为 `3s`,环境变量为 `TAOS_MONITOR_COLLECT_DURATION`。
+- **`monitor.incgroup`**:是否在容器中运行(容器中运行设置为 `true`),默认值为 `false`,环境变量为 `TAOS_MONITOR_INCGROUP`。
-<TCollector />
+- **`monitor.pauseQueryMemoryThreshold`**:查询请求暂停的内存阈值(百分比),默认值为 `70`,环境变量为 `TAOS_MONITOR_PAUSE_QUERY_MEMORY_THRESHOLD`。
+- **`monitor.pauseAllMemoryThreshold`**:查询和写入请求暂停的内存阈值(百分比),默认值为 `80`,环境变量为 `TAOS_MONITOR_PAUSE_ALL_MEMORY_THRESHOLD`。
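`/-/ping` 对两个阈值的判断逻辑可以用下面的 Python 片段来理解(示意性伪实现,函数名为假设):

```python
def ping_status(mem_percent, action=None,
                pause_query_threshold=70, pause_all_threshold=80):
    """示意 /-/ping 的返回逻辑:
    超过 pauseAllMemoryThreshold 一律返回 503;
    带 action=query 参数时,超过 pauseQueryMemoryThreshold 也返回 503。"""
    if mem_percent > pause_all_threshold:
        return 503
    if action == "query" and mem_percent > pause_query_threshold:
        return 503
    return 200
```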
-### node_exporter
-
-Prometheus 使用的由 \*NIX 内核暴露的硬件和操作系统指标的输出器
-
-- 启用 taosAdapter 的配置 node_exporter.enable
-- 设置 node_exporter 的相关配置
-- 重新启动 taosAdapter
-
-### prometheus
-
-<Prometheus />
-
-### 获取 table 的 VGroup ID
-
-可以 POST 请求 http 接口 `http://<fqdn>:<port>/rest/sql/<db>/vgid` 获取 table 的 VGroup ID,body 是多个表名 JSON 数组。
-
-样例:获取数据库为 power,表名为 d_bind_1 和 d_bind_2 的 VGroup ID
-
-```shell
-curl --location 'http://127.0.0.1:6041/rest/sql/power/vgid' \
---user 'root:taosdata' \
---data '["d_bind_1","d_bind_2"]'
-```
-
-响应:
-
-```json
-{"code":0,"vgIDs":[153,152]}
-```
-
-## 内存使用优化方法
-
-taosAdapter 将监测自身运行过程中内存使用率并通过两个阈值进行调节。有效值范围为 -1 到 100 的整数,单位为系统物理内存的百分比。
-
-- pauseQueryMemoryThreshold
-- pauseAllMemoryThreshold
-
-当超过 pauseQueryMemoryThreshold 阈值时时停止处理查询请求。
-
-http 返回内容:
-
-- code 503
-- body "query memory exceeds threshold"
-
-当超过 pauseAllMemoryThreshold 阈值时停止处理所有写入和查询请求。
-
-http 返回内容:
-
-- code 503
-- body "memory exceeds threshold"
-
-当内存回落到阈值之下时恢复对应功能。
-
-状态检查接口 `http://<fqdn>:6041/-/ping`
-
-- 正常返回 `code 200`
-- 无参数 如果内存超过 pauseAllMemoryThreshold 将返回 `code 503`
-- 请求参数 `action=query` 如果内存超过 pauseQueryMemoryThreshold 或 pauseAllMemoryThreshold 将返回 `code 503`
-
-对应配置参数
-
-```text
-monitor.collectDuration 监测间隔 环境变量 "TAOS_MONITOR_COLLECT_DURATION" (默认值 3s)
-monitor.incgroup 是否是cgroup中运行(容器中运行设置为 true) 环境变量 "TAOS_MONITOR_INCGROUP"
-monitor.pauseAllMemoryThreshold 不再进行插入和查询的内存阈值 环境变量 "TAOS_MONITOR_PAUSE_ALL_MEMORY_THRESHOLD" (默认值 80)
-monitor.pauseQueryMemoryThreshold 不再进行查询的内存阈值 环境变量 "TAOS_MONITOR_PAUSE_QUERY_MEMORY_THRESHOLD" (默认值 70)
-```
-
您可以根据具体项目应用场景和运营策略进行相应调整,并建议使用运营监控软件及时进行系统内存状态监控。负载均衡器也可以通过这个接口检查 taosAdapter 运行状态。

-## taosAdapter 监控指标
+### 无模式写入创建 DB 配置

-taosAdapter 采集 REST/WebSocket 相关请求的监控指标。将监控指标上报给 taosKeeper,这些监控指标会被 taosKeeper 写入监控数据库,默认是 `log` 库,可以在 taoskeeper 配置文件中修改。以下是这些监控指标的详细介绍。
+从 **3.0.4.0 版本** 开始,taosAdapter 提供了参数 `smlAutoCreateDB`,用于控制在 schemaless 协议写入时是否自动创建数据库(DB)。

-#### adapter\_requests 表
+`smlAutoCreateDB` 参数只会影响以下接口:

-`adapter_requests` 记录 taosadapter 监控数据。
+- InfluxDB v1 写接口
+- OpenTSDB JSON 和 telnet 格式写入
+- Telegraf 数据写入
+- collectd 数据写入
+- StatsD 数据写入
+- node_exporter 数据写入
+
+**参数说明**
+
+- **`smlAutoCreateDB`**:
+  - **设置为 `true` 时**:在 schemaless 协议写入时,如果目标数据库不存在,taosAdapter 会自动创建该数据库。
+  - **设置为 `false` 时**:用户需要手动创建数据库,否则写入会失败(默认值)。
+
+### 结果返回条数配置
+
+taosAdapter 提供了参数 `restfulRowLimit`,用于控制 HTTP 接口返回的结果条数。
+
+`restfulRowLimit` 参数只会影响以下接口的返回结果:
+
+- RESTful 接口
+- Prometheus remote_read 接口
+
+**参数说明**
+
+- **`restfulRowLimit`**:
+  - **设置为正整数时**:接口返回的结果条数将不超过该值。
+  - **设置为 `-1` 时**:接口返回的结果条数无限制(默认值)。

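`restfulRowLimit` 对返回行数的截断行为可以示意如下(伪实现,仅帮助理解该参数的语义):

```python
def apply_row_limit(rows, restful_row_limit=-1):
    """示意 restfulRowLimit:-1 表示不限制,正整数表示最多返回该数量的行。"""
    if restful_row_limit < 0:
        return rows
    return rows[:restful_row_limit]
```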
+### 日志配置
+
+1. 可以通过设置 --log.level 参数或者环境变量 TAOS_ADAPTER_LOG_LEVEL 来设置 taosAdapter 日志输出详细程度。有效值包括:panic、fatal、error、warn、warning、info、debug 以及 trace。
+2. 从 **3.3.5.0 版本** 开始,taosAdapter 支持通过 HTTP 接口动态修改日志级别。用户可以通过发送 HTTP PUT 请求到 /config 接口,动态调整日志级别。该接口的验证方式与 /rest/sql 接口相同,请求体中需传入 JSON 格式的配置项键值对。
+
+以下是通过 curl 命令将日志级别设置为 debug 的示例:
+
+```shell
+curl --location --request PUT 'http://127.0.0.1:6041/config' \
+-u root:taosdata \
+--data '{"log.level": "debug"}'
+```

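等价的 Python 写法如下(示意:只用标准库构造与上面 curl 相同的 PUT 请求;真正发送需要本地运行着 taosAdapter,函数名为假设):

```python
import base64
import json
import urllib.request


def build_set_log_level_request(level, host="127.0.0.1", port=6041,
                                user="root", password="taosdata"):
    """构造发往 /config 的 PUT 请求(与 /rest/sql 相同的 Basic 验证)。"""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        f"http://{host}:{port}/config",
        data=json.dumps({"log.level": level}).encode(),
        headers={"Authorization": "Basic " + token},
        method="PUT",
    )


req = build_set_log_level_request("debug")
# 实际发送(需本地运行 taosAdapter):urllib.request.urlopen(req)
```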
+## 服务管理
+
+### 启动/停止 taosAdapter
+
+在 Linux 系统上 taosAdapter 服务默认由 systemd 管理。使用命令 `systemctl start taosadapter` 可以启动 taosAdapter 服务。使用命令 `systemctl stop taosadapter` 可以停止 taosAdapter 服务。使用命令 `systemctl status taosadapter` 来检查 taosAdapter 运行状态。
+
+### 升级 taosAdapter
+
+taosAdapter 和 TDengine server 需要使用相同版本。请通过升级 TDengine server 来升级 taosAdapter。
+与 taosd 分离部署的 taosAdapter 必须通过升级其所在服务器的 TDengine server 才能得到升级。
+
+### 移除 taosAdapter
+
+使用命令 rmtaos 可以移除包括 taosAdapter 在内的 TDengine server 软件。
+
+## 监控指标
+
+taosAdapter 目前仅采集 RESTful/WebSocket 相关请求的监控指标,其他接口暂无监控指标。
+
+taosAdapter 将监控指标上报给 taosKeeper,这些监控指标会被 taosKeeper 写入监控数据库,默认是 `log` 库,可以在 taoskeeper 配置文件中修改。以下是这些监控指标的详细介绍。
+
+`adapter_requests` 表记录 taosAdapter 监控数据,字段如下:
+
| field     | type         | is\_tag | comment                              |
-| :----------------- | :----------- | :------ | :---------------------------------- |
+|:----------|:-------------|:--------|:-------------------------------------|
| ts        | TIMESTAMP    |         | timestamp                            |
| total     | INT UNSIGNED |         | 总请求数                             |
| query     | INT UNSIGNED |         | 查询请求数                           |
@@ -347,35 +422,12 @@ taosAdapter 采集 REST/WebSocket 相关请求的监控指标。将监控指标
| endpoint  | VARCHAR      |         | 请求端点                             |
| req\_type | NCHAR        | TAG     | 请求类型:0 为 REST,1 为 WebSocket  |

-## 结果返回条数限制
+## httpd 升级为 taosAdapter 的变化

-taosAdapter 通过参数 `restfulRowLimit` 来控制结果的返回条数,-1 代表无限制,默认无限制。
+在 TDengine server 2.2.x.x 或更早期版本中,taosd 进程包含一个内嵌的 http 服务(httpd)。如前面所述,taosAdapter 是一个使用 systemd 管理的独立软件,拥有自己的进程。并且两者有一些配置参数和行为是不同的,请见下表:

-该参数控制以下接口返回
-
-- `http://<fqdn>:6041/rest/sql`
-- `http://<fqdn>:6041/prometheus/v1/remote_read/:db`
-
-## 配置 http 返回码
-
-taosAdapter 通过参数 `httpCodeServerError` 来设置当 C 接口返回错误时是否返回非 200 的 http 状态码。当设置为 true 时将根据 C 返回的错误码返回不同 http 状态码。具体见 [HTTP 响应码](../../connector/rest-api/#http-响应码)。
-
-## 配置 schemaless 写入是否自动创建 DB
-
-taosAdapter 从 3.0.4.0 版本开始,提供参数 `smlAutoCreateDB` 来控制在 schemaless 协议写入时是否自动创建 DB。默认值为 false 不自动创建 DB,需要用户手动创建 DB 后进行 schemaless 写入。
-
-## 故障解决
-
-您可以通过命令 `systemctl status taosadapter` 来检查 taosAdapter 运行状态。
-
-您也可以通过设置 --logLevel 参数或者环境变量 TAOS_ADAPTER_LOG_LEVEL 来调节 taosAdapter 日志输出详细程度。有效值包括: panic、fatal、error、warn、warning、info、debug 以及 trace。
-
-## 如何从旧版本 TDengine 迁移到 taosAdapter
-
-在 TDengine server 2.2.x.x 或更早期版本中,taosd 进程包含一个内嵌的 http 服务。如前面所述,taosAdapter 是一个使用 systemd 管理的独立软件,拥有自己的进程。并且两者有一些配置参数和行为是不同的,请见下表:
-
| **#** | **embedded httpd**  | **taosAdapter**                      | **comment**                            |
-| ----- | ------------------- | ------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------ |
+|-------|---------------------|--------------------------------------|----------------------------------------|
| 1     | httpEnableRecordSql | --logLevel=debug                     |                                        |
| 2     | httpMaxThreads      | n/a                                  | taosAdapter 自动管理线程池,无需此参数 |
| 3     | telegrafUseFieldNum | 请参考 taosAdapter telegraf 配置方法 |                                        |

Before Width: | Height: | Size: 104 KiB    After Width: | Height: | Size: 131 KiB
@@ -60,6 +60,7 @@ TDinsight 仪表盘旨在提供 TDengine 相关资源的使用情况和状态,
- **Databases** - 数据库个数。
- **Connections** - 当前连接个数。
- **DNodes/MNodes/VGroups/VNodes**:每种资源的总数和存活数。
+- **Classified Connection Counts**:当前活跃连接数,按用户、应用和 ip 分类。
- **DNodes/MNodes/VGroups/VNodes Alive Percent**:每种资源的存活数/总数的比例,启用告警规则,并在资源存活率(1 分钟内平均健康资源比例)不足 100%时触发。
- **Measuring Points Used**:启用告警规则的测点数用量(社区版无数据,默认情况下是健康的)。

@@ -1,10 +1,10 @@
-### 配置 taosAdapter
+#### 配置 taosAdapter

配置 taosAdapter 接收 collectd 数据的方法:

- 在 taosAdapter 配置文件(默认位置为 /etc/taos/taosadapter.toml)中使能配置项

-```
+```toml
...
[opentsdb_telnet]
enable = true
@@ -19,17 +19,17 @@ password = "taosdata"

其中 taosAdapter 默认写入的数据库名称为 `collectd`,也可以修改 taosAdapter 配置文件 dbs 项来指定不同的名称。user 和 password 填写实际 TDengine 配置的值。修改过配置文件 taosAdapter 需重新启动。

-- 也可以使用 taosAdapter 命令行参数或设置环境变量启动的方式,使能 taosAdapter 接收 collectd 数据功能,具体细节请参考 taosAdapter 的参考手册
+- 使用 taosAdapter 命令行参数或设置环境变量启动的方式,使能 taosAdapter 接收 collectd 数据功能,具体细节请参考 taosAdapter 的参考手册。

+#### 配置 collectd

-### 配置 collectd
-#
collectd 使用插件机制可以以多种形式将采集到的监控数据写入到不同的数据存储软件。TDengine 支持直接采集插件和 write_tsdb 插件。

-#### 配置接收直接采集插件数据
+1. **配置直接采集插件**

修改 collectd 配置文件(默认位置 /etc/collectd/collectd.conf)相关配置项。

-```text
+```xml
LoadPlugin network
<Plugin network>
Server "<taosAdapter's host>" "<port for collectd direct>"
@@ -40,18 +40,18 @@ LoadPlugin network

示例如下:

-```text
+```xml
LoadPlugin network
<Plugin network>
Server "127.0.0.1" "6045"
</Plugin>
```

-#### 配置 write_tsdb 插件数据
+2. **配置 write_tsdb 插件**

修改 collectd 配置文件(默认位置 /etc/collectd/collectd.conf)相关配置项。

-```text
+```xml
LoadPlugin write_tsdb
<Plugin write_tsdb>
<Node>
@@ -64,7 +64,7 @@ LoadPlugin write_tsdb

其中 \<taosAdapter's host> 填写运行 taosAdapter 的服务器域名或 IP 地址。\<port for collectd write_tsdb plugin> 填写 taosAdapter 用于接收 collectd write_tsdb 插件的数据(默认为 6047)。

-```text
+```xml
LoadPlugin write_tsdb
<Plugin write_tsdb>
<Node>
@@ -79,7 +79,7 @@ LoadPlugin write_tsdb

然后重启 collectd:

-```
+```shell
systemctl restart collectd
```

@@ -1,10 +1,10 @@
-### 配置 taosAdapter
+#### 配置 taosAdapter

配置 taosAdapter 接收 icinga2 数据的方法:

- 在 taosAdapter 配置文件(默认位置 /etc/taos/taosadapter.toml)中使能配置项

-```
+```toml
...
[opentsdb_telnet]
enable = true
@@ -19,14 +19,14 @@ password = "taosdata"

其中 taosAdapter 默认写入的数据库名称为 `icinga2`,也可以修改 taosAdapter 配置文件 dbs 项来指定不同的名称。user 和 password 填写实际 TDengine 配置的值。修改过 taosAdapter 需重新启动。

-- 也可以使用 taosAdapter 命令行参数或设置环境变量启动的方式,使能 taosAdapter 接收 icinga2 数据功能,具体细节请参考 taosAdapter 的参考手册
+- 使用 taosAdapter 命令行参数或设置环境变量启动的方式,使能 taosAdapter 接收 icinga2 数据功能,具体细节请参考 taosAdapter 的参考手册

-### 配置 icinga2
+#### 配置 icinga2

- 使能 icinga2 的 opentsdb-writer(参考链接 https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer)
- 修改配置文件 `/etc/icinga2/features-enabled/opentsdb.conf` 填写 \<taosAdapter's host> 为运行 taosAdapter 的服务器的域名或 IP 地址,\<port for icinga2> 填写 taosAdapter 支持接收 icinga2 数据的相应端口(默认为 6048)

-```
+```c
object OpenTsdbWriter "opentsdb" {
host = "<taosAdapter's host>"
port = <port for icinga2>
@@ -35,7 +35,7 @@ object OpenTsdbWriter "opentsdb" {

示例文件:

-```
+```c
object OpenTsdbWriter "opentsdb" {
host = "127.0.0.1"
port = 6048

@@ -1,18 +1,18 @@
配置 Prometheus 是通过编辑 Prometheus 配置文件 prometheus.yml (默认位置 /etc/prometheus/prometheus.yml)完成的。

-### 配置第三方数据库地址
+#### 配置第三方数据库地址

将其中的 remote_read url 和 remote_write url 指向运行 taosAdapter 服务的服务器域名或 IP 地址,REST 服务端口(taosAdapter 默认使用 6041),以及希望写入 TDengine 的数据库名称,并确保相应的 URL 形式如下:

- remote_read url : `http://<taosAdapter's host>:<REST service port>/prometheus/v1/remote_read/<database name>`
- remote_write url : `http://<taosAdapter's host>:<REST service port>/prometheus/v1/remote_write/<database name>`

-### 配置 Basic 验证
+#### 配置 Basic 验证

- username: \<TDengine's username>
- password: \<TDengine's password>

-### prometheus.yml 文件中 remote_write 和 remote_read 相关部分配置示例
+#### prometheus.yml 文件中 remote_write 和 remote_read 相关部分配置示例

```yaml
remote_write:

@@ -1,10 +1,10 @@
-### 配置 taosAdapter
+#### 配置 taosAdapter

配置 taosAdapter 接收 StatsD 数据的方法:

- 在 taosAdapter 配置文件(默认位置 /etc/taos/taosadapter.toml)中使能配置项

-```
+```toml
...
[statsd]
enable = true
@@ -27,9 +27,9 @@ deleteTimings = true

其中 taosAdapter 默认写入的数据库名称为 `statsd`,也可以修改 taosAdapter 配置文件 db 项来指定不同的名称。user 和 password 填写实际 TDengine 配置的值。修改过配置文件 taosAdapter 需重新启动。

-- 也可以使用 taosAdapter 命令行参数或设置环境变量启动的方式,使能 taosAdapter 接收 StatsD 数据功能,具体细节请参考 taosAdapter 的参考手册
+- 使用 taosAdapter 命令行参数或设置环境变量启动的方式,使能 taosAdapter 接收 StatsD 数据功能,具体细节请参考 taosAdapter 的参考手册

-### 配置 StatsD
+#### 配置 StatsD

使用 StatsD 需要下载其[源代码](https://github.com/statsd/statsd)。其配置文件请参考其源代码下载到本地的根目录下的示例文件 `exampleConfig.js` 进行修改。其中 \<taosAdapter's host> 填写运行 taosAdapter 的服务器域名或 IP 地址,\<port for StatsD> 请填写 taosAdapter 接收 StatsD 数据的端口(默认为 6044)。

@@ -40,7 +40,7 @@ repeater 部分添加 { host:'<taosAdapter's host>', port: <port for StatsD>}

示例配置文件:

-```
+```js
{
port: 8125
, backends: ["./backends/repeater"]
@@ -50,7 +50,7 @@ port: 8125

增加如下内容后启动 StatsD(假设配置文件修改为 config.js)。

-```
+```shell
npm install
node stats.js config.js &
```

@@ -1,11 +1,11 @@

-### 配置 taosAdapter
+#### 配置 taosAdapter

配置 taosAdapter 接收 TCollector 数据的方法:

- 在 taosAdapter 配置文件(默认位置 /etc/taos/taosadapter.toml)中使能配置项

-```
+```toml
...
[opentsdb_telnet]
enable = true
@@ -20,9 +20,9 @@ password = "taosdata"

其中 taosAdapter 默认写入的数据库名称为 `tcollector`,也可以修改 taosAdapter 配置文件 dbs 项来指定不同的名称。user 和 password 填写实际 TDengine 配置的值。修改过配置文件 taosAdapter 需重新启动。

-- 也可以使用 taosAdapter 命令行参数或设置环境变量启动的方式,使能 taosAdapter 接收 tcollector 数据功能,具体细节请参考 taosAdapter 的参考手册
+- 使用 taosAdapter 命令行参数或设置环境变量启动的方式,使能 taosAdapter 接收 tcollector 数据功能,具体细节请参考 taosAdapter 的参考手册

-### 配置 TCollector
+#### 配置 TCollector

使用 TCollector 需下载其[源代码](https://github.com/OpenTSDB/tcollector)。其配置项在其源代码中。注意:TCollector 各个版本区别较大,这里仅以当前 master 分支最新代码 (git commit: 37ae920) 为例。

@@ -30,7 +30,7 @@ password = "taosdata"

示例为源代码修改内容的 git diff 输出:

-```
+```diff
index e7e7a1c..ec3e23c 100644
--- a/collectors/etc/config.py
+++ b/collectors/etc/config.py
@@ -9,11 +9,7 @@ taosdump 是为开源用户提供的 TDengine 数据备份/恢复工具,备份

## 安装

-taosdump 提供两种安装方式:
+taosdump 是 TDengine 安装包中默认安装组件,安装 TDengine 后即可使用,可参考 [TDengine 安装](../../../get-started/)

-- taosdump 是 TDengine 安装包中默认安装组件,安装 TDengine 后即可使用,可参考[TDengine 安装](../../../get-started/)
-
-- 单独编译 taos-tools 并安装, 参考 [taos-tools](https://github.com/taosdata/taos-tools) 仓库。
-
## 常用使用场景
@ -8,11 +8,7 @@ taosBenchmark 是 TDengine 产品性能基准测试工具,提供对 TDengine

 ## 安装

-taosBenchmark 提供两种安装方式:
+taosBenchmark 是 TDengine 安装包中默认安装组件,安装 TDengine 后即可使用,参考 [TDengine 安装](../../../get-started/)

-- taosBenchmark 是 TDengine 安装包中默认安装组件,安装 TDengine 后即可使用,参考 [TDengine 安装](../../../get-started/)
-
-- 单独编译 taos-tools 并安装, 参考 [taos-tools](https://github.com/taosdata/taos-tools) 仓库。
-
 ## 运行

@ -62,7 +58,7 @@ taosBenchmark -f <json file>

 <summary>insert.json</summary>

 ```json
-{{#include /taos-tools/example/insert.json}}
+{{#include /TDengine/tools/taos-tools/example/insert.json}}
 ```

 </details>

@ -73,7 +69,7 @@ taosBenchmark -f <json file>

 <summary>query.json</summary>

 ```json
-{{#include /taos-tools/example/query.json}}
+{{#include /TDengine/tools/taos-tools/example/query.json}}
 ```

 </details>

@ -84,12 +80,12 @@ taosBenchmark -f <json file>

 <summary>tmq.json</summary>

 ```json
-{{#include /taos-tools/example/tmq.json}}
+{{#include /TDengine/tools/taos-tools/example/tmq.json}}
 ```

 </details>

-查看更多 json 配置文件示例可 [点击这里](https://github.com/taosdata/taos-tools/tree/main/example)
+查看更多 json 配置文件示例可 [点击这里](https://github.com/taosdata/TDengine/tree/main/tools/taos-tools/example)

 ## 命令行参数详解

 | 命令行参数 | 功能说明 |

@ -169,6 +165,9 @@ INFO: Spend 26.9530 second completed total queries: 30000, the QPS of all thread

 - 第一行表示 3 个线程每个线程执行 10000 次查询及查询请求延时百分位分布情况,`SQL command` 为测试的查询语句
 - 第二行表示总共完成了 10000 * 3 = 30000 次查询总数
 - 第三行表示查询总耗时为 26.9653 秒,每秒查询率(QPS)为:1113.049 次/秒
+- 如果在查询中设置了 `continue_if_fail` 选项为 `yes`,在最后一行中会输出失败请求个数及错误率,格式 error + 失败请求个数 (错误率)
+- QPS = 成功请求数量 / 花费时间(单位秒)
+- 错误率 = 失败请求数量 /(成功请求数量 + 失败请求数量)

 #### 订阅指标

@ -224,7 +223,7 @@ INFO: Consumed total msgs: 3000, total rows: 30000000

 - **name** : 数据库名。

-- **drop** : 数据库已存在时是否删除重建,可选项为 "yes" 或 "no", 默认为 “yes”
+- **drop** : 数据库已存在时是否删除,可选项为 "yes" 或 "no", 默认为 “yes”

 #### 流式计算相关配置参数

@ -250,9 +249,9 @@ INFO: Consumed total msgs: 3000, total rows: 30000000

 - **child_table_exists** : 子表是否已经存在,默认值为 "no",可选值为 "yes" 或 "no"。

-- **child_table_count** : 子表的数量,默认值为 10。
+- **childtable_count** : 子表的数量,默认值为 10。

-- **child_table_prefix** : 子表名称的前缀,必选配置项,没有默认值。
+- **childtable_prefix** : 子表名称的前缀,必选配置项,没有默认值。

 - **escape_character** : 超级表和子表名称中是否包含转义字符,默认值为 "no",可选值为 "yes" 或 "no"。

@ -343,15 +342,13 @@ INFO: Consumed total msgs: 3000, total rows: 30000000

 - **thread_count** : 插入数据的线程数量,默认为 8。

-- **thread_bind_vgroup** : 写入时 vgroup 是否和写入线程绑定,绑定后可提升写入速度, 取值为 "yes" 或 "no",默认值为 “no”, 设置为 “no” 后与原来行为一致。 当设为 “yes” 时,如果 thread_count 数量大小写入数据库的 vgroups 数量, thread_count 自动调整为 vgroups 数量;如果 thread_count 数量小于 vgroups 数量,写入线程数量不做调整,一个线程写完一个 vgroup 数据后再写下一个,同时保持一个 vgroup 同时只能由一个线程写入的规则。
+- **thread_bind_vgroup** : 写入时 vgroup 是否和写入线程绑定,绑定后可提升写入速度, 取值为 "yes" 或 "no",默认值为 “no”, 设置为 “no” 后与原来行为一致。 当设为 “yes” 时,如果 thread_count 大于写入数据库 vgroups 数量, thread_count 自动调整为 vgroups 数量;如果 thread_count 小于 vgroups 数量,写入线程数量不做调整,一个线程写完一个 vgroup 数据后再写下一个,同时保持一个 vgroup 同时只能由一个线程写入的规则。

 - **create_table_thread_count** : 建表的线程数量,默认为 8。

-- **connection_pool_size** : 预先建立的与 TDengine 服务端之间的连接的数量。若不配置,则与所指定的线程数相同。
-
 - **result_file** : 结果输出文件的路径,默认值为 ./output.txt。

-- **confirm_parameter_prompt** : 开关参数,要求用户在提示后确认才能继续。默认值为 false 。
+- **confirm_parameter_prompt** : 开关参数,要求用户在提示后确认才能继续, 可取值 "yes" or "no"。默认值为 "no" 。

 - **interlace_rows** : 启用交错插入模式并同时指定向每个子表每次插入的数据行数。交错插入模式是指依次向每张子表插入由本参数所指定的行数并重复这个过程,直到所有子表的数据都插入完成。默认值为 0, 即向一张子表完成数据插入后才会向下一张子表进行数据插入。
 在 `super_tables` 中也可以配置该参数,若配置则以 `super_tables` 中的配置为高优先级,覆盖全局设置。

|
||||||
|
|
||||||
查询指定表(可以指定超级表、子表或普通表)的配置参数在 `specified_table_query` 中设置。
|
查询指定表(可以指定超级表、子表或普通表)的配置参数在 `specified_table_query` 中设置。
|
||||||
|
|
||||||
- **mixed_query** : 查询模式,取值 “yes” 为`混合查询`, "no" 为`正常查询` , 默认值为 “no”
|
- **mixed_query** : 查询模式
|
||||||
`混合查询`:`sqls` 中所有 sql 按 `threads` 线程数分组,每个线程执行一组, 线程中每个 sql 都需执行 `query_times` 次查询
|
“yes” :`混合查询`
|
||||||
`正常查询`:`sqls` 中每个 sql 启动 `threads` 个线程,每个线程执行完 `query_times` 次后退出,下个 sql 需等待上个 sql 线程全部执行完退出后方可执行
|
"no"(默认值) :`普通查询`
|
||||||
不管 `正常查询` 还是 `混合查询` ,执行查询总次数是相同的 ,查询总次数 = `sqls` 个数 * `threads` * `query_times`, 区别是 `正常查询` 每个 sql 都会启动 `threads` 个线程,而 `混合查询` 只启动一次 `threads` 个线程执行完所有 SQL, 两者启动线程次数不一样。
|
`普通查询`:`sqls` 中每个 sql 启动 `threads` 个线程查询此 sql, 执行完 `query_times` 次查询后退出,执行此 sql 的所有线程都完成后进入下一个 sql
|
||||||
|
`查询总次数` = `sqls` 个数 * `query_times` * `threads`
|
||||||
|
|
||||||
- **query_interval** : 查询时间间隔,单位是秒,默认值为 0。
|
`混合查询`:`sqls` 中所有 sql 分成 `threads` 个组,每个线程执行一组, 每个 sql 都需执行 `query_times` 次查询
|
||||||
|
`查询总次数` = `sqls` 个数 * `query_times`
|
||||||
|
|
||||||
|
- **query_interval** : 查询时间间隔,单位: millisecond,默认值为 0。
|
||||||
|
|
||||||
- **threads** : 执行查询 SQL 的线程数,默认值为 1。
|
- **threads** : 执行查询 SQL 的线程数,默认值为 1。
|
||||||
|
|
||||||
|
@ -406,9 +407,9 @@ interval 控制休眠时间,避免持续查询慢查询消耗 CPU ,单位为

 - **threads** : 执行查询 SQL 的线程数,默认值为 1。

 - **sqls** :
-  - **sql** : 执行的 SQL 命令,必填;对于超级表的查询 SQL,在 SQL 命令中保留 "xxxx",程序会自动将其替换为超级表的所有子表名。
-    替换为超级表中所有的子表名。
+  - **sql** : 执行的 SQL 命令,必填;对于超级表的查询 SQL,在 SQL 命令中必须保留 "xxxx",会替换为超级下所有子表名后再执行。
   - **result** : 保存查询结果的文件,未指定则不保存。
+  - **限制项** : sqls 下配置 sql 数组最大为 100 个

 ### 订阅场景配置参数

@ -44,7 +44,7 @@ table_option: {

 1. 表(列)名命名规则参见[名称命名规则](./19-limit.md#名称命名规则)。
 2. 表名最大长度为 192。
 3. 表的第一个字段必须是 TIMESTAMP,并且系统自动将其设为主键。
-4. 除时间戳主键列之外,还可以通过 PRIMARY KEY 关键字指定第二列为额外的主键列。被指定为主键列的第二列必须为整型或字符串类型(VARCHAR)。
+4. 除时间戳主键列之外,还可以通过 PRIMARY KEY 关键字指定第二列为额外的主键列,该列与时间戳列共同组成复合主键。当设置了复合主键时,两条记录的时间戳列与 PRIMARY KEY 列都相同,才会被认为是重复记录,数据库只保留最新的一条;否则视为两条记录,全部保留。注意:被指定为主键列的第二列必须为整型或字符串类型(VARCHAR)。
 5. 表的每行长度不能超过 48KB(从 3.0.5.0 版本开始为 64KB);(注意:每个 VARCHAR/NCHAR/GEOMETRY 类型的列还会额外占用 2 个字节的存储位置)。
 6. 使用数据类型 VARCHAR/NCHAR/GEOMETRY,需指定其最长的字节数,如 VARCHAR(20),表示 20 字节。
 7. 关于 `ENCODE` 和 `COMPRESS` 的使用,请参考[按列压缩](../compress)

@ -26,7 +26,7 @@ table_option: {

 **使用说明**
 1. 超级表中列的最大个数为 4096,需要注意,这里的 4096 是包含 TAG 列在内的,最小个数为 3,包含一个时间戳主键、一个 TAG 列和一个数据列。
-2. 除时间戳主键列之外,还可以通过 PRIMARY KEY 关键字指定第二列为额外的主键列。被指定为主键列的第二列必须为整型或字符串类型(varchar)
+2. 除时间戳主键列之外,还可以通过 PRIMARY KEY 关键字指定第二列为额外的主键列,该列与时间戳列共同组成复合主键。当设置了复合主键时,两条记录的时间戳列与 PRIMARY KEY 列都相同,才会被认为是重复记录,数据库只保留最新的一条;否则视为两条记录,全部保留。注意:被指定为主键列的第二列必须为整型或字符串类型(varchar)。
 3. TAGS语法指定超级表的标签列,标签列需要遵循以下约定:
   - TAGS 中的 TIMESTAMP 列写入数据时需要提供给定值,而暂不支持四则运算,例如 NOW + 10s 这类表达式。
   - TAGS 列名不能与其他列名相同。

@ -10,7 +10,7 @@ description: 流式计算的相关 SQL 的详细语法

 ```sql
 CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name[(field1_name, field2_name [PRIMARY KEY], ...)] [TAGS (create_definition [, create_definition] ...)] SUBTABLE(expression) AS subquery
 stream_options: {
- TRIGGER        [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time]
+ TRIGGER        [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time | FORCE_WINDOW_CLOSE]
 WATERMARK      time
 IGNORE EXPIRED [0|1]
 DELETE_MARK    time

@ -56,6 +56,10 @@ window_clause: {

 其中,SESSION 是会话窗口,tol_val 是时间间隔的最大范围。在 tol_val 时间间隔范围内的数据都属于同一个窗口,如果连续的两条数据的时间超过 tol_val,则自动开启下一个窗口。该窗口的 _wend 等于最后一条数据的时间加上 tol_val。

+STATE_WINDOW 是状态窗口,col 用来标识状态量,相同的状态量数值则归属于同一个状态窗口,col 数值改变后则当前窗口结束,自动开启下一个窗口。
+
+INTERVAL 是时间窗口,又可分为滑动时间窗口和翻转时间窗口。INTERVAL 子句用于指定窗口相等时间周期,SLIDING 字句用于指定窗口向前滑动的时间。当 interval_val 与 sliding_val 相等的时候,时间窗口即为翻转时间窗口,否则为滑动时间窗口,注意:sliding_val 必须小于等于 interval_val。
+
 EVENT_WINDOW 是事件窗口,根据开始条件和结束条件来划定窗口。当 start_trigger_condition 满足时则窗口开始,直到 end_trigger_condition 满足时窗口关闭。 start_trigger_condition 和 end_trigger_condition 可以是任意 TDengine 支持的条件表达式,且可以包含不同的列。

 COUNT_WINDOW 是计数窗口,按固定的数据行数来划分窗口。 count_val 是常量,是正整数,必须大于等于2,小于2147483648。 count_val 表示每个 COUNT_WINDOW 包含的最大数据行数,总数据行数不能整除 count_val 时,最后一个窗口的行数会小于 count_val 。 sliding_val 是常量,表示窗口滑动的数量,类似于 INTERVAL 的 SLIDING 。

@ -45,7 +45,8 @@ TDengine 支持 `UNION ALL` 和 `UNION` 操作符。UNION ALL 将查询返回的

 | 9 | LIKE | BINARY、NCHAR 和 VARCHAR | 通配符匹配所指定的模式串 |
 | 10 | NOT LIKE | BINARY、NCHAR 和 VARCHAR | 通配符不匹配所指定的模式串 |
 | 11 | MATCH, NMATCH | BINARY、NCHAR 和 VARCHAR | 正则表达式匹配 |
-| 12 | CONTAINS | JSON | JSON 中是否存在某键 |
+| 12 | REGEXP, NOT REGEXP | BINARY、NCHAR 和 VARCHAR | 正则表达式匹配 |
+| 13 | CONTAINS | JSON | JSON 中是否存在某键 |

 LIKE 条件使用通配符字符串进行匹配检查,规则如下:

@ -53,7 +54,7 @@ LIKE 条件使用通配符字符串进行匹配检查,规则如下:

 - 如果希望匹配字符串中原本就带有的 \_(下划线)字符,那么可以在通配符字符串中写作 \_,即加一个反斜线来进行转义。
 - 通配符字符串最长不能超过 100 字节。不建议使用太长的通配符字符串,否则将有可能严重影响 LIKE 操作的执行性能。

-MATCH 条件和 NMATCH 条件使用正则表达式进行匹配,规则如下:
+MATCH/REGEXP 条件和 NMATCH/NOT REGEXP 条件使用正则表达式进行匹配,规则如下:

 - 支持符合 POSIX 规范的正则表达式,具体规范内容可参见 Regular Expressions。
 - MATCH 和正则表达式匹配时, 返回 TURE. NMATCH 和正则表达式不匹配时, 返回 TRUE.

@ -33,6 +33,7 @@ TDengine 的 JDBC 驱动实现尽可能与关系型数据库驱动保持一致

 | taos-jdbcdriver 版本 | 主要变化 | TDengine 版本 |
 | ------------------| ---------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------- |
+| 3.5.3 | 在 WebSocket 连接上支持无符号数据类型 | - |
 | 3.5.2 | 解决了 WebSocket 查询结果集释放 bug | - |
 | 3.5.1 | 解决了数据订阅获取时间戳对象类型问题 | - |
 | 3.5.0 | 1. 优化了 WebSocket 连接参数绑定性能,支持参数绑定查询使用二进制数据 <br/> 2. 优化了 WebSocket 连接在小查询上的性能 <br/> 3. WebSocket 连接上支持设置时区和应用信息 | 3.3.5.0 及更高版本 |

|
||||||
|
|
||||||
TDengine 目前支持时间戳、数字、字符、布尔类型,与 Java 对应类型转换如下:
|
TDengine 目前支持时间戳、数字、字符、布尔类型,与 Java 对应类型转换如下:
|
||||||
|
|
||||||
| TDengine DataType | JDBCType |
|
| TDengine DataType | JDBCType | 备注|
|
||||||
| ----------------- | ------------------ |
|
| ----------------- | -------------------- |-------------------- |
|
||||||
| TIMESTAMP | java.sql.Timestamp |
|
| TIMESTAMP | java.sql.Timestamp ||
|
||||||
| INT | java.lang.Integer |
|
| BOOL | java.lang.Boolean ||
|
||||||
| BIGINT | java.lang.Long |
|
| TINYINT | java.lang.Byte ||
|
||||||
| FLOAT | java.lang.Float |
|
| TINYINT UNSIGNED | java.lang.Short |仅在 WebSocket 连接方式支持|
|
||||||
| DOUBLE | java.lang.Double |
|
| SMALLINT | java.lang.Short ||
|
||||||
| SMALLINT | java.lang.Short |
|
| SMALLINT UNSIGNED | java.lang.Integer |仅在 WebSocket 连接方式支持|
|
||||||
| TINYINT | java.lang.Byte |
|
| INT | java.lang.Integer ||
|
||||||
| BOOL | java.lang.Boolean |
|
| INT UNSIGNED | java.lang.Long |仅在 WebSocket 连接方式支持|
|
||||||
| BINARY | byte array |
|
| BIGINT | java.lang.Long ||
|
||||||
| NCHAR | java.lang.String |
|
| BIGINT UNSIGNED | java.math.BigInteger |仅在 WebSocket 连接方式支持|
|
||||||
| JSON | java.lang.String |
|
| FLOAT | java.lang.Float ||
|
||||||
| VARBINARY | byte[] |
|
| DOUBLE | java.lang.Double ||
|
||||||
| GEOMETRY | byte[] |
|
| BINARY | byte array ||
|
||||||
|
| NCHAR | java.lang.String ||
|
||||||
|
| JSON | java.lang.String |仅在 tag 中支持|
|
||||||
|
| VARBINARY | byte[] ||
|
||||||
|
| GEOMETRY | byte[] ||
|
||||||
|
|
||||||
**注意**:JSON 类型仅在 tag 中支持。
|
**注意**:由于历史原因,TDengine中的BINARY底层不是真正的二进制数据,已不建议使用。请用VARBINARY类型代替。
|
||||||
由于历史原因,TDengine中的BINARY底层不是真正的二进制数据,已不建议使用。请用VARBINARY类型代替。
|
|
||||||
GEOMETRY类型是little endian字节序的二进制数据,符合WKB规范。详细信息请参考 [数据类型](../../taos-sql/data-type/#数据类型)
|
GEOMETRY类型是little endian字节序的二进制数据,符合WKB规范。详细信息请参考 [数据类型](../../taos-sql/data-type/#数据类型)
|
||||||
WKB规范请参考[Well-Known Binary (WKB)](https://libgeos.org/specifications/wkb/)
|
WKB规范请参考[Well-Known Binary (WKB)](https://libgeos.org/specifications/wkb/)
|
||||||
对于java连接器,可以使用jts库来方便的创建GEOMETRY类型对象,序列化后写入TDengine,这里有一个样例[Geometry示例](https://github.com/taosdata/TDengine/blob/main/docs/examples/java/src/main/java/com/taos/example/GeometryDemo.java)
|
对于java连接器,可以使用jts库来方便的创建GEOMETRY类型对象,序列化后写入TDengine,这里有一个样例[Geometry示例](https://github.com/taosdata/TDengine/blob/main/docs/examples/java/src/main/java/com/taos/example/GeometryDemo.java)
|
||||||
|
|
|
@ -191,7 +191,7 @@ TDengine 存储的数据包括采集的时序数据以及库、表相关的元

 在进行海量数据管理时,为了实现水平扩展,通常需要采用数据分片(sharding)和数据分区(partitioning)策略。TDengine 通过 vnode 来实现数据分片,并通过按时间段划分数据文件来实现时序数据的分区。

-vnode 不仅负责处理时序数据的写入、查询和计算任务,还承担着负载均衡、数据恢复以及支持异构环境的重要角色。为了实现这些目标,TDengine 将一个 dnode 根据其计算和存储资源切分为多个 vnode。这些 vnode 的管理过程对应用程序是完全透明的,由TDengine 自动完成。。
+vnode 不仅负责处理时序数据的写入、查询和计算任务,还承担着负载均衡、数据恢复以及支持异构环境的重要角色。为了实现这些目标,TDengine 将一个 dnode 根据其计算和存储资源切分为多个 vnode。这些 vnode 的管理过程对应用程序是完全透明的,由TDengine 自动完成。

 对于单个数据采集点,无论其数据量有多大,一个 vnode 都拥有足够的计算资源和存储资源来应对(例如,如果每秒生成一条 16B 的记录,一年产生的原始数据量也不到 0.5GB)。因此,TDengine 将一张表(即一个数据采集点)的所有数据都存储在一个vnode 中,避免将同一数据采集点的数据分散到两个或多个 dnode 上。同时,一个 vnode 可以存储多个数据采集点(表)的数据,最大可容纳的表数目上限为 100 万。设计上,一个 vnode 中的所有表都属于同一个数据库。

@ -4,9 +4,9 @@ title: taosTools 发布历史及下载链接

 description: taosTools 的发布历史、Release Notes 和下载链接
 ---

-taosTools 各版本安装包下载链接如下:
+从 3.0.6.0 开始,taosTools 集成到 TDengine 的安装包中,不再单独提供。taosTools(对应 TDengine 3.0.5.2及以下)各版本安装包下载链接如下:

-其他历史版本安装包请访问[这里](https://www.taosdata.com/all-downloads)
+2.6 的历史版本安装包请访问[这里](https://www.taosdata.com/all-downloads)

 import Release from "/components/ReleaseV3";

@ -4,9 +4,9 @@ sidebar_label: 版本说明

 description: 各版本版本说明
 ---

-[3.3.5.2](./3.3.5.2)
-[3.3.5.0](./3.3.5.0)
-[3.3.4.8](./3.3.4.8)
-[3.3.4.3](./3.3.4.3)
-[3.3.3.0](./3.3.3.0)
-[3.3.2.0](./3.3.2.0)
+```mdx-code-block
+import DocCardList from '@theme/DocCardList';
+import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
+
+<DocCardList items={useCurrentSidebarCategory().items}/>
+```

@ -86,7 +86,7 @@ int32_t taosAnalBufWriteDataEnd(SAnalyticBuf *pBuf);

 int32_t taosAnalBufClose(SAnalyticBuf *pBuf);
 void taosAnalBufDestroy(SAnalyticBuf *pBuf);

-const char *taosAnalAlgoStr(EAnalAlgoType algoType);
+const char *taosAnalysisAlgoType(EAnalAlgoType algoType);
 EAnalAlgoType taosAnalAlgoInt(const char *algoName);
 const char *taosAnalAlgoUrlStr(EAnalAlgoType algoType);

@ -326,7 +326,6 @@ int32_t tDeserializeSConfigArray(SDecoder *pDecoder, SArray *array);

 int32_t setAllConfigs(SConfig *pCfg);
 void printConfigNotMatch(SArray *array);

-int32_t compareSConfigItemArrays(SArray *mArray, const SArray *dArray, SArray *diffArray);
 bool isConifgItemLazyMode(SConfigItem *item);
 int32_t taosUpdateTfsItemDisable(SConfig *pCfg, const char *value, void *pTfs);

@ -2614,6 +2614,8 @@ typedef struct {

   int8_t assignedAcked;
   SMArbUpdateGroupAssigned assignedLeader;
   int64_t version;
+  int32_t code;
+  int64_t updateTimeMs;
 } SMArbUpdateGroup;

 typedef struct {

@ -37,6 +37,7 @@ typedef enum {

   SYNC_RD_QUEUE,
   STREAM_QUEUE,
   ARB_QUEUE,
+  STREAM_CTRL_QUEUE,
   QUEUE_MAX,
 } EQueueType;

@ -239,23 +239,11 @@ typedef struct {

       snprintf(_output, (int32_t)(_outputBytes), "%" PRIu64, *(uint64_t *)(_input)); \
       break; \
     case TSDB_DATA_TYPE_FLOAT: { \
-      int32_t n = snprintf(_output, (int32_t)(_outputBytes), "%f", *(float *)(_input)); \
-      if (n >= (_outputBytes)) { \
-        n = snprintf(_output, (int32_t)(_outputBytes), "%.7e", *(float *)(_input)); \
-        if (n >= (_outputBytes)) { \
-          snprintf(_output, (int32_t)(_outputBytes), "%f", *(float *)(_input)); \
-        } \
-      } \
+      snprintf(_output, (int32_t)(_outputBytes), "%.*g", FLT_DIG, *(float *)(_input)); \
       break; \
     } \
     case TSDB_DATA_TYPE_DOUBLE: { \
-      int32_t n = snprintf(_output, (int32_t)(_outputBytes), "%f", *(double *)(_input)); \
-      if (n >= (_outputBytes)) { \
-        snprintf(_output, (int32_t)(_outputBytes), "%.15e", *(double *)(_input)); \
-        if (n >= (_outputBytes)) { \
-          snprintf(_output, (int32_t)(_outputBytes), "%f", *(double *)(_input)); \
-        } \
-      } \
+      snprintf(_output, (int32_t)(_outputBytes), "%.*g", DBL_DIG, *(double *)(_input)); \
       break; \
     } \
     case TSDB_DATA_TYPE_UINT: \

@ -268,7 +268,7 @@ typedef struct SStoreMeta {

   const void* (*extractTagVal)(const void* tag, int16_t type, STagVal* tagVal);  // todo remove it

   int32_t (*getTableUidByName)(void* pVnode, char* tbName, uint64_t* uid);
-  int32_t (*getTableTypeByName)(void* pVnode, char* tbName, ETableType* tbType);
+  int32_t (*getTableTypeSuidByName)(void* pVnode, char* tbName, ETableType* tbType, uint64_t* suid);
   int32_t (*getTableNameByUid)(void* pVnode, uint64_t uid, char* tbName);
   bool (*isTableExisted)(void* pVnode, tb_uid_t uid);

@ -82,6 +82,7 @@ typedef struct SDatabaseOptions {

   int32_t minRowsPerBlock;
   SNodeList* pKeep;
   int64_t keep[3];
+  SValueNode* pKeepTimeOffsetNode;
   int32_t keepTimeOffset;
   int32_t pages;
   int32_t pagesize;

@ -314,8 +314,8 @@ typedef struct SOrderByExprNode {

 typedef struct SLimitNode {
   ENodeType type;  // QUERY_NODE_LIMIT
-  int64_t limit;
-  int64_t offset;
+  SValueNode* limit;
+  SValueNode* offset;
 } SLimitNode;

 typedef struct SStateWindowNode {

@ -681,6 +681,7 @@ int32_t nodesValueNodeToVariant(const SValueNode* pNode, SVariant* pVal);

 int32_t nodesMakeValueNodeFromString(char* literal, SValueNode** ppValNode);
 int32_t nodesMakeValueNodeFromBool(bool b, SValueNode** ppValNode);
 int32_t nodesMakeValueNodeFromInt32(int32_t value, SNode** ppNode);
+int32_t nodesMakeValueNodeFromInt64(int64_t value, SNode** ppNode);

 char* nodesGetFillModeString(EFillMode mode);
 int32_t nodesMergeConds(SNode** pDst, SNodeList** pSrc);

@ -294,7 +294,7 @@ int32_t syncUpdateArbTerm(int64_t rid, SyncTerm arbTerm);

 SSyncState syncGetState(int64_t rid);
 int32_t syncGetArbToken(int64_t rid, char* outToken);
-int32_t syncGetAssignedLogSynced(int64_t rid);
+int32_t syncCheckSynced(int64_t rid);
 void syncGetRetryEpSet(int64_t rid, SEpSet* pEpSet);
 const char* syncStr(ESyncState state);

@ -92,7 +92,9 @@ int32_t tfsGetLevel(STfs *pTfs);

  * @param pDiskId The disk ID after allocation.
  * @return int32_t 0 for success, -1 for failure.
  */
-int32_t tfsAllocDisk(STfs *pTfs, int32_t expLevel, SDiskID *pDiskId);
+int32_t tfsAllocDisk(STfs *pTfs, int32_t expLevel, const char *label, SDiskID *pDiskId);
+
+int32_t tfsAllocDiskAtLevel(STfs *pTfs, int32_t level, const char *label, SDiskID *pDiskId);

 /**
  * @brief Get the primary path.

@ -35,6 +35,7 @@ time_t mktime_z(timezone_t, struct tm *);

 timezone_t tzalloc(char const *);
 void tzfree(timezone_t);
 void getTimezoneStr(char *tz);
+void truncateTimezoneString(char *tz);

 #endif